1
Mohamed Selim A, Barz M, Bhatti OS, Alam HMT, Sonntag D. A review of machine learning in scanpath analysis for passive gaze-based interaction. Front Artif Intell 2024; 7:1391745. PMID: 38903158; PMCID: PMC11188426; DOI: 10.3389/frai.2024.1391745.
Abstract
The scanpath is an important concept in eye tracking. It refers to a person's eye movements over a period of time, commonly represented as a series of alternating fixations and saccades. Machine learning has been increasingly used for the automatic interpretation of scanpaths over the past few years, particularly in research on passive gaze-based interaction, i.e., interfaces that implicitly observe and interpret human eye movements, with the goal of improving the interaction. This literature review investigates research on machine learning applications in scanpath analysis for passive gaze-based interaction between 2012 and 2022, starting from 2,425 publications and focussing on 77 publications. We provide insights on research domains and common learning tasks in passive gaze-based interaction and present common machine learning practices from data collection and preparation to model selection and evaluation. We discuss commonly followed practices and identify gaps and challenges, especially concerning emerging machine learning topics, to guide future research in the field.
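To make the central data structure concrete, here is a minimal sketch, assuming fixations are described by gaze position and duration and saccades by amplitude and duration, of how a scanpath could be represented before it is handed to a learning model. The class and field names are illustrative choices, not a representation prescribed by the review.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Fixation:
    x: float            # horizontal gaze position (pixels or degrees)
    y: float            # vertical gaze position
    duration_ms: float  # how long the gaze rested at this point

@dataclass
class Saccade:
    amplitude_deg: float  # size of the rapid eye movement between fixations
    duration_ms: float

@dataclass
class Scanpath:
    """A scanpath as an alternating sequence of fixations and saccades."""
    fixations: List[Fixation]
    saccades: List[Saccade]

    def as_feature_sequence(self) -> List[List[float]]:
        # Flatten fixations into a per-event feature sequence for a sequence model.
        return [[f.x, f.y, f.duration_ms] for f in self.fixations]
```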
Affiliation(s)
- Abdulrahman Mohamed Selim
- German Research Center for Artificial Intelligence (DFKI), Interactive Machine Learning Department, Saarbrücken, Germany
- Michael Barz
- German Research Center for Artificial Intelligence (DFKI), Interactive Machine Learning Department, Saarbrücken, Germany
- Applied Artificial Intelligence, University of Oldenburg, Oldenburg, Germany
- Omair Shahzad Bhatti
- German Research Center for Artificial Intelligence (DFKI), Interactive Machine Learning Department, Saarbrücken, Germany
- Hasan Md Tusfiqur Alam
- German Research Center for Artificial Intelligence (DFKI), Interactive Machine Learning Department, Saarbrücken, Germany
- Daniel Sonntag
- German Research Center for Artificial Intelligence (DFKI), Interactive Machine Learning Department, Saarbrücken, Germany
- Applied Artificial Intelligence, University of Oldenburg, Oldenburg, Germany
2
Gao S, Wang L. How flight experience impacts pilots' decision-making and visual scanning pattern in low-visibility approaches: preliminary evidence from eye tracking. Ergonomics 2024:1-17. PMID: 38254322; DOI: 10.1080/00140139.2023.2298992.
Abstract
The visual approach is the most accident-prone phase of a flight, especially in low-visibility conditions. This preliminary study aimed to examine the effects of flight experience on pilots' decision-making and visual scanning pattern in low-visibility approaches. Twenty pilots were separated into two groups based on their flight experience and completed high- and low-visibility approaches in balanced order using a high-fidelity flight simulator. Pilots' mental workload and visual scanning patterns were recorded via an eye tracker. The results showed that, compared to less flight-experienced pilots (20%, 3/15), experienced pilots (80%, 4/5) were more likely to make go-around decisions in the low-visibility approaches. Furthermore, they exhibited a more flexible and adaptable visual scanning pattern by quickly shifting their attention, as evidenced by decreased fixations and increased saccades. These findings suggest that integrating visual scanning strategies into training for approaches in marginal meteorological conditions may enhance decision-making safety for novice pilots.
Affiliation(s)
- Shan Gao
- College of Safety Science and Engineering, Civil Aviation University of China, Tianjin, China
- Lei Wang
- College of Safety Science and Engineering, Civil Aviation University of China, Tianjin, China
3
Jeong I, Kudo K, Kaneko N, Nakazawa K. Esports experts have a wide gaze distribution and short gaze fixation duration: A focus on League of Legends players. PLoS One 2024; 19:e0288770. PMID: 38165975; PMCID: PMC10760684; DOI: 10.1371/journal.pone.0288770.
Abstract
This study investigated the specific gaze control ability of expert and low-skill players of League of Legends (LoL). Eleven expert and nine low-skill players were grouped according to their official ranking. The gaze movements of each participant were then recorded while they performed easy and moderate tasks against a computer-controlled opponent. Experts were found to have a significantly wider horizontal gaze distribution. Additionally, experts had a consistently short gaze fixation duration during the moderate task. These results suggest that a wide horizontal gaze distribution allows experts to obtain information from a wider area. The consistently short fixation duration of the experts also indicates that they need only a short period to assess information, which is advantageous because large amounts of information must be processed within a limited time while playing. This specific gaze control ability could be an important factor contributing to the superior performance of expert LoL players.
Affiliation(s)
- Inhyeok Jeong
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
- Kazutoshi Kudo
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
- Graduate School of Interdisciplinary Information Studies, The University of Tokyo, Tokyo, Japan
- Naotsugu Kaneko
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
- Kimitaka Nakazawa
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
4
Moretti L, Dinu R, Di Mascio P. Collision risk assessment between aircraft and obstacles in the areas surrounding airports. Heliyon 2023; 9:e18378. PMID: 37539140; PMCID: PMC10395640; DOI: 10.1016/j.heliyon.2023.e18378.
Abstract
The topographical features of a site and the anthropogenic structures inside and outside the airport boundaries influence how the infrastructure can be used. Objects penetrating the obstacle limitation surfaces (OLS) or standing outside those surfaces have to be mapped and risk-assessed because they could be a hazard to air navigation. This study aims to quantify the risk of collision between aircraft and obstacles in the airspace. Since no procedures are available in the literature, the authors assumed that the obstacle type and the examined OLS affect the collision risk. The proposed risk values and amplification factors derive from interviews with technicians. The methodology was applied to an existing airport with 589 penetrating obstacles: 69.8% of the obstacles pose a negligible risk, while 3.7% require further analysis by the competent aviation authority. In this study, buildings and pylons penetrating the Transitional Surface are the most hazardous obstacles.
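The abstract implies a scoring scheme in which each obstacle type carries a base risk value that is amplified according to the OLS it penetrates, with the actual values elicited from technicians. A minimal sketch of that structure follows; every number, obstacle type, surface name, and threshold below is a placeholder assumption, not a value from the paper.

```python
# Hypothetical base risk per obstacle type and amplification factor per penetrated
# obstacle limitation surface (OLS). Placeholder values for illustration only.
BASE_RISK = {"building": 3.0, "pylon": 3.0, "antenna": 2.0, "tree": 1.0}
OLS_AMPLIFICATION = {"transitional": 2.0, "approach": 1.8, "horizontal": 1.2, "conical": 1.0}

NEGLIGIBLE_THRESHOLD = 2.0   # below this the risk is treated as negligible
AUTHORITY_THRESHOLD = 5.0    # at or above this, refer to the aviation authority

def collision_risk(obstacle_type: str, penetrated_surface: str) -> float:
    """Score one obstacle: base value for its type scaled by the surface it penetrates."""
    return BASE_RISK[obstacle_type] * OLS_AMPLIFICATION[penetrated_surface]

def classify(risk: float) -> str:
    if risk < NEGLIGIBLE_THRESHOLD:
        return "negligible"
    if risk >= AUTHORITY_THRESHOLD:
        return "refer to aviation authority"
    return "monitor"

# Example: a building penetrating the Transitional Surface is flagged for review.
print(classify(collision_risk("building", "transitional")))  # refer to aviation authority
```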
5
Cooper D, Wiggins MW, Main LC, Wills JA, Doyle T. Cue utilisation is partially related to performance on an urban operations course but not experience. Appl Ergon 2023; 110:104024. PMID: 37080083; DOI: 10.1016/j.apergo.2023.104024.
Abstract
INTRODUCTION: Decision making in the use of force relies on accurate cue identification to inform an appropriate response. This research was designed to test the relationship between cue utilisation and performance prior to, and following, participation in an urban operations course (UOC). METHODS: A total of 37 participants were assessed on cue utilisation measures, course outcome, and between-group changes following course participation. RESULTS: A significant main effect was evident for cue utilisation and administration (p = 0.005), but not for training group (p = 0.54), nor for groups and point of administration (p = 0.410). No main effect was evident for groups and training outcome (p = 0.11). However, there were main effects for point of administration (p = 0.02) and for training outcome and point of administration (p = 0.02). CONCLUSION: Although cue utilisation is an essential component of perception-action tasks, cues may be more specific to the relevant training environment, with limited transfer to the operational context.
Affiliation(s)
- Luana C Main
- Deakin University, Institute for Physical Activity & Nutrition (IPAN), Geelong, VIC, Australia
- Tim Doyle
- Macquarie University, Sydney, NSW, Australia.
6
Harris DJ, Wilson MR, Jones MI, de Burgh T, Mundy D, Arthur T, Olonilua M, Vine SJ. An investigation of feed-forward and feedback eye movement training in immersive virtual reality. J Eye Mov Res 2023; 15(3). PMID: 38978970; PMCID: PMC11229047; DOI: 10.16910/jemr.15.3.7.
Abstract
The control of eye gaze is critical to the execution of many skills. The observation that task experts in many domains exhibit more efficient control of eye gaze than novices has led to the development of gaze training interventions that teach these behaviours. We aimed to extend this literature by i) examining the relative benefits of feed-forward (observing an expert's eye movements) versus feed-back (observing your own eye movements) training, and ii) automating this training within virtual reality. Serving personnel from the British Army and Royal Navy were randomised to either feed-forward or feed-back training within a virtual reality simulation of a room search and clearance task. Eye movement metrics - including visual search, saccade direction, and entropy - were recorded to quantify the efficiency of visual search behaviours. Feed-forward and feed-back eye movement training produced distinct learning benefits, but both accelerated the development of efficient gaze behaviours. However, we found no evidence that these more efficient search behaviours transferred to better decision making in the room clearance task. Our results suggest integrating eye movement training principles within virtual reality training simulations may be effective, but further work is needed to understand the learning mechanisms.
Affiliation(s)
- David J Harris
- School of Public Health and Sport Sciences, University of Exeter, UK
- Mark R Wilson
- School of Public Health and Sport Sciences, University of Exeter, UK
- Martin I Jones
- Defence Science and Technology Laboratory, Salisbury, UK
- Tom Arthur
- School of Public Health and Sport Sciences, University of Exeter, UK
- Samuel J Vine
- School of Public Health and Sport Sciences, University of Exeter, UK
7
Ayala N, Zafar A, Kearns S, Irving E, Cao S, Niechwiej-Szwedo E. The effects of task difficulty on gaze behaviour during landing with visual flight rules in low-time pilots. J Eye Mov Res 2023; 16(1). PMID: 37965286; PMCID: PMC10643002; DOI: 10.16910/jemr.16.1.3.
Abstract
Eye movements have been used to examine the cognitive function of pilots and to understand how information processing abilities impact performance. Traditional and advanced measures of gaze behaviour effectively reflect changes in cognitive load, situational awareness, and expert-novice differences. However, the extent to which gaze behaviour changes during the early stages of skill development has yet to be addressed. The current study investigated the impact of task difficulty on gaze behaviour in low-time pilots (N=18) while they completed simulated landing scenarios. An increase in task difficulty resulted in longer fixations on the runway and a reduction in stationary gaze entropy (gaze dispersion) and gaze transition entropy (sequence complexity). These findings suggest that pilots' gaze became less complex and more focused on fewer areas of interest as task difficulty increased. Additionally, a novel approach to identifying and tracking instances when pilots restrict their attention outside the cockpit (i.e., gaze tunneling) was explored and shown to be sensitive to changes in task difficulty. Altogether, the gaze-related metrics used in the present study provide valuable information for assessing pilots' gaze behaviour and help further the understanding of how gaze contributes to better performance in low-time pilots.
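The two entropy measures named here have standard information-theoretic definitions: stationary gaze entropy is the Shannon entropy of the fixation distribution over areas of interest (AOIs), and gaze transition entropy is the conditional entropy of AOI-to-AOI transitions. The sketch below is not the authors' code; it assumes fixations have already been mapped to AOI labels, and the example sequence is illustrative.

```python
import numpy as np
from collections import Counter

def stationary_gaze_entropy(aoi_sequence):
    """Shannon entropy of the fixation distribution over AOIs (gaze dispersion)."""
    counts = np.array(list(Counter(aoi_sequence).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def gaze_transition_entropy(aoi_sequence):
    """Conditional entropy of AOI-to-AOI transitions (scanning-sequence complexity)."""
    aois = sorted(set(aoi_sequence))
    idx = {a: i for i, a in enumerate(aois)}
    trans = np.zeros((len(aois), len(aois)))
    for src, dst in zip(aoi_sequence[:-1], aoi_sequence[1:]):
        trans[idx[src], idx[dst]] += 1
    source_p = trans.sum(axis=1) / trans.sum()  # empirical probability of each source AOI
    h = 0.0
    for i, row in enumerate(trans):
        if row.sum() == 0:
            continue
        p = row / row.sum()
        p = p[p > 0]
        h += source_p[i] * float(-(p * np.log2(p)).sum())
    return h

# Illustrative fixation sequence over cockpit AOIs
seq = ["runway", "airspeed", "runway", "altimeter", "runway", "runway", "airspeed"]
print(stationary_gaze_entropy(seq), gaze_transition_entropy(seq))
```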
Affiliation(s)
- Shi Cao
- University of Waterloo, Ontario, Canada
8
Xu Q, Wang M, Wang H, Liu B, You X, Ji M. Cognitive style and flight experience influence on confirmation bias in lost procedures. Aerosp Med Hum Perform 2022; 93(8):618-626. DOI: 10.3357/amhp.6026.2022.
Abstract
BACKGROUND: Accident analysis and empirical research have shown that the decision-making process of pilots after becoming lost is adversely affected by confirmation bias; this constitutes a serious threat to aviation safety. However, the underlying mechanism of confirmation bias in the context of lost procedures is still unclear. METHODS: This study used scenario-based map-reading tasks in two experiments to explore the mechanism of confirmation bias in lost procedures. In Experiment 1, 34 undergraduate students and 28 flying cadets took part in a formal experiment examining the effects of verbal-imagery cognitive style, experience level, and their interaction on confirmation bias. In Experiment 2, we further explored the influence of strategy, as a core component of experience, on confirmation bias with 26 flying cadets. RESULTS: Individuals were subject to confirmation bias in lost procedures. Visualizers (M = 0.78, SD = 0.75) were almost twice as likely as verbalizers (M = 0.37, SD = 0.49) to select disconfirmatory features. Visualizers exhibited a lower degree of confirmation bias than verbalizers, and experience helped verbalizers reduce their degree of confirmation bias. The protective effect of experience lies mainly in individuals' choice of strategy. DISCUSSION: Future aviation safety campaigns could adopt a candidate selection process that focuses more on psychological attributes by testing for cognitive style, and enrich individual experience through adequate training. Such measures would reduce confirmation bias.
9
Jiang G, Chen H, Wang C, Xue P. Transformer network intelligent flight situation awareness assessment based on pilot visual gaze and operation behavior data. Int J Pattern Recogn 2022. DOI: 10.1142/s0218001422590157.
Abstract
Situational awareness is a pilot's ability to maintain command of the flight status and is of great significance to aviation safety and flight performance. According to the information processing model, the pilot's main information-processing steps are sensing, perception, and execution. Situation awareness analysis guided by visual gaze suffers from problems such as large analysis deviation and high latency, owing to the many influencing factors and complex characteristics involved. To address this, this paper proposes a neural-network-based situation awareness assessment method that integrates visual gaze and flight control data. First, simulated flight training experiments are conducted with flight cadets, collecting their eye movement, gaze tracking, flight control, and flight parameter data. Next, an event-based situation awareness analysis method is established for the flight tasks, and the situation awareness states in the experiment are evaluated from the flight parameter data. The visual gaze and flight control data are then sliced into situational awareness events to construct the data set. Finally, a transformer-based multi-channel sequence classification and analysis model is designed, in which the situation awareness characteristics of visual gaze and operating behavior are analyzed through the attention mechanism. The experimental results show that the designed neural network model classifies situation awareness on the experimental data set with 96% accuracy and can classify and evaluate the pilot's situation awareness state within 5 s.
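The abstract does not spell out the model's architecture, so the following is only a minimal PyTorch sketch of a transformer-based multi-channel sequence classifier in the same spirit: gaze and flight-control features are fused per time step and self-attention is applied over the event window. The feature dimensions, window length, and number of situation awareness classes are assumptions, and positional encoding is omitted for brevity.

```python
import torch
import torch.nn as nn

class SAClassifier(nn.Module):
    """Minimal transformer-encoder classifier over fused gaze + control sequences."""
    def __init__(self, gaze_dim=4, control_dim=6, d_model=64, n_heads=4,
                 n_layers=2, n_classes=3):
        super().__init__()
        self.proj = nn.Linear(gaze_dim + control_dim, d_model)  # fuse channels per step
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, gaze, control):
        # gaze: (batch, time, gaze_dim); control: (batch, time, control_dim)
        x = self.proj(torch.cat([gaze, control], dim=-1))
        x = self.encoder(x)               # self-attention over the event window
        return self.head(x.mean(dim=1))   # pool over time, then classify the SA state

# Example: a batch of 8 five-second windows sampled at 20 Hz (100 time steps)
model = SAClassifier()
logits = model(torch.randn(8, 100, 4), torch.randn(8, 100, 6))
print(logits.shape)  # torch.Size([8, 3])
```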
Affiliation(s)
- Guangyi Jiang
- School of Mechatronic Engineering, Xi’an Technological University, No. 2 Xuefu Middle Road, Xi’an, Shaanxi, P. R. China
- Hua Chen
- School of Mechatronic Engineering, Xi’an Technological University, No. 2 Xuefu Middle Road, Xi’an, Shaanxi, P. R. China
- Changyuan Wang
- School of Computer Science and Engineering, Xi’an Technological University, No. 2 Xuefu Middle Road, Xi’an, Shaanxi, P. R. China
- Pengxiang Xue
- School of Computer Science and Engineering, Xi’an Technological University, No. 2 Xuefu Middle Road, Xi’an, Shaanxi, P. R. China
| |
10
Visual landing based on the human depth perception in limited visibility and failure of avionic systems. Comput Intell Neurosci 2022; 2022:4320101. PMID: 35498171; PMCID: PMC9054408; DOI: 10.1155/2022/4320101.
Abstract
This paper introduces a novel visual landing system for the accurate landing of commercial aircraft using human depth perception algorithms, named the 3D Model Landing System (3DMLS). The 3DMLS uses a simulated environment for visual landing during failure of navigation aids/avionics, in adverse weather, and in limited visibility. To simulate the approach path and surrounding area, the 3DMLS combines inertial measurement unit (IMU) data with a digital elevation model (DEM). While the aircraft is within instrument landing system (ILS) range, the 3DMLS renders more environmental detail and applies a depth-of-field (DOF) depth perception algorithm to provide a clear visual landing path, which is displayed on a multifunction display in the cockpit. Because the pilot's gaze concentrates mostly on the runway location and touchdown point, the runway becomes the center of focus in the simulated environment. To display and evaluate the performance of the 3DMLS and its depth perception, an automated landing test was also designed and implemented to guide the aircraft along the runway; the flight path is derived continuously by comparing the current aircraft position with the runway position. Unity and MATLAB were used to model the 3DMLS. The accuracy and quality of the simulated environment, in terms of resolution, field of view, frames per second, and latency, were confirmed against flight simulation training device (FSTD) visual requirements. Finally, the saliency map toolbox shows that the DOF implementation increases the pilot's concentration, resulting in safe landing guidance.
11
Deniel J, Navarro J. Gaze behaviours engaged while taking over automated driving: a systematic literature review. Theor Issues Ergon Sci 2022. DOI: 10.1080/1463922x.2022.2036861.
Affiliation(s)
- Jonathan Deniel
- Laboratoire d’Étude des Mécanismes Cognitifs (EMC), Université Lumière Lyon 2, Bron, France
- Jordan Navarro
- Laboratoire d’Étude des Mécanismes Cognitifs (EMC), Université Lumière Lyon 2, Bron, France
- Institut Universitaire de France, Paris, France
| |
12
Improving airline pilots' visual scanning and manual flight performance through training on skilled eye gaze strategies. Safety 2021. DOI: 10.3390/safety7040070.
Abstract
Poor cockpit monitoring has been identified as an important contributor to aviation accidents. Improving pilots' monitoring strategies could therefore help to enhance flight safety. During two different sessions, we analyzed the flight performance and eye movements of professional airline pilots in a full-flight simulator. In a pre-training session, 20 pilots performed a manual approach scenario as pilot flying (PF) and were classified into three groups according to their flight performance: unstabilized, standard, and most accurate. The unstabilized pilots either under- or over-focused various instruments. Their number of visual scanning patterns was lower than that of pilots who managed to stabilize their approach. The most accurate pilots showed a higher perceptual efficiency, with shorter fixation times and more fixations on important primary flight instruments. Approximately 10 months later, fourteen pilots returned for a post-training session. They received a short training program and performed a similar manual approach as during the pre-training session. Seven of them, the experimental group, received individual feedback on their own performance and visual behavior during the pre-training session, together with a variety of data obtained from the most accurate pilots, including an eye-tracking video showing efficient visual scanning strategies from one of the most accurate pilots. The other seven, the control group, received general guidelines on cockpit monitoring. During the post-training session, the experimental group had better flight performance than the control group, and its visual scanning strategies became more similar to those of the most accurate pilots. In summary, our results suggest that cockpit monitoring underlies manual flight performance and that it can be improved using a training program based mainly on exposure to eye movement examples from highly accurate pilots.