101
Hayashi K, Aono S, Fujiwara M, Shiro Y, Ushida T. Difference in eye movements during gait analysis between professionals and trainees. PLoS One 2020; 15:e0232246. [PMID: 32353030 PMCID: PMC7192381 DOI: 10.1371/journal.pone.0232246]
Abstract
INTRODUCTION Observational gait analysis is a widely used skill in physical therapy. However, this skill has not previously been examined with objective assessments. The present study investigated differences in eye movements between professionals and trainees during observational gait analysis. METHODS The participants were 26 professional physical therapists and 26 physical therapist trainees. While wearing eye-tracking systems, participants were asked to describe as many of a patient's gait abnormalities as possible. The eye-movement parameters of interest were fixation count, average fixation duration, and total fixation duration. RESULTS The number of gait abnormalities described was significantly higher in professionals than in trainees, both overall and for the patient's limbs. The fixation count was significantly higher in professionals than in trainees, and the average fixation duration and total fixation duration were significantly shorter in professionals. For the trunk, by contrast, neither the number of gait abnormalities nor the eye-movement measures differed significantly between groups. CONCLUSIONS Professionals required shorter fixation durations on areas of interest than trainees while describing a greater number of gait abnormalities.
Affiliation(s)
- Kazuhiro Hayashi
- Multidisciplinary Pain Center, Aichi Medical University, Nagakute, Japan
- Department of Rehabilitation, Aichi Medical University Hospital, Nagakute, Japan
- Shuichi Aono
- Multidisciplinary Pain Center, Aichi Medical University, Nagakute, Japan
- Department of Pain Data Management, Aichi Medical University, Nagakute, Japan
- Mitsuhiro Fujiwara
- Department of Rehabilitation, Kamiiida Rehabilitation Hospital, Nagoya, Japan
- Yukiko Shiro
- Multidisciplinary Pain Center, Aichi Medical University, Nagakute, Japan
- Department of Physical Therapy, Faculty of Rehabilitation Sciences, Nagoya Gakuin University, Nagoya, Japan
- Takahiro Ushida
- Multidisciplinary Pain Center, Aichi Medical University, Nagakute, Japan
102
de Koning BB, Rop G, Paas F. Learning from split-attention materials: Effects of teaching physical and mental learning strategies. Contemporary Educational Psychology 2020. [DOI: 10.1016/j.cedpsych.2020.101873]
103
Li X, Younes R, Bairaktarova D, Guo Q. Predicting Spatial Visualization Problems' Difficulty Level from Eye-Tracking Data. Sensors (Basel) 2020; 20:E1949. [PMID: 32244360 PMCID: PMC7180473 DOI: 10.3390/s20071949]
Abstract
The difficulty level of learning tasks often needs to be considered in the teaching process. Teachers usually adjust the difficulty of exercises dynamically according to students' prior knowledge and abilities to achieve better teaching results. In e-learning, where no teacher is involved, task difficulty often exceeds students' abilities. In attempts to solve this problem, several researchers have investigated the problem-solving process using eye-tracking data. However, although most e-learning exercises take the form of fill-in-the-blank or multiple-choice questions, previous work has focused on building cognitive models from eye-tracking data collected with more open-ended problem formats, which may lead to impractical results. In this paper, we build models that predict the difficulty level of spatial visualization problems from eye-tracking data collected on multiple-choice questions. We use eye tracking and machine learning to investigate (1) differences in eye movements across questions of different difficulty levels and (2) the possibility of predicting problem difficulty from eye-tracking data. Our models achieved an average accuracy of 87.60% on eye-tracking data from questions the classifier had seen before and 72.87% on questions it had not yet seen. The results confirm that eye movements, especially fixation duration, contain essential information about question difficulty and are sufficient for building machine-learning models that predict difficulty level.
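The classification idea summarized in this abstract can be illustrated with a minimal sketch (not the authors' pipeline; the features, toy data, and classifier choice below are assumptions for illustration): per-trial fixation features are fed to a standard classifier and evaluated with cross-validation.

```python
# Hypothetical sketch: classify question difficulty from simple per-trial
# fixation features such as fixation count and mean fixation duration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy feature matrix: one row per trial -> [fixation count, mean fixation
# duration (ms), total fixation duration (ms)]; labels are difficulty levels.
X = rng.normal(loc=[[40, 250, 10000]], scale=[[10, 60, 2500]], size=(120, 3))
y = rng.integers(0, 3, size=120)           # 0 = easy, 1 = medium, 2 = hard

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # accuracy on held-out folds
print(f"mean CV accuracy: {scores.mean():.2f}")
```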
Affiliation(s)
- Xiang Li
- School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
- Rabih Younes
- Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708, USA
- Diana Bairaktarova
- Department of Engineering Education, Virginia Tech, Blacksburg, VA 24061, USA
- Qi Guo
- International School, Beijing University of Posts and Telecommunications, Beijing 100876, China
104
Witkowski M, Tomczak E, Łuczak M, Bronikowski M, Tomczak M. Fighting Left Handers Promotes Different Visual Perceptual Strategies than Right Handers: The Study of Eye Movements of Foil Fencers in Attack and Defence. BioMed Research International 2020; 2020:4636271. [PMID: 32420345 PMCID: PMC7201802 DOI: 10.1155/2020/4636271]
Abstract
Left handers have long held the edge over right handers in one-on-one interactive combat sports. Particularly in fencing, top rankings show a relatively strong overrepresentation of left handers over right handers. Whether this can be attributed to the perceptual strategies used by fencers in their bouts remains to be established. This study aims to verify whether right-handed fencers assess their opponents' behaviour based on different perceptual strategies when fencing a left- vs. a right-hander. Twelve top-level (i.e., Olympic fencers, Junior World Team Fencing Champions, and top Polish senior foil fencers) right-handed female foil fencers (aged 16-30 years) took part in the study. They performed a total of 40 actions: 10 repetitions of offensive actions (attack) and 10 repetitions of defensive actions (defence), each type of action performed under 2 conditions (right- vs. left-handed opponent). While the participants were fencing, their eye movements were recorded with a remote eye tracker (SMI ETG 2.0). In both their offensive and defensive actions, the fencers produced more fixations on the armed hand and spent more time observing the armed hand in duels with a left-handed (vs. right-handed) opponent. In defence, the guard also attracted more fixations and a longer observation time in bouts with a left hander. In duels with a right-handed opponent, the upper torso attracted a higher number of fixations in attack and in defence, and longer observation times in defence. The results may point to different perceptual strategies employed in bouts with left- vs. right-handed individuals. The findings from this study may help promote the implementation of specialized perceptual training programmes in foil fencing.
Affiliation(s)
- Ewa Tomczak
- Faculty of English, Adam Mickiewicz University, Poznań, Poland
- Maciej Łuczak
- Poznań University of Physical Education, Poznań, Poland
105
Klostermann A, Vater C, Kredel R, Hossner EJ. Perception and Action in Sports. On the Functionality of Foveal and Peripheral Vision. Front Sports Act Living 2020; 1:66. [PMID: 33344989 PMCID: PMC7739830 DOI: 10.3389/fspor.2019.00066]
Abstract
An optimal coupling between perception and action is crucial for successful performance in sports. In basketball, for example, a stable fixation on the basket helps the shooter gain precise visual information about the target and successfully throw the ball into the basket. In basketball defence situations, however, opposing players cutting to the basket can be detected with peripheral vision, as less precise information is sufficient to mark such a player. These examples illustrate that both foveal and peripheral vision can be used to acquire the information needed to solve a given task. Following this reasoning, we present the current state of our framework, which allows one to predict the functionality of foveal vision, peripheral vision, or both, depending on the current situation and task demands. In more detail, for tasks that require high motor precision, such as far-aiming tasks, empirical evidence suggests that stable foveal fixations facilitate the inhibition of alternative action parameterizations during movement planning and control. More complex situations (i.e., those with more than one relevant information source), however, require peripheral vision to process relevant information by positioning gaze at a functional location, which may actually lie in the free space between the relevant information sources. Based on these elaborations, we discuss complementarities, the role of visual attention, and practical implications.
Affiliation(s)
- Christian Vater
- Institute of Sport Science, University of Bern, Bern, Switzerland
- Ralf Kredel
- Institute of Sport Science, University of Bern, Bern, Switzerland
106
What Went Wrong for Bad Solvers during Thematic Map Analysis? Lessons Learned from an Eye-Tracking Study. ISPRS International Journal of Geo-Information 2019. [DOI: 10.3390/ijgi9010009]
Abstract
Thematic map analysis is a complex and challenging task that can end in map-user failure for many reasons. In the study reported here, we searched for differences between successful and unsuccessful map users, focusing—unlike many similar studies—on the strategies applied by users who give incorrect answers. In an eye-tracking study followed by a questionnaire survey, we collected data from 39 participants. The eye-tracking data were analyzed both qualitatively and quantitatively to compare participants' strategies from various perspectives. Unlike the results of some other studies, it turned out that unsuccessful participants show similarities that are consistent across most of the analyzed tasks. The main issues characterizing bad solvers relate to improper use of the thematic legend and an inability to focus on relevant map layout elements and adequate map content. Moreover, they differed in their general problem-solving approach; for example, they tended to choose fast, less cautious strategies. Based on these results, we developed tips that could help prevent unsuccessful participants from arriving at an incorrect answer and that could therefore benefit map-use education.
107
Dynamic task observation: A gaze-mediated complement to traditional action observation treatment? Behav Brain Res 2019; 379:112351. [PMID: 31726070 DOI: 10.1016/j.bbr.2019.112351]
Abstract
Action observation elicits changes in primary motor cortex known as motor resonance, a phenomenon thought to underpin several functions, including our ability to understand and imitate others' actions. Motor resonance is modulated not only by the observer's motor expertise, but also by their gaze behaviour. The aim of the present study was to investigate motor resonance and eye movements during observation of a dynamic goal-directed action (a golf swing) relative to an everyday reach-grasp-lift (RGL) action, commonly used in action-observation-based neurorehabilitation protocols. Skilled and novice golfers watched videos of a golf swing and an RGL action while we recorded motor-evoked potentials (MEPs) from three forearm muscles; gaze behaviour was concurrently monitored. Corticospinal excitability increased during golf-swing observation relative to baseline, but it was not modulated by expertise; no such changes were observed for the RGL task. MEP amplitudes were related to participants' gaze behaviour: in the RGL condition, target viewing was associated with lower MEP amplitudes, whereas in the golf condition, MEP amplitudes were positively correlated with time spent looking at the effector or neighbouring regions. Viewing a dynamic action such as the golf swing may enhance action observation treatment, especially when concurrent physical practice is not possible.
108
Lewis MM. Cognitive Load, Anxiety, and Performance During a Simulated Subarachnoid Block. Clin Simul Nurs 2019. [DOI: 10.1016/j.ecns.2019.07.004]
109
Mason B, Rau MA, Nowak R. Cognitive Task Analysis for Implicit Knowledge About Visual Representations With Similarity Learning Methods. Cogn Sci 2019; 43:e12744. [PMID: 31529528 DOI: 10.1111/cogs.12744]
Abstract
Visual representations are prevalent in STEM instruction. To benefit from visuals, students need representational competencies that enable them to see meaningful information. Most research has focused on explicit conceptual representational competencies, but implicit perceptual competencies might also allow students to efficiently see meaningful information in visuals. Most common methods to assess students' representational competencies rely on verbal explanations or assume explicit attention. However, because perceptual competencies are implicit and not necessarily verbally accessible, these methods are ill-equipped to assess them. We address these shortcomings with a method that draws on similarity learning, a machine learning technique that detects visual features that account for participants' responses to triplet comparisons of visuals. In Experiment 1, 614 chemistry students judged the similarity of Lewis structures and in Experiment 2, 489 students judged the similarity of ball-and-stick models. Our results showed that our method can detect visual features that drive students' perception and suggested that students' conceptual knowledge about molecules informed perceptual competencies through top-down processes. Furthermore, Experiment 2 tested whether we can improve the efficiency of the method with active sampling. Results showed that random sampling yielded higher accuracy than active sampling for small sample sizes. Together, the experiments provide the first method to assess students' perceptual competencies implicitly, without requiring verbalization or assuming explicit visual attention. These findings have implications for the design of instructional interventions that help students acquire perceptual representational competencies.
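As a rough illustration of similarity learning from triplet judgments, the sketch below uses a generic hinge-loss ordinal embedding rather than the authors' specific algorithm; the items, triplets, and hyperparameters are invented. Item coordinates are adjusted so that items judged more similar end up closer together.

```python
# Hypothetical sketch of ordinal embedding from triplet judgments: each triplet
# (a, b, c) means "item a was judged more similar to b than to c".
import numpy as np

rng = np.random.default_rng(1)
n_items, dim, margin, lr = 20, 2, 1.0, 0.05
X = rng.normal(size=(n_items, dim))            # learned item embeddings

# Toy triplets; in the experiments these would come from students' responses.
triplets = [tuple(rng.choice(n_items, size=3, replace=False)) for _ in range(500)]

for epoch in range(200):
    for a, b, c in triplets:
        d_ab = X[a] - X[b]
        d_ac = X[a] - X[c]
        # hinge constraint: want ||a - b||^2 + margin <= ||a - c||^2
        if d_ab @ d_ab + margin > d_ac @ d_ac:
            X[a] -= lr * (2 * d_ab - 2 * d_ac)   # gradient of the violated hinge
            X[b] -= lr * (-2 * d_ab)
            X[c] -= lr * (2 * d_ac)

print(X[:3])  # low-dimensional coordinates reflecting perceived similarity
```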
Affiliation(s)
- Blake Mason
- Department of Electrical and Computer Engineering, University of Wisconsin
- Martina A Rau
- Department of Educational Psychology, University of Wisconsin
- Robert Nowak
- Department of Electrical and Computer Engineering, University of Wisconsin
110
DeCouto B, Robertson CT, Lewis D, Mann DTY. The speed of perception: the effects of over-speed video training on pitch recognition in collegiate softball players. Cogn Process 2019; 21:77-93. [PMID: 31489521 DOI: 10.1007/s10339-019-00930-1]
Abstract
During interceptive motor tasks, experts demonstrate distinct visual search behavior (relative to novices) that reflects information extraction from optimal environmental cues, which subsequently aids anticipatory movements. While some forms of visual training have been employed in sport, over-speed video training is rarely applied in perceptual-cognitive sport contexts. The purpose of the present study was to determine whether over-speed video training can enhance visual information processing and augment visual behavior in a pitch-recognition task. Twelve collegiate softball players were recruited for the study. A between-subjects, repeated-measures design was used to assess changes in participants' pitch recognition on a video-based occlusion task after one of two training interventions: (A) over-speed video training (n = 6) or (B) regular video training (n = 6). Both interventions required individuals to view 400 videos of different pitches over the span of 10 days. The over-speed group viewed the videos at gradually increasing speeds (+0.05× each day). Performance (i.e., identifying pitch type and location), quiet-eye duration (i.e., total QE, QE-early, and QE-late), and cortical activation (i.e., alpha wave activity/asymmetry; F3/F4 and P7/P8) were measured during the pitch-recognition tasks. Results showed significant performance improvements across groups, but no differences between groups. Both interventions were associated with a reduction in alpha wave activity for P8, an increase in alpha activity for F3, and a significant increase in QE-late. An increase in QE-late was associated with a decrease in P7/P8 alpha asymmetry and with improvements in pitch-type recognition. Consistent with the extant literature, our results support the importance of a later QE offset for successful performance on perceptual tasks, potentially extending to perceptual-motor tasks. Although participants in the over-speed condition did not show significantly larger improvements in performance than controls, this study highlights the association between QE and brain activity reflective of expertise.
Affiliation(s)
- Brady DeCouto
- Department of Kinesiology, Jacksonville University, Jacksonville, FL, 32211, USA
- Doug Lewis
- Department of Psychology, Jacksonville University, Jacksonville, FL, 32211, USA
- Derek T Y Mann
- Department of Kinesiology, Jacksonville University, Jacksonville, FL, 32211, USA
111
Fewer fixations of longer duration? Expert gaze behavior revisited. German Journal of Exercise and Sport Research 2019. [DOI: 10.1007/s12662-019-00616-y]
112
Williams LH, Drew T. What do we know about volumetric medical image interpretation?: a review of the basic science and medical image perception literatures. Cogn Res Princ Implic 2019; 4:21. [PMID: 31286283 PMCID: PMC6614227 DOI: 10.1186/s41235-019-0171-6]
Abstract
Interpretation of volumetric medical images represents a rapidly growing proportion of the workload in radiology. However, relatively little is known about the strategies that best guide search behavior when looking for abnormalities in volumetric images. Although there is extensive literature on two-dimensional medical image perception, it is an open question whether the conclusions drawn from these images can be generalized to volumetric images. Importantly, volumetric images have distinct characteristics (e.g., scrolling through depth, smooth-pursuit eye-movements, motion onset cues, etc.) that should be considered in future research. In this manuscript, we will review the literature on medical image perception and discuss relevant findings from basic science that can be used to generate predictions about expertise in volumetric image interpretation. By better understanding search through volumetric images, we may be able to identify common sources of error, characterize the optimal strategies for searching through depth, or develop new training and assessment techniques for radiology residents.
113
Waite S, Grigorian A, Alexander RG, Macknik SL, Carrasco M, Heeger DJ, Martinez-Conde S. Analysis of Perceptual Expertise in Radiology - Current Knowledge and a New Perspective. Front Hum Neurosci 2019; 13:213. [PMID: 31293407 PMCID: PMC6603246 DOI: 10.3389/fnhum.2019.00213]
Abstract
Radiologists rely principally on visual inspection to detect, describe, and classify findings in medical images. As most interpretive errors in radiology are perceptual in nature, understanding the path to radiologic expertise during image analysis is essential for educating future generations of radiologists. We review the perceptual tasks and challenges in radiologic diagnosis, discuss models of radiologic image perception, consider the application of perceptual learning methods in medical training, and suggest a new approach to understanding perceptual expertise. Specific principled enhancements to educational practices in radiology promise to deepen perceptual expertise among radiologists, with the goal of improving training and reducing medical error.
Affiliation(s)
- Stephen Waite
- Department of Radiology, SUNY Downstate Medical Center, Brooklyn, NY, United States
- Arkadij Grigorian
- Department of Radiology, SUNY Downstate Medical Center, Brooklyn, NY, United States
- Robert G. Alexander
- Department of Ophthalmology, SUNY Downstate Medical Center, Brooklyn, NY, United States
- Department of Neurology, SUNY Downstate Medical Center, Brooklyn, NY, United States
- Department of Physiology/Pharmacology, SUNY Downstate Medical Center, Brooklyn, NY, United States
- Stephen L. Macknik
- Department of Ophthalmology, SUNY Downstate Medical Center, Brooklyn, NY, United States
- Department of Neurology, SUNY Downstate Medical Center, Brooklyn, NY, United States
- Department of Physiology/Pharmacology, SUNY Downstate Medical Center, Brooklyn, NY, United States
- Marisa Carrasco
- Department of Psychology and Center for Neural Science, New York University, New York, NY, United States
- David J. Heeger
- Department of Psychology and Center for Neural Science, New York University, New York, NY, United States
- Susana Martinez-Conde
- Department of Ophthalmology, SUNY Downstate Medical Center, Brooklyn, NY, United States
- Department of Neurology, SUNY Downstate Medical Center, Brooklyn, NY, United States
- Department of Physiology/Pharmacology, SUNY Downstate Medical Center, Brooklyn, NY, United States
114
Prieto-Pinto L, Lara-Díaz MF, Garzón-Orjuela N, Herrera D, Páez-Canro C, Reyes JH, González-Gordon L, Jiménez-Murcia V, Eslava-Schmalbach J. Effectiveness assessment of maternal and neonatal health video clips in knowledge transfer using neuromarketing tools: A randomized crossover trial. PLoS One 2019; 14:e0215561. [PMID: 31067282 PMCID: PMC6505891 DOI: 10.1371/journal.pone.0215561]
Abstract
Audiovisual educational material has been used effectively as a knowledge translation strategy in patient education. Given the need to reduce maternal mortality rates, 12 video clips containing maternal and neonatal health information were designed based on the results of a previous systematic review (SR). Their content was formulated from clinical practice guideline recommendations and validated following a formal consensus methodology. This study evaluated the effectiveness of knowledge transfer from the 12 video clips in terms of attention, emotional response, and recall, using neuroscience tools. In a randomized crossover trial, 155 subjects (pregnant women, non-pregnant women, and men) received random sequences of 13 video clips, including a control video clip. Participants' attention levels were evaluated through eye tracking, their emotional reactions were monitored via electrodermal activity and pupillary diameter, and their recall was tested with a questionnaire. Differences between groups and between the experimental and control video clips were evaluated using analysis-of-variance models that accounted for period, sequence, and carry-over effects. Results revealed that fixation length was greater in women than in men, while the greatest emotional effects occurred in men. All three groups had good recall results, without significant differences between them. Although sequencing did influence attentional processes, no carry-over effect was demonstrated. A differential effect was, however, noted among video clips in all three outcomes after adjusting for group, level of education, and having had children. The control clip generated less attention, emotional reaction, and recall than the experimental video clips. The video clips on maternal and neonatal health were thus shown to be effective in the transfer and comprehension of information, and cognitive neuroscience techniques are useful for evaluating knowledge translation strategies delivered in audiovisual formats.
Affiliation(s)
- Laura Prieto-Pinto
- Hospital Universitario Nacional de Colombia, Facultad de Medicina, Universidad Nacional de Colombia, Bogotá, Colombia
- María Fernanda Lara-Díaz
- Department of Human Communication, Facultad de Medicina, Universidad Nacional de Colombia, Bogotá, Colombia
- Nathaly Garzón-Orjuela
- Hospital Universitario Nacional de Colombia, Facultad de Medicina, Universidad Nacional de Colombia, Bogotá, Colombia
- Technology Development Center, Sociedad Colombiana de Anestesiología y Reanimación–S.C.A.R.E., Bogotá, Colombia
- Dayanne Herrera
- Technology Development Center, Sociedad Colombiana de Anestesiología y Reanimación–S.C.A.R.E., Bogotá, Colombia
- Department of Psychology, Facultad de Ciencias Humanas, Universidad Nacional de Colombia, Bogotá, Colombia
- Carol Páez-Canro
- Hospital Universitario Nacional de Colombia, Facultad de Medicina, Universidad Nacional de Colombia, Bogotá, Colombia
- Jorge Humberto Reyes
- Technology Development Center, Sociedad Colombiana de Anestesiología y Reanimación–S.C.A.R.E., Bogotá, Colombia
- Lina González-Gordon
- Hospital Universitario Nacional de Colombia, Facultad de Medicina, Universidad Nacional de Colombia, Bogotá, Colombia
- Viviana Jiménez-Murcia
- Technology Development Center, Sociedad Colombiana de Anestesiología y Reanimación–S.C.A.R.E., Bogotá, Colombia
- Javier Eslava-Schmalbach
- Hospital Universitario Nacional de Colombia, Facultad de Medicina, Universidad Nacional de Colombia, Bogotá, Colombia
- Technology Development Center, Sociedad Colombiana de Anestesiología y Reanimación–S.C.A.R.E., Bogotá, Colombia
115
Król ME, Król M. A novel machine learning analysis of eye-tracking data reveals suboptimal visual information extraction from facial stimuli in individuals with autism. Neuropsychologia 2019; 129:397-406. [PMID: 31071324 DOI: 10.1016/j.neuropsychologia.2019.04.022]
Abstract
We propose a new method of quantifying the utility of visual information extracted from facial stimuli for emotion recognition. The stimuli are convolved with a Gaussian fixation distribution estimate, revealing more information in those facial regions the participant fixated on. Feeding this convolution to a machine-learning emotion recognition algorithm yields an error measure (between actual and predicted emotions) reflecting the quality of extracted information. We recorded the eye-movements of 21 participants with autism and 23 age-, sex- and IQ-matched typically developing participants performing three facial analysis tasks: free-viewing, emotion recognition, and brow-mouth width comparison. In the emotion recognition task, fixations of participants with autism were positioned on lower areas of the faces and were less focused on the eyes compared to the typically developing group. Additionally, the utility of information extracted by them in the emotion recognition task was lower. Thus, the emotion recognition deficit typical in autism can be at least partly traced to the earliest stage of face processing, i.e. to the extraction of visual information via eye-fixations.
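The stimulus-weighting idea described here can be sketched as follows (a minimal illustration under assumed parameters, not the authors' implementation): fixations are turned into a Gaussian-smoothed density map that reveals the image mainly where the participant looked, and the masked image is what a recognition model would then receive.

```python
# Hypothetical sketch: weight a face stimulus by a Gaussian-smoothed fixation
# density map so regions fixated longer retain more information. Values are made up.
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.random.rand(240, 240)                 # stand-in for a face stimulus
fixations = [(80, 120, 300), (150, 110, 450)]    # (row, col, duration in ms)

density = np.zeros_like(image)
for r, c, dur in fixations:
    density[r, c] += dur                         # weight by fixation duration
density = gaussian_filter(density, sigma=20)     # Gaussian "fixation window"
density /= density.max()                         # normalise to [0, 1]

revealed = image * density   # input to the emotion-recognition model;
                             # prediction error then indexes information quality
```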
Affiliation(s)
- Magdalena Ewa Król
- Wrocław Faculty of Psychology, SWPS University of Social Sciences and Humanities in Wrocław, Wrocław, Poland
- Michał Król
- School of Social Sciences, University of Manchester, Manchester, United Kingdom
116
Brunyé TT, Nallamothu BK, Elmore JG. Eye-tracking for assessing medical image interpretation: A pilot feasibility study comparing novice vs expert cardiologists. Perspectives on Medical Education 2019; 8:65-73. [PMID: 30977060 PMCID: PMC6468026 DOI: 10.1007/s40037-019-0505-6]
Abstract
INTRODUCTION As specialized medical professionals such as radiologists, pathologists, and cardiologists gain education and experience, their diagnostic efficiency and accuracy change, and they show altered eye movement patterns during medical image interpretation. Existing research in this area is limited to interpretation of static medical images, such as digitized whole slide biopsies, making it difficult to understand how expertise development might manifest during dynamic image interpretation, such as with angiograms or volumetric scans. METHODS A two-group (novice, expert) comparative pilot study examined the feasibility and utility of tracking and interpreting eye movement patterns while cardiologists viewed video-based coronary angiograms. A non-invasive eye tracking system recorded cardiologists' (n = 8) visual behaviour while they viewed and diagnosed a series of eight angiogram videos. Analyses assessed frame-by-frame video navigation behaviour, eye fixation behaviour, and resulting diagnostic decision making. RESULTS Relative to novices, expert cardiologists demonstrated shorter and less variable video review times, fewer eye fixations and saccadic eye movements, and less time spent paused on individual video frames. Novices showed repeated eye fixations on critical image frames and regions, though these were not predictive of accurate diagnostic decisions. DISCUSSION These preliminary results demonstrate interpretive decision errors among novices, suggesting they identify and process critical diagnostic features, but sometimes fail to accurately interpret those features. Results also showcase the feasibility of tracking and understanding eye movements during video-based coronary angiogram interpretation and suggest that eye tracking may be valuable for informing assessments of competency progression during medical education and training.
Affiliation(s)
- Tad T. Brunyé
- Center for Applied Brain & Cognitive Sciences, Tufts University, Medford, MA, USA
- Joann G. Elmore
- Department of Medicine, University of Washington, Seattle, WA, USA
117
Li STK, Chung STL, Hsiao JH. Music-reading expertise modulates the visual span for English letters but not Chinese characters. J Vis 2019; 19:10. [PMID: 30952161 DOI: 10.1167/19.4.10]
Abstract
Recent research has suggested that the visual span in stimulus identification can be enlarged through perceptual learning. Since both English and music reading involve left-to-right sequential symbol processing, music-reading experience may enhance symbol identification through perceptual learning particularly in the right visual field (RVF). In contrast, as Chinese can be read in all directions, and components of Chinese characters do not consistently form a left-right structure, this hypothesized RVF enhancement effect may be limited in Chinese character identification. To test these hypotheses, here we recruited musicians and nonmusicians who read Chinese as their first language (L1) and English as their second language (L2) to identify music notes, English letters, Chinese characters, and novel symbols (Tibetan letters) presented at different eccentricities and visual field locations on the screen while maintaining central fixation. We found that in English letter identification, significantly more musicians achieved above-chance performance in the center-RVF locations than nonmusicians. This effect was not observed in Chinese character or novel symbol identification. We also found that in music note identification, musicians outperformed nonmusicians in accuracy in the center-RVF condition, consistent with the RVF enhancement effect in the visual span observed in English-letter identification. These results suggest that the modulation of music-reading experience on the visual span for stimulus identification depends on the similarities in the perceptual processes involved.
Affiliation(s)
- Sara T K Li
- Department of Psychology, University of Hong Kong, Hong Kong SAR
- Susana T L Chung
- School of Optometry, University of California Berkeley, Berkeley, CA, USA
- Janet H Hsiao
- Department of Psychology, University of Hong Kong, Hong Kong SAR
118
Kok EM. Eye tracking: the silver bullet of competency assessment in medical image interpretation? Perspectives on Medical Education 2019; 8:63-64. [PMID: 30949975 PMCID: PMC6468029 DOI: 10.1007/s40037-019-0506-5]
Affiliation(s)
- Ellen M Kok
- Department of Education, Faculty of Social Sciences, Utrecht University, Utrecht, The Netherlands.
119
Moore LJ, Harris DJ, Sharpe BT, Vine SJ, Wilson MR. Perceptual-cognitive expertise when refereeing the scrum in rugby union. J Sports Sci 2019; 37:1778-1786. [DOI: 10.1080/02640414.2019.1594568]
Affiliation(s)
- Lee J. Moore
- Department for Health, University of Bath, Bath, UK
- David J. Harris
- College of Life and Environmental Sciences, University of Exeter, Exeter, UK
- Ben T. Sharpe
- Department of Sport and Exercise Sciences, University of Chichester, Chichester, UK
- Samuel J. Vine
- College of Life and Environmental Sciences, University of Exeter, Exeter, UK
- Mark R. Wilson
- College of Life and Environmental Sciences, University of Exeter, Exeter, UK
120
Brunyé TT, Drew T, Weaver DL, Elmore JG. A review of eye tracking for understanding and improving diagnostic interpretation. Cognitive Research: Principles and Implications 2019; 4:7. [PMID: 30796618 PMCID: PMC6515770 DOI: 10.1186/s41235-019-0159-2]
Abstract
Inspecting digital imaging for primary diagnosis introduces perceptual and cognitive demands for physicians tasked with interpreting visual medical information and arriving at appropriate diagnoses and treatment decisions. The process of medical interpretation and diagnosis involves a complex interplay between visual perception and multiple cognitive processes, including memory retrieval, problem-solving, and decision-making. Eye-tracking technologies are becoming increasingly available in the consumer and research markets and provide novel opportunities to learn more about the interpretive process, including differences between novices and experts, how heuristics and biases shape visual perception and decision-making, and the mechanisms underlying misinterpretation and misdiagnosis. The present review provides an overview of eye-tracking technology, the perceptual and cognitive processes involved in medical interpretation, how eye tracking has been employed to understand medical interpretation and promote medical education and training, and some of the promises and challenges for future applications of this technology.
Affiliation(s)
- Tad T Brunyé
- Center for Applied Brain and Cognitive Sciences, Tufts University, 200 Boston Ave., Suite 3000, Medford, MA, 02155, USA
- Trafton Drew
- Department of Psychology, University of Utah, 380 1530 E, Salt Lake City, UT, 84112, USA
- Donald L Weaver
- Department of Pathology and University of Vermont Cancer Center, University of Vermont, 111 Colchester Ave., Burlington, VT, 05401, USA
- Joann G Elmore
- Department of Medicine, David Geffen School of Medicine at UCLA, University of California at Los Angeles, 10833 Le Conte Ave., Los Angeles, CA, 90095, USA
121
Kim Y, Chang T, Park I. Visual Scanning Behavior and Attention Strategies for Shooting Among Expert Versus Collegiate Korean Archers. Percept Mot Skills 2019; 126:530-545. [PMID: 30773998 DOI: 10.1177/0031512519829624]
Abstract
This study analyzed differences in visual scanning behavior and resistance to distractions between Olympic and collegiate archers. The experiment required the participants to watch a test film comprising six stages corresponding to the phases of an archery performance. The recording emulated the archer's point of view. During initial phases of shooting, Olympic archers demonstrated more frequent and longer fixations than did their collegiate counterparts, whereas during the later phases of shooting, the groups' visual scanning patterns did not differ significantly. In a second experiment within this study, auditory and visual distractors led Olympic archers to exhibit fewer fixations of longer duration and less eye movement, regardless of the type of distraction. Thus, in each experiment, Korean national-team archers modified their attentional strategies more efficiently than collegiate archers, expanding and narrowing their focused attention based on task demands. These findings provide fundamental information on the nature of expert shooters' visual scanning patterns and have implications for developing training protocols for aspiring athletes.
Affiliation(s)
- Youngsook Kim
- Department of Sport Science, Korea Institute of Sport Science, Seoul, South Korea
- Taiseok Chang
- Department of Sport Science, Sungkyunkwan University, Suwon, South Korea
- Inchon Park
- Department of Health and Kinesiology, Texas A&M University, College Station, TX, USA
122
Tag-based information access in image collections: insights from log and eye-gaze analyses. Knowl Inf Syst 2019. [DOI: 10.1007/s10115-019-01343-4]
123
Klostermann A. Especial skill vs. quiet eye duration in basketball free throw: Evidence for the inhibition of competing task solutions. Eur J Sport Sci 2019; 19:964-971. [DOI: 10.1080/17461391.2019.1571113]
124
Guidetti G, Guidetti R, Sgalla RA. The saccadic training for driving safety. Hearing, Balance and Communication 2019. [DOI: 10.1080/21695717.2018.1540233]
Affiliation(s)
- Giorgio Guidetti
- Vertigo Center – Poliambulatorio Chirurgico Modenese, Modena, Italy
- Roberto Antonio Sgalla
- Director for Special Departments of the Italian State Police, Ministry of the Interior, Italy
125
Roach VA, Fraser GM, Kryklywy JH, Mitchell DGV, Wilson TD. Guiding Low Spatial Ability Individuals through Visual Cueing: The Dual Importance of Where and When to Look. Anatomical Sciences Education 2019; 12:32-42. [PMID: 29603656 DOI: 10.1002/ase.1783]
Abstract
Research suggests that spatial ability may predict success in complex disciplines including anatomy, where mastery requires a firm understanding of the intricate relationships that occur along the course of veins, arteries, and nerves as they traverse through and around bones, muscles, and organs. Debate exists on the malleability of spatial ability, and some suggest that it can be enhanced through training. It is hypothesized that spatial ability can be trained in low-performing individuals through visual guidance. To address this, training was delivered through a visual guidance protocol based on the eye-movement patterns of high-performing individuals, collected via eye tracking as they completed an Electronic Mental Rotations Test (EMRT). The effects of guidance were evaluated in 33 individuals with low mental rotation ability using a counterbalanced crossover design. Individuals were placed in one of two treatment groups (late or early guidance) and completed both a guided and an unguided EMRT. A third group (no guidance/control) completed two unguided EMRTs. All groups demonstrated an increase in EMRT scores on their second test (P < 0.001); however, an interaction was observed between treatment and test iteration (P = 0.024): the effect of guidance on scores was contingent on when the guidance was applied. When guidance was applied early, scores were significantly greater than expected (P = 0.028). These findings suggest that by guiding individuals with low mental rotation ability "where" to look early in training, better search approaches may be adopted, yielding improvements in spatial reasoning scores. It is proposed that visual guidance may be applied in spatial fields such as STEMM (science, technology, engineering, mathematics and medicine), surgery, and anatomy to improve students' interpretation of visual content.
Affiliation(s)
- Victoria A Roach
- Department of Biomedical Sciences, William Beaumont School of Medicine, Oakland University, Rochester, Michigan
- Graham M Fraser
- Cardiovascular Research Group, Division of BioMedical Sciences, Faculty of Medicine, Memorial University of Newfoundland, St. John's, Newfoundland, Canada
- James H Kryklywy
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- Derek G V Mitchell
- Department of Psychiatry, Schulich School of Medicine and Dentistry, Brain and Mind Institute, London, Ontario, Canada
- Department of Psychology, The University of Western Ontario, London, Ontario, Canada
- Department of Anatomy and Cell Biology, Corps for Research of Instructional and Perceptual Technologies, Schulich School of Medicine and Dentistry, The University of Western Ontario, London, Ontario, Canada
- Timothy D Wilson
- Department of Anatomy and Cell Biology, Corps for Research of Instructional and Perceptual Technologies, Schulich School of Medicine and Dentistry, The University of Western Ontario, London, Ontario, Canada
126
Prytz EG, Norén C, Jonson CO. Fixation Differences in Visual Search of Accident Scenes by Novices and Expert Emergency Responders. Human Factors 2018; 60:1219-1227. [PMID: 30102566 DOI: 10.1177/0018720818788142]
Abstract
OBJECTIVE We sought to investigate whether expert-novice differences in visual search behavior found in other domains also apply to accident scenes and the emergency response domain. BACKGROUND Emergency service professionals typically arrive at accidents only after being dispatched when a civilian witness has called an emergency dispatch number. Differences in visual search behavior between the civilian witness (usually a novice in terms of emergency response) and the professional first responders (experts at emergency response) could thus result in the experts being given insufficient or erroneous information, which would lead them to arrive unprepared for the actual situation. METHOD A between-subjects, controlled eye-tracking experiment with 20 novices and 17 experts (rescue and ambulance service personnel) was conducted to explore expert-novice differences in visual search of accident and control images. RESULTS The results showed that the experts spent more time looking at task-relevant areas of the accident images than novices did, as predicted by the information reduction hypothesis. The longer time was due to longer fixation durations rather than a larger fixation count. CONCLUSION Expert-novice differences in visual search are present in the emergency domain. Given that this domain is essential to saving lives and also relies heavily on novices as the first link in the chain of response, such differences deserve further exploration. APPLICATION Visual search behavior from experts can be used for training purposes. Eye-tracking studies of novices can be used to inform the design of emergency dispatch interviews.
127
Brams S, Hooge ITC, Ziv G, Dauwe S, Evens K, De Wolf T, Levin O, Wagemans J, Helsen WF. Does effective gaze behavior lead to enhanced performance in a complex error-detection cockpit task? PLoS One 2018; 13:e0207439. [PMID: 30462695 PMCID: PMC6248957 DOI: 10.1371/journal.pone.0207439]
Abstract
The purpose of the current study was to examine the relationship between expertise, performance, and gaze behavior in a complex error-detection cockpit task. Twenty-four pilots and 26 non-pilots viewed video-clips from a pilot's viewpoint and were asked to detect malfunctions in the cockpit instrument panel. Compared to non-pilots, pilots detected more malfunctioning instruments, had shorter dwell times on the instruments, made more transitions, visited task-relevant areas more often, and dwelled longer on the areas between the instruments. These results provide evidence for three theories that explain underlying processes for expert performance: The long-term working memory theory, the information-reduction hypothesis, and the holistic model of image perception. In addition, the results for generic attentional skills indicated a higher capability to switch between global and local information processing in pilots compared to non-pilots. Taken together, the results suggest that gaze behavior as well as other generic skills may provide important information concerning underlying processes that can explain successful performance during flight in expert pilots.
Affiliation(s)
- Stephanie Brams
- Movement Control & Neuroplasticity Research Group, Department of Kinesiology, KU Leuven, Leuven, Belgium
- Ignace T. C. Hooge
- Experimental Psychology, Department of Psychology, Helmholtz Instituut, Utrecht University, Utrecht, The Netherlands
- Gal Ziv
- The Academic College at Wingate, Wingate Institute, Netanya, Israel
- Siska Dauwe
- Movement Control & Neuroplasticity Research Group, Department of Kinesiology, KU Leuven, Leuven, Belgium
- Ken Evens
- CAE Oxford Aviation Academy, Brussels, Belgium
- Oron Levin
- Movement Control & Neuroplasticity Research Group, Department of Kinesiology, KU Leuven, Leuven, Belgium
- Johan Wagemans
- Laboratory of Experimental Psychology, Department of Brain & Cognition, KU Leuven, Leuven, Belgium
- Werner F. Helsen
- Movement Control & Neuroplasticity Research Group, Department of Kinesiology, KU Leuven, Leuven, Belgium
128
Fichtel E, Lau N, Park J, Henrickson Parker S, Ponnala S, Fitzgibbons S, Safford SD. Eye tracking in surgical education: gaze-based dynamic area of interest can discriminate adverse events and expertise. Surg Endosc 2018; 33:2249-2256. [PMID: 30341656 DOI: 10.1007/s00464-018-6513-5]
Abstract
BACKGROUND Eye-gaze metrics derived from areas of interest (AOIs) have been suggested to be effective for surgical skill assessment. However, prior research is mostly based on static images and simulated tasks that may not translate to complex and dynamic surgical scenes. Eye-gaze metrics must therefore advance to account for changes in the location of important information during a surgical procedure. METHODS We developed a dynamic AOI generation technique based on the eye gaze of an expert viewing surgery videos. This AOI was updated as the expert's gaze moved with changes in the surgical scene. The technique was evaluated in an experiment in which a total of 20 attendings and residents viewed 10 videos containing adverse events and another 10 without them. RESULTS Dwell time percentage (i.e., gaze duration) inside the AOI differentiated video type (U = 13508.5, p < 0.001) between videos with (Mdn = 16.75) and without (Mdn = 19.95) adverse events. This metric also differentiated participant group (U = 14029.5, p < 0.001) between attendings (Mdn = 15.45) and residents (Mdn = 19.80). This indicates that dynamic AOIs reflecting the expert's eye gaze were able to differentiate expertise and the presence of unexpected adverse events. CONCLUSION The dynamic AOI generation technique produced dynamic AOIs for deriving eye-gaze metrics that were sensitive to expertise level and event characteristics.
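A minimal sketch of the dwell-time-percentage metric for a dynamic AOI follows; the AOI radius, gaze data, and circular-AOI assumption are illustrative choices, not the study's parameters.

```python
# Hypothetical sketch: the AOI follows the expert's gaze frame by frame, and we
# score how much of another viewer's gaze falls inside it.
import numpy as np

rng = np.random.default_rng(2)
n_frames, radius = 600, 80                       # AOI radius in pixels (assumed)

expert_gaze = rng.uniform(0, 1000, size=(n_frames, 2))     # dynamic AOI centres
viewer_gaze = expert_gaze + rng.normal(0, 60, size=(n_frames, 2))

dist = np.linalg.norm(viewer_gaze - expert_gaze, axis=1)
dwell_pct = 100 * np.mean(dist <= radius)        # % of samples inside the AOI
print(f"dwell time inside dynamic AOI: {dwell_pct:.1f}%")
```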
Affiliation(s)
- Eric Fichtel
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 546 Whittemore Hall, 1185 Perry Street, Blacksburg, VA, 24061, USA
- Nathan Lau
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 546 Whittemore Hall, 1185 Perry Street, Blacksburg, VA, 24061, USA
- Juyeon Park
- Virginia Tech Carilion School of Medicine and Carilion Clinic, Virginia Tech, Roanoke, USA
- Siddarth Ponnala
- Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, USA
- Shimae Fitzgibbons
- Department of Surgery, MedStar Georgetown University Hospital, Washington, DC, USA
- Shawn D Safford
- Department of Surgery, Virginia Tech Carilion School of Medicine and Carilion Clinic, Virginia Tech, Roanoke, USA
129
Hooge ITC, Niehorster DC, Nyström M, Andersson R, Hessels RS. Is human classification by experienced untrained observers a gold standard in fixation detection? Behav Res Methods 2018; 50:1864-1881. [PMID: 29052166 PMCID: PMC7875941 DOI: 10.3758/s13428-017-0955-x]
Abstract
Manual classification is still a common method for evaluating event detection algorithms. The procedure is often as follows: two or three human coders and the algorithm classify a significant quantity of data. In the gold-standard approach, deviations from the human classifications are considered to be mistakes of the algorithm. However, little is known about human classification in eye tracking. To what extent do the classifications from a larger group of human coders agree? Twelve experienced but untrained human coders classified fixations in 6 min of adult and infant eye-tracking data. When using the sample-based Cohen's kappa, the classifications of the humans agreed nearly perfectly. However, we found substantial differences between the classifications when we examined fixation duration and number of fixations. We hypothesized that the human coders applied different (implicit) thresholds and selection rules. Indeed, when spatially close fixations were merged, most of the classification differences disappeared. On the basis of the nature of these intercoder differences, we concluded that fixation classification by experienced untrained human coders is not a gold standard. To bridge the gap between agreement measures (e.g., Cohen's kappa) and eye-movement parameters (fixation duration, number of fixations), we suggest the use of the event-based F1 score and two new measures: the relative timing offset (RTO) and the relative timing deviation (RTD).
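The gap between sample-level agreement and event-level parameters can be illustrated with a small constructed example (not data from the study): two coders disagree on a single sample, yet their fixation counts and mean durations diverge.

```python
# Hypothetical illustration: high sample-based Cohen's kappa, different events.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# 60 gaze samples labelled fixation (1) or not (0); coder A splits the last
# fixation with a one-sample gap, coder B does not.
coder_a = np.r_[np.ones(20), np.zeros(3), np.ones(10), np.zeros(1), np.ones(26)].astype(int)
coder_b = np.r_[np.ones(20), np.zeros(3), np.ones(37)].astype(int)

print("sample-based Cohen's kappa:", round(cohen_kappa_score(coder_a, coder_b), 2))

def fixation_events(labels):
    # runs of consecutive fixation samples, as (start, end) sample indices
    starts = np.flatnonzero(np.diff(np.r_[0, labels]) == 1)
    ends = np.flatnonzero(np.diff(np.r_[labels, 0]) == -1)
    return list(zip(starts, ends))

for name, labels in (("A", coder_a), ("B", coder_b)):
    events = fixation_events(labels)
    durations = [e - s + 1 for s, e in events]
    print(f"coder {name}: {len(events)} fixations, mean duration "
          f"{np.mean(durations):.1f} samples")
```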
Affiliation(s)
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab, Lund University, Helgonabacken 12, 22362, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Helgonabacken 12, 22362, Lund, Sweden
- Richard Andersson
- Eye Information Group, IT University of Copenhagen, Copenhagen, Denmark
- Department of Philosophy and Cognitive Sciences, Lund University, Lund, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- Developmental Psychology, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
130
Weichselbaum H, Huber-Huber C, Ansorge U. Attention capture is temporally stable: Evidence from mixed-model correlations. Cognition 2018; 180:206-224. [PMID: 30081374 DOI: 10.1016/j.cognition.2018.07.013] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2017] [Revised: 07/22/2018] [Accepted: 07/26/2018] [Indexed: 10/28/2022]
Abstract
Studies on domain-specific expertise in visual attention, on its cognitive enhancement, or on its pathology require individually reliable measurement of visual attention. Yet, the reliability of the most widely used reaction time (RT) differences measuring visual attention is in doubt or unknown. Therefore, we used novel methods of analysis based on linear mixed models (LMMs) and tested the temporal stability, as one index of reliability, of three attentional RT effects in the popular additional-singleton research protocol: (1) bottom-up, (2) top-down, and (3) memory-driven (intertrial priming) influences on attention capture effects. Participants searched for a target having one specific color in most (Exp. 1) or all (Exp. 2) trials. Together with the target, in half (Exp. 1) or two thirds (Exp. 2) of the trials, a distractor was presented that stood out by the target's (Exp. 1) or a target-similar (Exp. 2) color, thereby matching a top-down search set, or by a different color, capturing attention in a bottom-up way. Also, matching distractors were primed or unprimed by the target color of the preceding trial. We analyzed all three attention capture effects in manual and target fixation RTs at two different times, separated by one (Exp. 1 and 2) or four weeks (only in Exp. 1). Random-slope correlations from LMMs and standard correlation coefficients computed on individual participants' effect scores showed that RT capture effects were generally temporally stable for both time intervals and dependent variables. These results demonstrate the test-retest reliability necessary for examining individual differences in attentional RT effects.
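For illustration, the simpler of the two analyses mentioned above (correlating individual participants' effect scores across sessions, not the LMM random-slope approach) can be sketched with simulated data; the column names, effect sizes, and noise levels are assumptions, not the study's values.
```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
participants = np.arange(40)
true_effect = rng.normal(30, 10, size=participants.size)   # stable capture effect (ms)

rows = []
for session in (1, 2):
    for p, eff in zip(participants, true_effect):
        # Condition means (averaged over many trials), with small measurement noise.
        rows.append({"participant": p, "session": session,
                     "condition": "control", "rt": rng.normal(500, 3)})
        rows.append({"participant": p, "session": session,
                     "condition": "distractor", "rt": rng.normal(500 + eff, 3)})
data = pd.DataFrame(rows)

# Capture effect = distractor RT minus control RT, per participant and session.
effects = (data.pivot_table(index=["participant", "session"],
                            columns="condition", values="rt")
               .assign(capture=lambda d: d["distractor"] - d["control"])["capture"]
               .unstack("session"))

r, p_value = pearsonr(effects[1], effects[2])
print(f"test-retest correlation of capture effects: r = {r:.2f} (p = {p_value:.3g})")
```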
Affiliation(s)
- Hanna Weichselbaum
- Faculty of Psychology, University of Vienna, Liebiggasse 5, 1010 Vienna, Austria
- Christoph Huber-Huber
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068 Rovereto (TN), Italy
- Ulrich Ansorge
- Faculty of Psychology, University of Vienna, Liebiggasse 5, 1010 Vienna, Austria
131
User-Centered Predictive Model for Improving Cultural Heritage Augmented Reality Applications: An HMM-Based Approach for Eye-Tracking Data. J Imaging 2018. [DOI: 10.3390/jimaging4080101] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Today, museum visits are perceived as an opportunity for individuals to explore and make up their own minds. The increasing technical capabilities of Augmented Reality (AR) technology have raised audience expectations, advancing the use of mobile AR in cultural heritage (CH) settings. Hence, there is a need to define criteria, based on users' preferences, that can guide developers and insiders toward a more conscious development of AR-based applications. Starting from previous research (performed to define a protocol for understanding the visual behaviour of subjects looking at paintings), this paper introduces a truly predictive model of the museum visitor's visual behaviour, measured by an eye tracker. A Hidden Markov Model (HMM) approach is presented that predicts users' attention in front of a painting. Furthermore, this research compares users' behaviour between adults and children, extending the results to different kinds of users and thus providing a reliable approach to eye trajectories. Tests were conducted by defining areas of interest (AOIs), observing which were visited most, and attempting to predict subsequent transitions between AOIs. The results demonstrate the effectiveness and suitability of our approach, with performance evaluation values that exceed 90%.
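A much-simplified sketch of the transition-prediction idea follows, using a first-order Markov transition matrix over AOIs rather than the paper's full HMM; the AOI labels and viewing sequences are hypothetical.
```python
import numpy as np

aois = ["face", "hands", "background"]
index = {a: i for i, a in enumerate(aois)}

# Hypothetical AOI sequences from several viewers of the same painting.
sequences = [
    ["face", "hands", "face", "background", "face"],
    ["background", "face", "face", "hands", "face"],
    ["face", "face", "hands", "background", "background"],
]

counts = np.zeros((len(aois), len(aois)))
for seq in sequences:
    for a, b in zip(seq[:-1], seq[1:]):
        counts[index[a], index[b]] += 1

# Row-normalise with add-one smoothing to obtain transition probabilities.
transition = (counts + 1) / (counts + 1).sum(axis=1, keepdims=True)

current = "hands"
predicted = aois[int(np.argmax(transition[index[current]]))]
print("most likely AOI after", repr(current), "is", repr(predicted))
print(np.round(transition, 2))
```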
132
Hessels RS, Niehorster DC, Nyström M, Andersson R, Hooge ITC. Is the eye-movement field confused about fixations and saccades? A survey among 124 researchers. ROYAL SOCIETY OPEN SCIENCE 2018; 5:180502. [PMID: 30225041 PMCID: PMC6124022 DOI: 10.1098/rsos.180502] [Citation(s) in RCA: 75] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/30/2018] [Accepted: 08/06/2018] [Indexed: 06/08/2023]
Abstract
Eye movements have been extensively studied in a wide range of research fields. While new methods such as mobile eye tracking and eye tracking in virtual/augmented realities are emerging quickly, the eye-movement terminology has scarcely been revised. We assert that this may cause confusion about two of the main concepts: fixations and saccades. In this study, we assessed the definitions of fixations and saccades held in the eye-movement field by surveying 124 eye-movement researchers. These eye-movement researchers held a variety of definitions of fixations and saccades, of which the breadth seems even wider than what is reported in the literature. Moreover, these definitions did not seem to be related to researcher background or experience. We urge researchers to make their definitions more explicit by specifying all the relevant components of the eye movement under investigation: (i) the oculomotor component: e.g., whether the eye moves slowly or fast; (ii) the functional component: what purpose the eye movement (or lack thereof) serves; (iii) the coordinate system used: relative to what the eye moves; (iv) the computational definition: how the event is represented in the eye-tracker signal. This should enable eye-movement researchers from different fields to have a discussion without misunderstandings.
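For component (iv), one concrete example of a computational definition is a basic velocity-threshold (I-VT) classifier, sketched below; the 30 deg/s threshold and the synthetic gaze trace are arbitrary assumptions, and many other definitions are in use, which is precisely the paper's point.
```python
import numpy as np

def ivt_classify(x_deg, y_deg, fs_hz, velocity_threshold=30.0):
    """Label each gaze sample as fixation (True) or saccade (False)."""
    vx = np.gradient(x_deg) * fs_hz          # horizontal velocity in deg/s
    vy = np.gradient(y_deg) * fs_hz          # vertical velocity in deg/s
    return np.hypot(vx, vy) < velocity_threshold

fs = 500.0                                    # sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
# Synthetic gaze trace: two 0.5 s fixations joined by a 10-degree shift.
rng = np.random.default_rng(2)
x = np.where(t < 0.5, 0.0, 10.0) + rng.normal(0, 0.02, t.size)
y = np.zeros_like(x)

is_fixation = ivt_classify(x, y, fs)
print("proportion of samples labelled as fixation:", float(is_fixation.mean()))
```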
Affiliation(s)
- Roy S. Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Developmental Psychology, Utrecht University, Utrecht, The Netherlands
- Diederick C. Niehorster
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Ignace T. C. Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
133
A new way to look at simulation-based assessment: the relationship between gaze-tracking and exam performance. CAN J EMERG MED 2018; 21:129-137. [DOI: 10.1017/cem.2018.391] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVE A key task of the team leader in a medical emergency is effective information gathering. Studying information gathering patterns is readily accomplished with the use of gaze-tracking glasses. This technology was used to generate hypotheses about the relationship between performance scores and expert-hypothesized visual areas of interest in residents across scenarios in simulated medical resuscitation examinations. METHODS Emergency medicine residents wore gaze-tracking glasses during two simulation-based examinations (n = 29 and n = 13, respectively). Blinded experts assessed video-recorded performances using a simulation performance assessment tool that has validity evidence in this context. The relationships between gaze patterns and performance scores were analyzed and potential hypotheses generated. Four scenarios were assessed in this study: diabetic ketoacidosis, bradycardia secondary to beta-blocker overdose, ruptured abdominal aortic aneurysm, and metabolic acidosis caused by antifreeze ingestion. RESULTS Specific gaze patterns were correlated with objective performance. High performers were more likely to fixate on task-relevant stimuli and appropriately ignore task-irrelevant stimuli compared with lower performers. For example, shorter latency to fixation on the vital signs in a case of diabetic ketoacidosis was positively correlated with performance (r = 0.70, p < 0.05). Conversely, total time spent fixating on lab values in a case of ruptured abdominal aortic aneurysm was negatively correlated with performance (r = -0.50, p < 0.05). CONCLUSIONS There are differences between the visual patterns of high- and low-performing residents. These findings may allow for better characterization of expertise development in resuscitation medicine and provide a framework for future study of visual behaviours in resuscitation cases.
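To make this kind of analysis concrete, the sketch below computes latency to first fixation on a single AOI and correlates it with a performance score; the fixation records, AOI names, and scores are invented for illustration and are not the study's data.
```python
from scipy.stats import spearmanr

def latency_to_first_fixation(fixations, aoi):
    """fixations: list of (onset_seconds, aoi_label); returns first onset on aoi or nan."""
    hits = [onset for onset, label in fixations if label == aoi]
    return min(hits) if hits else float("nan")

# One (fixation list, performance score) pair per hypothetical resident.
residents = [
    ([(2.1, "monitor"), (5.0, "patient")], 78),
    ([(9.4, "patient"), (12.3, "monitor")], 55),
    ([(1.2, "monitor"), (3.3, "labs")], 84),
    ([(6.8, "labs"), (8.1, "monitor")], 61),
    ([(3.9, "monitor"), (4.4, "patient")], 70),
]

latencies = [latency_to_first_fixation(fixations, "monitor") for fixations, _ in residents]
scores = [score for _, score in residents]
rho, p = spearmanr(latencies, scores)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")  # shorter latency tracks higher score here
```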
134
Gandomkar Z, Tay K, Brennan PC, Mello-Thoms C. Recurrence quantification analysis of radiologists' scanpaths when interpreting mammograms. Med Phys 2018; 45:3052-3062. [PMID: 29694675 DOI: 10.1002/mp.12935] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2017] [Revised: 02/22/2018] [Accepted: 02/23/2018] [Indexed: 11/05/2022] Open
Abstract
PURPOSE The purpose of this study was to propose a classifier based on recurrence quantification analysis (RQA) metrics for distinguishing experts' scanpaths from those of less-experienced readers, and to explore the association of the spatiotemporal dynamics of mammographic scanpaths with the characteristics of cases and radiologists using RQA metrics. MATERIALS AND METHODS Eye movements were recorded from eight radiologists (two cohorts: four experienced and four less-experienced) while reading 120 mammograms (59 cancer, 61 normal). Ten RQA measures were extracted for each recorded scanpath. The measures described the temporal distribution of recurrent fixations as well as laminar and deterministic eye movements. Recurrent fixations are fixations that are located close to a previously fixated point in a scanpath. Deterministic eye movements represent looking back and forth between two locations, while laminar eye movements indicate detailed scanning of an area with consecutive fixations. The RQA metrics, along with six conventional eye-tracking parameters, were used to construct a classifier for distinguishing experts' scanpaths from those of less-experienced readers. Leave-one-out cross-validation was used to evaluate the classifier. For each reader cohort, an ANOVA was conducted to study the relationship of the RQA measures with breast density, case pathology, readers' expertise, and readers' decisions on the case. The proportions of laminar and deterministic movements that involved fixations in the location of lesions were also compared between the two reader cohorts using two-proportion z-tests. RESULTS All RQA measures differed significantly between the scanpaths of experienced readers and those of less-experienced readers. The classifier achieved an area under the receiver operating characteristic curve of 0.89 (0.87-0.91) for detecting experts' scanpaths. Proportionately more refixations and laminar and deterministic sequences were in the location of the lesion for the experienced cohort compared to the less-experienced cohort (all P-values < 0.001). Eight and four RQA measures differed between normal and cancer cases for the experienced and less-experienced readers, respectively. None of the metrics differed between fatty and dense breasts for the less-experienced readers, while two measures showed a significant difference for the experienced readers. For experts, six measures differed significantly between true negatives and false positives and nine were significantly different between true positives and false negatives. For the less-experienced cohort, the corresponding figures were seven and one, respectively. CONCLUSION The RQA measures can quantify the differences between experienced and less-experienced radiologists. They also capture differences among experts' scanpaths related to case pathology and radiologists' decisions on the case.
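Two of the simpler RQA measures mentioned above, recurrence rate and determinism, can be computed from a fixation sequence as in the sketch below; the radius, minimum line length, and example scanpath are illustrative assumptions, and the study itself uses ten measures and its own settings.
```python
import numpy as np

def rqa_measures(fixations_xy, radius=50.0, min_line=2):
    xy = np.asarray(fixations_xy, dtype=float)
    n = len(xy)
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    rec = dist <= radius
    np.fill_diagonal(rec, False)

    # Recurrence rate: share of fixation pairs that are refixations (upper triangle).
    iu = np.triu_indices(n, k=1)
    n_recurrent = int(rec[iu].sum())
    recurrence_rate = 100.0 * n_recurrent / len(iu[0])

    # Determinism: share of recurrent points lying on diagonal lines of length
    # >= min_line, i.e. repeated *sequences* of fixations.
    on_lines = 0
    for offset in range(1, n):
        diag = np.diagonal(rec, offset=offset).astype(int)
        padded = np.r_[0, diag, 0]
        starts = np.flatnonzero(np.diff(padded) == 1)
        ends = np.flatnonzero(np.diff(padded) == -1)
        for s, e in zip(starts, ends):
            if e - s >= min_line:
                on_lines += e - s
    determinism = 100.0 * on_lines / n_recurrent if n_recurrent else 0.0
    return recurrence_rate, determinism

# A short hypothetical scanpath (pixel coordinates) that revisits one region.
scanpath = [(100, 100), (400, 300), (110, 105), (405, 295), (600, 50), (102, 98)]
rr, det = rqa_measures(scanpath)
print(f"recurrence rate = {rr:.1f}%, determinism = {det:.1f}%")
```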
Affiliation(s)
- Ziba Gandomkar
- Image Optimisation and Perception Group (MIOPeG), Discipline of Medical Imaging and Radiation Sciences, The University of Sydney, Sydney, NSW, Australia
- Kevin Tay
- Medical Imaging Department, Prince of Wales Hospital, Randwick, NSW, Australia
- Patrick C Brennan
- Image Optimisation and Perception Group (MIOPeG), Discipline of Medical Imaging and Radiation Sciences, The University of Sydney, Sydney, NSW, Australia
- Claudia Mello-Thoms
- Image Optimisation and Perception Group (MIOPeG), Discipline of Medical Imaging and Radiation Sciences, The University of Sydney, Sydney, NSW, Australia
- Department of Biomedical Informatics, School of Medicine, The University of Pittsburgh, Pittsburgh, PA, USA
135
The Neuroergonomics of Aircraft Cockpits: The Four Stages of Eye-Tracking Integration to Enhance Flight Safety. SAFETY 2018. [DOI: 10.3390/safety4010008] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
136
Kano F, Shepherd SV, Hirata S, Call J. Primate social attention: Species differences and effects of individual experience in humans, great apes, and macaques. PLoS One 2018; 13:e0193283. [PMID: 29474416 PMCID: PMC5825077 DOI: 10.1371/journal.pone.0193283] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2017] [Accepted: 02/07/2018] [Indexed: 11/18/2022] Open
Abstract
When viewing social scenes, humans and nonhuman primates focus on particular features, such as the models' eyes, mouth, and action targets. Previous studies reported that such viewing patterns vary significantly across individuals in humans, and also across closely-related primate species. However, the nature of these individual and species differences remains unclear, particularly among nonhuman primates. In large samples of human and nonhuman primates, we examined species differences and the effects of experience on patterns of gaze toward social movies. Experiment 1 examined the species differences across rhesus macaques, nonhuman apes (bonobos, chimpanzees, and orangutans), and humans while they viewed movies of various animals' species-typical behaviors. We found that each species had distinct viewing patterns of the models' faces, eyes, mouths, and action targets. Experiment 2 tested the effect of individuals' experience on chimpanzee and human viewing patterns. We presented movies depicting natural behaviors of chimpanzees to three groups of chimpanzees (individuals from a zoo, a sanctuary, and a research institute) differing in their early social and physical experiences. We also presented the same movies to human adults and children differing in their expertise with chimpanzees (experts vs. novices) or movie-viewing generally (adults vs. preschoolers). Individuals varied within each species in their patterns of gaze toward models' faces, eyes, mouths, and action targets depending on their unique individual experiences. We thus found that the viewing patterns for social stimuli are both individual- and species-specific in these closely-related primates. Such individual/species-specificities are likely related to both individual experience and species-typical temperament, suggesting that primate individuals acquire their unique attentional biases through both ontogeny and evolution. Such unique attentional biases may help them learn efficiently about their particular social environments.
Affiliation(s)
- Fumihiro Kano
- Kumamoto Sanctuary, Wildlife Research Center, Kyoto University, Kumamoto, Japan
- Satoshi Hirata
- Kumamoto Sanctuary, Wildlife Research Center, Kyoto University, Kumamoto, Japan
- Josep Call
- Department of Developmental and Comparative Psychology, Max-Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, United Kingdom
137
Klostermann A, Hossner EJ. The Quiet Eye and Motor Expertise: Explaining the "Efficiency Paradox". Front Psychol 2018; 9:104. [PMID: 29472882 PMCID: PMC5809435 DOI: 10.3389/fpsyg.2018.00104] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2017] [Accepted: 01/22/2018] [Indexed: 12/05/2022] Open
Abstract
It has been consistently reported that experts show longer quiet eye (QE) durations when compared to near-experts and novices. However, this finding is rather paradoxical as motor expertise is characterized by an economization of motor-control processes rather than by a prolongation in response programming, a suggested explanatory mechanism of the QE phenomenon. Therefore, an inhibition hypothesis was proposed that suggests an inhibition of non-optimal task solutions over movement parametrization, which is particularly necessary in experts due to the great extent and high density of their experienced task-solution space. In the current study, the effect of the task-solution space's extent was tested by comparing the QE-duration gains in groups that trained a far-aiming task with a small number (low-extent) vs. a large number (high-extent) of task variants. After an extensive training period of more than 750 trials, both groups showed superior performance in post-test and retention test when compared to pretest and longer QE durations in post-test when compared to pretest. However, the QE durations dropped to baseline values at retention. Finally, the expected additional gain in QE duration for the high-extent group was not found, and thus the assumption of long QE durations due to an extended task-solution space was not confirmed. The findings were (by tendency) more in line with the density explanation of the inhibition hypothesis. This density argument suits research revealing a high specificity of motor skills in experts, thus providing worthwhile options for future research on the paradoxical relation between the QE and motor expertise.
138
Abstract
How people look at visual information reveals fundamental information about them: their interests and their states of mind. Previous studies showed that the scanpath, i.e., the sequence of eye movements made by an observer exploring a visual stimulus, can be used to infer observer-related (e.g., task at hand) and stimuli-related (e.g., image semantic category) information. However, eye movements are complex signals and many of these studies rely on limited gaze descriptors and bespoke datasets. Here, we provide a turnkey method for scanpath modeling and classification. This method relies on variational hidden Markov models (HMMs) and discriminant analysis (DA). HMMs encapsulate the dynamic and individualistic dimensions of gaze behavior, allowing DA to capture systematic patterns diagnostic of a given class of observers and/or stimuli. We test our approach on two very different datasets. Firstly, we use fixations recorded while viewing 800 static natural scene images, and infer an observer-related characteristic: the task at hand. We achieve an average of 55.9% correct classification rate (chance = 33%). We show that correct classification rates positively correlate with the number of salient regions present in the stimuli. Secondly, we use eye positions recorded while viewing 15 conversational videos, and infer a stimulus-related characteristic: the presence or absence of original soundtrack. We achieve an average 81.2% correct classification rate (chance = 50%). HMMs make it possible to integrate bottom-up, top-down, and oculomotor influences into a single model of gaze behavior. This synergistic approach between behavior and machine learning will open new avenues for simple quantification of gazing behavior. We release SMAC with HMM, a Matlab toolbox freely available to the community under an open-source license agreement.
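A minimal sketch of the underlying idea, classifying a scanpath by which class-specific HMM assigns it the higher likelihood, is shown below. It uses the hmmlearn package and synthetic two-dimensional fixation positions as stand-ins for the toolbox's variational HMMs and real gaze data, and it omits the discriminant-analysis stage of the full method.
```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(3)

def make_scanpaths(centers, n_paths=20, length=15):
    """Each scanpath hops among class-specific regions of interest."""
    paths = []
    for _ in range(n_paths):
        states = rng.integers(0, len(centers), size=length)
        paths.append(centers[states] + rng.normal(0, 20, size=(length, 2)))
    return paths

class_a_paths = make_scanpaths(np.array([[200.0, 200.0], [600.0, 200.0]]))
class_b_paths = make_scanpaths(np.array([[400.0, 500.0], [400.0, 100.0]]))

def fit_hmm(paths, n_states=2):
    X = np.vstack(paths)
    lengths = [len(p) for p in paths]
    return GaussianHMM(n_components=n_states, covariance_type="diag",
                       n_iter=100, random_state=0).fit(X, lengths)

hmm_a, hmm_b = fit_hmm(class_a_paths), fit_hmm(class_b_paths)

# Classify a new scanpath drawn from the class A generator.
test_path = make_scanpaths(np.array([[200.0, 200.0], [600.0, 200.0]]), n_paths=1)[0]
predicted = "A" if hmm_a.score(test_path) > hmm_b.score(test_path) else "B"
print("predicted class:", predicted)
```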
Affiliation(s)
- Janet H Hsiao
- Department of Psychology, The University of Hong Kong, Pok Fu Lam, Hong Kong
- Antoni B Chan
- Department of Computer Science, City University of Hong Kong, Kowloon Tong, Hong Kong
139
Mouse-tracking evidence for parallel anticipatory option evaluation. Cogn Process 2017; 19:327-350. [PMID: 29275439 DOI: 10.1007/s10339-017-0851-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2017] [Accepted: 12/13/2017] [Indexed: 10/18/2022]
Abstract
In fast-paced, dynamic tasks, the ability to anticipate the future outcome of a sequence of events is crucial to quickly selecting an appropriate course of action among multiple alternative options. There are two classes of theories that describe how anticipation occurs. Serial theories assume options are generated and evaluated one at a time, in order of quality, whereas parallel theories assume simultaneous generation and evaluation. The present research examined the option evaluation process during a task designed to be analogous to prior anticipation tasks, but within the domain of narrative text comprehension. Prior research has relied on indirect, off-line measurement of the option evaluation process during anticipation tasks. Because the movement of the hand can provide a window into underlying cognitive processes, online metrics such as continuous mouse tracking provide more fine-grained measurements of cognitive processing as it occurs in real time. In this study, participants listened to three-sentence stories and predicted the protagonists' final action by moving a mouse toward one of three possible options. Each story was presented with either one (control condition) or two (distractor condition) plausible ending options. Results seem most consistent with a parallel option evaluation process because initial mouse trajectories deviated further from the best option in the distractor condition compared to the control condition. It is difficult to completely rule out all possible serial processing accounts, although the results do place constraints on the time frame in which a serial processing explanation must operate.
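A common way to quantify this kind of trajectory attraction is the maximum deviation of the mouse path from the straight start-to-target line, sketched below; the coordinates and screen layout are assumptions, and the paper's own measure concerns how far initial trajectories deviate from the best option, which differs in detail.
```python
import numpy as np

def max_deviation(trajectory, start, target):
    """Largest perpendicular distance of the path from the start->target line."""
    traj = np.asarray(trajectory, dtype=float)
    start = np.asarray(start, dtype=float)
    line = np.asarray(target, dtype=float) - start
    rel = traj - start
    # 2-D cross-product magnitude divided by line length = perpendicular distance.
    perp = np.abs(rel[:, 0] * line[1] - rel[:, 1] * line[0]) / np.linalg.norm(line)
    return perp.max()

start, chosen = (0, 0), (-300, 400)          # response box of the chosen option
trajectory = [(0, 0), (20, 60), (30, 140), (-40, 230), (-160, 320), (-300, 400)]
print(f"maximum deviation = {max_deviation(trajectory, start, chosen):.1f} px")
```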
140
141
Meißner M, Oll J. The Promise of Eye-Tracking Methodology in Organizational Research: A Taxonomy, Review, and Future Avenues. ORGANIZATIONAL RESEARCH METHODS 2017. [DOI: 10.1177/1094428117744882] [Citation(s) in RCA: 41] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Technological advances in recent years have greatly lowered the barriers for using eye tracking (ET) as a research tool in laboratory and field settings. However, despite its potential and widespread application in other disciplines, the use of ET in organizational research remains sparse. This article therefore aims to introduce ET, and thus a new mode of behavioral data, to the field of organizational research. Based on a synthesis of prior literature, we propose an integrative taxonomy that unravels the methodological potential of ET as well as its scope of application. Building on our proposed taxonomy, we systematically review the use of ET in leading management journals and reflect on the current state of research. We further illustrate future avenues for ET in the domains of strategic management, entrepreneurship, and human resources to contribute to the method’s future dissemination and to the advancement of organizational science as well.
Affiliation(s)
- Martin Meißner
- Department of Sociology, Environmental and Business Economics, University of Southern Denmark, Esbjerg, Denmark
- Department of Marketing, Monash Business School, Monash University, Caulfield East, VIC, Australia
- Josua Oll
- Faculty of Business, Economics & Social Sciences, University of Hamburg, Hamburg, Germany
142
Orquin JL, Chrobot N, Grunert KG. Guiding Decision Makers' Eye Movements with (Un)Predictable Object Locations. JOURNAL OF BEHAVIORAL DECISION MAKING 2017. [DOI: 10.1002/bdm.2060] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Affiliation(s)
- Jacob L. Orquin
- Department of Management/MAPP, Aarhus University, Aarhus, Denmark
- Nina Chrobot
- Department of Psychology, SWPS University of Social Sciences and Humanities, Warsaw, Poland
- Klaus G. Grunert
- Department of Management/MAPP, Aarhus University, Aarhus, Denmark
143
Król ME, Król M. “Economies of Experience”-Disambiguation of Degraded Stimuli Leads to a Decreased Dispersion of Eye-Movement Patterns. Cogn Sci 2017; 42 Suppl 3:728-756. [DOI: 10.1111/cogs.12566] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2017] [Revised: 08/03/2017] [Accepted: 10/09/2017] [Indexed: 10/18/2022]
Affiliation(s)
- Magdalena Ewa Król
- Faculty of Psychology II, SWPS University of Social Sciences and Humanities, Wrocław
- Michał Król
- Department of Economics, School of Social Sciences, University of Manchester
144
van Leeuwen PM, de Groot S, Happee R, de Winter JCF. Differences between racing and non-racing drivers: A simulator study using eye-tracking. PLoS One 2017; 12:e0186871. [PMID: 29121090 PMCID: PMC5679571 DOI: 10.1371/journal.pone.0186871] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2017] [Accepted: 09/21/2017] [Indexed: 12/15/2022] Open
Abstract
Motorsport has developed into a professional international competition. However, limited research is available on the perceptual and cognitive skills of racing drivers. By means of a racing simulator, we compared the driving performance of seven racing drivers with ten non-racing drivers. Participants were tasked to drive the fastest possible lap time. Additionally, both groups completed a choice reaction time task and a tracking task. Results from the simulator showed faster lap times, higher steering activity, and a more optimal racing line for the racing drivers than for the non-racing drivers. The non-racing drivers’ gaze behavior corresponded to the tangent point model, whereas racing drivers showed a more variable gaze behavior combined with larger head rotations while cornering. Results from the choice reaction time task and tracking task showed no statistically significant difference between the two groups. Our results are consistent with the current consensus in sports sciences in that task-specific differences exist between experts and novices while there are no major differences in general cognitive and motor abilities.
Affiliation(s)
- Peter M. van Leeuwen
- Delft University of Technology, Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Mekelweg 2, CD Delft, The Netherlands
- Stefan de Groot
- Delft University of Technology, Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Mekelweg 2, CD Delft, The Netherlands
- Riender Happee
- Delft University of Technology, Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Mekelweg 2, CD Delft, The Netherlands
- Joost C. F. de Winter
- Delft University of Technology, Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Mekelweg 2, CD Delft, The Netherlands
145
Kredel R, Vater C, Klostermann A, Hossner EJ. Eye-Tracking Technology and the Dynamics of Natural Gaze Behavior in Sports: A Systematic Review of 40 Years of Research. Front Psychol 2017; 8:1845. [PMID: 29089918 PMCID: PMC5651090 DOI: 10.3389/fpsyg.2017.01845] [Citation(s) in RCA: 80] [Impact Index Per Article: 11.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2017] [Accepted: 10/03/2017] [Indexed: 12/04/2022] Open
Abstract
A review of 60 studies on natural gaze behavior in sports makes clear that, over the last 40 years, the use of eye-tracking devices has increased considerably. Specifically, this review reveals the large variance of methods applied, analyses performed, and measures derived within the field. The results of sub-sample analyses suggest that sports-related eye-tracking research strives, on the one hand, for ecologically valid test settings (i.e., viewing conditions and response modes), while on the other, for experimental control along with high measurement accuracy (i.e., controlled test conditions with high-frequency eye-trackers linked to algorithmic analyses). To meet both demands, some promising methodological compromises have been proposed, in particular the integration of robust mobile eye-trackers in motion-capture systems. However, as the fundamental trade-off between laboratory and field research cannot be solved by technological means, researchers need to carefully weigh the arguments for one or the other approach by accounting for the respective consequences. Nevertheless, for future research on dynamic gaze behavior in sports, further development of the current mobile eye-tracking methodology seems highly advisable to allow for the acquisition and algorithmic analysis of larger amounts of gaze data and, further, to increase the explanatory power of the derived results.
Affiliation(s)
- Ralf Kredel
- Movement Science, University of Bern, Bern, Switzerland
146
Szulewski A, Gegenfurtner A, Howes DW, Sivilotti MLA, van Merriënboer JJG. Measuring physician cognitive load: validity evidence for a physiologic and a psychometric tool. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2017; 22:951-968. [PMID: 27787677 DOI: 10.1007/s10459-016-9725-2] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/25/2016] [Accepted: 10/19/2016] [Indexed: 05/14/2023]
Abstract
In general, researchers attempt to quantify cognitive load using physiologic and psychometric measures. Although the construct measured by both of these metrics is thought to represent overall cognitive load, there is a paucity of studies that compares these techniques to one another. The authors compared data obtained from one physiologic tool (pupillometry) to one psychometric tool (Paas scale) to explore whether they actually measured the construct of cognitive load as purported. Thirty-two participants with a range of resuscitation medicine experience and expertise completed resuscitation-medicine based multiple-choice-questions as well as arithmetic questions. Cognitive load, as measured by both tools, was found to be higher for the more difficult questions as well as for questions that were answered incorrectly (p < 0.001). The group with the least medical experience had higher cognitive load than both the intermediate and experienced groups when answering domain-specific questions (p = 0.023 and p = 0.003 respectively for the physiologic tool; p = 0.006 and p < 0.001 respectively for the psychometric tool). There was a strong positive correlation (Spearman's ρ = 0.827, p < 0.001 for arithmetic questions; Spearman's ρ = 0.606, p < 0.001 for medical questions) between the two cognitive load measurement tools. These findings support the validity argument that both physiologic and psychometric metrics measure the construct of cognitive load.
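The convergence check described above amounts to a rank correlation between the two measures; a toy sketch follows, in which the pupil values and Paas ratings are made up for illustration only and do not come from the study.
```python
from scipy.stats import spearmanr

# One entry per question answered by a participant: a physiologic index
# (task-evoked pupil diameter change) and a psychometric rating (Paas scale).
pupil_change_mm = [0.12, 0.31, 0.22, 0.45, 0.18, 0.39, 0.27, 0.50]
paas_rating     = [2,    6,    4,    8,    3,    7,    5,    9]

rho, p = spearmanr(pupil_change_mm, paas_rating)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```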
Affiliation(s)
- Adam Szulewski
- Department of Emergency Medicine, Queen's University, Kingston General Hospital, 76 Stuart Street, Kingston, ON, K7L 2V7, Canada
- Andreas Gegenfurtner
- Institut für Qualität und Weiterbildung, Technische Hochschule Deggendorf, Edlmairstraße 9, 94469, Deggendorf, Germany
- Daniel W Howes
- Departments of Emergency Medicine and Critical Care, Queen's University, Kingston General Hospital, 76 Stuart Street, Kingston, ON, K7L 2V7, Canada
- Marco L A Sivilotti
- Departments of Emergency Medicine and Biomedical and Molecular Sciences, Kingston General Hospital, 76 Stuart Street, Kingston, ON, K7L 2V7, Canada
- Jeroen J G van Merriënboer
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, Room N.5.06, 6200 MD, Maastricht, The Netherlands
147
Sheridan H, Reingold EM. The Holistic Processing Account of Visual Expertise in Medical Image Perception: A Review. Front Psychol 2017; 8:1620. [PMID: 29033865 PMCID: PMC5627012 DOI: 10.3389/fpsyg.2017.01620] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2017] [Accepted: 09/04/2017] [Indexed: 12/11/2022] Open
Abstract
In the field of medical image perception, the holistic processing perspective contends that experts can rapidly extract global information about the image, which can be used to guide their subsequent search of the image (Swensson, 1980; Nodine and Kundel, 1987; Kundel et al., 2007). In this review, we discuss the empirical evidence supporting three different predictions that can be derived from the holistic processing perspective: Expertise in medical image perception is domain-specific, experts use parafoveal and/or peripheral vision to process large regions of the image in parallel, and experts benefit from a rapid initial glimpse of an image. In addition, we discuss a pivotal recent study (Litchfield and Donovan, 2016) that seems to contradict the assumption that experts benefit from a rapid initial glimpse of the image. To reconcile this finding with the existing literature, we suggest that global processing may serve multiple functions that extend beyond the initial glimpse of the image. Finally, we discuss future research directions, and we highlight the connections between the holistic processing account and similar theoretical perspectives and findings from other domains of visual expertise.
Affiliation(s)
- Heather Sheridan
- Department of Psychology, University at Albany, State University of New York, Albany, NY, United States
- Eyal M. Reingold
- Department of Psychology, University of Toronto, Mississauga, ON, Canada
148
Effect of Different Evasion Maneuvers on Anticipation and Visual Behavior in Elite Rugby League Players. Motor Control 2017; 22:18-27. [PMID: 28121283 DOI: 10.1123/mc.2016-0034] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
This study examined the anticipation and visual behavior of elite rugby league players during two different evasion maneuvers (side- and split-steps). Participants (N = 48) included elite rugby league players (n = 38) and controls (n = 10). Each participant watched videos consisting of side- and split-steps, and anticipation of movement and eye behavior were measured. No significant differences in eye behavior were found between the groups or between the evasion maneuvers, but the split-step was significantly harder to predict. Elite players appeared to spend more time viewing the torso and mid-region of the body compared with the controls.
149
Ravesloot CJ, van der Schaaf MF, Kruitwagen CLJJ, van der Gijp A, Rutgers DR, Haaring C, ten Cate O, van Schaik JPJ. Predictors of Knowledge and Image Interpretation Skill Development in Radiology Residents. Radiology 2017; 284:758-765. [DOI: 10.1148/radiol.2017152648] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Affiliation(s)
- Cécile J. Ravesloot, Marieke F. van der Schaaf, Cas L. J. J. Kruitwagen, Anouk van der Gijp, Dirk R. Rutgers, Cees Haaring, Olle ten Cate, Jan P. J. van Schaik
- From the Department of Radiology (C.J.R., A.v.d.G., D.R.R., C.H., J.P.J.v.S.), Julius Center (C.L.J.J.K.) and Center for Research and Education Development (O.t.C.), University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, the Netherlands; and Department of Education, University Utrecht, Utrecht, the Netherlands (M.F.v.d.S.)
150
Berndt M, Strijbos JW, Fischer F. Effects of written peer-feedback content and sender’s competence on perceptions, performance, and mindful cognitive processing. EUROPEAN JOURNAL OF PSYCHOLOGY OF EDUCATION 2017. [DOI: 10.1007/s10212-017-0343-z] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]