1
Zhu H, Pi YL, Qiu FH, Wang FJ, Liu K, Ni Z, Wu Y, Zhang J. Visual and Action-control Expressway Associated with Efficient Information Transmission in Elite Athletes. Neuroscience 2019; 404:353-370. [PMID: 30771510 DOI: 10.1016/j.neuroscience.2019.02.006]
Abstract
Effective information transmission for open skill performance requires fine-scale coordination of distributed networks of brain regions linked by white matter tracts. However, how patterns of connectivity in these anatomical pathways may improve global efficiency remains unclear. In this study, we hypothesized that the feeder edges in visual and motor systems have the potential to become "expressways" that increase the efficiency of information communication across brain networks of open skill experts. Thirty elite athletes and thirty novice subjects were recruited to participate in visual tracking and motor imagery tasks. We collected structural imaging data from these subjects, and then resolved structural neural networks using deterministic tractography to identify streamlines connecting cortical and subcortical brain regions of each participant. We observed that superior skill performance in elite athletes was associated with increased information transmission efficiency in feeder edges distributed between orbitofrontal and basal ganglia modules, as well as among temporal, occipital, and limbic system modules. These findings suggest that there is an expressway linking the visual and action-control systems of skill experts that enables more efficient interactions of peripheral and central information in support of effective performance of an open skill.
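The "information transmission efficiency" of a structural network is conventionally quantified as global efficiency: the mean inverse shortest-path length over all node pairs. A minimal sketch of that quantity for an unweighted graph (the node labels and adjacency below are illustrative, not drawn from the study's parcellation):

```python
from collections import deque

def global_efficiency(adj):
    """Global efficiency of an unweighted graph: mean of 1/d(i, j) over
    all ordered node pairs, where d is shortest-path length (disconnected
    pairs contribute 0). `adj` maps each node to a list of its neighbours."""
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for src in nodes:
        # Breadth-first search gives shortest path lengths from src.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != src)
    return total / (n * (n - 1))
```

On a fully connected triangle this yields 1.0; removing an edge lowers it, which is the sense in which high-efficiency "expressway" edges raise whole-network communication efficiency.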
Affiliation(s)
- Hua Zhu
- Key Laboratory of Exercise and Health Sciences of Ministry of Education, Shanghai University of Sport, Shanghai 200438, China
- Yan-Ling Pi
- Shanghai Punan Hospital of Pudong New District, Shanghai, China
- Fang-Hui Qiu
- Key Laboratory of Exercise and Health Sciences of Ministry of Education, Shanghai University of Sport, Shanghai 200438, China
- Feng-Juan Wang
- Physical Education and Educational Science Department, Tianjin University of Sport, Tianjin, China
- Ke Liu
- Shanghai Punan Hospital of Pudong New District, Shanghai, China
- Zhen Ni
- Key Laboratory of Exercise and Health Sciences of Ministry of Education, Shanghai University of Sport, Shanghai 200438, China
- Yin Wu
- Key Laboratory of Exercise and Health Sciences of Ministry of Education, Shanghai University of Sport, Shanghai 200438, China
- Jian Zhang
- Key Laboratory of Exercise and Health Sciences of Ministry of Education, Shanghai University of Sport, Shanghai 200438, China
2
Liu J, Xue L. Visual Development of Chinese Children, Studied with Eye-Tracking Technology. Visual Anthropology 2019. [DOI: 10.1080/08949468.2019.1603033]
3
Abstract
The Tobii EyeX Controller is a new low-cost binocular eye tracker marketed for integration in gaming and consumer applications. The manufacturers claim that the system was conceived for natural eye gaze interaction, does not require continuous recalibration, and allows moderate head movements. The Controller is provided with an SDK to foster the development of new eye tracking applications. We review the characteristics of the device for its possible use in scientific research. We develop and evaluate an open source Matlab Toolkit that can be employed to interface with the EyeX device for gaze recording in behavioral experiments. The Toolkit provides calibration procedures tailored to both binocular and monocular experiments, as well as procedures to evaluate other eye tracking devices. The observed performance of the EyeX (i.e. accuracy < 0.6°, precision < 0.25°, latency < 50 ms and sampling frequency ≈55 Hz) is sufficient for some classes of research application. The device can be successfully employed to measure fixation parameters, saccadic, smooth pursuit and vergence eye movements. However, the relatively low sampling rate and moderate precision limit the suitability of the EyeX for monitoring micro-saccadic eye movements or for real-time gaze-contingent stimulus control. For these applications, research grade, high-cost eye tracking technology may still be necessary. Therefore, despite its limitations with respect to high-end devices, the EyeX has the potential to further the dissemination of eye tracking technology to a broad audience, and could be a valuable asset in consumer and gaming applications as well as a subset of basic and clinical research settings.
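Accuracy and precision figures like those quoted above can be computed from raw fixation samples. A hedged sketch (definitions vary across vendors; accuracy as mean offset from the target and precision as RMS inter-sample distance are one common convention, and the sample values below are invented):

```python
import math

def accuracy_and_precision(samples, target):
    """samples: list of (x, y) gaze positions in degrees of visual angle,
    recorded while fixating `target`. Returns (accuracy, precision)."""
    # Accuracy: mean Euclidean offset of each sample from the target.
    offsets = [math.hypot(x - target[0], y - target[1]) for x, y in samples]
    accuracy = sum(offsets) / len(offsets)
    # Precision: root mean square of successive inter-sample distances.
    sq = [(x1 - x0) ** 2 + (y1 - y0) ** 2
          for (x0, y0), (x1, y1) in zip(samples, samples[1:])]
    precision = math.sqrt(sum(sq) / len(sq))
    return accuracy, precision

# Hypothetical samples clustered near a calibration target at (0, 0).
samples = [(0.4, 0.1), (0.5, 0.0), (0.45, 0.05), (0.5, 0.1)]
acc, prec = accuracy_and_precision(samples, (0.0, 0.0))
```

Note that a tracker can be precise (samples tightly clustered) while inaccurate (the cluster is offset from the target), which is why both numbers are reported.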
4
A guideline for integrating dynamic areas of interests in existing set-up for capturing eye movement: Looking at moving aircraft. Behav Res Methods 2018; 49:822-834. [PMID: 27287446 DOI: 10.3758/s13428-016-0745-x]
Abstract
Today, capturing the behavior of a human eye is considered a standard method for measuring the information-gathering process and thereby gaining insights into cognitive processes. Due to the dynamic character of most task environments, there is still a lack of a structured and automated approach for analyzing eye movement in combination with moving objects. In this article, we present a guideline for advanced gaze analysis, called IGDAI (Integration Guideline for Dynamic Areas of Interest). The application of IGDAI allows gathering dynamic areas of interest and simplifies their combination with eye movement. The first step of IGDAI defines the basic requirements for the experimental setup, including the embedding of an eye tracker. The second step covers the storage of task-environment information for the dynamic AOI analysis. Implementation examples in XML are presented that fulfill the requirements of most dynamic task environments. The last step includes algorithms to combine the captured eye movement and the dynamic areas of interest. A verification study was conducted, presenting an air traffic controller environment to participants, who had to distinguish between different types of dynamic objects. The results show that, in comparison to static areas of interest, IGDAI allows a faster and more detailed view of the distribution of eye movement.
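The XML storage and matching steps can be illustrated with a toy schema. The element and attribute names below are invented for illustration (the article defines its own layout); the matching step linearly interpolates each AOI's centre between stored time samples and hit-tests the gaze point against the AOI's box:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML layout: each AOI stores its centre at sampled times.
AOI_XML = """
<dynamicAOIs>
  <aoi id="aircraft1" width="40" height="40">
    <sample t="0" x="100" y="200"/>
    <sample t="1000" x="300" y="200"/>
  </aoi>
</dynamicAOIs>
"""

def aoi_hit(xml_text, gaze_t, gaze_x, gaze_y):
    """Return ids of AOIs containing the gaze point at time gaze_t,
    linearly interpolating each AOI's centre between stored samples."""
    hits = []
    for aoi in ET.fromstring(xml_text).iter("aoi"):
        w, h = float(aoi.get("width")), float(aoi.get("height"))
        pts = sorted((float(s.get("t")), float(s.get("x")), float(s.get("y")))
                     for s in aoi.iter("sample"))
        for (t0, x0, y0), (t1, x1, y1) in zip(pts, pts[1:]):
            if t0 <= gaze_t <= t1:
                f = (gaze_t - t0) / (t1 - t0) if t1 > t0 else 0.0
                cx, cy = x0 + f * (x1 - x0), y0 + f * (y1 - y0)
                if abs(gaze_x - cx) <= w / 2 and abs(gaze_y - cy) <= h / 2:
                    hits.append(aoi.get("id"))
                break
    return hits
```

For example, halfway through the motion (t = 500) the AOI's centre has moved to (200, 200), so a gaze sample there counts as a hit even though the object started at (100, 200).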
5
6
Abstract
Recent years have witnessed a remarkable growth in the way mathematics, informatics, and computer science can process data. In disciplines such as machine learning, pattern recognition, computer vision, computational neurology, molecular biology, information retrieval, etc., many new methods have been developed to cope with the ever-increasing amount and complexity of the data. These new methods offer interesting possibilities for processing, classifying and interpreting eye-tracking data. The present paper exemplifies the application of topological arguments to improve the evaluation of eye-tracking data. The task of classifying raw eye-tracking data into saccades and fixations, with a single, simple as well as intuitive argument, described as coherence of spacetime, is discussed, and the hierarchical ordering of the fixations into dwells is shown. The method, namely identification by topological characteristics (ITop), is parameter-free and needs no pre-processing and post-processing of the raw data. The general and robust topological argument is easy to expand into complex settings of higher visual tasks, making it possible to identify visual strategies.
Affiliation(s)
- Oliver Hein
- Neurological University Clinic Hamburg UKE, Germany
7
Lappi O. Eye movements in the wild: Oculomotor control, gaze behavior & frames of reference. Neurosci Biobehav Rev 2016; 69:49-68. [PMID: 27461913 DOI: 10.1016/j.neubiorev.2016.06.006]
Abstract
Understanding the brain's capacity to encode complex visual information from a scene, and to transform it into a coherent perception of 3D space and well-coordinated motor commands, is among the outstanding questions in the study of integrative brain function. Eye movement methodologies have allowed us to begin addressing these questions in increasingly naturalistic tasks, where eye and body movements are ubiquitous and, therefore, the applicability of most traditional neuroscience methods is restricted. This review explores foundational issues in (1) how oculomotor and motor control in lab experiments extrapolates to more complex settings and (2) how real-world gaze behavior in turn decomposes into more elementary eye movement patterns. We review the received typology of oculomotor patterns in laboratory tasks, and how they map (or do not map) onto naturalistic gaze behavior. We discuss the multiple coordinate systems needed to represent visual gaze strategies, how the choice of reference frame affects the description of eye movements, and the related but conceptually distinct issue of coordinate transformations between internal representations within the brain.
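The reference-frame point can be made concrete with a planar toy case: a gaze direction fixed in the eye frame is expressed in the world frame by composing eye-in-head and head-in-world rotations. Real oculomotor data requires full 3-D rotations, which do not commute; this 2-D sketch with invented angles only shows the composition idea:

```python
import math

def rot(deg):
    """2-D rotation matrix for a counter-clockwise angle in degrees."""
    a = math.radians(deg)
    return [[math.cos(a), -math.sin(a)], [math.sin(a), math.cos(a)]]

def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(A, v):
    """Apply a 2x2 matrix to a 2-vector."""
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

eye_in_head = rot(10)    # eye rotated 10 degrees within the head
head_in_world = rot(20)  # head rotated 20 degrees within the world
# A straight-ahead gaze direction in the eye frame, re-expressed in
# world coordinates by composing the two rotations.
gaze_world = apply(matmul(head_in_world, eye_in_head), [1.0, 0.0])
```

The result is a direction at 30 degrees in the world frame, a number that appears in neither frame alone, which is why the choice of reference frame changes the description of the "same" eye movement.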
Affiliation(s)
- Otto Lappi
- Cognitive Science, Institute of Behavioural Sciences, PO BOX 9, 00014 University of Helsinki, Finland.
8
Abstract
User evaluations of interactive and dynamic applications face various challenges related to the active nature of these displays. For example, users can often zoom and pan on digital products, and these interactions change the extent and/or level of detail of the stimulus. Therefore, in eye tracking studies, when a user's gaze rests at a particular screen position (gaze position) over a period of time, the information contained at that position may have changed. Such digital activities are commonplace in modern life, yet it has been difficult to automatically compare the changing information at the viewed position, especially across many participants; existing solutions typically involve tedious and time-consuming manual work. In this article, we propose a methodology that can overcome this problem. By combining eye tracking with user logging (mouse and keyboard actions) on cartographic products, we are able to accurately reference screen coordinates to geographic coordinates. This referencing approach allows researchers to know which geographic object (location or attribute) corresponds to the gaze coordinates at all times. We tested the proposed approach through two case studies, and discuss the advantages and disadvantages of the applied methodology. Furthermore, the applicability of the proposed approach is discussed with respect to other fields of research that use eye tracking, namely marketing, sports and movement sciences, and experimental psychology. From these case studies and discussions, we conclude that combining eye tracking and user-logging data is an essential step forward in efficiently studying user behavior with interactive and static stimuli in multiple research fields.
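The screen-to-geography referencing can be sketched for a Web-Mercator map. This is an assumption: the abstract does not commit to a projection, and all function and parameter names below are illustrative. Given the viewport state (centre and zoom) logged at the moment of the gaze sample, the gaze pixel is offset from the screen centre in world-pixel space and the projection is inverted:

```python
import math

TILE = 256  # pixels per tile at zoom 0 (Web-Mercator convention)

def screen_to_geo(gx, gy, view_w, view_h, center_lon, center_lat, zoom):
    """Convert a gaze point in screen pixels to (lon, lat), given the
    logged map viewport state at that moment."""
    scale = TILE * 2 ** zoom
    # World-pixel coordinates of the viewport centre.
    cx = (center_lon + 180.0) / 360.0 * scale
    siny = math.sin(math.radians(center_lat))
    cy = (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)) * scale
    # Offset the gaze point from the screen centre, then invert Mercator.
    wx = cx + (gx - view_w / 2)
    wy = cy + (gy - view_h / 2)
    lon = wx / scale * 360.0 - 180.0
    lat = math.degrees(
        2 * math.atan(math.exp((0.5 - wy / scale) * 2 * math.pi)) - math.pi / 2)
    return lon, lat
```

A gaze sample at the screen centre maps back to the logged map centre, and moving the gaze rightward increases longitude, at any zoom level; this is what lets the same analysis run unattended across many participants with different pan/zoom histories.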
9
Abstract
This article presents GazeAlyze, a software package written as a MATLAB (MathWorks Inc., Natick, MA) toolbox for the analysis of eye movement data. GazeAlyze was developed for the batch processing of multiple data files and was designed as a framework with extendable modules. It encompasses the main functions of the entire processing queue of eye movement data for static visual stimuli: detecting and filtering artifacts, detecting events, generating regions of interest, generating spreadsheets for further statistical analysis, and providing methods for the visualization of results, such as path plots and fixation heat maps. All functions can be controlled through graphical user interfaces. GazeAlyze also includes functions for correcting eye movement data for displacement of the head relative to the camera after calibration in fixed head mounts. The preprocessing and event detection methods in GazeAlyze are based on the software ILAB 3.6.8 (Gitelman, Behav Res Methods Instrum Comput, 34(4), 605-612, 2002). GazeAlyze is distributed free of charge under the terms of the GNU General Public License and allows code modifications to be made so that the program's performance can be adjusted according to a user's scientific requirements.
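The batch-processing idea (many recording files in, one spreadsheet out) can be sketched as follows. The input format is hypothetical (columns t, x, y, event), not GazeAlyze's actual file layout or API; the point is only the shape of the pipeline:

```python
import csv
import glob
import os

def batch_summarize(pattern, out_csv):
    """For each gaze file matching `pattern`, count fixation samples and
    compute their mean position, then write one summary row per file to
    `out_csv` for downstream statistics."""
    rows = []
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            fix = [(float(r["x"]), float(r["y"]))
                   for r in csv.DictReader(f) if r["event"] == "fix"]
        if fix:
            mean_x = sum(x for x, _ in fix) / len(fix)
            mean_y = sum(y for _, y in fix) / len(fix)
            rows.append([os.path.basename(path), len(fix),
                         round(mean_x, 2), round(mean_y, 2)])
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["file", "n_fix_samples", "mean_x", "mean_y"])
        writer.writerows(rows)
    return rows
```

Keeping each stage (artifact filtering, event detection, summarization) as a separate module, as GazeAlyze does, is what makes such a queue extendable.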
10
Papageorgiou E, Hardiess G, Mallot HA, Schiefer U. Gaze patterns predicting successful collision avoidance in patients with homonymous visual field defects. Vision Res 2012; 65:25-37. [PMID: 22721638 DOI: 10.1016/j.visres.2012.06.004]
Abstract
The aim of the present study was to identify efficient compensatory gaze patterns applied by patients with homonymous visual field defects (HVFDs) under virtual reality (VR) conditions in a dynamic collision avoidance task. Thirty patients with HVFDs due to vascular brain lesions and 30 normal subjects performed a collision avoidance task with moving objects at an intersection under two difficulty levels. Based on their performance (i.e. the number of collisions), patients were assigned to either an "adequate" (HVFD(A)) or "inadequate" (HVFD(I)) subgroup by the median split method. Eye and head tracking data were available for 14 patients and 19 normal subjects. Saccades, fixations, mean number of gaze shifts, scanpath length, and mean gaze eccentricity were compared between HVFD(A) patients, HVFD(I) patients, and normal subjects. For both difficulty levels, the gaze patterns of HVFD(A) patients (N=5) compared to HVFD(I) patients (N=9) were characterized by longer saccadic amplitudes towards both the affected and the intact side, larger mean gaze eccentricity, more gaze shifts, longer scanpaths, and more fixations on vehicles but fewer fixations on the intersection. Both patient groups displayed more fixations in the affected than in the intact hemifield. Fixation number, fixation duration, scanpath length, and number of gaze shifts were similar between HVFD(A) patients and normal subjects. Patients with HVFDs who adapt successfully to their visual deficit display distinct gaze patterns characterized by increased exploratory eye and head movements, particularly towards moving objects of interest on their blind side. In the context of a dynamic environment, efficient compensation in patients with HVFDs is possible by means of gaze scanning. This strategy allows continuous updating of the moving objects' spatial locations and selection of the task-relevant ones, which are then represented in visual working memory.
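Two of the compared gaze measures, scanpath length and mean gaze eccentricity, reduce to simple geometry over the fixation sequence. A sketch, assuming fixation coordinates in degrees relative to the straight-ahead direction (the exact units and reference point used in the study may differ):

```python
import math

def scanpath_metrics(fixations):
    """fixations: ordered list of (x, y) fixation positions.
    Returns (scanpath_length, mean_eccentricity):
    scanpath length is the summed distance between successive fixations;
    mean eccentricity is the mean distance of fixations from (0, 0)."""
    path = sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(fixations, fixations[1:]))
    ecc = sum(math.hypot(x, y) for x, y in fixations) / len(fixations)
    return path, ecc
```

Under these definitions, the wider exploratory scanning reported for the adequately compensating patients shows up directly as longer scanpaths and larger mean eccentricity.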
Affiliation(s)
- Eleni Papageorgiou
- Center for Ophthalmology, Institute for Ophthalmic Research, University of Tübingen, Germany.
11
Reimer B, Mehler B, Wang Y, Coughlin JF. A field study on the impact of variations in short-term memory demands on drivers' visual attention and driving performance across three age groups. Hum Factors 2012; 54:454-468. [PMID: 22768646 DOI: 10.1177/0018720812437274]
Abstract
OBJECTIVE: The aim of this study was to assess the sensitivity of visual attention and driving performance for detecting changes in driver cognitive workload across different age groups. BACKGROUND: The literature shows mixed results concerning the sensitivity of gaze concentration metrics to variations in cognitive demand. No studies appear to have examined how age affects gaze allocation during cognitive demand. METHOD: Recordings of drivers' gaze and driving performance by individuals in their 20s, 40s, and 60s were captured in actual driving conditions during three levels of cognitive demand. RESULTS: Gaze concentration increased with task difficulty through the low and moderate levels of demand and then appeared to level out at the high demand level. At the moderate difficulty level, gaze concentration increased by 2.4 cm (approximately 2 degrees) from the reference period. The degree of gaze concentration with added cognitive demand was not related to age in the relatively healthy drivers studied. Driving performance measures did not show a consistent relationship with the objective demand level. CONCLUSION: Gaze concentration appears at low levels of cognitive demand, prior to the appearance of marked decrements in driving control. There is no compelling evidence from this study that driving performance measures can be used to index differences in workload prior to capacity saturation. APPLICATION: Drivers' awareness of vehicle surroundings is incrementally affected by increases in cognitive demand. Developers of more advanced driver support systems should consider gaze concentration as a measure of driver cognitive workload. This recommendation is particularly relevant in light of the added benefits of gaze measurements for detecting visual demand.
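Gaze concentration is typically operationalized as the dispersion of gaze position. A common choice, not necessarily the exact metric used in this study, is the standard deviation of horizontal gaze coordinates, where smaller values mean gaze is more tightly clustered around the road centre:

```python
import math

def gaze_dispersion(horiz_gaze):
    """Population standard deviation of horizontal gaze position.
    `horiz_gaze` is a list of horizontal gaze coordinates (e.g. cm on a
    plane in front of the driver); lower dispersion means higher gaze
    concentration."""
    m = sum(horiz_gaze) / len(horiz_gaze)
    return math.sqrt(sum((g - m) ** 2 for g in horiz_gaze) / len(horiz_gaze))
```

A workload monitor built on this idea would compare dispersion in a sliding window against a single-task baseline and flag windows where dispersion drops markedly.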
Affiliation(s)
- Bryan Reimer
- MIT, AgeLab, 77 Massachusetts Ave., E40-278, Cambridge, MA 02139, USA.
12
Reimer B, Mehler B, Wang Y, Coughlin JF. The Impact of Systematic Variation of Cognitive Demand on Drivers' Visual Attention across Multiple Age Groups. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 2010. [DOI: 10.1177/154193121005402407]
Abstract
Recordings of drivers' gaze under three levels of cognitive demand were captured in actual driving conditions from individuals in their 20s, 40s, and 60s. Changes in the allocation of visual attention between single-task driving and the three levels of cognitive secondary tasks are summarized. Under the conditions studied here, gaze centralization varies with task difficulty and appears predominantly in the horizontal plane. The degree of horizontal gaze centralization with added cognitive workload is not related to age in the relatively healthy individuals studied.
Affiliation(s)
- Bryan Reimer
- MIT AgeLab & New England University Transportation Center, Cambridge, Massachusetts
- Bruce Mehler
- MIT AgeLab & New England University Transportation Center, Cambridge, Massachusetts
- Ying Wang
- MIT AgeLab & New England University Transportation Center, Cambridge, Massachusetts
- Joseph F. Coughlin
- MIT AgeLab & New England University Transportation Center, Cambridge, Massachusetts
13
DynAOI: a tool for matching eye-movement data with dynamic areas of interest in animations and movies. Behav Res Methods 2010; 42:179-187. [PMID: 20160298 DOI: 10.3758/brm.42.1.179]
Abstract
Analyzing gaze behavior with dynamic stimulus material is of growing importance in experimental psychology; however, there is still a lack of efficient analysis tools that are able to handle dynamically changing areas of interest. In this article, we present DynAOI, an open-source tool that allows for the definition of dynamic areas of interest. It works automatically with animations that are based on virtual three-dimensional models. When one is working with videos of real-world scenes, a three-dimensional model of the relevant content needs to be created first. The recorded eye-movement data are matched with the static and dynamic objects in the model underlying the video content, thus creating static and dynamic areas of interest. A validation study asking participants to track particular objects demonstrated that DynAOI is an efficient tool for handling dynamic areas of interest.
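The core matching step, projecting a model object into screen space and hit-testing the gaze point against it, can be sketched with a pinhole projection. This is a minimal stand-in for DynAOI's virtual-camera machinery; the focal length and screen size below are arbitrary:

```python
def project_point(p, f, width, height):
    """Perspective-project a 3-D camera-space point (x, y, z), z > 0,
    to pixel coordinates, with focal length f (in pixels) and the image
    centre at (width/2, height/2). Screen y grows downward."""
    x, y, z = p
    return (width / 2 + f * x / z, height / 2 - f * y / z)

def gaze_on_object(gaze, corners3d, f=800, width=1024, height=768):
    """True if the gaze point lies inside the screen-space bounding box
    of an object's projected 3-D corner points, i.e. inside the dynamic
    AOI derived from the model for this frame."""
    pts = [project_point(c, f, width, height) for c in corners3d]
    xs = [px for px, _ in pts]
    ys = [py for _, py in pts]
    gx, gy = gaze
    return min(xs) <= gx <= max(xs) and min(ys) <= gy <= max(ys)
```

Because the AOI is recomputed from the model each frame, an object that moves or is occluded yields a correctly moving (or shrinking) AOI without any manual frame-by-frame annotation.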
14
Reimer B, Mehler B, D'Ambrosio LA, Fried R. The impact of distractions on young adult drivers with attention deficit hyperactivity disorder (ADHD). Accid Anal Prev 2010; 42:842-851. [PMID: 20380911 DOI: 10.1016/j.aap.2009.06.021]
Abstract
Young adults with attention deficit hyperactivity disorder (ADHD) are at higher risk of being involved in automobile crashes. Although driving simulators have been used to identify and understand underlying behaviors, prior research has focused largely on single-task, non-distracted driving. However, in-vehicle infotainment and communications systems often vie for a driver's attention, potentially increasing the risk of collision. This paper explores the impact of secondary tasks on individuals with and without ADHD, a medical condition known to affect the regulation of attention. Data are drawn from a validated driving simulation representing periods before, during, and after participation in a secondary cognitive task. A hands-free phone task was employed in a high stimulus, urban setting and a working memory task during low stimulus, highway driving. Drivers with ADHD had more difficulty on the telephone task, yet did not show a decrement in driving performance greater than that of control participants. In contrast, participants with ADHD showed a larger decline in driving performance than controls during a secondary task in a low demand setting. The results suggest that the interaction between the driving context and the nature of the secondary task has a significant influence on how drivers with ADHD allocate attention and, in turn, on the relative impact on driving performance. Drivers with ADHD appear particularly susceptible to distraction during periods of low stimulus driving.
Affiliation(s)
- Bryan Reimer
- AgeLab, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
15
Rice S, Clayton KD, Trafimow D, Keller D, Hughes J. The effects of private and collective self-priming on visual search: taking advantage of organized contextual stimuli. Br J Soc Psychol 2008; 48:467-486. [PMID: 18789184 DOI: 10.1348/014466608x354580]
Abstract
Two experiments tested the hypothesis that priming the collective self improves some visual search tasks. In both experiments, participants searched for an O among Qs. The pattern of distracters was manipulated across experiments to allow the possibility of grouping (Experiment 1) or to disallow this possibility (Experiment 2). Consistent with expectations, collective self-priming increased visual search speed when grouping was possible, but had no effect on visual search speed when grouping was not possible. In combination, the data support the notion that collective self-priming makes people more likely to utilize a pattern to facilitate visual search when there is a pattern present to be perceived.
Affiliation(s)
- Stephen Rice
- New Mexico State University, Las Cruces, New Mexico, USA.