1. Garlichs A, Lustig M, Gamer M, Blank H. Expectations guide predictive eye movements and information sampling during face recognition. iScience 2024; 27:110920. PMID: 39351204; PMCID: PMC11439840; DOI: 10.1016/j.isci.2024.110920.
Abstract
Context information has a crucial impact on our ability to recognize faces. Theoretical frameworks of predictive processing suggest that predictions derived from context guide sampling of sensory evidence at informative locations. However, it is unclear how expectations influence visual information sampling during face perception. To investigate the effects of expectations on eye movements during face anticipation and recognition, we conducted two eye-tracking experiments (n = 34, each) using cued face morphs containing expected and unexpected facial features, and clear expected and unexpected faces. Participants performed predictive saccades toward expected facial features and fixated expected more often and longer than unexpected features. In face morphs, expected features attracted early eye movements, followed by unexpected features, indicating that top-down as well as bottom-up information drives face sampling. Our results provide compelling evidence that expectations influence face processing by guiding predictive and early eye movements toward anticipated informative locations, supporting predictive processing.
Affiliation(s)
- Annika Garlichs
  - Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
  - Hamburg Brain School, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Mark Lustig
  - Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
  - Department of Psychology, University of Hamburg, Hamburg, Germany
- Matthias Gamer
  - Department of Psychology, University of Würzburg, Würzburg, Germany
- Helen Blank
  - Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
  - Hamburg Brain School, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
  - Predictive Cognition, Research Center One Health Ruhr of the University Alliance Ruhr, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
2. Brand TK, Schütz AC, Müller H, Maurer H, Hegele M, Maurer LK. Sensorimotor prediction is used to direct gaze toward task-relevant locations in a goal-directed throwing task. J Neurophysiol 2024; 132:485-500. PMID: 38919149; DOI: 10.1152/jn.00052.2024.
Abstract
Previous research has shown that action effects of self-generated movements are internally predicted before outcome feedback becomes available. To test whether these sensorimotor predictions are used to facilitate visual information uptake for feedback processing, we measured eye movements during the execution of a goal-directed throwing task. Participants could fully observe the effects of their throwing actions (ball trajectory and either hitting or missing a target) in most of the trials. In a portion of the trials, the ball trajectory was not visible, and participants only received static information about the outcome. We observed a large proportion of predictive saccades, shifting gaze toward the goal region before the ball arrived and outcome feedback became available. Fixation locations after predictive saccades systematically covaried with future ball positions in trials with continuous ball flight information, but notably also in trials with static outcome feedback and only efferent and proprioceptive information about the movement that could be used for predictions. Fixation durations at the chosen positions after feedback onset were modulated by action outcome (longer durations for misses than for hits) and outcome uncertainty (longer durations for narrow vs. clear outcomes). Combining both effects, durations were longest for narrow errors and shortest for clear hits, indicating that the chosen locations offer informational value for feedback processing. Thus, humans are able to use sensorimotor predictions to direct their gaze toward task-relevant feedback locations. Outcome-dependent saccade latency differences (miss vs. hit) indicate that predictive valuation processes are also involved in planning predictive saccades.

NEW & NOTEWORTHY: We elucidate the potential benefits of sensorimotor predictions, focusing on how the system actually uses this information to optimize feedback processing in goal-directed actions. Sensorimotor information is used to predict spatial parameters of movement outcomes, guiding predictive saccades toward future action effects. Saccade latencies and fixation durations are modulated by outcome quality, indicating that predictive valuation processes are involved and that the chosen locations are of high informational value for feedback processing.
Affiliation(s)
- Theresa K Brand
  - Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany
  - Center for Mind, Brain and Behavior (CMBB), Universities of Marburg and Giessen, Giessen, Germany
- Alexander C Schütz
  - General and Biological Psychology, Department of Psychology, Philipps University Marburg, Marburg, Germany
  - Center for Mind, Brain and Behavior (CMBB), Universities of Marburg and Giessen, Giessen, Germany
- Hermann Müller
  - Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany
  - Center for Mind, Brain and Behavior (CMBB), Universities of Marburg and Giessen, Giessen, Germany
- Heiko Maurer
  - Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany
- Mathias Hegele
  - Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany
  - Center for Mind, Brain and Behavior (CMBB), Universities of Marburg and Giessen, Giessen, Germany
- Lisa K Maurer
  - Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany
  - Center for Mind, Brain and Behavior (CMBB), Universities of Marburg and Giessen, Giessen, Germany
3. Salisbury JM, Palmer SE. A dynamic scale-mixture model of motion in natural scenes. bioRxiv [Preprint] 2024:2023.10.19.563101. PMID: 37961311; PMCID: PMC10634686; DOI: 10.1101/2023.10.19.563101.
Abstract
Some of the most important tasks of visual and motor systems involve estimating the motion of objects and tracking them over time. Such systems evolved to meet the behavioral needs of the organism in its natural environment, and may therefore be adapted to the statistics of motion it is likely to encounter. By tracking the movement of individual points in movies of natural scenes, we begin to identify common properties of natural motion across scenes. As expected, objects in natural scenes move in a persistent fashion, with velocity correlations lasting hundreds of milliseconds. More subtly, but crucially, we find that the observed velocity distributions are heavy-tailed and can be modeled as a Gaussian scale-mixture. Extending this model to the time domain leads to a dynamic scale-mixture model, consisting of a Gaussian process multiplied by a positive scalar quantity with its own independent dynamics. Dynamic scaling of velocity arises naturally as a consequence of changes in object distance from the observer, and may approximate the effects of changes in other parameters governing the motion in a given scene. This modeling and estimation framework has implications for the neurobiology of sensory and motor systems, which need to cope with these fluctuations in scale in order to represent motion efficiently and drive fast and accurate tracking behavior.
4. Valzolgher C. Motor Strategies: The Role of Active Behavior in Spatial Hearing Research. Psychol Rep 2024:332941241260246. PMID: 38857521; DOI: 10.1177/00332941241260246.
Abstract
When completing a task, the ability to implement behavioral strategies to solve it in an effective and cognitively less-demanding way is extremely adaptive for humans. This behavior makes it possible to accumulate evidence and test one's own predictions about the external world. In this work, starting from examples in the field of spatial hearing research, I analyze the importance of considering motor strategies in perceptual tasks, and I stress the urgent need to create ecological experimental settings, which are essential in allowing the implementation of such behaviors and in measuring them. In particular, I will consider head movements as an example of strategic behavior implemented to solve acoustic space-perception tasks.
Affiliation(s)
- Chiara Valzolgher
  - Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
5. Gerharz L, Brenner E, Billino J, Voudouris D. Age effects on predictive eye movements for action. J Vis 2024; 24:8. PMID: 38856982; PMCID: PMC11166221; DOI: 10.1167/jov.24.6.8.
Abstract
When interacting with the environment, humans typically shift their gaze to where information is to be found that is useful for the upcoming action. With increasing age, people become slower both in processing sensory information and in performing their movements. One way to compensate for this slowing down could be to rely more on predictive strategies. To examine whether we could find evidence for this, we asked younger (19-29 years) and older (55-72 years) healthy adults to perform a reaching task wherein they hit a visual target that appeared at one of two possible locations. In separate blocks of trials, the target could appear always at the same location (predictable), mainly at one of the locations (biased), or at either location randomly (unpredictable). As one might expect, saccades toward predictable targets had shorter latencies than those toward less predictable targets, irrespective of age. Older adults took longer to initiate saccades toward the target location than younger adults, even when the likely target location could be deduced. Thus we found no evidence of them relying more on predictive gaze. Moreover, both younger and older participants performed more saccades when the target location was less predictable, but again no age-related differences were found. Thus we found no tendency for older adults to rely more on prediction.
Affiliation(s)
- Leonard Gerharz
  - Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
  - https://orcid.org/0009-0006-0487-2609
- Eli Brenner
  - Department of Human Movement Science, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Jutta Billino
  - Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Dimitris Voudouris
  - Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
6. Arthur T, Vine S, Wilson M, Harris D. The role of prediction and visual tracking strategies during manual interception: An exploration of individual differences. J Vis 2024; 24:4. PMID: 38842836; PMCID: PMC11160954; DOI: 10.1167/jov.24.6.4.
Abstract
The interception (or avoidance) of moving objects is a common component of various daily living tasks; however, it remains unclear whether precise alignment of foveal vision with a target is important for motor performance. Furthermore, there has also been little examination of individual differences in visual tracking strategy and the use of anticipatory gaze adjustments. We examined the importance of in-flight tracking and predictive visual behaviors using a virtual reality environment that required participants (n = 41) to intercept tennis balls projected from one of two possible locations. Here, we explored whether different tracking strategies spontaneously arose during the task, and which were most effective. Although indices of closer in-flight tracking (pursuit gain, tracking coherence, tracking lag, and saccades) were predictive of better interception performance, these relationships were rather weak. Anticipatory gaze shifts toward the correct release location of the ball provided no benefit for subsequent interception. Nonetheless, two interceptive strategies were evident: 1) early anticipation of the ball's onset location followed by attempts to closely track the ball in flight (i.e., predictive strategy); or 2) positioning gaze between possible onset locations and then using peripheral vision to locate the moving ball (i.e., a visual pivot strategy). Despite showing much poorer in-flight foveal tracking of the ball, participants adopting a visual pivot strategy performed slightly better in the task. Overall, these results indicate that precise alignment of the fovea with the target may not be critical for interception tasks, but that observers can adopt quite varied visual guidance approaches.
Affiliation(s)
- Tom Arthur
  - School of Public Health and Sport Sciences, Medical School, University of Exeter, Exeter, EX1 2LU, UK
- Samuel Vine
  - School of Public Health and Sport Sciences, Medical School, University of Exeter, Exeter, EX1 2LU, UK
- Mark Wilson
  - School of Public Health and Sport Sciences, Medical School, University of Exeter, Exeter, EX1 2LU, UK
- David Harris
  - School of Public Health and Sport Sciences, Medical School, University of Exeter, Exeter, EX1 2LU, UK
7. Malpica S, Martin D, Serrano A, Gutierrez D, Masia B. Task-Dependent Visual Behavior in Immersive Environments: A Comparative Study of Free Exploration, Memory and Visual Search. IEEE Trans Vis Comput Graph 2023; 29:4417-4425. PMID: 37788210; DOI: 10.1109/tvcg.2023.3320259.
Abstract
Visual behavior depends on both bottom-up mechanisms, where gaze is driven by the visual conspicuity of the stimuli, and top-down mechanisms, guiding attention towards relevant areas based on the task or goal of the viewer. While this is well-known, visual attention models often focus on bottom-up mechanisms. Existing works have analyzed the effect of high-level cognitive tasks like memory or visual search on visual behavior; however, they have often done so with different stimuli, methodology, metrics and participants, which makes drawing conclusions and comparisons between tasks particularly difficult. In this work we present a systematic study of how different cognitive tasks affect visual behavior in a novel within-subjects design scheme. Participants performed free exploration, memory and visual search tasks in three different scenes while their eye and head movements were being recorded. We found significant, consistent differences between tasks in the distributions of fixations, saccades and head movements. Our findings can provide insights for practitioners and content creators designing task-oriented immersive applications.
8. Eisenberg ML, Rodebaugh TL, Flores S, Zacks JM. Impaired prediction of ongoing events in posttraumatic stress disorder. Neuropsychologia 2023; 188:108636. PMID: 37437653; DOI: 10.1016/j.neuropsychologia.2023.108636.
Abstract
The ability to make accurate predictions about what is going to happen in the near future is critical for comprehension of everyday activity. However, predictive processing may be disrupted in Posttraumatic Stress Disorder (PTSD). Hypervigilance may lead people with PTSD to make inaccurate predictions about the likelihood of future danger. This disruption in predictive processing may occur not only in response to threatening stimuli, but also during processing of neutral stimuli. Therefore, the current study investigated whether PTSD was associated with difficulty making predictions about near-future neutral activity. Sixty-three participants with PTSD and 63 trauma controls completed two tasks, one testing explicit prediction and the other testing implicit prediction. Higher PTSD severity was associated with greater difficulty with predictive processing on both of these tasks. These results suggest that effective treatments to improve functional outcomes for people with PTSD may work, in part, by improving predictive processing.
Affiliation(s)
- Michelle L Eisenberg
  - Department of Psychology, Box 1125, Washington University in St. Louis, 1 Brookings Dr., St. Louis, MO, 63130, USA
- Thomas L Rodebaugh
  - Department of Psychology, Box 1125, Washington University in St. Louis, 1 Brookings Dr., St. Louis, MO, 63130, USA
- Shaney Flores
  - Department of Psychology, Box 1125, Washington University in St. Louis, 1 Brookings Dr., St. Louis, MO, 63130, USA
- Jeffrey M Zacks
  - Department of Psychology, Box 1125, Washington University in St. Louis, 1 Brookings Dr., St. Louis, MO, 63130, USA
9. Fogt JS, Fogt N. Studies of Vision in Cricket-A Narrative Review. Vision (Basel) 2023; 7:57. PMID: 37756131; PMCID: PMC10536906; DOI: 10.3390/vision7030057.
Abstract
Vision is thought to play a substantial role in hitting and fielding in cricket. An understanding of which visual skills contribute during cricket play could inform future clinical training trials. This paper reviews what has been reported thus far regarding the relationship of visual skills to cricket performance and reviews the results of clinical trials in which the impact of visual skills training on cricket performance has been addressed. Fundamental or low-level visual skills, with the exception of color vision and perhaps near stereopsis and dynamic visual acuity, are similar between cricket players and the general population. Simple reaction time has been found to be shorter in cricket players in some but not all studies. While there is mixed or no evidence that the aforementioned visual skills are superior in cricket players compared to non-players, comparisons of eye and head movements and gaze tracking have revealed consistent differences between elite cricket batters and sub-elite batters. Future training studies could examine whether teaching sub-elite batters to emulate the gaze tracking patterns of elite batters is beneficial for batting. Lastly, clinical trials in which visual skills of cricket players have been trained have in many cases resulted in positive effects on visual skills, or judgments required in cricket, or cricket play. However, clinical trials with larger and more diverse groups of participants and correlations to on-field metrics and on-field performance (i.e., domain-specific assessments) are necessary before conclusions can be drawn regarding the efficacy of vision training.
10. de la Malla C, Goettker A. The effect of impaired velocity signals on goal-directed eye and hand movements. Sci Rep 2023; 13:13646. PMID: 37607970; PMCID: PMC10444871; DOI: 10.1038/s41598-023-40394-0.
Abstract
Information about position and velocity is essential to predict where moving targets will be in the future, and to accurately move towards them. But how are the two signals combined over time to complete goal-directed movements? We show that when velocity information is impaired due to using second-order motion stimuli, saccades directed towards moving targets land at positions where targets were ~ 100 ms before saccade initiation, but hand movements are accurate. Importantly, the longer latencies of hand movements allow for additional time to process the sensory information available. When increasing the period of time one sees the moving target before making the saccade, saccades become accurate. In line with that, hand movements with short latencies show higher curvature, indicating corrections based on an update of incoming sensory information. These results suggest that movements are controlled by an independent and evolving combination of sensory information about the target's position and velocity.
Affiliation(s)
- Cristina de la Malla
  - Vision and Control of Action Group, Department of Cognition, Development, and Psychology of Education, Institute of Neurosciences, Universitat de Barcelona, Barcelona, Catalonia, Spain
- Alexander Goettker
  - Justus Liebig Universität Giessen, Giessen, Germany
  - Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University, Giessen, Germany
11. Stavropoulos A, Lakshminarasimhan KJ, Angelaki DE. Belief embodiment through eye movements facilitates memory-guided navigation. bioRxiv [Preprint] 2023:2023.08.21.554107. PMID: 37662309; PMCID: PMC10473632; DOI: 10.1101/2023.08.21.554107.
Abstract
Neural network models optimized for task performance often excel at predicting neural activity but do not explain other properties such as the distributed representation across functionally distinct areas. Distributed representations may arise from animals' strategies for resource utilization, however, fixation-based paradigms deprive animals of a vital resource: eye movements. During a naturalistic task in which humans use a joystick to steer and catch flashing fireflies in a virtual environment lacking position cues, subjects physically track the latent task variable with their gaze. We show this strategy to be true also during an inertial version of the task in the absence of optic flow and demonstrate that these task-relevant eye movements reflect an embodiment of the subjects' dynamically evolving internal beliefs about the goal. A neural network model with tuned recurrent connectivity between oculomotor and evidence-integrating frontoparietal circuits accounted for this behavioral strategy. Critically, this model better explained neural data from monkeys' posterior parietal cortex compared to task-optimized models unconstrained by such an oculomotor-based cognitive strategy. These results highlight the importance of unconstrained movement in working memory computations and establish a functional significance of oculomotor signals for evidence-integration and navigation computations via embodied cognition.
Affiliation(s)
- Dora E. Angelaki
  - Center for Neural Science, New York University, New York, NY, USA
  - Tandon School of Engineering, New York University, New York, NY, USA
12. Shinkai R, Ando S, Nonaka Y, Yoshimura Y, Kizuka T, Ono S. Importance of head movements in gaze tracking during table tennis forehand stroke. Hum Mov Sci 2023; 90:103124. PMID: 37478682; DOI: 10.1016/j.humov.2023.103124.
Abstract
The purpose of this study was to clarify the properties of gaze and head movements during the forehand stroke in table tennis. Collegiate table tennis players (n = 12) performed forehand strokes toward a ball launched by a skilled experimenter, for a total of ten trials. Horizontal and vertical movements of the ball, gaze, head, and eyes were analyzed from the images recorded by an eye-tracking device. The results showed that participants did not always keep their gaze and head position on the ball throughout the entire ball path. Our results indicate that table tennis players tend to gaze at the ball in the initial ball-tracking phase. Furthermore, there was a significant negative correlation between eye and head position, especially in the vertical direction. This result suggests that the horizontal vestibulo-ocular reflex (VOR) is suppressed more than the vertical VOR during ball-tracking in the table tennis forehand stroke. Finally, multiple regression analysis showed that the effect of head position on gaze position was significantly greater than that of eye position, indicating that gaze position during the forehand stroke could be associated with head position rather than eye position. Taken together, head movements may play an important role in maintaining the ball in a constant egocentric direction during the table tennis forehand stroke.
Affiliation(s)
- Ryosuke Shinkai
  - Graduate School of Comprehensive Human Sciences, University of Tsukuba, 1-1-1, Tennodai, Tsukuba, Ibaraki 305-8574, Japan
- Shintaro Ando
  - Faculty of Health and Sport Sciences, University of Tsukuba, 1-1-1, Tennodai, Tsukuba, Ibaraki 305-8574, Japan
- Yuki Nonaka
  - Faculty of Health and Sport Sciences, University of Tsukuba, 1-1-1, Tennodai, Tsukuba, Ibaraki 305-8574, Japan
- Yusei Yoshimura
  - Graduate School of Comprehensive Human Sciences, University of Tsukuba, 1-1-1, Tennodai, Tsukuba, Ibaraki 305-8574, Japan
- Tomohiro Kizuka
  - Faculty of Health and Sport Sciences, University of Tsukuba, 1-1-1, Tennodai, Tsukuba, Ibaraki 305-8574, Japan
- Seiji Ono
  - Faculty of Health and Sport Sciences, University of Tsukuba, 1-1-1, Tennodai, Tsukuba, Ibaraki 305-8574, Japan
13. Troncoso A, Soto V, Gomila A, Martínez-Pernía D. Moving beyond the lab: investigating empathy through the Empirical 5E approach. Front Psychol 2023; 14:1119469. PMID: 37519389; PMCID: PMC10374225; DOI: 10.3389/fpsyg.2023.1119469.
Abstract
Empathy is a complex and multifaceted phenomenon that plays a crucial role in human social interactions. Recent developments in social neuroscience have provided valuable insights into the neural underpinnings and bodily mechanisms underlying empathy. This methodology often prioritizes precision, replicability, internal validity, and confound control. However, fully understanding the complexity of empathy seems unattainable by solely relying on artificial and controlled laboratory settings, while overlooking a comprehensive view of empathy through an ecological experimental approach. In this article, we propose articulating an integrative theoretical and methodological framework based on the 5E approach (the "E"s stand for embodied, embedded, enacted, emotional, and extended perspectives of empathy), highlighting the relevance of studying empathy as an active interaction between embodied agents, embedded in a shared real-world environment. In addition, we illustrate how a novel multimodal approach including mobile brain and body imaging (MoBi) combined with phenomenological methods, and the implementation of interactive paradigms in a natural context, are adequate procedures to study empathy from the 5E approach. In doing so, we present the Empirical 5E approach (E5E) as an integrative scientific framework to bridge brain/body and phenomenological attributes in an interbody interactive setting. Progressing toward an E5E approach can be crucial to understanding empathy in accordance with the complexity of how it is experienced in the real world.
Affiliation(s)
- Alejandro Troncoso
  - Center for Social and Cognitive Neuroscience, School of Psychology, Adolfo Ibáñez University, Santiago, Chile
- Vicente Soto
  - Center for Social and Cognitive Neuroscience, School of Psychology, Adolfo Ibáñez University, Santiago, Chile
- Antoni Gomila
  - Department of Psychology, University of the Balearic Islands, Palma de Mallorca, Spain
- David Martínez-Pernía
  - Center for Social and Cognitive Neuroscience, School of Psychology, Adolfo Ibáñez University, Santiago, Chile
14. Talley J, Pusdekar S, Feltenberger A, Ketner N, Evers J, Liu M, Gosh A, Palmer SE, Wardill TJ, Gonzalez-Bellido PT. Predictive saccades and decision making in the beetle-predating saffron robber fly. Curr Biol 2023:S0960-9822(23)00770-4. PMID: 37379842; DOI: 10.1016/j.cub.2023.06.019.
Abstract
Internal predictions about the sensory consequences of self-motion, encoded by corollary discharge, are ubiquitous in the animal kingdom, including for fruit flies, dragonflies, and humans. In contrast, predicting the future location of an independently moving external target requires an internal model. With the use of internal models for predictive gaze control, vertebrate predatory species compensate for their sluggish visual systems and long sensorimotor latencies. This ability is crucial for the timely and accurate decisions that underpin a successful attack. Here, we directly demonstrate that the robber fly Laphria saffrana, a specialized beetle predator, also uses predictive gaze control when head tracking potential prey. Laphria uses this predictive ability to perform the difficult categorization and perceptual decision task of differentiating a beetle from other flying insects with a low spatial resolution retina. Specifically, we show that (1) this predictive behavior is part of a saccade-and-fixate strategy, (2) the relative target angular position and velocity, acquired during fixation, inform the subsequent predictive saccade, and (3) the predictive saccade provides Laphria with additional fixation time to sample the frequency of the prey's specular wing reflections. We also demonstrate that Laphria uses such wing reflections as a proxy for the wingbeat frequency of the potential prey and that consecutively flashing LEDs to produce apparent motion elicits attacks when the LED flicker frequency matches that of the beetle's wingbeat cycle.
Collapse
Affiliation(s)
- Jennifer Talley
- Air Force Research Laboratory, Munitions Directorate, Eglin AFB, FL 32542, USA.
| | - Siddhant Pusdekar
- Department of Ecology, Evolution and Behavior, University of Minnesota, Saint Paul, MN 55108, USA
| | - Aaron Feltenberger
- Air Force Research Laboratory, Munitions Directorate, Eglin AFB, FL 32542, USA
| | - Natalie Ketner
- Air Force Research Laboratory, Munitions Directorate, Eglin AFB, FL 32542, USA
| | - Johnny Evers
- Air Force Research Laboratory, Munitions Directorate, Eglin AFB, FL 32542, USA
| | - Molly Liu
- Department of Ecology, Evolution and Behavior, University of Minnesota, Saint Paul, MN 55108, USA
| | - Atishya Gosh
- Department of Ecology, Evolution and Behavior, University of Minnesota, Saint Paul, MN 55108, USA; Department of Biomedical Informatics and Computational Biology, University of Minnesota, Minneapolis, MN 55455, USA
| | - Stephanie E Palmer
- Department of Organismal Biology and Anatomy, The University of Chicago, Chicago, IL 60637, USA
| | - Trevor J Wardill
- Department of Ecology, Evolution and Behavior, University of Minnesota, Saint Paul, MN 55108, USA; Department of Biomedical Informatics and Computational Biology, University of Minnesota, Minneapolis, MN 55455, USA
| | - Paloma T Gonzalez-Bellido
- Department of Ecology, Evolution and Behavior, University of Minnesota, Saint Paul, MN 55108, USA; Department of Biomedical Informatics and Computational Biology, University of Minnesota, Minneapolis, MN 55455, USA.
| |
Collapse
|
15
|
Bakst L, McGuire JT. Experience-driven recalibration of learning from surprising events. Cognition 2023; 232:105343. [PMID: 36481590 PMCID: PMC9851993 DOI: 10.1016/j.cognition.2022.105343] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Revised: 10/13/2022] [Accepted: 11/21/2022] [Indexed: 12/12/2022]
Abstract
Different environments favor different patterns of adaptive learning. A surprising event that in one context would accelerate belief updating might, in another context, be downweighted as a meaningless outlier. Here, we investigated whether people would spontaneously regulate the influence of surprise on learning in response to event-by-event experiential feedback. Across two experiments, we examined whether participants performing a perceptual judgment task under spatial uncertainty (n = 29, n = 63) adapted their patterns of predictive gaze according to the informativeness or uninformativeness of surprising events in their current environment. Uninstructed predictive eye movements exhibited a form of metalearning in which surprise came to modulate event-by-event learning rates in opposite directions across contexts. Participants later appropriately readjusted their patterns of adaptive learning when the statistics of the environment underwent an unsignaled reversal. Although significant adjustments occurred in both directions, performance was consistently superior in environments in which surprising events reflected meaningful change, potentially reflecting a bias towards interpreting surprise as informative and/or difficulty ignoring salient outliers. Our results provide evidence for spontaneous, context-appropriate recalibration of the role of surprise in adaptive learning.
Collapse
Affiliation(s)
- Leah Bakst
- Department of Psychological & Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA 02215, USA; Center for Systems Neuroscience, Boston University, 610 Commonwealth Avenue, Boston, MA 02215, USA.
| | - Joseph T McGuire
- Department of Psychological & Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA 02215, USA; Center for Systems Neuroscience, Boston University, 610 Commonwealth Avenue, Boston, MA 02215, USA.
| |
Collapse
|
16
|
Vater C, Mann DL. Are predictive saccades linked to the processing of peripheral information? PSYCHOLOGICAL RESEARCH 2022; 87:1501-1519. [PMID: 36167931 DOI: 10.1007/s00426-022-01743-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Accepted: 09/15/2022] [Indexed: 11/29/2022]
Abstract
High-level athletes can predict the actions of an opposing player. Interestingly, such predictions are also reflected by the athlete's gaze behavior. In cricket, for example, players first pursue the ball with their eyes before they very often initiate two predictive saccades: one to the predicted ball-bounce point and a second to the predicted ball-bat-contact point. That means they move their eyes ahead of the ball and "wait" for the ball at the new fixation location, potentially using their peripheral vision to update information about the ball's trajectory. In this study, we investigated whether predictive saccades are linked to the processing of information in peripheral vision and whether predictive saccades are superior to continuously following the ball with foveal vision using smooth-pursuit eye-movements (SPEMs). In the first two experiments, we evoked the typical eye-movements observed in cricket and showed that the information gathered during SPEMs is sufficient to predict when the moving object will hit the target location and that (additional) peripheral monitoring of the object does not help to improve performance. In a third experiment, we show that it could actually be beneficial to use SPEMs rather than predictive saccades to improve performance. Thus, predictive saccades ahead of a target are unlikely to be performed to enhance the peripheral monitoring of the target.
Collapse
Affiliation(s)
- Christian Vater
- Institute of Sport Science, University of Bern, Bremgartenstrasse 145, 3012, Bern, Switzerland.
| | - David L Mann
- Faculty of Behavioural and Movement Sciences, Motor Learning and Performance, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
| |
Collapse
|
17
|
Gregory SEA, Wang H, Kessler K. EEG alpha and theta signatures of socially and non-socially cued working memory in virtual reality. Soc Cogn Affect Neurosci 2022; 17:531-540. [PMID: 34894148 PMCID: PMC9164206 DOI: 10.1093/scan/nsab123] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2021] [Revised: 10/19/2021] [Accepted: 11/20/2021] [Indexed: 11/14/2022] Open
Abstract
In this preregistered study (https://osf.io/s4rm9) we investigated the behavioural and neurological [electroencephalography; alpha (attention) and theta (effort)] effects of dynamic non-predictive social and non-social cues on working memory. In a virtual environment, realistic human-avatars dynamically looked to the left or right side of a table. A moving stick served as a non-social control cue. Kitchen items were presented in the valid (cued) or invalid (un-cued) location for encoding. Behavioural findings showed a similar influence of the cues on working memory performance. Alpha power changes were equivalent for the cues during cueing and encoding, reflecting similar attentional processing. However, theta power changes revealed different patterns for the cues. Theta power increased more strongly for the non-social cue compared to the social cue during initial cueing. Furthermore, while for the non-social cue there was a significantly larger increase in theta power for valid compared to invalid conditions during encoding, this was reversed for the social cue, with a significantly larger increase in theta power for the invalid compared to valid conditions, indicating differences in the cues' effects on cognitive effort. Therefore, while social and non-social attention cues impact working memory performance in a similar fashion, the underlying neural mechanisms appear to differ.
Collapse
Affiliation(s)
- Samantha E A Gregory
- Department of Psychology, University of Salford, Salford M5 4WT, UK
- Institute of Health and Neurodevelopment, Aston Laboratory for Immersive Virtual Environments, Aston University, Birmingham B4 7ET, UK
| | - Hongfang Wang
- Institute of Health and Neurodevelopment, Aston Laboratory for Immersive Virtual Environments, Aston University, Birmingham B4 7ET, UK
| | - Klaus Kessler
- Institute of Health and Neurodevelopment, Aston Laboratory for Immersive Virtual Environments, Aston University, Birmingham B4 7ET, UK
- School of Psychology, University College Dublin, Dublin D04 V1W8, Ireland
| |
Collapse
|
18
|
Harris DJ, Arthur T, Broadbent DP, Wilson MR, Vine SJ, Runswick OR. An Active Inference Account of Skilled Anticipation in Sport: Using Computational Models to Formalise Theory and Generate New Hypotheses. Sports Med 2022; 52:2023-2038. [PMID: 35503403 PMCID: PMC9388417 DOI: 10.1007/s40279-022-01689-w] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/06/2022] [Indexed: 11/30/2022]
Abstract
Optimal performance in time-constrained and dynamically changing environments depends on making reliable predictions about future outcomes. In sporting tasks, performers have been found to employ multiple information sources to maximise the accuracy of their predictions, but questions remain about how different information sources are weighted and integrated to guide anticipation. In this paper, we outline how predictive processing approaches, and active inference in particular, provide a unifying account of perception and action that explains many of the prominent findings in the sports anticipation literature. Active inference proposes that perception and action are underpinned by the organism’s need to remain within certain stable states. To this end, decision making approximates Bayesian inference and actions are used to minimise future prediction errors during brain–body–environment interactions. Using a series of Bayesian neurocomputational models based on a partially observable Markov process, we demonstrate that key findings from the literature can be recreated from the first principles of active inference. In doing so, we formulate a number of novel and empirically falsifiable hypotheses about human anticipation capabilities that could guide future investigations in the field.
Collapse
Affiliation(s)
- David J Harris
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK.
| | - Tom Arthur
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
| | - David P Broadbent
- Division of Sport, Health and Exercise Sciences, Department of Life Sciences, Brunel University London, London, UK
| | - Mark R Wilson
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
| | - Samuel J Vine
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
| | - Oliver R Runswick
- Department of Psychology, Institute of Psychiatry, Psychology, and Neuroscience, King's College London, London, UK
| |
Collapse
|
19
|
Maier M, Blume F, Bideau P, Hellwich O, Abdel Rahman R. Knowledge-augmented face perception: Prospects for the Bayesian brain-framework to align AI and human vision. Conscious Cogn 2022; 101:103301. [DOI: 10.1016/j.concog.2022.103301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Revised: 11/27/2021] [Accepted: 01/04/2022] [Indexed: 11/03/2022]
|
20
|
A dataset of EEG recordings from 47 participants collected during a virtual reality working memory task where attention was cued by a social avatar and non-social stick cue. Data Brief 2022; 41:107827. [PMID: 35127998 PMCID: PMC8800056 DOI: 10.1016/j.dib.2022.107827] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Revised: 01/10/2022] [Accepted: 01/11/2022] [Indexed: 11/19/2022] Open
|
21
|
Lappi O. Gaze Strategies in Driving-An Ecological Approach. Front Psychol 2022; 13:821440. [PMID: 35360580 PMCID: PMC8964278 DOI: 10.3389/fpsyg.2022.821440] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Accepted: 02/07/2022] [Indexed: 01/16/2023] Open
Abstract
Human performance in natural environments is deeply impressive, and still much beyond current AI. Experimental techniques, such as eye tracking, may be useful to understand the cognitive basis of this performance, and "the human advantage." Driving is a domain where these techniques may be deployed, in tasks ranging from rigorously controlled laboratory settings through high-fidelity simulations to naturalistic experiments in the wild. This research has revealed robust patterns that can be reliably identified and replicated in the field and reproduced in the lab. The purpose of this review is to cover the basics of what is known about these gaze behaviors, and some of their implications for understanding visually guided steering. The phenomena reviewed will be of interest to those working in any domain where visual guidance and control with similar task demands are involved (e.g., many sports). The paper is intended to be accessible to the non-specialist, without oversimplifying the complexity of real-world visual behavior. The literature reviewed will provide an information base useful for researchers working on oculomotor behaviors and physiology in the lab who wish to extend their research into more naturalistic locomotor tasks, or for researchers in more applied fields (sports, transportation) who wish to bring aspects of the real-world ecology under experimental scrutiny. As part of a Research Topic on gaze strategies in closed self-paced tasks, this aspect of the driving task is discussed. In particular, it is emphasized why it is important to carefully separate the visual strategies of driving (quite closed and self-paced) from visual behaviors relevant to other forms of driver behavior (an open-ended menagerie of behaviors). There is always a balance to strike between ecological complexity and experimental control.
One way to reconcile these demands is to look for natural, real-world tasks and behavior that are rich enough to be interesting yet sufficiently constrained and well-understood to be replicated in simulators and the lab. This ecological approach to driving as a model behavior and the way the connection between "lab" and "real world" can be spanned in this research is of interest to anyone keen to develop more ecologically representative designs for studying human gaze behavior.
Collapse
Affiliation(s)
- Otto Lappi
- Cognitive Science/TRU, University of Helsinki, Helsinki, Finland
| |
Collapse
|
22
|
David EJ, Lebranchu P, Perreira Da Silva M, Le Callet P. What are the visuo-motor tendencies of omnidirectional scene free-viewing in virtual reality? J Vis 2022; 22:12. [PMID: 35323868 PMCID: PMC8963670 DOI: 10.1167/jov.22.4.12] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Accepted: 02/08/2022] [Indexed: 11/24/2022] Open
Abstract
Central and peripheral vision during visual tasks have been extensively studied on two-dimensional screens, highlighting their perceptual and functional disparities. This study has two objectives: replicating on-screen gaze-contingent experiments removing central or peripheral field of view in virtual reality, and identifying visuo-motor biases specific to the exploration of 360° scenes with a wide field of view. Our results are useful for vision modelling, with applications in gaze position prediction (e.g., content compression and streaming). We ask how previous on-screen findings translate to conditions where observers can use their head to explore stimuli. We implemented a gaze-contingent paradigm to simulate loss of vision in virtual reality, in which participants could freely view omnidirectional natural scenes. This protocol allows the simulation of vision loss with an extended field of view (>80°) and the study of the head's contributions to visual attention. The time-course of visuo-motor variables in our pure free-viewing task reveals long fixations and short saccades during the first seconds of exploration, contrary to the literature on visual tasks guided by instructions. We show that the effect of vision loss is reflected primarily in eye movements, in a manner consistent with the two-dimensional screen literature. We hypothesize that head movements mainly serve to explore the scenes during free-viewing; the presence of masks did not significantly impact head scanning behaviours. We present new fixational and saccadic visuo-motor tendencies in a 360° context that we hope will help in the creation of gaze prediction models dedicated to virtual reality.
Collapse
Affiliation(s)
- Erwan Joël David
- Department of Psychology, Goethe-Universität, Frankfurt, Germany
| | - Pierre Lebranchu
- LS2N UMR CNRS 6004, University of Nantes and Nantes University Hospital, Nantes, France
| | | | - Patrick Le Callet
- LS2N UMR CNRS 6004, University of Nantes, Nantes, France
- http://pagesperso.ls2n.fr/~lecallet-p/index.html
| |
Collapse
|
23
|
Sander J, Fogt N. Estimations of the Passing Height of Approaching Objects. Optom Vis Sci 2022; 99:274-280. [PMID: 34897235 PMCID: PMC8897280 DOI: 10.1097/opx.0000000000001847] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
Abstract
SIGNIFICANCE Limited optical cues associated with ball flight were inadequate to estimate the vertical passing distance of approaching balls. These results suggest that these optical cues either must be integrated with contextual and kinematic cues or must be of larger amplitude to contribute to estimates of vertical passing distance. PURPOSE To intercept or avoid approaching objects, individuals must estimate both when and where the object will arrive. The purpose of this experiment was to determine whether individuals could estimate the vertical passing height of a ball approaching at different linear speeds when vertical angular retinal image velocity and cues for time to contact were minimized. METHODS Twenty participants stood 40 feet from a pitching machine that projected tennis balls toward observers at six random speeds from 56 to 80 mph. The flight of the balls was stopped after 9 feet. The actual passing height ranged from about 35 (lowest speed) to 136 cm (highest speed). Observers indicated the height at which they expected the balls to arrive. RESULTS Overall, the height estimates increased as ball speed increased (means, 121 ± 13 cm [lowest speed] and 131 ± 10 cm [highest speed]). However, only at the higher speeds were the absolute height estimates close to the actual height of the ball. At the higher ball speeds, estimates for participants with some experience in baseball or softball were more accurate (86.4% correct at the highest speed) than estimates for participants with no experience. CONCLUSIONS Overall, estimates of vertical passing distance were inaccurate, particularly at the lower speeds. Underestimates of vertical drop at lower speeds may have resulted from overestimates of ball speeds. At short exposure durations, optical cues associated with ball flight were inadequate for predictions of vertical passing distance at all speeds for the no-experience group and at lower speeds for the experienced group.
Collapse
Affiliation(s)
- Jacob Sander
- The Ohio State University College of Optometry, Columbus, Ohio
| | | |
Collapse
|
24
|
Tammi T, Pekkanen J, Tuhkanen S, Oksama L, Lappi O. Tracking an occluded visual target with sequences of saccades. J Vis 2022; 22:9. [PMID: 35040924 PMCID: PMC8764209 DOI: 10.1167/jov.22.1.9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Gaze behavior during visual tracking consists of a combination of pursuit and saccadic movements. When the tracked object is intermittently occluded, the role of smooth pursuit is reduced, with a corresponding increase in the role of saccades. However, studies of visual tracking during occlusion have focused only on the first few saccades, usually with occlusion periods of less than 1 second in duration. We investigated tracking on a circular trajectory with random occlusions and found that an occluded object can be tracked reliably for up to several seconds with mainly anticipatory saccades and very little smooth pursuit. Furthermore, we investigated the accumulation of uncertainty in prediction and found that prediction errors seem to accumulate faster when an absolute reference frame is not available during tracking. We suggest that the observed saccadic tracking reflects the use of a time-based internal estimate of object position that is anchored to the environment via fixations.
Collapse
Affiliation(s)
- Tuisku Tammi
- Cognitive Science, University of Helsinki, Helsinki, Finland; National Defence University, Finland
| | - Jami Pekkanen
- Cognitive Science, University of Helsinki, Helsinki, Finland
| | - Samuel Tuhkanen
- Cognitive Science, University of Helsinki, Helsinki, Finland
| | - Lauri Oksama
- Human Performance Division, Finnish Defence Research Agency, Finland
| | - Otto Lappi
- Cognitive Science, University of Helsinki, Helsinki, Finland; Traffic Research Unit, University of Helsinki, Helsinki, Finland
| |
Collapse
|
25
|
Kulke L, Pasqualette L. Emotional content influences eye-movements under natural but not under instructed conditions. Cogn Emot 2021; 36:332-344. [PMID: 34886742 DOI: 10.1080/02699931.2021.2009446] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
In everyday life, people can freely decide if and where they would like to move their attention and gaze, often influenced by physical and emotional salience of stimuli. However, many laboratory paradigms explicitly instruct participants when and how to move their eyes, leading to unnatural instructed eye-movements. The current preregistered study compared eye-movements to peripherally appearing faces with happy, angry and neutral expressions under natural and instructed conditions. Participants reliably moved their eyes towards peripheral faces, even when they were not instructed to do so; however, eye-movements were significantly slower under natural than under instructed conditions. Competing central stimuli decelerated eye-movements independently of instructions. Unexpectedly, the emotional salience only affected eye-movements under natural conditions, with faster saccades towards emotional than towards neutral faces. No effects of emotional expression occurred when participants were instructed to move their eyes. The study shows that natural eye-movements significantly differ from instructed eye-movements and emotion-driven attention effects are reduced when participants are artificially instructed to move their eyes, suggesting that research should investigate eye-movements under natural conditions.
Collapse
Affiliation(s)
- Louisa Kulke
- Neurocognitive Developmental Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Laura Pasqualette
- Neurocognitive Developmental Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| |
Collapse
|
26
|
Arthur T, Harris DJ. Predictive eye movements are adjusted in a Bayes-optimal fashion in response to unexpectedly changing environmental probabilities. Cortex 2021; 145:212-225. [PMID: 34749190 DOI: 10.1016/j.cortex.2021.09.017] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Revised: 08/18/2021] [Accepted: 09/27/2021] [Indexed: 11/30/2022]
Abstract
This study examined the application of active inference to dynamic visuomotor control. Active inference proposes that actions are dynamically planned according to uncertainty about sensory information, prior expectations, and the environment, with motor adjustments serving to minimise future prediction errors. We investigated whether predictive gaze behaviours are indeed adjusted in this Bayes-optimal fashion during a virtual racquetball task. In this task, participants intercepted bouncing balls with varying levels of elasticity, under conditions of higher or lower environmental volatility. Participants' gaze patterns differed between stable and volatile conditions in a manner consistent with generative models of Bayes-optimal behaviour. Partially observable Markov models also revealed an increased rate of associative learning in response to unpredictable shifts in environmental probabilities, although there was no overall effect of volatility on this parameter. Findings extend active inference frameworks into complex and unconstrained visuomotor tasks and present important implications for a neurocomputational understanding of the visual guidance of action.
Collapse
Affiliation(s)
- Tom Arthur
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, Exeter, EX1 2LU, UK; Centre for Applied Autism Research, Department of Psychology, University of Bath, Bath, BA2 7AY, UK
| | - David J Harris
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, Exeter, EX1 2LU, UK.
| |
Collapse
|
27
|
Smith ME, Loschky LC, Bailey HR. Knowledge guides attention to goal-relevant information in older adults. COGNITIVE RESEARCH-PRINCIPLES AND IMPLICATIONS 2021; 6:56. [PMID: 34406505 PMCID: PMC8374018 DOI: 10.1186/s41235-021-00321-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Accepted: 07/31/2021] [Indexed: 11/18/2022]
Abstract
How does viewers’ knowledge guide their attention while they watch everyday events, how does it affect their memory, and does it change with age? Older adults have diminished episodic memory for everyday events, but intact semantic knowledge. Indeed, research suggests that older adults may rely on their semantic memory to offset impairments in episodic memory, and when relevant knowledge is lacking, older adults’ memory can suffer. Yet, the mechanism by which prior knowledge guides attentional selection when watching dynamic activity is unclear. To address this, we studied the influence of knowledge on attention and memory for everyday events in young and older adults by tracking their eyes while they watched videos. The videos depicted activities that older adults perform more frequently than young adults (balancing a checkbook, planting flowers) or activities that young adults perform more frequently than older adults (installing a printer, setting up a video game). Participants completed free recall, recognition, and order memory tests after each video. We found age-related memory deficits when older adults had little knowledge of the activities, but memory did not differ between age groups when older adults had relevant knowledge and experience with the activities. Critically, results showed that knowledge influenced where viewers fixated when watching the videos. Older adults fixated less goal-relevant information compared to young adults when watching young adult activities, but they fixated goal-relevant information similarly to young adults, when watching more older adult activities. Finally, results showed that fixating goal-relevant information predicted free recall of the everyday activities for both age groups. Thus, older adults may use relevant knowledge to more effectively infer the goals of actors, which guides their attention to goal-relevant actions, thus improving their episodic memory for everyday activities.
Collapse
Affiliation(s)
- Maverick E Smith
- Department of Psychological Sciences, Kansas State University, 471 Bluemont Hall, 1100 Mid-campus Dr., Manhattan, KS, 66506, USA.
| | - Lester C Loschky
- Department of Psychological Sciences, Kansas State University, 471 Bluemont Hall, 1100 Mid-campus Dr., Manhattan, KS, 66506, USA
| | - Heather R Bailey
- Department of Psychological Sciences, Kansas State University, 471 Bluemont Hall, 1100 Mid-campus Dr., Manhattan, KS, 66506, USA
| |
Collapse
|
28
|
Gregory SEA. Investigating facilitatory versus inhibitory effects of dynamic social and non-social cues on attention in a realistic space. PSYCHOLOGICAL RESEARCH 2021; 86:1578-1590. [PMID: 34374844 PMCID: PMC9177496 DOI: 10.1007/s00426-021-01574-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2021] [Accepted: 07/29/2021] [Indexed: 11/25/2022]
Abstract
This study aimed to investigate the facilitatory versus inhibitory effects of dynamic non-predictive central cues presented in a realistic environment. Realistic human-avatars initiated eye contact and then dynamically looked to the left, right or centre of a table. A moving stick served as a non-social control cue and participants localised (Experiment 1) or discriminated (Experiment 2) a contextually relevant target (teapot/teacup). The cues' movement took 500 ms and stimulus onset asynchronies (SOA, 150 ms/300 ms/500 ms/1000 ms) were measured from movement initiation. Similar cuing effects were seen for the social avatar and non-social stick cue across tasks. Results showed facilitatory processes without inhibition, though there was some variation by SOA and task. This is the first time facilitatory versus inhibitory processes have been directly investigated where eye contact is initiated prior to gaze shift. These dynamic stimuli allow a better understanding of how attention might be cued in more realistic environments.
Collapse
Affiliation(s)
- Samantha E A Gregory
- Aston Institute of Health and Neurodevelopment, Aston University, Birmingham, B4 7ET, UK.
| |
Collapse
|
29
|
Saurels BW, Hohaia W, Yarrow K, Johnston A, Arnold DH. Visual predictions, neural oscillations and naïve physics. Sci Rep 2021; 11:16127. [PMID: 34373486 PMCID: PMC8352981 DOI: 10.1038/s41598-021-95295-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2020] [Accepted: 06/29/2021] [Indexed: 11/09/2022] Open
Abstract
Prediction is a core function of the human visual system. Contemporary research suggests the brain builds predictive internal models of the world to facilitate interactions with our dynamic environment. Here, we wanted to examine the behavioural and neurological consequences of disrupting a core property of people's internal models, using naturalistic stimuli. We had people view videos of basketball and asked them to track the moving ball and predict jump shot outcomes, all while we recorded eye movements and brain activity. To disrupt people's predictive internal models, we inverted footage on half the trials, so dynamics were inconsistent with how movements should be shaped by gravity. When viewing upright videos people were better at predicting shot outcomes, at tracking the ball position, and they had enhanced alpha-band oscillatory activity in occipital brain regions. The advantage for predicting upright shot outcomes scaled with improvements in ball tracking and occipital alpha-band activity. Occipital alpha-band activity has been linked to selective attention and spatially-mapped inhibitions of visual brain activity. We propose that when people have a more accurate predictive model of the environment, they can more easily parse what is relevant, allowing them to better target irrelevant positions for suppression—resulting in both better predictive performance and in neural markers of inhibited information processing.
Affiliation(s)
- Blake W Saurels
- School of Psychology, The University of Queensland, Brisbane, Australia.
- Wiremu Hohaia
- School of Psychology, The University of Queensland, Brisbane, Australia
- Kielan Yarrow
- Department of Psychology, City, University of London, London, UK
- Alan Johnston
- School of Psychology, University of Nottingham, Nottingham, UK
- Derek H Arnold
- School of Psychology, The University of Queensland, Brisbane, Australia
30
Lum JAG, Clark GM. Implicit manual and oculomotor sequence learning in developmental language disorder. Dev Sci 2021; 25:e13156. PMID: 34240500. DOI: 10.1111/desc.13156.
Abstract
Procedural memory functioning in developmental language disorder (DLD) has largely been investigated by examining implicit sequence learning by the manual motor system. This study examined whether poor sequence learning in DLD is present in the oculomotor domain. Twenty children with DLD and 20 age-matched typically developing (TD) children were presented with a serial reaction time (SRT) task. On the task, a visual stimulus repeatedly appears in different positions on a computer display which prompts a manual response. The children were unaware that on the first three blocks and final block of trials, the visual stimulus followed a sequence. On the fourth block, the stimulus appeared in random positions. Manual reaction times (RT) and saccadic amplitudes were recorded, which assessed sequence learning in the manual and oculomotor domains, respectively. Manual RT were sensitive to sequence learning for the TD group, but not the DLD group. For the TD group, manual RT increased when the random block was presented. This was not the case for the DLD group. In the oculomotor domain, sequence learning was present in both groups. Specifically, sequence learning was found to modulate saccadic amplitudes resulting in both DLD and TD children being able to anticipate the location of the visual stimulus. Overall, the study indicates that not all aspects of the procedural memory system are equally impaired in DLD.
Affiliation(s)
- Jarrad A G Lum
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Burwood, Victoria, Australia
- Gillian M Clark
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Burwood, Victoria, Australia
31
Abstract
SIGNIFICANCE: After a 30-year gap, several studies on head and eye movements and gaze tracking in baseball batting have been performed in the last decade. These baseball studies may lead to training protocols for batting. Here we review these studies and compare the tracking behaviors with those in other sports. Baseball batters are often instructed to "keep your eye on the ball." Until recently, the evidence regarding whether batters follow this instruction and if there are benefits to following this instruction was limited. Baseball batting studies demonstrate that batters tend to move the head more than the eyes in the direction of the ball at least until a saccade occurs. Foveal gaze tracking is often maintained on the ball through the early portion of the pitch, so it can be said that baseball batters do keep the eyes on the ball. While batters place gaze at or near the point of bat-ball contact, the way this is accomplished varies. In some studies, foveal gaze tracking continues late in the pitch trajectory, whereas in other studies, anticipatory saccades occur. The relative advantages of these discrepant gaze strategies on perceptual processing and motor planning speed and accuracy are discussed, and other variables that may influence anticipatory saccades including the predictability of the pitch and the level of batter expertise are described. Further studies involving larger groups with different levels of expertise under game conditions are required to determine which gaze tracking strategies are most beneficial for baseball batting.
32
Drewes J, Feder S, Einhäuser W. Gaze During Locomotion in Virtual Reality and the Real World. Front Neurosci 2021; 15:656913. PMID: 34108857; PMCID: PMC8180583. DOI: 10.3389/fnins.2021.656913.
Abstract
How vision guides gaze in realistic settings has been researched for decades. Human gaze behavior is typically measured in laboratory settings that are well controlled but feature-reduced and movement-constrained, in sharp contrast to real-life gaze control that combines eye, head, and body movements. Previous real-world research has shown environmental factors such as terrain difficulty to affect gaze; however, real-world settings are difficult to control or replicate. Virtual reality (VR) offers the experimental control of a laboratory, yet approximates freedom and visual complexity of the real world (RW). We measured gaze data in 8 healthy young adults during walking in the RW and simulated locomotion in VR. Participants walked along a pre-defined path inside an office building, which included different terrains such as long corridors and flights of stairs. In VR, participants followed the same path in a detailed virtual reconstruction of the building. We devised a novel hybrid control strategy for movement in VR: participants did not actually translate: forward movements were controlled by a hand-held device, rotational movements were executed physically and transferred to the VR. We found significant effects of terrain type (flat corridor, staircase up, and staircase down) on gaze direction, on the spatial spread of gaze direction, and on the angular distribution of gaze-direction changes. The factor world (RW and VR) affected the angular distribution of gaze-direction changes, saccade frequency, and head-centered vertical gaze direction. The latter effect vanished when referencing gaze to a world-fixed coordinate system, and was likely due to specifics of headset placement, which cannot confound any other analyzed measure. Importantly, we did not observe a significant interaction between the factors world and terrain for any of the tested measures. This indicates that differences between terrain types are not modulated by the world. The overall dwell time on navigational markers did not differ between worlds. The similar dependence of gaze behavior on terrain in the RW and in VR indicates that our VR captures real-world constraints remarkably well. High-fidelity VR combined with naturalistic movement control therefore has the potential to narrow the gap between the experimental control of a lab and ecologically valid settings.
Affiliation(s)
- Jan Drewes
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
- Physics of Cognition Group, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Sascha Feder
- Cognitive Systems Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Wolfgang Einhäuser
- Physics of Cognition Group, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
33
Goettker A. Retinal error signals and fluctuations in eye velocity influence oculomotor behavior in subsequent trials. J Vis 2021; 21:28. PMID: 34036299; PMCID: PMC8164369. DOI: 10.1167/jov.21.5.28.
Abstract
The oculomotor system makes use of an integration of previous stimulus velocities (the prior) and current sensory inputs to adjust initial eye speeds. The present study extended this research by investigating the roles of different retinal or extra-retinal signals for this process. To test for this, participants viewed movement sequences that all ended with the same test trial. Earlier in the sequence, the prior was manipulated by presenting targets that either had different velocities, different starting positions, or target movements designed to elicit differential oculomotor behavior (tracked with or without additional corrective saccades). Additionally, these prior targets could vary in terms of contrast to manipulate reliability. When the velocity of prior trials differed from test trials, the reliability-weighted integration of prior information was replicated. When the prior trials differed in starting position, significant effects on subsequent oculomotor behavior were only observed for the reliable target. Although there were also differences in eye velocity across the different manipulations, they could not explain the observed reliability-weighted integration. When comparing the same physical prior trials but tracked with additional corrective saccades, the eye velocity in the test trial also differed systematically (slower for forward saccades, and faster for backward saccades). The direction of the observed effect contradicts the expectations based on perceived speed and eye velocity, but can be predicted by a combination of retinal velocity and position error signals. Together, these results suggest that general fluctuations in eye velocity as well as retinal error signals are related to oculomotor behavior in subsequent trials.
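The reliability-weighted integration described in this abstract corresponds to standard inverse-variance weighting of a prior estimate and a current sensory estimate. A minimal sketch, with function name and all numbers illustrative rather than taken from the paper:

```python
def reliability_weighted_estimate(prior, prior_var, sensory, sensory_var):
    """Combine a prior velocity estimate with the current sensory input,
    weighting each by its reliability (inverse variance).
    (Illustrative sketch; not the paper's fitted model.)"""
    w_prior = (1.0 / prior_var) / (1.0 / prior_var + 1.0 / sensory_var)
    return w_prior * prior + (1.0 - w_prior) * sensory

# An unreliable (e.g. low-contrast) sensory target pulls the estimate
# toward the prior: w_prior = 1 / (1 + 0.25) = 0.8
est = reliability_weighted_estimate(prior=10.0, prior_var=1.0,
                                    sensory=20.0, sensory_var=4.0)
# est = 0.8 * 10 + 0.2 * 20 = 12.0
```

Lowering `sensory_var` (a more reliable, high-contrast target) shifts the weight toward the current input, mirroring the contrast-based reliability manipulation the abstract describes.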
34
35
Fooken J, Kreyenmeier P, Spering M. The role of eye movements in manual interception: A mini-review. Vision Res 2021; 183:81-90. PMID: 33743442. DOI: 10.1016/j.visres.2021.02.007.
Abstract
When we catch a moving object in mid-flight, our eyes and hands are directed toward the object. Yet, the functional role of eye movements in guiding interceptive hand movements is not yet well understood. This review synthesizes emergent views on the importance of eye movements during manual interception with an emphasis on laboratory studies published since 2015. We discuss the role of eye movements in forming visual predictions about a moving object, and for enhancing the accuracy of interceptive hand movements through feedforward (extraretinal) and feedback (retinal) signals. We conclude by proposing a framework that defines the role of human eye movements for manual interception accuracy as a function of visual certainty and object motion predictability.
Affiliation(s)
- Jolande Fooken
- Department of Psychology and Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada.
- Philipp Kreyenmeier
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada.
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada; Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, Canada; Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, Canada
36
Yurkovic JR, Lisandrelli G, Shaffer RC, Dominick KC, Pedapati EV, Erickson CA, Kennedy DP, Yu C. Using head-mounted eye tracking to examine visual and manual exploration during naturalistic toy play in children with and without autism spectrum disorder. Sci Rep 2021; 11:3578. PMID: 33574367; PMCID: PMC7878779. DOI: 10.1038/s41598-021-81102-0.
Abstract
Multimodal exploration of objects during toy play is important for a child's development and is suggested to be abnormal in children with autism spectrum disorder (ASD) due to either atypical attention or atypical action. However, little is known about how children with ASD coordinate their visual attention and manual actions during toy play. The current study aims to understand if and in what ways children with ASD generate exploratory behaviors to toys in natural, unconstrained contexts by utilizing head-mounted eye tracking to quantify moment-by-moment attention. We found no differences in how 24- to 48-mo children with and without ASD distribute their visual attention, generate manual action, or coordinate their visual and manual behaviors during toy play with a parent. Our findings suggest an intact ability and willingness of children with ASD to explore toys and suggest that context is important when studying child behavior.
Affiliation(s)
- Julia R Yurkovic
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, 47401, USA.
- Grace Lisandrelli
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, 47401, USA
- Rebecca C Shaffer
- Department of Pediatrics, Cincinnati Children's Hospital, Cincinnati, OH, 45229, USA
- School of Medicine, University of Cincinnati, Cincinnati, OH, 45229, USA
- Kelli C Dominick
- Department of Psychiatry and Behavioral Neuroscience, Cincinnati Children's Hospital, Cincinnati, OH, 45229, USA
- School of Medicine, University of Cincinnati, Cincinnati, OH, 45229, USA
- Ernest V Pedapati
- Department of Psychiatry and Behavioral Neuroscience, Cincinnati Children's Hospital, Cincinnati, OH, 45229, USA
- School of Medicine, University of Cincinnati, Cincinnati, OH, 45229, USA
- Craig A Erickson
- Department of Psychiatry and Behavioral Neuroscience, Cincinnati Children's Hospital, Cincinnati, OH, 45229, USA
- School of Medicine, University of Cincinnati, Cincinnati, OH, 45229, USA
- Daniel P Kennedy
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, 47401, USA.
- Chen Yu
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, 47401, USA.
- Department of Psychological and Brain Sciences, University of Texas at Austin, Austin, Texas, 78712, USA.
37
Bakst L, McGuire JT. Eye movements reflect adaptive predictions and predictive precision. J Exp Psychol Gen 2020; 150:915-929. PMID: 33048566. DOI: 10.1037/xge0000977.
Abstract
Successful decision-making depends on the ability to form predictions about uncertain future events. Existing evidence suggests predictive representations are not limited to point estimates but also include information about the associated level of predictive uncertainty. Estimates of predictive uncertainty have an important role in governing the rate at which beliefs are updated in response to new observations. It is not yet known, however, whether the same form of uncertainty-modulated learning occurs naturally and spontaneously when there is no task requirement to express predictions explicitly. Here, we used a gaze-based predictive inference paradigm to show that (a) predictive inference manifested in spontaneous gaze dynamics, (b) feedback-driven updating of spontaneous gaze-based predictions reflected adaptation to environmental statistics, and (c) anticipatory gaze variability tracked predictive uncertainty in an event-by-event manner. Our results demonstrate that sophisticated predictive inference can occur spontaneously and that oculomotor behavior can provide a multidimensional readout of internal predictive beliefs. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
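The uncertainty-modulated updating this abstract refers to can be sketched as a delta rule whose learning rate grows with predictive uncertainty. Everything below (names, the linear scaling of the learning rate) is an illustrative assumption, not the authors' fitted model:

```python
def update_prediction(prediction, observation, uncertainty, base_lr=0.1):
    """Delta-rule update: beliefs shift toward new observations faster
    when predictive uncertainty is high (uncertainty in [0, 1]).
    (Illustrative sketch; not the paper's model.)"""
    lr = base_lr + (1.0 - base_lr) * uncertainty  # lr in [base_lr, 1]
    return prediction + lr * (observation - prediction)

# High uncertainty -> near-complete update; low uncertainty -> small correction.
certain = update_prediction(0.0, 10.0, uncertainty=0.0)    # 1.0
uncertain = update_prediction(0.0, 10.0, uncertainty=1.0)  # 10.0
```

In the gaze-based paradigm, `uncertainty` would track the environment's recent volatility, so the same surprising observation produces a large belief shift in a volatile context and a small one in a stable context.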
Affiliation(s)
- Leah Bakst
- Department of Psychological and Brain Sciences, Boston University
- Joseph T McGuire
- Department of Psychological and Brain Sciences, Boston University
38
Papinutto M, Lao J, Lalanne D, Caldara R. Watchers do not follow the eye movements of Walkers. Vision Res 2020; 176:130-140. PMID: 32882595. DOI: 10.1016/j.visres.2020.08.001.
Abstract
Eye movements are a functional signature of how the visual system effectively decodes and adapts to the environment. However, scientific knowledge in eye movements mostly arises from studies conducted in laboratories, with well-controlled stimuli presented in constrained unnatural settings. Only a few studies have attempted to directly compare and assess whether eye movement data acquired in the real world generalize with those in laboratory settings, with same visual inputs. However, none of these studies controlled for both the auditory signals typical of real-world settings and the top-down task effects across conditions, leaving this question unresolved. To minimize this inherent gap across conditions, we compared the eye movements recorded from observers during ecological spatial navigation in the wild (the Walkers) with those recorded in laboratory (the Watchers) on the same visual and auditory inputs, with both groups performing the very same active cognitive task. We derived robust data-driven statistical saliency and motion maps. The Walkers and Watchers differed in terms of eye movement characteristics: fixation number and duration, saccade amplitude. The Watchers relied significantly more on saliency and motion than the Walkers. Interestingly, both groups exhibited similar fixation patterns towards social agents and objects. Altogether, our data show that eye movements patterns obtained in laboratory do not fully generalize to real world, even when task and auditory information is controlled. These observations invite to caution when generalizing the eye movements obtained in laboratory with those of ecological spatial navigation.
Affiliation(s)
- M Papinutto
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Switzerland; Human-IST Institute, Department of Informatics, University of Fribourg, Switzerland.
- J Lao
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Switzerland
- D Lalanne
- Human-IST Institute, Department of Informatics, University of Fribourg, Switzerland
- R Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Switzerland
39
O'Connell MN, Barczak A, McGinnis T, Mackin K, Mowery T, Schroeder CE, Lakatos P. The Role of Motor and Environmental Visual Rhythms in Structuring Auditory Cortical Excitability. iScience 2020; 23:101374. PMID: 32738615; PMCID: PMC7394914. DOI: 10.1016/j.isci.2020.101374.
Abstract
Previous studies indicate that motor sampling patterns modulate neuronal excitability in sensory brain regions by entraining brain rhythms, a process termed motor-initiated entrainment. In addition, rhythms of the external environment are also capable of entraining brain rhythms. Our first goal was to investigate the properties of motor-initiated entrainment in the auditory system using a prominent visual motor sampling pattern in primates, saccades. Second, we wanted to determine whether/how motor-initiated entrainment interacts with visual environmental entrainment. We examined laminar profiles of neuronal ensemble activity in primary auditory cortex and found that whereas motor-initiated entrainment has a suppressive effect, visual environmental entrainment has an enhancive effect. We also found that these processes are temporally coupled, and their temporal relationship ensures that their effect on excitability is complementary rather than interfering. Altogether, our results demonstrate that motor and sensory systems continuously interact in orchestrating the brain's context for the optimal sampling of our multisensory environment.
Affiliation(s)
- Monica N O'Connell
- Translational Neuroscience Division, Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA.
- Annamaria Barczak
- Translational Neuroscience Division, Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA
- Tammy McGinnis
- Translational Neuroscience Division, Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA
- Kieran Mackin
- Translational Neuroscience Division, Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA
- Todd Mowery
- Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA
- Charles E Schroeder
- Translational Neuroscience Division, Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA; Departments of Neurological Surgery and Psychiatry, Columbia University College of Physicians and Surgeons, New York, NY 10032, USA
- Peter Lakatos
- Translational Neuroscience Division, Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA; Department of Psychiatry, New York University School of Medicine, New York, NY 10016, USA.
40
Kishita Y, Ueda H, Kashino M. Temporally Coupled Coordination of Eye and Body Movements in Baseball Batting for a Wide Range of Ball Speeds. Front Sports Act Living 2020; 2:64. PMID: 33345055; PMCID: PMC7739824. DOI: 10.3389/fspor.2020.00064.
Abstract
We investigated the visuomotor strategies of baseball batting, in particular, the relationship between eye and body (head and hip) movements during batting for a wide range of ball speeds. Nine college baseball players participated in the experiment and hit balls projected by a pitching machine operating at four different ball speeds (80, 100, 120, 140 km/h). Eye movements were measured with a wearable eye tracker, and body movements were measured with an optical motion capture system. In the early period of the ball's flight, batters foveated the ball with overshooting head movements in the direction of the ball's flight while compensating for the overshooting head movements with eye movements for the two slower ball speeds (80 and 100 km/h) and only head rotations for the two faster ball speeds (120 and 140 km/h). After that, batters made a predictive saccade and a quick head rotation to the future ball position before the angular velocity of the ball drastically increased. We also found that regardless of the ball speed, the onsets of the predictive saccade and the quick head movement were temporally aligned with the bat-ball contact and rotation of the hip (swing motion), but were not correlated with the elapsed time from the ball's release or the ball's location. These results indicate that the gaze movements in baseball batting are not solely driven by external visual information (ball position or velocity) but are determined in relation to other body movements.
Affiliation(s)
- Yuki Kishita
- Department of Information and Communications Engineering, School of Engineering, Tokyo Institute of Technology, Tokyo, Japan
- Hiroshi Ueda
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Co., Kanagawa, Japan
- Makio Kashino
- Department of Information and Communications Engineering, School of Engineering, Tokyo Institute of Technology, Tokyo, Japan
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Co., Kanagawa, Japan
41
Mann DL, Nakamoto H, Logt N, Sikkink L, Brenner E. Predictive eye movements when hitting a bouncing ball. J Vis 2020; 19:28. PMID: 31891654. DOI: 10.1167/19.14.28.
Abstract
Predictive eye movements targeted toward the direction of ball bounce are a feature of gaze behavior when intercepting a target soon after it has bounced. However, there is conjecture over the exact location toward which these predictive eye movements are directed, and whether gaze during this period is moving or instead "lies in wait" for the ball to arrive. Therefore, the aim of this study was to further examine the location toward which predictive eye movements are made when hitting a bouncing ball. We tracked the eye and head movements of 23 novice participants who attempted to hit approaching tennis balls in a virtual environment. The balls differed in time from bounce to contact (300, 550, and 800 ms). Results revealed that participants made predictive saccades shortly before the ball bounced in two-thirds of all trials. These saccades were directed several degrees above the position at which the ball bounced, rather than toward the position at which it bounced or toward a position the ball would occupy shortly after the bounce. After the saccade, a separation of roles for the eyes and head ensured that gaze continued to change so that it was as close as possible to the ball soon after bounce. Smooth head movements were responsible for the immediate and ongoing changes in gaze to align it with the ball in the lateral direction, while eye movements realigned gaze with the ball in the vertical direction from approximately 100 ms after the ball changed its direction of motion after bounce. We conclude that predictive saccades direct gaze above the location at which the ball will bounce, presumably in order to facilitate ball tracking after the bounce.
Affiliation(s)
- David L Mann
- Department of Human Movement Sciences, Amsterdam Movement Sciences and Institute of Brain and Behavior Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- Hiroki Nakamoto
- Faculty of Physical Education, National Institute of Fitness and Sports in Kanoya, Kagoshima, Japan
- Nadine Logt
- Department of Human Movement Sciences, Amsterdam Movement Sciences and Institute of Brain and Behavior Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- Lieke Sikkink
- Department of Human Movement Sciences, Amsterdam Movement Sciences and Institute of Brain and Behavior Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- Eli Brenner
- Department of Human Movement Sciences, Amsterdam Movement Sciences and Institute of Brain and Behavior Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
42
Karimpur H, Kurz J, Fiehler K. The role of perception and action on the use of allocentric information in a large-scale virtual environment. Exp Brain Res 2020; 238:1813-1826. PMID: 32500297; PMCID: PMC7438369. DOI: 10.1007/s00221-020-05839-2.
Abstract
In everyday life, our brain constantly builds spatial representations of the objects surrounding us. Many studies have investigated the nature of these spatial representations. It is well established that we use allocentric information in real-time and memory-guided movements. Most studies relied on small-scale and static experiments, leaving it unclear whether similar paradigms yield the same results on a larger scale using dynamic objects. We created a virtual reality task that required participants to encode the landing position of a virtual ball thrown by an avatar. Encoding differed in the nature of the task in that it was either purely perceptual (“view where the ball landed while standing still”—Experiment 1) or involved an action (“intercept the ball with the foot just before it lands”—Experiment 2). After encoding, participants were asked to place a real ball at the remembered landing position in the virtual scene. In some trials, we subtly shifted either the thrower or the midfield line on a soccer field to manipulate allocentric coding of the ball’s landing position. In both experiments, we were able to replicate classic findings from small-scale experiments and to generalize these results to different encoding tasks (perception vs. action) and response modes (reaching vs. walking-and-placing). Moreover, we found that participants preferably encoded the ball relative to the thrower when they had to intercept the ball, suggesting that the use of allocentric information is determined by the encoding task by enhancing task-relevant allocentric information. Our findings indicate that results previously obtained from memory-guided reaching are not restricted to small-scale movements, but generalize to whole-body movements in large-scale dynamic scenes.
Affiliation(s)
- Harun Karimpur
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany.
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany.
- Johannes Kurz
- NemoLab-Neuromotor Behavior Laboratory, Justus Liebig University Giessen, Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
43
Binaee K, Diaz G. Movements of the eyes and hands are coordinated by a common predictive strategy. J Vis 2020; 19:3. PMID: 31585462. DOI: 10.1167/19.12.3.
Abstract
Although attempts to intercept a ball in flight are often preceded by predictive gaze behavior, the relationship between the predictive control of gaze and the effector is largely unexplored. The present study was designed to investigate the influence of the spatiotemporal demands of the task on a switch to the predictive control. Ten subjects immersed in a virtual environment attempted to intercept a ball that disappeared for 500 ms of its parabolic approach. The timing of the blank was varied through manipulation of the post-blank duration prior to the ball's arrival, and the shape of the trajectory was manipulated through variation of the pre-blank duration. Results reveal that the gaze movement trajectory during the blank was curvilinear, appropriately scaled to the curvature of the invisible moving ball, and the gaze vector was within 4° of the ball upon reappearance, despite 10° to 13° of ball movement. The timing of the blank did not influence the accuracy of predictive positioning of the paddle at the time of ball reappearance, indicated by the distance of the paddle relative to the ball's eventual passing location. However, analysis of trial-by-trial covariations revealed that, when the gaze vector more accurately predicted the ball's trajectory at reappearance, the paddle was also held closer to the ball's eventual passing location. This suggests that predictive strategies for paddle placement are more strongly mediated by the accuracy of gaze behavior than by the observed range of trajectories, or the timing of the blank.
Affiliation(s)
- Kamran Binaee: Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, USA
- Gabriel Diaz: Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, USA
44. Kothari R, Yang Z, Kanan C, Bailey R, Pelz JB, Diaz GJ. Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities. Sci Rep 2020; 10:2539. PMID: 32054884. PMCID: PMC7018838. DOI: 10.1038/s41598-020-59251-5.
Abstract
The study of gaze behavior has primarily been constrained to controlled environments in which the head is fixed. Consequently, little effort has been invested in the development of algorithms for the categorization of gaze events (e.g. fixations, pursuits, saccades, gaze shifts) while the head is free, and thus contributes to the velocity signals upon which classification algorithms typically operate. Our approach was to collect a novel, naturalistic, and multimodal dataset of eye + head movements when subjects performed everyday tasks while wearing a mobile eye tracker equipped with an inertial measurement unit and a 3D stereo camera. This Gaze-in-the-Wild dataset (GW) includes eye + head rotational velocities (deg/s), infrared eye images and scene imagery (RGB + D). A portion was labelled by coders into gaze motion events with a mutual agreement of 0.74 sample-based Cohen's κ. This labelled data was used to train and evaluate two machine learning algorithms, a Random Forest and a Recurrent Neural Network model, for gaze event classification. Assessment involved the application of established and novel event-based performance metrics. Classifiers achieve ~87% of human performance in detecting fixations and saccades but fall short (50%) on detecting pursuit movements. Moreover, pursuit classification is far worse in the absence of head movement information. A subsequent analysis of feature significance in our best performing model revealed that classification can be done using only the magnitudes of eye and head movements, potentially removing the need for calibration between the head and eye tracking systems. The GW dataset, trained classifiers and evaluation metrics will be made publicly available with the intention of facilitating growth in the emerging area of head-free gaze event classification.
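The classification problem the dataset targets can be sketched with a much simpler rule-based stand-in for the paper's Random Forest and RNN models: label each sample from speed magnitudes alone. The thresholds and the summing of eye and head speeds below are illustrative assumptions, not taken from the paper.

```python
# Rule-based sketch of gaze-event labelling from speed magnitudes (deg/s).
# A crude stand-in for trained classifiers; thresholds are hypothetical.

SACCADE_THRESH = 70.0   # combined gaze speed above this -> saccade
FIXATION_THRESH = 20.0  # combined gaze speed below this -> fixation

def label_sample(eye_speed, head_speed):
    # Use only magnitudes, echoing the feature analysis noted in the abstract.
    gaze_speed = eye_speed + head_speed
    if gaze_speed > SACCADE_THRESH:
        return "saccade"
    if gaze_speed < FIXATION_THRESH:
        # Slow gaze despite head motion implies compensatory stabilisation.
        return "fixation"
    # Intermediate, sustained gaze motion is the pursuit candidate --
    # the regime the abstract reports is hardest without head information.
    return "pursuit"

labels = [label_sample(e, h) for e, h in [(5.0, 2.0), (90.0, 10.0), (15.0, 20.0)]]
```

Dropping the `head_speed` term mimics the head-fixed case and illustrates why pursuit detection degrades without head movement information.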
Affiliation(s)
- Rakshit Kothari: Chester F. Carlson Center for Imaging Science, RIT, Rochester, NY, USA
- Zhizhuo Yang: Golisano College of Computing and Information Sciences, RIT, Rochester, NY, USA
- Christopher Kanan: Chester F. Carlson Center for Imaging Science, RIT, Rochester, NY, USA
- Reynold Bailey: Golisano College of Computing and Information Sciences, RIT, Rochester, NY, USA
- Jeff B Pelz: Chester F. Carlson Center for Imaging Science, RIT, Rochester, NY, USA
- Gabriel J Diaz: Chester F. Carlson Center for Imaging Science, RIT, Rochester, NY, USA
45. Kishita Y, Ueda H, Kashino M. Eye and Head Movements of Elite Baseball Players in Real Batting. Front Sports Act Living 2020; 2:3. PMID: 33344998. PMCID: PMC7739578. DOI: 10.3389/fspor.2020.00003.
Abstract
In baseball, batters swing in response to a ball moving at high speed within a limited amount of time (about 0.5 s). In order to make such movement possible, quick and accurate trajectory prediction followed by accurate swing motion with optimal body-eye coordination is considered essential, but the mechanisms involved are not clearly understood. The present study aims to clarify the strategies of eye and head movements adopted by elite baseball batters in actual game situations. In our experiment, six current professional baseball batters faced former professional baseball pitchers in a scenario close to a real game (i.e., without the batters informed about pitch type in advance). We measured eye movements with a wearable eye-tracker and head movements and bat trajectories with an optical motion capture system while the batters hit. In the eye movement measurements, contrary to previous studies, we found distinctive predictive saccades directed toward the predicted trajectory, of which the first saccades were initiated approximately 80–220 ms before impact for all participants. Predictive saccades were initiated significantly later when batters knew the types of pitch in advance compared to when they did not. We also found that the best three batters started predictive saccades significantly later and tended to have fewer gaze-ball errors than the other three batters. This result suggests that top batters spend slightly more time obtaining visual information by delaying the initiation of saccades. Furthermore, although all batters showed positive correlations between bat location and head direction at the time of impact, the better batters showed no correlation between bat location and gaze direction at that time. These results raise the possibility of differences in the coding process for the location of bat-ball contact; namely, that top batters might utilize head direction to encode impact locations.
Affiliation(s)
- Yuki Kishita: Department of Information and Communications Engineering, School of Engineering, Tokyo Institute of Technology, Tokyo, Japan
- Hiroshi Ueda: NTT Communication Science Laboratories, Nippon Telegraph and Telephone Co., Atsugi, Japan
- Makio Kashino: Department of Information and Communications Engineering, School of Engineering, Tokyo Institute of Technology, Tokyo, Japan; NTT Communication Science Laboratories, Nippon Telegraph and Telephone Co., Atsugi, Japan
46. Zhao H, Straub D, Rothkopf CA. The visual control of interceptive steering: How do people steer a car to intercept a moving target? J Vis 2019; 19:11. PMID: 31830240. DOI: 10.1167/19.14.11.
Abstract
The visually guided interception of a moving target is a fundamental visuomotor task that humans can do with ease. But how humans carry out this task is still unclear despite numerous empirical investigations. Measurements of angular variables during human interception have suggested three possible strategies: the pursuit strategy, the constant bearing angle strategy, and the constant target-heading strategy. Here, we review previous experimental paradigms and show that some of them do not allow one to distinguish among the three strategies. Based on this analysis, we devised a virtual driving task that allows investigating which of the three strategies best describes human interception. Crucially, we measured participants' steering, head, and gaze directions over time for three different target velocities. Subjects initially aligned head and gaze in the direction of the car's heading. When the target appeared, subjects centered their gaze on the target, pointed their head slightly off the heading direction toward the target, and maintained an approximately constant target-heading angle, whose magnitude varied across participants, while the target's bearing angle continuously changed. With a second condition, in which the target was partially occluded, we investigated several alternative hypotheses about participants' visual strategies. Overall, the results suggest that interceptive steering is best described by the constant target-heading strategy and that gaze and head are coordinated to continuously acquire visual information to achieve successful interception.
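The angular variables that separate the three candidate strategies can be made concrete. The bearing angle is the target's direction in a world-fixed frame; the target-heading angle is that same direction measured relative to the agent's current heading. A minimal sketch with hypothetical geometry:

```python
import math

def bearing_angle(agent_pos, target_pos):
    """Target direction in an allocentric (world-fixed) frame, in degrees."""
    dx = target_pos[0] - agent_pos[0]
    dy = target_pos[1] - agent_pos[1]
    return math.degrees(math.atan2(dy, dx))

def target_heading_angle(agent_pos, heading_deg, target_pos):
    """Target direction relative to the agent's heading, in degrees.
    The constant target-heading strategy holds this value fixed over time,
    while the constant bearing angle strategy holds bearing_angle fixed."""
    return bearing_angle(agent_pos, target_pos) - heading_deg

# Agent heading due 'north' (90 degrees), target ahead and to the right.
angle = target_heading_angle((0.0, 0.0), 90.0, (3.0, 4.0))
```

Under the pursuit strategy, by contrast, the agent heads directly at the target's current position, driving the target-heading angle toward zero rather than holding it at an arbitrary constant.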
Affiliation(s)
- Huaiyong Zhao: Institute of Psychology, Technical University Darmstadt, Darmstadt, Germany
- Dominik Straub: Institute of Psychology, Technical University Darmstadt, Darmstadt, Germany
- Constantin A Rothkopf: Institute of Psychology, Technical University Darmstadt, Darmstadt, Germany; Center for Cognitive Science, Technical University Darmstadt, Germany; Frankfurt Institute for Advanced Studies, Goethe University, Germany
47. Gregori V, Cognolato M, Saetta G, Atzori M, Gijsberts A. On the Visuomotor Behavior of Amputees and Able-Bodied People During Grasping. Front Bioeng Biotechnol 2019; 7:316. PMID: 31799243. PMCID: PMC6874164. DOI: 10.3389/fbioe.2019.00316.
Abstract
Visual attention is often predictive for future actions in humans. In manipulation tasks, the eyes tend to fixate an object of interest even before the reach-to-grasp is initiated. Some recent studies have proposed to exploit this anticipatory gaze behavior to improve the control of dexterous upper limb prostheses. This requires a detailed understanding of visuomotor coordination to determine in which temporal window gaze may provide helpful information. In this paper, we verify and quantify the gaze and motor behavior of 14 transradial amputees who were asked to grasp and manipulate common household objects with their missing limb. For comparison, we also include data from 30 able-bodied subjects who executed the same protocol with their right arm. The dataset contains gaze, first person video, angular velocities of the head, and electromyography and accelerometry of the forearm. To analyze the large amount of video, we developed a procedure based on recent deep learning methods to automatically detect and segment all objects of interest. This allowed us to accurately determine the pixel distances between the gaze point, the target object, and the limb in each individual frame. Our analysis shows a clear coordination between the eyes and the limb in the reach-to-grasp phase, confirming that both intact and amputated subjects precede the grasp with their eyes by more than 500 ms. Furthermore, we note that the gaze behavior of amputees was remarkably similar to that of the able-bodied control group, despite their inability to physically manipulate the objects.
Affiliation(s)
- Valentina Gregori: Department of Computer, Control, and Management Engineering, University of Rome La Sapienza, Rome, Italy; VANDAL Laboratory, Istituto Italiano di Tecnologia, Genoa, Italy
- Matteo Cognolato: Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland; Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Gianluca Saetta: Department of Neurology, University Hospital of Zurich, Zurich, Switzerland
- Manfredo Atzori: Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Arjan Gijsberts: VANDAL Laboratory, Istituto Italiano di Tecnologia, Genoa, Italy
48. Dynamic task observation: A gaze-mediated complement to traditional action observation treatment? Behav Brain Res 2019; 379:112351. PMID: 31726070. DOI: 10.1016/j.bbr.2019.112351.
Abstract
Action observation elicits changes in primary motor cortex known as motor resonance, a phenomenon thought to underpin several functions, including our ability to understand and imitate others' actions. Motor resonance is modulated not only by the observer's motor expertise, but also their gaze behaviour. The aim of the present study was to investigate motor resonance and eye movements during observation of a dynamic goal-directed action, relative to an everyday one - a reach-grasp-lift (RGL) action, commonly used in action-observation-based neurorehabilitation protocols. Skilled and novice golfers watched videos of a golf swing and an RGL action as we recorded MEPs from three forearm muscles; gaze behaviour was concurrently monitored. Corticospinal excitability increased during golf swing observation, but it was not modulated by expertise, relative to baseline; no such changes were observed for the RGL task. MEP amplitudes were related to participants' gaze behaviour: in the RGL condition, target viewing was associated with lower MEP amplitudes; in the golf condition, MEP amplitudes were positively correlated with time spent looking at the effector or neighbouring regions. Viewing of a dynamic action such as the golf swing may enhance action observation treatment, especially when concurrent physical practice is not possible.
49.
Affiliation(s)
- Katja Fiehler: Department of Psychology, Justus Liebig University, Giessen, Germany; Center for Mind, Brain, and Behavior (CMBB), Universities of Marburg and Giessen, Germany
- Eli Brenner: Department of Human Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands
- Miriam Spering: Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada
50. Tuhkanen S, Pekkanen J, Rinkkala P, Mole C, Wilkie RM, Lappi O. Humans Use Predictive Gaze Strategies to Target Waypoints for Steering. Sci Rep 2019; 9:8344. PMID: 31171850. PMCID: PMC6554351. DOI: 10.1038/s41598-019-44723-0.
Abstract
A major unresolved question in understanding visually guided locomotion in humans is whether actions are driven solely by the immediately available optical information (model-free online control mechanisms), or whether internal models have a role in anticipating the future path. We designed two experiments to investigate this issue, measuring spontaneous gaze behaviour while steering, and predictive gaze behaviour when future path information was withheld. In Experiment 1 participants (N = 15) steered along a winding path with rich optic flow: gaze patterns were consistent with tracking waypoints on the future path 1–3 s ahead. In Experiment 2, participants (N = 12) followed a path presented only in the form of visual waypoints located on an otherwise featureless ground plane. New waypoints appeared periodically every 0.75 s and predictably 2 s ahead, except in 25% of the cases the waypoint at the expected location was not displayed. In these cases, there were always other visible waypoints for the participant to fixate, yet participants continued to make saccades to the empty, but predictable, waypoint locations (in line with internal models of the future path guiding gaze fixations). This would not be expected based upon existing model-free online steering control models, and strongly points to a need for models of steering control to include mechanisms for predictive gaze control that support anticipatory path following behaviours.
Affiliation(s)
- Samuel Tuhkanen: Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland; TRUlab, University of Helsinki, Helsinki, Finland
- Jami Pekkanen: Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland; TRUlab, University of Helsinki, Helsinki, Finland
- Paavo Rinkkala: Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland; TRUlab, University of Helsinki, Helsinki, Finland
- Callum Mole: School of Psychology, University of Leeds, Leeds, UK
- Otto Lappi: Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland; TRUlab, University of Helsinki, Helsinki, Finland