1. Sinha O, Rosenquist T, Fedorshak A, Kpankpa J, Albenze E, Bonnet CT, Bertucco M, Kurtzer I, Singh T. Predictive posture stabilization before contact with moving objects: equivalence of smooth pursuit tracking and peripheral vision. J Neurophysiol 2024; 132:695-709. PMID: 39018017. DOI: 10.1152/jn.00158.2024.
Abstract
Postural stabilization is essential to effectively interact with our environment. Humans preemptively adjust their posture to counteract impending disturbances, such as those encountered during interactions with moving objects, a phenomenon known as anticipatory postural adjustments (APAs). APAs are thought to be influenced by predictive models that incorporate object motion via retinal motion and extraretinal signals. Building on our previous work that examined APAs in relation to the perceived momentum of moving objects, here we explored the impact of object motion within different visual field sectors on the human capacity to anticipate motion and prepare APAs for contact between virtual moving objects and the limb. Participants interacted with objects moving toward them under different gaze conditions. In one condition, participants fixated on either a central point (central fixation) or left-right of the moving object (peripheral fixation), whereas in another, they followed the moving object with smooth pursuit eye movements (SPEMs). We found that APAs had the smallest magnitude in the central fixation condition and that no notable differences in APAs were apparent between the SPEM and peripheral fixation conditions. This suggests that the visual system can accurately perceive motion of objects in peripheral vision for posture stabilization. Using Bayesian model averaging, we also evaluated the contribution of different gaze variables, such as eye velocity and gain (ratio of eye and object velocity), and showed that both eye velocity and gain signals were significant predictors of APAs. Taken together, our study underscores the roles of oculomotor signals in the modulation of APAs.
NEW & NOTEWORTHY We show that the human visuomotor system can detect motion in peripheral vision and make anticipatory adjustments to posture before contact with moving objects, just as effectively as when the eye movement system tracks those objects with smooth pursuit eye movements. These findings pave the way for research into how age-induced changes in spatial vision, eye movements, and motion perception could affect the control of limb movements and postural stability during motion-mediated interactions with objects.
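The gain variable in this abstract is defined as the ratio of eye velocity to object velocity. A minimal sketch of how such a pursuit-gain measure could be computed from position traces, assuming angles in degrees and a fixed sampling rate; the function and variable names are illustrative, not the authors' code:

```python
def pursuit_gain(eye_pos_deg, target_pos_deg, sample_rate_hz):
    """Pursuit gain: ratio of mean eye velocity to mean target velocity.

    Inputs are sequences of horizontal gaze/target angle in degrees,
    sampled at sample_rate_hz. A gain near 1 means the eye matches the
    target's velocity; a gain well below 1 means the eye lags it.
    """
    duration_s = (len(eye_pos_deg) - 1) / sample_rate_hz
    eye_vel = (eye_pos_deg[-1] - eye_pos_deg[0]) / duration_s        # deg/s
    target_vel = (target_pos_deg[-1] - target_pos_deg[0]) / duration_s  # deg/s
    return eye_vel / target_vel

# Example: target sweeps at 20 deg/s for 1 s; the eye tracks at 18 deg/s.
fs = 500  # Hz (illustrative sampling rate)
target = [20 * i / fs for i in range(fs + 1)]
eye = [18 * i / fs for i in range(fs + 1)]
print(pursuit_gain(eye, target, fs))  # 0.9
```

Under this definition, a fixation condition (eyes still) would yield a gain near 0, while accurate smooth pursuit approaches 1.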
Affiliation(s)
- Oindrila Sinha
- Department of Kinesiology, The Pennsylvania State University, University Park, Pennsylvania, United States
- Taylor Rosenquist
- Department of Kinesiology, The Pennsylvania State University, University Park, Pennsylvania, United States
- Alyssa Fedorshak
- Department of Kinesiology, The Pennsylvania State University, University Park, Pennsylvania, United States
- John Kpankpa
- Department of Kinesiology, The Pennsylvania State University, University Park, Pennsylvania, United States
- Eliza Albenze
- Department of Kinesiology, The Pennsylvania State University, University Park, Pennsylvania, United States
- Cédrick T Bonnet
- Univ. Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000 Lille, France
- Matteo Bertucco
- Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Verona, Italy
- Isaac Kurtzer
- Department of Biomedical Science, College of Osteopathic Medicine, New York Institute of Technology, New York City, New York, United States
- Tarkeshwar Singh
- Department of Kinesiology, The Pennsylvania State University, University Park, Pennsylvania, United States
2. Illamperuma NH, Fooken J. Towards a functional understanding of gaze in goal-directed action. J Neurophysiol 2024; 132:767-769. PMID: 39110515. DOI: 10.1152/jn.00342.2024.
3. Luabeya GN, Yan X, Freud E, Crawford JD. Influence of gaze, vision, and memory on hand kinematics in a placement task. J Neurophysiol 2024; 132:147-161. PMID: 38836297. DOI: 10.1152/jn.00362.2023.
Abstract
People usually reach for objects to place them in some position and orientation, but the placement component of this sequence is often ignored. For example, reaches are influenced by gaze position, visual feedback, and memory delays, but their influence on object placement is unclear. Here, we tested these factors in a task where participants placed and oriented a trapezoidal block against two-dimensional (2-D) visual templates displayed on a frontally located computer screen. In experiment 1, participants matched the block to three possible orientations: 0° (horizontal), +45° and -45°, with gaze fixated 10° to the left/right. The hand and template either remained illuminated (closed-loop), or visual feedback was removed (open-loop). Here, hand location consistently overshot the template relative to gaze, especially in the open-loop task; likewise, orientation was influenced by gaze position (depending on template orientation and visual feedback). In experiment 2, a memory delay was added, and participants sometimes performed saccades (toward, away from, or across the template). In this task, the influence of gaze on orientation vanished, but location errors were influenced by both template orientation and final gaze position. Contrary to our expectations, the previous saccade metrics also impacted placement overshoot. Overall, hand orientation was influenced by template orientation in a nonlinear fashion. These results demonstrate interactions between gaze and orientation signals in the planning and execution of hand placement and suggest different neural mechanisms for closed-loop, open-loop, and memory delay placement.
NEW & NOTEWORTHY Eye-hand coordination studies usually focus on object acquisition, but placement is equally important. We investigated how gaze position influences object placement toward a 2-D template with different levels of visual feedback. Like reach, placement overestimated goal location relative to gaze and was influenced by previous saccade metrics. Gaze also modulated hand orientation, depending on template orientation and level of visual feedback. Gaze influence was feedback-dependent, with location errors having no significant effect after a memory delay.
Affiliation(s)
- Gaelle N Luabeya
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Xiaogang Yan
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Erez Freud
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
- Department of Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada
- Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
4. Arthur T, Vine S, Wilson M, Harris D. The role of prediction and visual tracking strategies during manual interception: An exploration of individual differences. J Vis 2024; 24:4. PMID: 38842836. PMCID: PMC11160954. DOI: 10.1167/jov.24.6.4.
Abstract
The interception (or avoidance) of moving objects is a common component of various daily living tasks; however, it remains unclear whether precise alignment of foveal vision with a target is important for motor performance. Furthermore, there has also been little examination of individual differences in visual tracking strategy and the use of anticipatory gaze adjustments. We examined the importance of in-flight tracking and predictive visual behaviors using a virtual reality environment that required participants (n = 41) to intercept tennis balls projected from one of two possible locations. Here, we explored whether different tracking strategies spontaneously arose during the task, and which were most effective. Although indices of closer in-flight tracking (pursuit gain, tracking coherence, tracking lag, and saccades) were predictive of better interception performance, these relationships were rather weak. Anticipatory gaze shifts toward the correct release location of the ball provided no benefit for subsequent interception. Nonetheless, two interceptive strategies were evident: 1) early anticipation of the ball's onset location followed by attempts to closely track the ball in flight (i.e., predictive strategy); or 2) positioning gaze between possible onset locations and then using peripheral vision to locate the moving ball (i.e., a visual pivot strategy). Despite showing much poorer in-flight foveal tracking of the ball, participants adopting a visual pivot strategy performed slightly better in the task. Overall, these results indicate that precise alignment of the fovea with the target may not be critical for interception tasks, but that observers can adopt quite varied visual guidance approaches.
Affiliation(s)
- Tom Arthur
- School of Public Health and Sport Sciences, Medical School, University of Exeter, Exeter, EX1 2LU, UK
- Samuel Vine
- School of Public Health and Sport Sciences, Medical School, University of Exeter, Exeter, EX1 2LU, UK
- Mark Wilson
- School of Public Health and Sport Sciences, Medical School, University of Exeter, Exeter, EX1 2LU, UK
- David Harris
- School of Public Health and Sport Sciences, Medical School, University of Exeter, Exeter, EX1 2LU, UK
5. Kreyenmeier P, Spering M. A unifying framework for studying discrete and continuous human movements. J Neurophysiol 2024; 131:1112-1114. PMID: 38718413. DOI: 10.1152/jn.00186.2024.
Affiliation(s)
- Philipp Kreyenmeier
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Miriam Spering
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, British Columbia, Canada
- Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, British Columbia, Canada
- Edwin S.H. Leong Centre for Healthy Aging, University of British Columbia, Vancouver, British Columbia, Canada
6. Coudiere A, Danion FR. Eye-hand coordination all the way: from discrete to continuous hand movements. J Neurophysiol 2024; 131:652-667. PMID: 38381528. DOI: 10.1152/jn.00314.2023.
Abstract
The differentiation between continuous and discrete actions is key for behavioral neuroscience. Although many studies have characterized eye-hand coordination during discrete (e.g., reaching) and continuous (e.g., pursuit tracking) actions, all these studies were conducted separately, using different setups and participants. In addition, how eye-hand coordination might operate at the frontier between discrete and continuous movements remains unexplored. Here we filled these gaps by means of a task that could elicit different movement dynamics. Twenty-eight participants were asked to simultaneously track with their eyes and a joystick a visual target that followed an unpredictable trajectory and whose position was updated at different rates (from 1.5 to 240 Hz). This procedure allowed us to examine actions ranging from discrete point-to-point movements (low refresh rate) to continuous pursuit (high refresh rate). For comparison, we also tested a manual tracking condition with the eyes fixed and a pure eye tracking condition (hand fixed). The results showed an abrupt transition between discrete and continuous hand movements around 3 Hz contrasting with a smooth trade-off between fixations and smooth pursuit. Nevertheless, hand and eye tracking accuracy remained strongly correlated, with each of these depending on whether the other effector was recruited. Moreover, gaze-cursor distance and lag were smaller when eye and hand performed the task conjointly than separately. Altogether, despite some dissimilarities in eye and hand dynamics when transitioning between discrete and continuous movements, our results emphasize that eye-hand coordination continues to smoothly operate and support the notion of synergies across eye movement types.
NEW & NOTEWORTHY The differentiation between continuous and discrete actions is key for behavioral neuroscience. By using a visuomotor task in which we manipulate the target refresh rate to trigger different movement dynamics, we explored eye-hand coordination all the way from discrete to continuous actions. Despite abrupt changes in hand dynamics, eye-hand coordination continues to operate via a gradual trade-off between fixations and smooth pursuit, an observation confirming the notion of synergies across eye movement types.
Affiliation(s)
- Adrien Coudiere
- CNRS, Université de Poitiers, Université de Tours, CeRCA, Poitiers, France
- Frederic R Danion
- CNRS, Université de Poitiers, Université de Tours, CeRCA, Poitiers, France
7. Tolentino-Castro JW, Schroeger A, Cañal-Bruland R, Raab M. Increasing auditory intensity enhances temporal but deteriorates spatial accuracy in a virtual interception task. Exp Brain Res 2024. PMID: 38334793. DOI: 10.1007/s00221-024-06787-x.
Abstract
Humans are quite accurate and precise in interception performance. So far, it is still unclear what role auditory information plays in spatiotemporal accuracy and consistency during interception. In the current study, interception performance was measured as the spatiotemporal accuracy and consistency of when and where a virtual ball was intercepted on a visible line displayed on a screen based on auditory information alone. We predicted that participants would more accurately indicate when the ball would cross a target line than where it would cross the line, because human hearing is particularly sensitive to temporal parameters. In a within-subject design, we manipulated auditory intensity (52, 61, 70, 79, 88 dB) using a sound stimulus programmed to be perceived over the screen in an inverted C-shape trajectory. Results showed that the louder the sound, the better was temporal accuracy, but the worse was spatial accuracy. We argue that louder sounds increased attention toward auditory information when performing interception judgments. From a theoretical perspective of modality-specific interception behavior, we discuss how balls are intercepted and how, in practice, sound intensity may add to temporal accuracy and consistency.
Affiliation(s)
- J Walter Tolentino-Castro
- Department of Performance Psychology, Institute of Psychology, German Sport University Cologne, Am Sportpark Müngersdorf 6, 50933 Cologne, Germany
- Anna Schroeger
- Department for General Psychology, Justus Liebig University Giessen, Giessen, Germany
- Rouwen Cañal-Bruland
- Department for the Psychology of Human Movement and Sport, Institute of Sport Science, Friedrich Schiller University Jena, Jena, Germany
- Markus Raab
- Department of Performance Psychology, Institute of Psychology, German Sport University Cologne, Am Sportpark Müngersdorf 6, 50933 Cologne, Germany
- School of Applied Sciences, London South Bank University, London, England
8. Rubinstein JF, Singh M, Kowler E. Bayesian approaches to smooth pursuit of random dot kinematograms: effects of varying RDK noise and the predictability of RDK direction. J Neurophysiol 2024; 131:394-416. PMID: 38149327. DOI: 10.1152/jn.00116.2023.
Abstract
Smooth pursuit eye movements respond on the basis of both immediate and anticipated target motion, where anticipations may be derived from either memory or perceptual cues. To study the combined influence of both immediate sensory motion and anticipation, subjects pursued clear or noisy random dot kinematograms (RDKs) whose mean directions were chosen from Gaussian distributions with SDs = 10° (narrow prior) or 45° (wide prior). Pursuit directions were consistent with Bayesian theory in that transitions over time from dependence on the prior to near total dependence on immediate sensory motion (likelihood) took longer with the noisier RDKs and with the narrower, more reliable, prior. Results were fit to Bayesian models in which parameters representing the variability of the likelihood either were or were not constrained to be the same for both priors. The unconstrained model provided a statistically better fit, with the influence of the prior in the constrained model smaller than predicted from strict reliability-based weighting of prior and likelihood. Factors that may have contributed to this outcome include prior variability different from nominal values, low-level sensorimotor learning with the narrow prior, or departures of pursuit from strict adherence to reliability-based weighting. Although modifications of, or alternatives to, the normative Bayesian model will be required, these results, along with previous studies, suggest that Bayesian approaches are a promising framework to understand how pursuit combines immediate sensory motion, past history, and informative perceptual cues to accurately track the target motion that is most likely to occur in the immediate future.
NEW & NOTEWORTHY Smooth pursuit eye movements respond on the basis of anticipated, as well as immediate, target motions. Bayesian models using reliability-based weighting of previous (prior) and immediate target motions (likelihood) accounted for many, but not all, aspects of pursuit of clear and noisy random dot kinematograms with different levels of predictability. Bayesian approaches may solve the long-standing problem of how pursuit combines immediate sensory motion and anticipation of future motion to configure an effective response.
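The reliability-based weighting of prior and likelihood that this abstract tests has a standard closed form when both are Gaussian: each source is weighted by its inverse variance. A minimal sketch under that assumption (the function name and example numbers are illustrative, not the authors' model):

```python
def combine_gaussians(prior_mean, prior_sd, like_mean, like_sd):
    """Posterior mean and SD for a Gaussian prior combined with a
    Gaussian likelihood, each weighted by its reliability (1/variance)."""
    w_prior = 1.0 / prior_sd**2
    w_like = 1.0 / like_sd**2
    post_mean = (w_prior * prior_mean + w_like * like_mean) / (w_prior + w_like)
    post_sd = (w_prior + w_like) ** -0.5  # posterior is narrower than either cue
    return post_mean, post_sd

# Narrow prior over direction (SD = 10 deg, as in the study) combined with
# a noisy immediate motion estimate 40 deg away (SD = 30 deg, illustrative):
mean, sd = combine_gaussians(0.0, 10.0, 40.0, 30.0)
print(round(mean, 6))  # 4.0 -- the posterior stays close to the prior
```

With the study's wide prior (SD = 45°) instead, the same sensory estimate would dominate the posterior, which is the qualitative pattern reliability-based weighting predicts.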
Affiliation(s)
- Jason F Rubinstein
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
- Manish Singh
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
- Eileen Kowler
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
9. D'Aquino A, Frank C, Hagan JE, Schack T. Eye movements during motor imagery and execution reveal different visuomotor control strategies in manual interception. Psychophysiology 2023; 60:e14401. PMID: 37515410. DOI: 10.1111/psyp.14401.
Abstract
Previous research has investigated the degree of congruency in gaze metrics between action execution (AE) and motor imagery (MI) for similar manual tasks. Although eye movement dynamics seem to be limited to relatively simple actions toward static objects, there is little evidence of how gaze parameters change during imagery as a function of more dynamic spatial and temporal task demands. This study examined the similarities and differences in eye movements during AE and MI for an interception task. Twenty-four students were asked to either mentally simulate or physically intercept a moving target on a computer display. Smooth pursuit, saccades, and response time were compared between the two conditions. The results show that MI was characterized by higher smooth pursuit gain and duration while no meaningful differences were found in the other parameters. The findings indicate that eye movements during imagery are not simply a duplicate of what happens during actual performance. Instead, eye movements appear to vary as a function of the interaction between visuomotor control strategies and task demands.
Affiliation(s)
- Alessio D'Aquino
- Neurocognition and Action Biomechanics Group, Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Cornelia Frank
- Institute for Sport and Movement Science, Osnabrück University, Osnabrück, Germany
- John Elvis Hagan
- Neurocognition and Action Biomechanics Group, Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Thomas Schack
- Neurocognition and Action Biomechanics Group, Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Research Institute for Cognition and Robotics (CoR-Lab), Bielefeld University, Bielefeld, Germany
10. Fooken J, Baltaretu BR, Barany DA, Diaz G, Semrau JA, Singh T, Crawford JD. Perceptual-Cognitive Integration for Goal-Directed Action in Naturalistic Environments. J Neurosci 2023; 43:7511-7522. PMID: 37940592. PMCID: PMC10634571. DOI: 10.1523/jneurosci.1373-23.2023.
Abstract
Real-world actions require one to simultaneously perceive, think, and act on the surrounding world, requiring the integration of (bottom-up) sensory information and (top-down) cognitive and motor signals. Studying these processes involves the intellectual challenge of cutting across traditional neuroscience silos, and the technical challenge of recording data in uncontrolled natural environments. However, recent advances in techniques, such as neuroimaging, virtual reality, and motion tracking, allow one to address these issues in naturalistic environments for both healthy participants and clinical populations. In this review, we survey six topics in which naturalistic approaches have advanced both our fundamental understanding of brain function and how neurologic deficits influence goal-directed, coordinated action in naturalistic environments. The first part conveys fundamental neuroscience mechanisms related to visuospatial coding for action, adaptive eye-hand coordination, and visuomotor integration for manual interception. The second part discusses applications of such knowledge to neurologic deficits, specifically, steering in the presence of cortical blindness, impact of stroke on visual-proprioceptive integration, and impact of visual search and working memory deficits. This translational approach, extending knowledge from lab to rehab, provides new insights into the complex interplay between perceptual, motor, and cognitive control in naturalistic tasks that are relevant for both basic and clinical research.
Affiliation(s)
- Jolande Fooken
- Centre for Neuroscience, Queen's University, Kingston, Ontario K7L 3N6, Canada
- Bianca R Baltaretu
- Department of Psychology, Justus Liebig University, Giessen, 35394, Germany
- Deborah A Barany
- Department of Kinesiology, University of Georgia, and Augusta University/University of Georgia Medical Partnership, Athens, Georgia 30602
- Gabriel Diaz
- Center for Imaging Science, Rochester Institute of Technology, Rochester, New York 14623
- Jennifer A Semrau
- Department of Kinesiology and Applied Physiology, University of Delaware, Newark, Delaware 19713
- Tarkeshwar Singh
- Department of Kinesiology, Pennsylvania State University, University Park, Pennsylvania 16802
- J Douglas Crawford
- Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
11. de la Malla C, Goettker A. The effect of impaired velocity signals on goal-directed eye and hand movements. Sci Rep 2023; 13:13646. PMID: 37607970. PMCID: PMC10444871. DOI: 10.1038/s41598-023-40394-0.
Abstract
Information about position and velocity is essential to predict where moving targets will be in the future, and to accurately move towards them. But how are the two signals combined over time to complete goal-directed movements? We show that when velocity information is impaired due to using second-order motion stimuli, saccades directed towards moving targets land at positions where targets were ~100 ms before saccade initiation, but hand movements are accurate. Importantly, the longer latencies of hand movements allow for additional time to process the sensory information available. When increasing the period of time one sees the moving target before making the saccade, saccades become accurate. In line with that, hand movements with short latencies show higher curvature, indicating corrections based on an update of incoming sensory information. These results suggest that movements are controlled by an independent and evolving combination of sensory information about the target's position and velocity.
Affiliation(s)
- Cristina de la Malla
- Vision and Control of Action Group, Department of Cognition, Development, and Psychology of Education, Institute of Neurosciences, Universitat de Barcelona, Barcelona, Catalonia, Spain
- Alexander Goettker
- Justus Liebig Universität Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University, Giessen, Germany
12. Kreyenmeier P, Schroeger A, Cañal-Bruland R, Raab M, Spering M. Rapid Audiovisual Integration Guides Predictive Actions. eNeuro 2023; 10:ENEURO.0134-23.2023. PMID: 37591732. PMCID: PMC10464656. DOI: 10.1523/eneuro.0134-23.2023.
Abstract
Natural movements, such as catching a ball or capturing prey, typically involve multiple senses. Yet, laboratory studies on human movements commonly focus solely on vision and ignore sound. Here, we ask how visual and auditory signals are integrated to guide interceptive movements. Human observers tracked the brief launch of a simulated baseball, randomly paired with batting sounds of varying intensities, and made a quick pointing movement at the ball. Movement end points revealed systematic overestimation of target speed when the ball launch was paired with a loud versus a quiet sound, although sound was never informative. This effect was modulated by the availability of visual information; sounds biased interception when the visual presentation duration of the ball was short. Amplitude of the first catch-up saccade, occurring ∼125 ms after target launch, revealed early integration of audiovisual information for trajectory estimation. This sound-induced bias was reversed during later predictive saccades when more visual information was available. Our findings suggest that auditory and visual signals are integrated to guide interception and that this integration process must occur early at a neural site that receives auditory and visual signals within an ultrashort time span.
Affiliation(s)
- Philipp Kreyenmeier
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia V5Z 3N9, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia V6T 1Z2, Canada
- Anna Schroeger
- Department of Psychology, Justus Liebig University Giessen, 35390 Giessen, Germany
- Department for the Psychology of Human Movement and Sport, Friedrich Schiller University Jena, 07743 Jena, Germany
- Rouwen Cañal-Bruland
- Department for the Psychology of Human Movement and Sport, Friedrich Schiller University Jena, 07743 Jena, Germany
- Markus Raab
- Department of Performance Psychology, German Sport University Cologne, 50933 Cologne, Germany
- School of Applied Sciences, London South Bank University, London SE1 0AA, United Kingdom
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia V5Z 3N9, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia V6T 1Z2, Canada
- Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia V6T 1Z3, Canada
- Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada
13. Li N, Liu J, Xie Y, Ji W, Chen Z. Age-related decline of online visuomotor adaptation: a combined effect of deteriorations of motor anticipation and execution. Front Aging Neurosci 2023; 15:1147079. PMID: 37409009. PMCID: PMC10318141. DOI: 10.3389/fnagi.2023.1147079.
Abstract
The literature has established that the capability for visuomotor adaptation decreases with aging. However, the underlying mechanisms of this decline are yet to be fully understood. The current study addressed this issue by examining how aging affected visuomotor adaptation in a continuous manual tracking task with delayed visual feedback. To distinguish the separate contributions of declined motor anticipation and deteriorated motor execution to this age-related decline, we recorded and analyzed participants' manual tracking performance and their eye movements during tracking. Twenty-nine older people and twenty-three young adults (control group) participated in this experiment. The results showed that the age-related decline of visuomotor adaptation was strongly linked to degraded performance in predictive pursuit eye movements, indicating that the decline of motor anticipation with aging critically influenced the age-related decline of visuomotor adaptation. Additionally, deterioration of motor execution, measured by random error after controlling for the lag between target and cursor, was found to contribute independently to the decline of visuomotor adaptation. Taken together, these findings suggest that the age-related decline of visuomotor adaptation is a joint effect of declining motor anticipation and deteriorating motor execution with aging.
Affiliation(s)
- Na Li
- Shanghai Changning Mental Health Center, Shanghai, China
- Shanghai Key Laboratory of Brain Functional Genomics, Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Junsheng Liu
- Shanghai Changning Mental Health Center, Shanghai, China
- Shanghai Key Laboratory of Brain Functional Genomics, Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Yong Xie
- Key Laboratory of Space Active Opto-Electronics Technology, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai, China
- Weidong Ji
- Shanghai Changning Mental Health Center, Shanghai, China
- Zhongting Chen
- Shanghai Key Laboratory of Brain Functional Genomics, Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
14
Menceloglu M, Song JH. Motion duration is overestimated behind an occluder in action and perception tasks. J Vis 2023; 23:11. PMID: 37171804. PMCID: PMC10184779. DOI: 10.1167/jov.23.5.11.
Abstract
Motion estimation behind an occluder is a common task in situations like crossing the street or passing another car. People tend to overestimate the duration of an object's motion when it gets occluded for subsecond motion durations. Here, we explored (a) whether this bias depended on the type of interceptive action: discrete keypress versus continuous reach and (b) whether it was present in a perception task without an interceptive action. We used a prediction-motion task and presented a bar moving across the screen with a constant velocity that later became occluded. In the action task, participants stopped the occluded bar when they thought the bar reached the goal position via keypress or reach. They were more likely to stop the bar after it passed the goal position regardless of the action type, suggesting that the duration of occluded motion was overestimated (or its speed was underestimated). In the perception task, where participants judged whether a tone was presented before or after the bar reached the goal position, a similar bias was observed. In both tasks, the bias was near constant across motion durations and directions and grew over trials. We speculate that this robust bias may be due to a temporal illusion, Bayesian slow-motion prior, or the processing of the visible-occluded boundary crossing. Understanding its exact mechanism, the conditions on which it depends, and the relative roles of speed and time perception requires further research.
Affiliation(s)
- Melisa Menceloglu
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, RI, USA
- Joo-Hyun Song
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, RI, USA
- Carney Institute for Brain Science, Brown University, Providence, RI, USA
15
Mei Chow H, Spering M. Eye movements during optic flow perception. Vision Res 2023; 204:108164. PMID: 36566560. DOI: 10.1016/j.visres.2022.108164.
Abstract
Optic flow is an important visual cue for human perception and locomotion and naturally triggers eye movements. Here we investigate whether the perception of optic flow direction is limited or enhanced by eye movements. In Exp. 1, 23 human observers localized the focus of expansion (FOE) of an optic flow pattern; in Exp. 2, 18 observers had to detect brief visual changes at the FOE. Both tasks were completed during free viewing and fixation conditions while eye movements were recorded. Task difficulty was varied by manipulating the coherence of radial motion from the FOE (4 %-90 %). During free viewing, observers tracked the optic flow pattern with a combination of saccades and smooth eye movements. During fixation, observers nevertheless made small-scale eye movements. Despite differences in spatial scale, eye movements during free viewing and fixation were similarly directed toward the FOE (saccades) and away from the FOE (smooth tracking). Whereas FOE localization sensitivity was not affected by eye movement instructions (Exp. 1), observers' sensitivity to detect brief changes at the FOE was 27 % higher (p <.001) during free-viewing compared to fixation (Exp. 2). This performance benefit was linked to reduced saccade endpoint errors, indicating the direct beneficial impact of foveating eye movements on performance in a fine-grain perceptual task, but not during coarse perceptual localization.
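Coherence-controlled optic flow of this kind is typically constructed by moving a fraction of dots radially away from the FOE and redrawing the remainder in random directions. A minimal sketch under that assumption, with illustrative parameters rather than the authors' stimulus code:

```python
# One frame of a coherence-controlled expanding flow field: a fraction
# of dots (set by `coherence`) moves radially away from the focus of
# expansion (FOE); the rest move in random directions as noise dots.
# Illustrative parameters only, not the study's stimulus code.
import math
import random

def flow_step(dots, foe, coherence, speed):
    """Advance each (x, y) dot one frame and return the new positions."""
    out = []
    for (x, y) in dots:
        if random.random() < coherence:
            angle = math.atan2(y - foe[1], x - foe[0])  # radial, away from FOE
        else:
            angle = random.uniform(0.0, 2.0 * math.pi)  # noise dot
        out.append((x + speed * math.cos(angle),
                    y + speed * math.sin(angle)))
    return out

dots = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(100)]
dots = flow_step(dots, foe=(0.0, 0.0), coherence=0.9, speed=0.5)
```

Lowering `coherence` toward the 4% end of the range used in the experiments makes the FOE progressively harder to localize, which is how task difficulty was varied.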
Affiliation(s)
- Hiu Mei Chow
- Dept. of Psychology, St. Thomas University, Fredericton, Canada
- Dept. of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
- Miriam Spering
- Dept. of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
- Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, Canada
- Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, Canada
16
Zhao Z, Ahissar E, Victor JD, Rucci M. Inferring visual space from ultra-fine extra-retinal knowledge of gaze position. Nat Commun 2023; 14:269. PMID: 36650146. PMCID: PMC9845343. DOI: 10.1038/s41467-023-35834-4.
Abstract
It has long been debated how humans resolve fine details and perceive a stable visual world despite the incessant fixational motion of their eyes. Current theories assume these processes to rely solely on the visual input to the retina, without contributions from motor and/or proprioceptive sources. Here we show that contrary to this widespread assumption, the visual system has access to high-resolution extra-retinal knowledge of fixational eye motion and uses it to deduce spatial relations. Building on recent advances in gaze-contingent display control, we created a spatial discrimination task in which the stimulus configuration was entirely determined by oculomotor activity. Our results show that humans correctly infer geometrical relations in the absence of spatial information on the retina and accurately combine high-resolution extraretinal monitoring of gaze displacement with retinal signals. These findings reveal a sensory-motor strategy for encoding space, in which fine oculomotor knowledge is used to interpret the fixational input to the retina.
Affiliation(s)
- Zhetuo Zhao
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
- Ehud Ahissar
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Jonathan D Victor
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York, NY, USA
- Michele Rucci
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
17
Ida H, Fukuhara K, Ogata T. Virtual reality modulates the control of upper limb motion in one-handed ball catching. Front Sports Act Living 2022; 4:926542. PMID: 36275439. PMCID: PMC9582424. DOI: 10.3389/fspor.2022.926542.
Abstract
A question remains about whether, and to what extent, perception-action coupled responses in virtual reality are equivalent to those in the real world or physical reality. The purpose of this study was to identify how virtual presentation of the environment affects the motor responses of one-handed ball catching. Thirteen healthy participants were instructed to catch an approaching ball projected at three speeds in a real laboratory room and in a room-sized virtual reality system (CAVE) that simulated those real situations with two- or three-dimensional display settings. The results showed that the arm movement time, which denotes the duration of the arm-raising motion (shoulder flexion), was significantly longer in virtual reality than in physical reality at the fast ball speed. The shoulder flexion velocities, calculated as the average angular velocity of shoulder flexion over the arm movement time, were significantly lower in virtual reality than in physical reality at the medium and fast ball speeds. The electromyography onsets, derived from the anterior deltoid, biceps brachii, and flexor carpi radialis muscles of the catching arm, appeared before, and significantly closer to, the initiation of arm raising in two-dimensional virtual reality than in both physical reality and three-dimensional virtual reality. The findings suggest that virtual reality simulation may modulate the motor responses of the catching arm relative to natural motion in the real world. By contrast, the effect of ball speed generally found in real settings was preserved in the current CAVE experiment.
Affiliation(s)
- Hirofumi Ida
- Department of Sports and Health Management, Jobu University, Isesaki, Japan
- Kazunobu Fukuhara
- Department of Health Promotion Science, Tokyo Metropolitan University, Hachioji, Japan
- Takahiro Ogata
- Department of Sport and Medical Science, Teikyo University, Hachioji, Japan
18
Vater C, Mann DL. Are predictive saccades linked to the processing of peripheral information? Psychol Res 2022; 87:1501-1519. PMID: 36167931. DOI: 10.1007/s00426-022-01743-2.
Abstract
High-level athletes can predict the actions of an opposing player. Interestingly, such predictions are also reflected in the athlete's gaze behavior. In cricket, for example, players first pursue the ball with their eyes before they very often initiate two predictive saccades: one to the predicted ball-bounce point and a second to the predicted ball-bat-contact point. That means they move their eyes ahead of the ball and "wait" for the ball at the new fixation location, potentially using their peripheral vision to update information about the ball's trajectory. In this study, we investigated whether predictive saccades are linked to the processing of information in peripheral vision and whether predictive saccades are superior to continuously following the ball with foveal vision using smooth-pursuit eye movements (SPEMs). In the first two experiments, we evoked the typical eye movements observed in cricket and showed that the information gathered during SPEMs is sufficient to predict when the moving object will hit the target location and that (additional) peripheral monitoring of the object does not help to improve performance. In a third experiment, we show that it could actually be beneficial to use SPEMs rather than predictive saccades to improve performance. Thus, predictive saccades ahead of a target are unlikely to be performed to enhance peripheral monitoring of the target.
Affiliation(s)
- Christian Vater
- Institute of Sport Science, University of Bern, Bremgartenstrasse 145, 3012 Bern, Switzerland
- David L Mann
- Faculty of Behavioural and Movement Sciences, Motor Learning and Performance, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
19
D'Aquino A, Frank C, Hagan JE, Schack T. Imagining interceptions: eye movements as an online indicator of covert motor processes during motor imagery. Front Neurosci 2022; 16:940772. PMID: 35968367. PMCID: PMC9372347. DOI: 10.3389/fnins.2022.940772.
Abstract
The analysis of eye movements during motor imagery has been used to understand the influence of covert motor processes on visual-perceptual activity. There is evidence that gaze metrics are affected by motor planning, often depending on the spatial and temporal characteristics of a task. However, previous research has focused on simulated actions toward static targets, with limited empirical evidence of how eye movements change in more dynamic environments. This study examined the characteristics of eye movements during motor imagery for an interception task. Twenty-four participants were asked to track a moving target on a computer display and either mentally simulate an interception or rest. The results showed that smooth pursuit variables, such as duration and gain, were lower during motor imagery than during passive observation. These findings indicate that motor plans integrate visual-perceptual information based on task demands and that eye movements during imagery reflect such constraints.
Affiliation(s)
- Alessio D'Aquino
- Faculty of Psychology and Sports Science, Neurocognition and Action Biomechanics Group, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Cornelia Frank
- Institute for Sport and Movement Science, Osnabrück University, Osnabrück, Germany
- John Elvis Hagan
- Faculty of Psychology and Sports Science, Neurocognition and Action Biomechanics Group, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Thomas Schack
- Faculty of Psychology and Sports Science, Neurocognition and Action Biomechanics Group, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Research Institute for Cognition and Robotics (CoR-Lab), Bielefeld University, Bielefeld, Germany
20
Tau and kappa in interception - how perceptual spatiotemporal interrelations affect movements. Atten Percept Psychophys 2022; 84:1925-1943. PMID: 35705842. PMCID: PMC9338162. DOI: 10.3758/s13414-022-02516-0.
Abstract
Batting and catching are real-life examples of interception. Due to latencies between the processing of sensory input and the corresponding motor response, successful interception requires accurate spatiotemporal prediction. However, spatiotemporal predictions can be subject to bias. For instance, the more spatially distant two sequentially presented objects are, the longer the interval between their presentations is perceived (kappa effect) and vice versa (tau effect). In this study, we deployed these phenomena to test in two sensory modalities whether temporal representations depend asymmetrically on spatial representations, or whether both are symmetrically interrelated. We adapted the tau and kappa paradigms to an interception task by presenting four stimuli (visually or auditorily) one after another on four locations, from left to right, with constant spatial and temporal intervals in between. In two experiments, participants were asked to touch the screen where and when they predicted a fifth stimulus to appear. In Exp. 2, additional predictive gaze measures were examined. Across experiments, auditory but not visual stimuli produced a tau effect for interception, supporting the idea that the relationship between space and time is moderated by the sensory modality. Results did not reveal classical auditory or visual kappa effects and no visual tau effects. Gaze data in Exp. 2 showed that the (spatial) gaze orientation depended on temporal intervals while the timing of fixations was modulated by spatial intervals, thereby indicating tau and kappa effects across modalities. Together, the results suggest that sensory modality plays an important role in spatiotemporal predictions in interception.
21
Harris DJ, Arthur T, Broadbent DP, Wilson MR, Vine SJ, Runswick OR. An active inference account of skilled anticipation in sport: using computational models to formalise theory and generate new hypotheses. Sports Med 2022; 52:2023-2038. PMID: 35503403. PMCID: PMC9388417. DOI: 10.1007/s40279-022-01689-w.
Abstract
Optimal performance in time-constrained and dynamically changing environments depends on making reliable predictions about future outcomes. In sporting tasks, performers have been found to employ multiple information sources to maximise the accuracy of their predictions, but questions remain about how different information sources are weighted and integrated to guide anticipation. In this paper, we outline how predictive processing approaches, and active inference in particular, provide a unifying account of perception and action that explains many of the prominent findings in the sports anticipation literature. Active inference proposes that perception and action are underpinned by the organism’s need to remain within certain stable states. To this end, decision making approximates Bayesian inference and actions are used to minimise future prediction errors during brain–body–environment interactions. Using a series of Bayesian neurocomputational models based on a partially observable Markov process, we demonstrate that key findings from the literature can be recreated from the first principles of active inference. In doing so, we formulate a number of novel and empirically falsifiable hypotheses about human anticipation capabilities that could guide future investigations in the field.
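The kind of Bayesian inference these models approximate can be sketched in a few lines: a contextual prior over candidate opponent actions is combined with the likelihood of observed kinematic cues. The action labels and probabilities below are hypothetical, not values from the paper's models:

```python
# Minimal Bayesian anticipation sketch in the spirit of the
# predictive-processing account: contextual prior expectations are
# combined with noisy sensory evidence via Bayes' rule.
# Hypothetical action labels and numbers, not the paper's model.

def posterior(prior, likelihood):
    """Bayes' rule over a discrete set of candidate actions."""
    unnorm = {a: prior[a] * likelihood[a] for a in prior}
    z = sum(unnorm.values())
    return {a: p / z for a, p in unnorm.items()}

prior = {"cross-court": 0.7, "down-the-line": 0.3}       # situational prior
likelihood = {"cross-court": 0.2, "down-the-line": 0.6}  # kinematic cue
belief = posterior(prior, likelihood)
# evidence that contradicts a strong prior leaves the belief nearly balanced
```

In the full active-inference treatment this update is embedded in a partially observable Markov process and coupled to action selection, but the weighting of prior expectation against cue evidence shown here is the core mechanism the models formalize.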
Affiliation(s)
- David J Harris
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
- Tom Arthur
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
- David P Broadbent
- Division of Sport, Health and Exercise Sciences, Department of Life Sciences, Brunel University London, London, UK
- Mark R Wilson
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
- Samuel J Vine
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
- Oliver R Runswick
- Department of Psychology, Institute of Psychiatry, Psychology, and Neuroscience, King's College London, London, UK
22
de Brouwer AJ, Spering M. Eye-hand coordination during online reach corrections is task dependent. J Neurophysiol 2022; 127:885-895. PMID: 35294273. DOI: 10.1152/jn.00270.2021.
Abstract
To produce accurate movements, the human motor system needs to deal with errors that can occur due to inherent noise, changes in the body, or disturbances in the environment. Here, we investigated the temporal coupling of rapid corrections of the eye and hand in response to a change in visual target location during the movement. In addition to a "classic" double-step task in which the target stepped to a new position, participants performed a set of modified double-step tasks in which the change in movement goal was indicated by the appearance of an additional target, or by a spatial or symbolic cue. We found that both the absolute correction latencies of the eye and hand and the relative eye-hand correction latencies were dependent on the visual characteristics of the target change, with increasingly longer latencies in tasks that required more visual and cognitive processing. Typically, the hand started correcting slightly earlier than the eye, especially when the target change was indicated by a symbolic cue, and in conditions where visual feedback of the hand position was provided during the reach. Our results indicate that the oculomotor and limb-motor system can be differentially influenced by processing requirements of the task and emphasize that temporal eye-hand coupling is flexible rather than rigid.

NEW & NOTEWORTHY Eye movements support hand movements in many situations. Here, we used variations of a double-step task to investigate temporal coupling of corrective hand and eye movements in response to target displacements. Correction latency coupling depended on the visual and cognitive processing demands of the task. The hand started correcting before the eye, especially when the task required decoding a symbolic cue. These findings highlight the flexibility and task dependency of eye-hand coordination.
Affiliation(s)
- Anouk J de Brouwer
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, British Columbia, Canada
- Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia, Canada
23
Miyamoto T, Numasawa K, Ono S. Changes in visual speed perception induced by anticipatory smooth eye movements. J Neurophysiol 2022; 127:1198-1207. PMID: 35353633. DOI: 10.1152/jn.00498.2021.
Abstract
Expectations about forthcoming visual motion shaped by observers' experiences are known to induce anticipatory smooth eye movements (ASEM) and changes in visual perception. Previous studies have demonstrated discrete effects of expectations on the control of ASEM and perception. However, the tasks designed in these studies were not able to segregate the effects of expectations and execution of ASEM itself on perception. In the current study, we attempted to directly examine the effect of ASEM itself on visual speed perception using a two-alternative forced-choice task (2AFC task), in which observers were asked to track a pair of sequentially presented visual motion stimuli with their eyes and to judge whether the second stimulus (test stimulus) was faster or slower than the first (reference stimulus). Our results showed that observers' visual speed perception, quantified by a psychometric function, shifted according to ASEM velocity. This was the case, even though there was no difference in the steady-state eye velocity. Further analyses revealed that the observers' perceptual decisions could be explained by a difference in the magnitude of retinal slip velocity in the initial phase of ocular tracking when the reference and test stimuli were presented, rather than in the steady-state phase. Our results provide psychophysical evidence of the importance of initial ocular tracking in visual speed perception and the strong impact of ASEM.
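The psychometric function in such a 2AFC task is commonly modeled as a cumulative Gaussian whose mean gives the point of subjective equality (PSE); a shift of the PSE with ASEM velocity indexes the perceptual bias. A minimal sketch with hypothetical parameters, not the study's fitted values:

```python
# Cumulative Gaussian psychometric function for a 2AFC speed task.
# The PSE (mean) and slope below are hypothetical, not fitted values
# from the study.
import math

def p_faster(test_speed, pse, slope):
    """Probability of judging the test stimulus faster than the reference."""
    return 0.5 * (1.0 + math.erf((test_speed - pse) / (slope * math.sqrt(2.0))))

# A PSE above a 10 deg/s reference would mean the test had to move
# faster than the reference to appear equally fast, i.e. the reference
# was perceived as faster (hypothetical numbers).
reference_speed = 10.0
shifted_pse = 11.0
p_at_reference = p_faster(reference_speed, shifted_pse, 1.5)  # below 0.5
```

At the PSE itself the function returns exactly 0.5 (chance), so horizontal displacement of the fitted curve directly quantifies the ASEM-induced shift in perceived speed.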
Affiliation(s)
- Takeshi Miyamoto
- Faculty of Health and Sport Sciences, University of Tsukuba, Ibaraki, Japan
- Kosuke Numasawa
- Graduate School of Comprehensive Human Sciences, University of Tsukuba, Ibaraki, Japan
- Seiji Ono
- Faculty of Health and Sport Sciences, University of Tsukuba, Ibaraki, Japan
24
Disrupting short-term memory maintenance in premotor cortex affects serial dependence in visuomotor integration. J Neurosci 2021; 41:9392-9402. DOI: 10.1523/jneurosci.0380-21.2021.
Abstract
Human behavior is biased by past experience. For example, when intercepting a moving target, the speed of previous targets will bias responses in future trials. Neural mechanisms underlying this so-called serial dependence are still under debate. Here, we tested the hypothesis that the previous trial leaves a neural trace in brain regions associated with encoding task-relevant information in visual and/or motor regions. We reasoned that injecting noise by means of transcranial magnetic stimulation (TMS) over premotor and visual areas would degrade such memory traces and hence reduce serial dependence. To test this hypothesis, we applied bursts of TMS pulses to right visual motion processing region hV5/MT+ and to left dorsal premotor cortex (PMd) during intertrial intervals of a coincident timing task performed by twenty healthy human participants (15 female). Without TMS, participants presented a bias toward the speed of the previous trial when intercepting moving targets. TMS over PMd decreased serial dependence in comparison to the control Vertex stimulation, whereas TMS applied over hV5/MT+ did not. In addition, TMS seems to have specifically affected the memory trace that leads to serial dependence, as we found no evidence that participants' behavior worsened after applying TMS. These results provide causal evidence that an implicit short-term memory mechanism in premotor cortex keeps information from one trial to the next, and that this information is blended with current trial information so that it biases behavior in a visuomotor integration task with moving objects.

SIGNIFICANCE STATEMENT Human perception and action are biased by the recent past. The origin of such serial bias is still not fully understood, but a few components seem to be fundamental for its emergence: the brain needs to keep previous trial information in short-term memory and blend it with incoming information. Here, we present evidence that a premotor area has a potential role in storing previous trial information in short-term memory in a visuomotor task and that this information is responsible for biasing ongoing behavior. These results corroborate the perspective that areas associated with processing information of a stimulus or task also participate in maintaining that information in short-term memory even when this information is no longer relevant for current behavior.
25
López-Moliner J, de la Malla C. Motion-in-depth effects on interceptive timing errors in an immersive environment. Sci Rep 2021; 11:21961. PMID: 34754000. PMCID: PMC8578488. DOI: 10.1038/s41598-021-01397-x.
Abstract
We often need to interact with targets that move along arbitrary trajectories in the 3D scene. In these situations, information about parameters such as speed, time-to-contact, or motion direction is required to solve a broad class of timing tasks (e.g., shooting or interception). There is a large body of literature addressing how we estimate different parameters when objects move both in the fronto-parallel plane and in depth. However, we do not know to what extent the timing of interceptive actions is affected when motion-in-depth (MID) is involved. Unlike previous studies that have looked at the timing of interceptive actions using constant distances and fronto-parallel motion, here we use immersive virtual reality to look at how differences in the above-mentioned variables influence timing errors in a shooting task performed in a 3D environment. Participants had to shoot at targets that moved following different angles of approach with respect to the observer when those targets reached designated shooting locations. We recorded the shooting time, the temporal and spatial errors, and the head's position and orientation in two conditions that differed in the interval between the shot and the interception of the target's path. Results show a consistent change in the temporal error across approach angles: the larger the angle, the earlier the error. Interestingly, we also found different error patterns within a given angle that depended on whether participants tracked the whole target trajectory or only its end-point. These differences had a larger impact when the target moved in depth and are consistent with underestimating motion-in-depth in the periphery. We conclude that the strategy participants use to track the target's trajectory interacts with MID and affects timing performance.
Affiliation(s)
- Joan López-Moliner
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Catalonia, Spain
- Cristina de la Malla
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Catalonia, Spain
Collapse
|
26
|
The effect of explicit cues on smooth pursuit termination. Vision Res 2021; 189:27-32. [PMID: 34509706 DOI: 10.1016/j.visres.2021.08.008] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 08/17/2021] [Accepted: 08/29/2021] [Indexed: 11/21/2022]
Abstract
Predictive deceleration of eye motion during smooth pursuit is induced by explicit cues indicating the timing of visual target offset. The first aim of this study (experiment 1) was to determine whether the onset timing of cue-based predictive pursuit termination depends on spatial or temporal information, using three target velocities. The second aim (experiment 2) was to examine whether an unexpected offset of the target affects pursuit termination. We conducted a pursuit termination task in which participants tracked a moving target and then stopped tracking after the target disappeared. The results of experiment 1 showed that the onset times of predictive eye deceleration were consistent regardless of target velocity, indicating that this timing is controlled by temporal estimation rather than by the spatial distance between the target and cue positions. In experiment 2, we compared pursuit termination between two conditions. One condition presented no cues (unknown condition), whereas the other included the same cue as experiment 1, but the target unpredictably disappeared 500 ms before the time indicated by the cue (unexpected condition). The unexpected condition showed significant delays in the onset of eye deceleration, but no difference in the total time to complete pursuit termination. Our findings therefore suggest that cue-based pursuit termination is controlled by the predictive pursuit system, and that an unexpected target offset delays the onset of eye deceleration but does not affect the duration of pursuit termination.
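The spatial-versus-temporal distinction above has a simple geometric consequence worth making explicit: if deceleration onset is triggered a fixed time before the cued offset (temporal control), the eye-to-cue distance at onset must scale with target velocity, whereas under spatial control that distance would stay constant. A minimal sketch, with an assumed 200 ms lead and illustrative velocities:

```python
def distance_at_onset(onset_lead_s, target_speed_dps):
    """Distance (deg) still to be covered at deceleration onset, given a
    fixed temporal lead before the cued target offset."""
    return onset_lead_s * target_speed_dps

# Under temporal control the lead time is constant across velocities,
# so the remaining distance at onset grows with target speed.
for speed in (5.0, 10.0, 20.0):      # deg/s, illustrative values
    d = distance_at_onset(0.2, speed)  # 200 ms lead, assumed
    print(f"{speed:5.1f} deg/s -> onset {d:.1f} deg before the cue")
```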
27
Niechwiej-Szwedo E, Wu S, Nouredanesh M, Tung J, Christian LW. Development of eye-hand coordination in typically developing children and adolescents assessed using a reach-to-grasp sequencing task. Hum Mov Sci 2021; 80:102868. [PMID: 34509902 DOI: 10.1016/j.humov.2021.102868] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2021] [Revised: 06/11/2021] [Accepted: 08/31/2021] [Indexed: 11/18/2022]
Abstract
Eye-hand coordination is required to accurately perform daily activities that involve reaching, grasping, and manipulating objects. Studies using aiming, grasping, or sequencing tasks have shown a stereotypical temporal coupling pattern in which the eyes are directed to the object in advance of the hand movement, which may facilitate the planning and execution required for reaching. While the temporal coordination between the ocular and manual systems has been extensively investigated in adults, relatively little is known about the typical development of eye-hand coordination. The current study therefore addressed an important knowledge gap by characterizing the profile of eye-hand coupling in typically developing school-age children (n = 57) and in a cohort of adults (n = 30). Eye and hand movements were recorded concurrently during a bead-threading task consisting of four distinct movements: reach to bead, grasp, reach to needle, and thread. Results showed a moderate to high correlation between eye and hand latencies in children and adults, supporting the view that both movements were planned in parallel. Eye and reach latencies, latency differences, and dwell time during grasping and threading showed significant age-related differences, suggesting that eye-hand coupling becomes more efficient in adolescence. Furthermore, visual acuity, stereoacuity, and accommodative facility were associated with the efficiency of eye-hand coordination in children. Results from this study can serve as reference values when examining eye and hand movements during the performance of fine motor skills in children with neurodevelopmental disorders.
Affiliation(s)
- Ewa Niechwiej-Szwedo
- Kinesiology, University of Waterloo, 200 University Ave W, Waterloo ON N2L 3G1, Canada.
- Susana Wu
- Kinesiology, University of Waterloo, 200 University Ave W, Waterloo ON N2L 3G1, Canada.
- Mina Nouredanesh
- Mechanical and Mechatronics Engineering, University of Waterloo, 200 University Ave W, Waterloo ON N2L 3G1, Canada.
- James Tung
- Mechanical and Mechatronics Engineering, University of Waterloo, 200 University Ave W, Waterloo ON N2L 3G1, Canada.
- Lisa W Christian
- School of Optometry and Vision Science, University of Waterloo, 200 University Ave W, Waterloo ON N2L 3G1, Canada.

28
Goettker A, Gegenfurtner KR. A change in perspective: The interaction of saccadic and pursuit eye movements in oculomotor control and perception. Vision Res 2021; 188:283-296. [PMID: 34489101 DOI: 10.1016/j.visres.2021.08.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Revised: 07/26/2021] [Accepted: 08/16/2021] [Indexed: 11/17/2022]
Abstract
Due to the close relationship between oculomotor behavior and visual processing, eye movements have been studied in many different areas of research over the last few decades. While these studies have brought interesting insights, specialization within each research area comes at the potential cost of a narrow and isolated view of the oculomotor system. In this review, we aim to expand this perspective by looking at the interactions between the two most important types of voluntary eye movements: saccades and pursuit. Recent evidence indicates multiple interactions and shared signals at the behavioral and neurophysiological levels, both for oculomotor control and for visual perception during pursuit and saccades. Oculomotor control seems to be based on shared position- and velocity-related information, which leads to multiple behavioral interactions and synergies. The distinction between position- and velocity-related information also seems to be present at the neurophysiological level. In addition, visual perception seems to be based on shared efferent signals about upcoming eye positions and velocities, which are to some degree independent of the actual oculomotor response. This review suggests an interactive perspective on the oculomotor system, based mainly on the different types of sensory input, and less so on separate subsystems for saccadic and pursuit eye movements.
Affiliation(s)
- Alexander Goettker
- Abteilung Allgemeine Psychologie and Center for Mind, Brain & Behavior, Justus-Liebig University Giessen, Germany.
- Karl R Gegenfurtner
- Abteilung Allgemeine Psychologie and Center for Mind, Brain & Behavior, Justus-Liebig University Giessen, Germany.

29
Effects of visual blur and contrast on spatial and temporal precision in manual interception. Exp Brain Res 2021; 239:3343-3358. [PMID: 34480594 PMCID: PMC8542000 DOI: 10.1007/s00221-021-06184-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2021] [Accepted: 07/22/2021] [Indexed: 12/04/2022]
Abstract
The visual system is said to be especially sensitive to spatial information but less so to temporal information. To test this, in two experiments we systematically reduced the acuity and contrast of a visual stimulus and examined the impact on spatial and temporal precision (and accuracy) in a manual interception task. In Experiment 1, we blurred a virtual, to-be-intercepted moving circle (ball). Participants were asked to indicate (i.e., finger tap) on a touchscreen where and when the virtual ball crossed a ground line. As measures of spatial and temporal accuracy and precision, we analyzed the constant and variable errors, respectively. With increasing blur, the spatial and temporal variable errors, as well as the spatial constant error, increased, while the temporal constant error decreased. Because blur was potentially confounded with contrast in the first experiment, in Experiment 2 we re-ran the experiment with one difference: instead of blur, we included five levels of contrast matched to the blur levels. We found no systematic effects of contrast. Our findings confirm that blurring vision decreases spatial precision and accuracy, and that these effects were not mediated by concomitant changes in contrast. However, blurring vision also affected temporal precision and accuracy, thereby questioning the generalizability of the theoretical predictions to this applied interception task.
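The constant and variable errors used above have standard definitions: the mean signed error indexes accuracy (systematic bias), and the standard deviation of the error indexes precision. A short sketch makes this concrete; the error values are hypothetical:

```python
from statistics import mean, stdev

def constant_error(errors):
    """Constant error (CE): mean signed error, an index of accuracy
    (systematic bias toward early/late or left/right)."""
    return mean(errors)

def variable_error(errors):
    """Variable error (VE): standard deviation of the error, an index
    of precision (trial-to-trial consistency)."""
    return stdev(errors)

# Hypothetical signed temporal errors (ms) from interception taps;
# negative = early, positive = late.
temporal_errors = [-12, 5, -8, 10, -15, 2, -6, 4]
print("CE:", constant_error(temporal_errors), "ms")
print("VE:", round(variable_error(temporal_errors), 1), "ms")
```

The same two functions apply to spatial errors by feeding in signed distances instead of times.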