1
Coudiere A, Danion FR. Eye-hand coordination all the way: from discrete to continuous hand movements. J Neurophysiol 2024; 131:652-667. [PMID: 38381528] [DOI: 10.1152/jn.00314.2023]
Abstract
The differentiation between continuous and discrete actions is key for behavioral neuroscience. Although many studies have characterized eye-hand coordination during discrete (e.g., reaching) and continuous (e.g., pursuit tracking) actions, all these studies were conducted separately, using different setups and participants. In addition, how eye-hand coordination might operate at the frontier between discrete and continuous movements remains unexplored. Here we filled these gaps by means of a task that could elicit different movement dynamics. Twenty-eight participants were asked to simultaneously track with their eyes and a joystick a visual target that followed an unpredictable trajectory and whose position was updated at different rates (from 1.5 to 240 Hz). This procedure allowed us to examine actions ranging from discrete point-to-point movements (low refresh rate) to continuous pursuit (high refresh rate). For comparison, we also tested a manual tracking condition with the eyes fixed and a pure eye tracking condition (hand fixed). The results showed an abrupt transition between discrete and continuous hand movements around 3 Hz, contrasting with a smooth trade-off between fixations and smooth pursuit. Nevertheless, hand and eye tracking accuracy remained strongly correlated, with each of these depending on whether the other effector was recruited. Moreover, gaze-cursor distance and lag were smaller when eye and hand performed the task conjointly than separately. Altogether, despite some dissimilarities in eye and hand dynamics when transitioning between discrete and continuous movements, our results emphasize that eye-hand coordination continues to operate smoothly and support the notion of synergies across eye movement types.
NEW & NOTEWORTHY: The differentiation between continuous and discrete actions is key for behavioral neuroscience. By using a visuomotor task in which we manipulated the target refresh rate to trigger different movement dynamics, we explored eye-hand coordination all the way from discrete to continuous actions. Despite abrupt changes in hand dynamics, eye-hand coordination continues to operate via a gradual trade-off between fixations and smooth pursuit, an observation confirming the notion of synergies across eye movement types.
Affiliation(s)
- Adrien Coudiere
- CNRS, Université de Poitiers, Université de Tours, CeRCA, Poitiers, France
- Frederic R Danion
- CNRS, Université de Poitiers, Université de Tours, CeRCA, Poitiers, France
2
D'Aquino A, Frank C, Hagan JE, Schack T. Eye movements during motor imagery and execution reveal different visuomotor control strategies in manual interception. Psychophysiology 2023; 60:e14401. [PMID: 37515410] [DOI: 10.1111/psyp.14401]
Abstract
Previous research has investigated the degree of congruency in gaze metrics between action execution (AE) and motor imagery (MI) for similar manual tasks. Although evidence for similar eye movement dynamics seems limited to relatively simple actions toward static objects, little is known about how gaze parameters change during imagery as a function of more dynamic spatial and temporal task demands. This study examined the similarities and differences in eye movements during AE and MI for an interception task. Twenty-four students were asked to either mentally simulate or physically intercept a moving target on a computer display. Smooth pursuit, saccades, and response time were compared between the two conditions. The results show that MI was characterized by higher smooth pursuit gain and longer pursuit duration, while no meaningful differences were found in the other parameters. The findings indicate that eye movements during imagery are not simply a duplicate of what happens during actual performance. Instead, eye movements appear to vary as a function of the interaction between visuomotor control strategies and task demands.
Affiliation(s)
- Alessio D'Aquino
- Neurocognition and Action Biomechanics Group, Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Cornelia Frank
- Institute for Sport and Movement Science, Osnabrück University, Osnabrück, Germany
- John Elvis Hagan
- Neurocognition and Action Biomechanics Group, Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Thomas Schack
- Neurocognition and Action Biomechanics Group, Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Research Institute for Cognition and Robotics (CoR-Lab), Bielefeld University, Bielefeld, Germany
3
Sato K, Fukuhara K, Higuchi T. Age-Related Changes in the Utilization of Visual Information for Collision Prediction: A Study Using an Affordance-Based Model. Exp Aging Res 2023:1-17. [PMID: 37942547] [DOI: 10.1080/0361073x.2023.2278985]
Abstract
The ability to predict collisions with moving objects deteriorates with aging. We used an affordance-based model to identify optical variables that older adults have difficulty using for collision prediction. We reproduced a modified version of the interception task of Steinmetz, Layton, Powell, and Fajen (2020, "Affordance-based versus current-future accounts of choosing whether to pursue or abandon the chase of a moving target," Journal of Vision, 20(3), 8) in a virtual reality (VR) environment and newly introduced a perturbation for each of three optical variables (the vertical and horizontal expansion of a moving object, and the bearing angle between participants and a moving object). We expected that a perturbation would negatively affect performance only for those who rely on that optical variable to perform the interception task effectively. We tested 18 older and 15 younger adults and showed that older participants were not negatively affected by perturbations of the vertical and horizontal expansion of a moving object, whereas their performance decreased when the bearing angle was perturbed. These findings suggest that predicting collisions with moving objects deteriorates with aging because the perception of object expansion is impaired with aging.
Affiliation(s)
- Kazuyuki Sato
- Department of Health Promotion Science, Tokyo Metropolitan University, Hachioji, Tokyo, Japan
- Kazunobu Fukuhara
- Department of Health Promotion Science, Tokyo Metropolitan University, Hachioji, Tokyo, Japan
- Takahiro Higuchi
- Department of Health Promotion Science, Tokyo Metropolitan University, Hachioji, Tokyo, Japan
4
Fooken J, Baltaretu BR, Barany DA, Diaz G, Semrau JA, Singh T, Crawford JD. Perceptual-Cognitive Integration for Goal-Directed Action in Naturalistic Environments. J Neurosci 2023; 43:7511-7522. [PMID: 37940592] [PMCID: PMC10634571] [DOI: 10.1523/jneurosci.1373-23.2023]
Abstract
Real-world actions require one to simultaneously perceive, think, and act on the surrounding world, and thus demand the integration of (bottom-up) sensory information and (top-down) cognitive and motor signals. Studying these processes involves the intellectual challenge of cutting across traditional neuroscience silos, and the technical challenge of recording data in uncontrolled natural environments. However, recent advances in techniques such as neuroimaging, virtual reality, and motion tracking allow one to address these issues in naturalistic environments for both healthy participants and clinical populations. In this review, we survey six topics in which naturalistic approaches have advanced both our fundamental understanding of brain function and our understanding of how neurologic deficits influence goal-directed, coordinated action in naturalistic environments. The first part conveys fundamental neuroscience mechanisms related to visuospatial coding for action, adaptive eye-hand coordination, and visuomotor integration for manual interception. The second part discusses applications of such knowledge to neurologic deficits: specifically, steering in the presence of cortical blindness, the impact of stroke on visual-proprioceptive integration, and the impact of visual search and working memory deficits. This translational approach of extending knowledge from lab to rehab provides new insights into the complex interplay between perceptual, motor, and cognitive control in naturalistic tasks that are relevant for both basic and clinical research.
Affiliation(s)
- Jolande Fooken
- Centre for Neuroscience, Queen's University, Kingston, Ontario K7L3N6, Canada
- Bianca R Baltaretu
- Department of Psychology, Justus Liebig University, Giessen, 35394, Germany
- Deborah A Barany
- Department of Kinesiology, University of Georgia, and Augusta University/University of Georgia Medical Partnership, Athens, Georgia 30602
- Gabriel Diaz
- Center for Imaging Science, Rochester Institute of Technology, Rochester, New York 14623
- Jennifer A Semrau
- Department of Kinesiology and Applied Physiology, University of Delaware, Newark, Delaware 19713
- Tarkeshwar Singh
- Department of Kinesiology, Pennsylvania State University, University Park, Pennsylvania 16802
- J Douglas Crawford
- Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
5
Kreyenmeier P, Schroeger A, Cañal-Bruland R, Raab M, Spering M. Rapid Audiovisual Integration Guides Predictive Actions. eNeuro 2023; 10:ENEURO.0134-23.2023. [PMID: 37591732] [PMCID: PMC10464656] [DOI: 10.1523/eneuro.0134-23.2023]
Abstract
Natural movements, such as catching a ball or capturing prey, typically involve multiple senses. Yet, laboratory studies on human movements commonly focus solely on vision and ignore sound. Here, we ask how visual and auditory signals are integrated to guide interceptive movements. Human observers tracked the brief launch of a simulated baseball, randomly paired with batting sounds of varying intensities, and made a quick pointing movement at the ball. Movement end points revealed systematic overestimation of target speed when the ball launch was paired with a loud versus a quiet sound, although sound was never informative. This effect was modulated by the availability of visual information; sounds biased interception when the visual presentation duration of the ball was short. Amplitude of the first catch-up saccade, occurring ∼125 ms after target launch, revealed early integration of audiovisual information for trajectory estimation. This sound-induced bias was reversed during later predictive saccades when more visual information was available. Our findings suggest that auditory and visual signals are integrated to guide interception and that this integration process must occur early at a neural site that receives auditory and visual signals within an ultrashort time span.
Affiliation(s)
- Philipp Kreyenmeier
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia V5Z 3N9, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia V6T 1Z2, Canada
- Anna Schroeger
- Department of Psychology, Justus Liebig University Giessen, 35390 Giessen, Germany
- Department for the Psychology of Human Movement and Sport, Friedrich Schiller University Jena, 07743 Jena, Germany
- Rouwen Cañal-Bruland
- Department for the Psychology of Human Movement and Sport, Friedrich Schiller University Jena, 07743 Jena, Germany
- Markus Raab
- Department of Performance Psychology, German Sport University Cologne, 50933 Cologne, Germany
- School of Applied Sciences, London South Bank University, London SE1 0AA, United Kingdom
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia V5Z 3N9, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia V6T 1Z2, Canada
- Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia V6T 1Z3, Canada
- Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada
6
Crowe EM, Smeets JBJ, Brenner E. Online updating of obstacle positions when intercepting a virtual target. Exp Brain Res 2023. [PMID: 37244877] [DOI: 10.1007/s00221-023-06634-5]
Abstract
People rely upon sensory information in the environment to guide their actions. Ongoing goal-directed arm movements are constantly adjusted to the latest estimates of both the target's and the hand's positions. Does the continuous guidance of ongoing arm movements also consider the latest visual information about the positions of obstacles in the surroundings? To find out, we asked participants to slide their finger across a screen to intercept a laterally moving virtual target while passing through a gap created by two virtual circular obstacles. At a fixed time during each trial, the target suddenly jumped slightly laterally while continuing to move. In half the trials, the size of the gap changed at the same moment as the target jumped. As expected, participants adjusted their movements in response to the target jump. Importantly, the magnitude of this response depended on the new size of the gap. If participants were told that the circles were irrelevant, changing the gap between them had no effect on the responses. This shows that obstacles' instantaneous positions can be considered when visually guiding goal-directed movements.
Affiliation(s)
- Emily M Crowe
- Department of Human Movement Sciences, Institute of Brain and Behavior Amsterdam, Amsterdam Movement Sciences, Vrije Universiteit Amsterdam, 1081 BT, Amsterdam, The Netherlands
- School of Psychology, University of Nottingham, University Park, Nottingham, NG7 2RD, UK
- Jeroen B J Smeets
- Department of Human Movement Sciences, Institute of Brain and Behavior Amsterdam, Amsterdam Movement Sciences, Vrije Universiteit Amsterdam, 1081 BT, Amsterdam, The Netherlands
- Eli Brenner
- Department of Human Movement Sciences, Institute of Brain and Behavior Amsterdam, Amsterdam Movement Sciences, Vrije Universiteit Amsterdam, 1081 BT, Amsterdam, The Netherlands
7
Menceloglu M, Song JH. Motion duration is overestimated behind an occluder in action and perception tasks. J Vis 2023; 23:11. [PMID: 37171804] [PMCID: PMC10184779] [DOI: 10.1167/jov.23.5.11]
Abstract
Motion estimation behind an occluder is a common task in situations like crossing the street or passing another car. People tend to overestimate the duration of an object's motion when it gets occluded for subsecond motion durations. Here, we explored (a) whether this bias depended on the type of interceptive action: discrete keypress versus continuous reach and (b) whether it was present in a perception task without an interceptive action. We used a prediction-motion task and presented a bar moving across the screen with a constant velocity that later became occluded. In the action task, participants stopped the occluded bar when they thought the bar reached the goal position via keypress or reach. They were more likely to stop the bar after it passed the goal position regardless of the action type, suggesting that the duration of occluded motion was overestimated (or its speed was underestimated). In the perception task, where participants judged whether a tone was presented before or after the bar reached the goal position, a similar bias was observed. In both tasks, the bias was near constant across motion durations and directions and grew over trials. We speculate that this robust bias may be due to a temporal illusion, Bayesian slow-motion prior, or the processing of the visible-occluded boundary crossing. Understanding its exact mechanism, the conditions on which it depends, and the relative roles of speed and time perception requires further research.
Affiliation(s)
- Melisa Menceloglu
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, RI, USA
- Joo-Hyun Song
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, RI, USA
- Carney Institute for Brain Science, Brown University, Providence, RI, USA
8
Brenner E, van Straaten CAG, de Vries AJ, Baas TRD, Bröring KM, Smeets JBJ. How the timing of visual feedback influences goal-directed arm movements: delays and presentation rates. Exp Brain Res 2023; 241:1447-1457. [PMID: 37067561] [PMCID: PMC10129945] [DOI: 10.1007/s00221-023-06617-6]
Abstract
Visual feedback normally helps guide movements to their goal. When moving one's hand, such guidance has to deal with a sensorimotor delay of about 100 ms. When moving a cursor, it also has to deal with a delay of tens of milliseconds that arises between the hand moving the mouse and the cursor moving on the screen. Moreover, the cursor is presented at a certain rate, so it only shows the mouse's position at certain moments. How do the additional delay and the rate at which cursor positions are updated influence how well the cursor can be guided to the goal? We asked participants to move a cursor to consecutive targets as quickly as they could. They did so for various additional delays and presentation rates. It took longer for the cursor to reach the target when the additional delay was longer. It also took longer when a lower presentation rate was achieved by not presenting the cursor all the time. The fraction of the time during which the cursor was present was more important than the rate at which the cursor's position was updated. We conclude that the way human arm movements are guided benefits from continuous access to recent visual feedback.
Affiliation(s)
- Eli Brenner
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081BT, Amsterdam, The Netherlands
- Chris A G van Straaten
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081BT, Amsterdam, The Netherlands
- A Julia de Vries
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081BT, Amsterdam, The Netherlands
- Tobias R D Baas
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081BT, Amsterdam, The Netherlands
- Kirsten M Bröring
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081BT, Amsterdam, The Netherlands
- Jeroen B J Smeets
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081BT, Amsterdam, The Netherlands
9
Brenner E, de la Malla C, Smeets JBJ. Tapping on a target: dealing with uncertainty about its position and motion. Exp Brain Res 2023; 241:81-104. [PMID: 36371477] [PMCID: PMC9870842] [DOI: 10.1007/s00221-022-06503-7]
Abstract
Reaching movements are guided by estimates of the target object's location. Since the precision of instantaneous estimates is limited, one might accumulate visual information over time. However, if the object is not stationary, accumulating information can bias the estimate. How do people deal with this trade-off between improving precision and reducing the bias? To find out, we asked participants to tap on targets. The targets were stationary or moving, with jitter added to their positions. By analysing the response to the jitter, we show that people continuously use the latest available information about the target's position. When the target is moving, they combine this instantaneous target position with an extrapolation based on the target's average velocity during the last several hundred milliseconds. This strategy leads to a bias if the target's velocity changes systematically. Having people tap on accelerating targets showed that the bias that results from ignoring systematic changes in velocity is removed by compensating for endpoint errors if such errors are consistent across trials. We conclude that combining simple continuous updating of visual information with the low-pass filter characteristics of muscles, and adjusting movements to compensate for errors made in previous trials, leads to precise and accurate human goal-directed movements.
Affiliation(s)
- Eli Brenner
- grid.12380.380000 0004 1754 9227Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081BT Amsterdam, The Netherlands
| | - Cristina de la Malla
- grid.12380.380000 0004 1754 9227Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081BT Amsterdam, The Netherlands ,grid.5841.80000 0004 1937 0247Vision and Control of Action Group, Department of Cognition, Development, and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
| | - Jeroen B. J. Smeets
- grid.12380.380000 0004 1754 9227Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081BT Amsterdam, The Netherlands
| |
Collapse
|
10
Gonzalez Polanco P, Mrotek LA, Nielson KA, Beardsley SA, Scheidt RA. When intercepting moving targets, mid-movement error corrections reflect distinct responses to visual and haptic perturbations. Exp Brain Res 2023; 241:231-247. [PMID: 36469052] [PMCID: PMC10440829] [DOI: 10.1007/s00221-022-06515-3]
Abstract
We examined a key aspect of sensorimotor skill: the capability to correct performance errors that arise mid-movement. Participants grasped the handle of a robot that imposed a nominal viscous resistance to hand movement. They watched a target move pseudo-randomly just above the horizontal plane of hand motion and initiated quick interception movements when cued. On some trials, the robot's viscosity or the target's speed changed without warning, coincident with the GO cue. We fit a sum-of-Gaussians model to mechanical power measured at the handle to determine the number, magnitude, and relative timing of submovements occurring in each interception attempt. When a single submovement successfully intercepted the target, capture times averaged 410 ms. Sometimes, two or more submovements were required. Initial error corrections typically occurred before feedback could indicate the target had been captured or missed. Error corrections occurred sooner after movement onset in response to mechanical viscosity increases (at 154 ms) than to unprovoked errors on control trials (215 ms). Corrections occurred later (272 ms) in response to viscosity decreases. The latency of corrections for target speed changes did not differ from those in control trials. Remarkably, these early error corrections accommodated the altered testing conditions: speed/viscosity increases elicited more vigorous corrections than in control trials with unprovoked errors, and speed/viscosity decreases elicited less vigorous corrections. These results suggest that the brain monitors and predicts the outcome of evolving movements, rapidly infers causes of mid-movement errors, and plans and executes corrections, all within 300 ms of movement onset.
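Editor's note: the sum-of-Gaussians submovement decomposition mentioned in this abstract can be sketched in a few lines. This is a minimal illustration on synthetic data; the signal, parameter values, and initial guesses below are invented for demonstration and are not from the study, which fit mechanical power measured at the robot handle.

```python
# Hedged sketch: decompose a 1-D movement signal into Gaussian "submovements"
# by least-squares fitting a sum of Gaussian bumps (synthetic example).
import numpy as np
from scipy.optimize import curve_fit

def sum_of_gaussians(t, *params):
    """Sum of Gaussian bumps; params = (amp1, mu1, sigma1, amp2, mu2, sigma2, ...)."""
    y = np.zeros_like(t)
    for a, mu, s in zip(params[0::3], params[1::3], params[2::3]):
        y += a * np.exp(-((t - mu) ** 2) / (2 * s ** 2))
    return y

t = np.linspace(0.0, 1.0, 500)          # 1 s of simulated signal
true_params = (1.0, 0.25, 0.05,         # primary submovement (made-up values)
               0.4, 0.55, 0.07)         # corrective submovement (made-up values)
signal = sum_of_gaussians(t, *true_params)

# Fit with a rough initial guess; with real data one would also add noise
# handling and model selection over the number of submovements.
p0 = (0.8, 0.2, 0.1, 0.3, 0.6, 0.1)
fit_params, _ = curve_fit(sum_of_gaussians, t, signal, p0=p0)
```

On this noiseless synthetic signal the fit recovers the two submovements' amplitudes, timings, and widths; choosing how many Gaussians to include is the harder problem in practice.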
Affiliation(s)
- Pablo Gonzalez Polanco
- Biomedical Engineering, Marquette University and Medical College of Wisconsin, Olin Engineering Center Rm 206, 1515 W. Wisconsin Ave, Milwaukee, WI, 53233, USA
- Leigh A Mrotek
- Biomedical Engineering, Marquette University and Medical College of Wisconsin, Olin Engineering Center Rm 206, 1515 W. Wisconsin Ave, Milwaukee, WI, 53233, USA
- Kristy A Nielson
- Psychology, Marquette University and Neurology, Medical College of Wisconsin, Milwaukee, WI, 53233, USA
- Scott A Beardsley
- Biomedical Engineering, Marquette University and Medical College of Wisconsin, Olin Engineering Center Rm 206, 1515 W. Wisconsin Ave, Milwaukee, WI, 53233, USA
- Robert A Scheidt
- Biomedical Engineering, Marquette University and Medical College of Wisconsin, Olin Engineering Center Rm 206, 1515 W. Wisconsin Ave, Milwaukee, WI, 53233, USA
11
Kato M, Yanai T. Pulled fly balls are harder to catch: a game analysis with a machine learning approach. Sports Engineering 2022. [DOI: 10.1007/s12283-022-00373-6]
Abstract
Two hypotheses were tested: (1) the deflecting motion of fly balls caused by aerodynamic effects varies between the pull side and opposite side of the fair territory, and (2) the probability of flyout is lower on the pull side than the opposite side in Japan's professional baseball games. From all radar-tracking outputs of official games in 2018-2019, fly balls that resulted in outs or base hits were selected for analysis (N = 25,413), and indices representing horizontal and vertical deflecting motions of fly balls were computed and compared between the pull side and opposite side. A machine learning algorithm was used to construct a model to predict the probability of flyout from the kinematic characteristics of fly balls. Flyout zones where the probability of flyout was > 0.6 were computed for a systematically constructed set of fly balls having identical distribution between the pull side and opposite side. The results showed that: (1) most fly balls landing on the opposite side deflected in the same direction whereas the pulled fly balls deflected to either direction, (2) the pulled low fly balls had greater variability in the deflecting motions than the opposite side counterpart, (3) overall probability of flyout of the low fly balls was lower in the pull side (0.41) than the opposite side (0.49), and (4) the flyout zone of an outfielder in the pull side (mean = 698 m2) for low fly balls was smaller than that of the others (≥ 779 m2). The hypotheses were supported. The pulled low fly balls had substantial variations in the direction and magnitude of deflections, which might have reduced the flyout zone on the pull side.
12
Manzone DM, Tremblay L, Chua R. Tactile facilitation during actual and mere expectation of object reception. Sci Rep 2022; 12:17514. [PMID: 36266418] [PMCID: PMC9585022] [DOI: 10.1038/s41598-022-22133-z]
Abstract
During reaching and grasping movements, tactile processing is typically suppressed. However, during a reception or catching task, the object can still be acquired, but without the suppressive processes related to movement execution. Rather, tactile information may be facilitated as the object approaches, in anticipation of object contact and the use of tactile feedback. Therefore, the current study investigated tactile processing during a reception task. Participants sat with their upper limb still as an object travelled to and contacted their fingers. At different points along the object's trajectory and prior to contact, participants were asked to detect tactile stimuli delivered to their index finger. To test whether the expectation of object contact contributed to any modulation in tactile processing, the object stopped prematurely on 20% of trials. Compared to a pre-object-movement baseline, relative perceptual thresholds were decreased throughout the object's trajectory, even when the object stopped prematurely. Further, there was no evidence for modulation when the stimulus was presented shortly before object contact. The former results suggest that tactile processing is facilitated as an object approaches an individual's hand, and we propose that the expectation of tactile feedback drives this modulation. Finally, the latter results suggest that peripheral masking may have reduced or abolished any facilitation.
Affiliation(s)
- Damian M. Manzone
- grid.17063.330000 0001 2157 2938Perceptual Motor Behaviour Laboratory, Centre for Motor Control, Faculty of Kinesiology and Physical Education, University of Toronto, 55 Harbord Street, Toronto, ON M5S 2W6 Canada
| | - Luc Tremblay
- grid.17063.330000 0001 2157 2938Perceptual Motor Behaviour Laboratory, Centre for Motor Control, Faculty of Kinesiology and Physical Education, University of Toronto, 55 Harbord Street, Toronto, ON M5S 2W6 Canada
| | - Romeo Chua
- grid.17091.3e0000 0001 2288 9830School of Kinesiology, University of British Columbia, Vancouver, BC Canada
| |
Collapse
|
13
Having several options does not increase the time it takes to make a movement to an adequate end point. Exp Brain Res 2022; 240:1849-1871. [PMID: 35551429] [PMCID: PMC9142465] [DOI: 10.1007/s00221-022-06376-w]
Abstract
Throughout the day, people constantly make choices such as where to direct their gaze or place their foot. When making such movement choices, there are usually multiple acceptable options, although some are more advantageous than others. How much time does it take to make such choices, and to what extent is the most advantageous option chosen from the available alternatives? To find out, we asked participants to collect points by tapping on any of several targets with their index finger. It did not take participants more time to direct their movements to an advantageous target when there were more options. Participants chose targets that were advantageous because they were easier to reach. Targets could be easier to reach because the finger was already moving in their direction when they appeared, or because they were larger or oriented along the movement direction so that the finger could move faster towards them without missing them. When the target’s colour indicated that it was worth more points, they chose it slightly more slowly, presumably because it generally takes longer to respond to colour than to attributes such as size. They also chose it less often than they probably should have, presumably because the advantage of choosing it was established arbitrarily. We conclude that having many options does not increase the time it takes to move to an adequate target.
14
Pickavance JP, Giles OT, Morehead JR, Mushtaq F, Wilkie RM, Mon-Williams M. Sensorimotor ability and inhibitory control independently predict attainment in mathematics in children and adolescents. J Neurophysiol 2022; 127:1026-1039. [PMID: 35196148 DOI: 10.1152/jn.00365.2021]
Abstract
We previously linked interceptive timing performance to mathematics attainment in 5- to 11-yr-old children, which we attributed to the neural overlap between spatiotemporal and numerical operations. This explanation implies that the relationship should persist through the teenage years. Here, we replicated this finding in adolescents (n = 200, 11-15 yr). However, an alternative explanation is that sensorimotor proficiency and academic attainment are both consequences of executive function. To assess this competing hypothesis, we developed a measure of a core executive function, inhibitory control, from the kinematic data. We combined our new adolescent data with the original children's data (total n = 568), performing a novel analysis controlling for our marker of executive function. We found that the relationship between mathematics and interceptive timing persisted at all ages. These results suggest a distinct functional link between interceptive timing and mathematics that operates independently of our measure of executive function. NEW & NOTEWORTHY Previous research downplays the role of sensorimotor skills in the development of higher-order cognitive domains such as mathematics: using inadequate sensorimotor measures, differences in "executive function" account for any shared variance. Utilizing a high-resolution, kinematic measure of a sensorimotor skill previously linked to mathematics attainment, we show that inhibitory control alone cannot account for this relationship. The practical implication is that the development of children's sensorimotor skills must be considered in their intellectual development.
Affiliation(s)
- John P Pickavance
- School of Psychology, University of Leeds, Leeds, United Kingdom
- Centre for Applied Education Research, Bradford Teaching Hospitals NHS Trust, Bradford, United Kingdom
- Oscar T Giles
- School of Psychology, University of Leeds, Leeds, United Kingdom
- J Ryan Morehead
- School of Psychology, University of Leeds, Leeds, United Kingdom
- Faisal Mushtaq
- School of Psychology, University of Leeds, Leeds, United Kingdom
- Richard M Wilkie
- School of Psychology, University of Leeds, Leeds, United Kingdom
- Mark Mon-Williams
- School of Psychology, University of Leeds, Leeds, United Kingdom
- Centre for Applied Education Research, Bradford Teaching Hospitals NHS Trust, Bradford, United Kingdom
15
Miyamoto T, Numasawa K, Ono S. Changes in visual speed perception induced by anticipatory smooth eye movements. J Neurophysiol 2022; 127:1198-1207. [PMID: 35353633 DOI: 10.1152/jn.00498.2021]
Abstract
Expectations about forthcoming visual motion, shaped by observers' experiences, are known to induce anticipatory smooth eye movements (ASEM) and changes in visual perception. Previous studies have demonstrated discrete effects of expectations on the control of ASEM and on perception. However, the tasks used in these studies could not separate the effects of expectation from those of the execution of ASEM itself on perception. In the current study, we attempted to directly examine the effect of ASEM itself on visual speed perception using a two-alternative forced-choice (2AFC) task, in which observers were asked to track a pair of sequentially presented visual motion stimuli with their eyes and to judge whether the second stimulus (test stimulus) was faster or slower than the first (reference stimulus). Our results showed that observers' visual speed perception, quantified by a psychometric function, shifted according to ASEM velocity. This was the case even though there was no difference in steady-state eye velocity. Further analyses revealed that the observers' perceptual decisions could be explained by a difference in the magnitude of retinal slip velocity in the initial phase of ocular tracking when the reference and test stimuli were presented, rather than in the steady-state phase. Our results provide psychophysical evidence of the importance of initial ocular tracking in visual speed perception and of the strong impact of ASEM.
Affiliation(s)
- Takeshi Miyamoto
- Faculty of Health and Sport Sciences, University of Tsukuba, Ibaraki, Japan
- Kosuke Numasawa
- Graduate School of Comprehensive Human Sciences, University of Tsukuba, Ibaraki, Japan
- Seiji Ono
- Faculty of Health and Sport Sciences, University of Tsukuba, Ibaraki, Japan
16
Hand movements respond to any motion near the endpoint. Atten Percept Psychophys 2022; 84:1820-1825. [PMID: 35338448 PMCID: PMC9338106 DOI: 10.3758/s13414-022-02471-w]
Abstract
Hand movements are pulled in the direction of motion near their planned endpoints. Is this an automatic response to motion signals near those positions, or do we consider what is moving? To find out, we asked participants to hit a target that moved rightward across a patterned surface when it reached an interception zone that was indicated by a circle. The circle was initially at the center of a square. The square was either filled, occluding the patterned surface (tile), or open, such that the patterned surface was not occluded (frame). The square briefly moved leftward or rightward shortly after the target appeared. Thus, participants were either aiming to hit the target on the surface that moved (the tile) or to hit the target on the patterned surface that did not move. Moving the two types of squares produced very similar local motion signals, but for the tile this could be interpreted as motion of an extended surface, while for the frame it could not. Motion onset of the two types of squares yielded very similar responses. Increasing the size of the square, and thus the eccentricity of the local motion signal, reduced the magnitude of the response. Since this reduction was seen for both types of squares, the surface on which the interception zone was presented was clearly not considered. We conclude that the response is driven by local motion signals near the endpoint of the action without considering whether the local surface is moving.
17
Brenner E, Hardon H, Moesman R, Crowe EM, Smeets JBJ. The influences of target size and recent experience on the vigour of adjustments to ongoing movements. Exp Brain Res 2022; 240:1219-1229. [PMID: 35182186 PMCID: PMC9016032 DOI: 10.1007/s00221-022-06325-7]
Abstract
People adjust their on-going movements to changes in the environment. It takes about 100 ms to respond to an abrupt change in a target’s position. Does the vigour of such responses depend on the extent to which responding is beneficial? We asked participants to tap on targets that jumped laterally once their finger started to move. In separate blocks of trials the target either remained at the new position so that it was beneficial to respond to the jump, or jumped back almost immediately so that it was disadvantageous to do so. We also varied the target’s size, because a smaller, less vigorous adjustment is enough to place the finger within a larger target. There was a systematic relationship between the vigour of the response and the remaining time until the tap: the shorter the remaining time the more vigorous the response. This relationship did not depend on the target’s size or whether or not the target jumped back. It was already known that the vigour of responses to target jumps depends on the magnitude of the jump and on the time available for adjusting the movement to that jump. We show that the vigour of the response is precisely tuned to the time available for making the required adjustment irrespective of whether responding in this manner is beneficial.
Affiliation(s)
- Eli Brenner
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, van der Boechorststraat 7, 1081 BT, Amsterdam, The Netherlands.
- Hidde Hardon
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, van der Boechorststraat 7, 1081 BT, Amsterdam, The Netherlands
- Ryan Moesman
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, van der Boechorststraat 7, 1081 BT, Amsterdam, The Netherlands
- Emily M Crowe
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, van der Boechorststraat 7, 1081 BT, Amsterdam, The Netherlands
- Jeroen B J Smeets
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, van der Boechorststraat 7, 1081 BT, Amsterdam, The Netherlands
18
Diurnal and nocturnal mosquitoes escape looming threats using distinct flight strategies. Curr Biol 2022; 32:1232-1246.e5. [DOI: 10.1016/j.cub.2022.01.036]
19
Crowe EM, Smeets JBJ, Brenner E. The response to background motion: Characteristics of a movement stabilization mechanism. J Vis 2021; 21:3. [PMID: 34617956 PMCID: PMC8504189 DOI: 10.1167/jov.21.11.3]
Abstract
When making goal-directed movements toward a target, our hand deviates from its path in the direction of sudden background motion. We propose that this manual following response arises because ongoing movements are constantly guided toward the planned movement endpoint. Such guidance is needed to compensate for modest, unexpected self-motion. Our proposal is that the compensation for such self-motion does not involve a sophisticated analysis of the global optic flow. Instead, we propose that any motion in the vicinity of the planned endpoint is attributed to the endpoint's egocentric position having shifted in the direction of the motion. The ongoing movement is then stabilized relative to the shifted endpoint. In six experiments, we investigate what aspects of motion determine this shift of planned endpoint. We asked participants to intercept a moving target when it reached a certain area. During the target's motion, background structures briefly moved either leftward or rightward. Participants’ hands responded to background motion even when each background structure was only briefly visible or when the vast majority of background structures remained static. The response was not restricted to motion along the target's path but was most sensitive to motion close to where the target was to be hit, both in the visual field and in depth. In this way, a movement stabilization mechanism provides a comprehensive explanation of many aspects of the manual following response.
Affiliation(s)
- Emily M Crowe
- Department of Human Movement Sciences, Institute of Brain and Behavior Amsterdam, Amsterdam Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands.
- Jeroen B J Smeets
- Department of Human Movement Sciences, Institute of Brain and Behavior Amsterdam, Amsterdam Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands.
- Eli Brenner
- Department of Human Movement Sciences, Institute of Brain and Behavior Amsterdam, Amsterdam Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands.
20
Tsutsui K, Fujii K, Kudo K, Takeda K. Flexible prediction of opponent motion with internal representation in interception behavior. Biol Cybern 2021; 115:473-485. [PMID: 34379183 PMCID: PMC8551111 DOI: 10.1007/s00422-021-00891-9]
Abstract
Skilled interception behavior often relies on accurate predictions of external objects because of the large delay in our sensorimotor systems. To deal with this sensorimotor delay, the brain predicts future states of the target based on the current state available, but it is still debated whether internal representations acquired from prior experience are used as well. Here we characterized the manner of prediction by analyzing the response behavior of a pursuer to a sudden directional change of an evasive target, providing strong evidence that the pursuer's prediction of target motion was incompatible with a linear extrapolation based solely on the current state of the target. Moreover, using neural network models, we validated that the estimated nonlinear extrapolation was computationally feasible and useful even against unknown opponents. These results support the use of internal representations in predicting target motion, suggesting the usefulness and versatility of predicting external object motion through internal representations.
Affiliation(s)
- Kazushi Tsutsui
- Graduate School of Informatics, Nagoya University, Nagoya, Japan.
- Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan.
- Keisuke Fujii
- Graduate School of Informatics, Nagoya University, Nagoya, Japan
- RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
- PRESTO, Japan Science and Technology Agency, Tokyo, Japan
- Kazutoshi Kudo
- Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
- Graduate School of Interdisciplinary Information Studies, The University of Tokyo, Tokyo, Japan
- Kazuya Takeda
- Institutes of Innovation for Future Society, Nagoya University, Nagoya, Japan
21
The effect of explicit cues on smooth pursuit termination. Vision Res 2021; 189:27-32. [PMID: 34509706 DOI: 10.1016/j.visres.2021.08.008]
Abstract
Predictive deceleration of eye motion during smooth pursuit is induced by explicit cues indicating the timing of the visual target offset. The first aim of this study (experiment 1) was to determine whether the timing of the onset of cue-based predictive pursuit termination depends on spatial or temporal information, using three target velocities. The second aim (experiment 2) was to examine whether an unexpected offset of the target affects pursuit termination. We conducted a pursuit termination task in which participants tracked a moving target and then stopped tracking after the target disappeared. The results of experiment 1 showed that the onset times of predictive eye deceleration were consistent regardless of target velocity, indicating that their timing is controlled by temporal estimation rather than by the spatial distance between the target and cue positions. In experiment 2, we compared pursuit termination between two conditions: one presented no cues (unknown condition), whereas the other included the same cue as experiment 1, but the target unpredictably disappeared 500 ms before the timing indicated by the cue (unexpected condition). The unexpected condition showed significant delays in the onset of eye deceleration, but no difference in the total time for completion of pursuit termination. Therefore, our findings suggest that cue-based pursuit termination is controlled by the predictive pursuit system, and that an unexpected offset of the target delays the onset of eye deceleration without affecting the duration of pursuit termination.
22
Aguado B, López-Moliner J. Gravity and Known Size Calibrate Visual Information to Time Parabolic Trajectories. Front Hum Neurosci 2021; 15:642025. [PMID: 34497497 PMCID: PMC8420811 DOI: 10.3389/fnhum.2021.642025]
Abstract
Catching a ball in a parabolic flight is a complex task in which the time and area of interception are strongly coupled, making interception possible only for a short period. Although this makes estimating time-to-contact (TTC) from visual information in parabolic trajectories very useful, previous attempts to explain our precision in interceptive tasks circumvent the need to estimate TTC to guide our action. Obtaining TTC from optical variables alone in parabolic trajectories would imply very complex transformations from 2D retinal images to a 3D layout. Building on previous work, we propose, and show using simulations, that exploiting prior distributions of gravity and known physical size makes these transformations much simpler, enabling predictive capacities from minimal early visual information. Optical information is inherently ambiguous, so it is necessary to explain how these prior distributions generate predictions. This is where the role of prior information comes into play: it can help to interpret and calibrate visual information to yield meaningful predictions of the remaining TTC. The objectives of this work are: (1) to describe the primary sources of information available to the observer in parabolic trajectories; (2) to unveil how prior information can be used to disambiguate the sources of visual information within a Bayesian encoding-decoding framework; (3) to show that such predictions might be robust against complex dynamic environments; and (4) to indicate future lines of research for scrutinizing the role of prior knowledge in calibrating visual information and prediction for action control.
Affiliation(s)
- Borja Aguado
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
- Joan López-Moliner
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
23
The Potential Role of Dopamine in Mediating Motor Function and Interpersonal Synchrony. Biomedicines 2021; 9:382. [PMID: 33916451 PMCID: PMC8066519 DOI: 10.3390/biomedicines9040382]
Abstract
Motor functions in general and motor planning in particular are crucial for our ability to synchronize our movements with those of others. To date, these co-occurring functions have been studied separately, and as yet it is unclear whether they share a common biological mechanism. Here, we synthesize disparate recent findings on motor functioning and interpersonal synchrony and propose that these two functions share a common neurobiological mechanism and adhere to the same principles of predictive coding. Critically, we describe the pivotal role of the dopaminergic system in modulating these two distinct functions. We present attention deficit hyperactivity disorder (ADHD) as an example of a disorder that involves the dopaminergic system and describe deficits in motor and interpersonal synchrony. Finally, we suggest possible directions for future studies emphasizing the role of dopamine modulation as a link between social and motor functioning.
24
Fooken J, Kreyenmeier P, Spering M. The role of eye movements in manual interception: A mini-review. Vision Res 2021; 183:81-90. [PMID: 33743442 DOI: 10.1016/j.visres.2021.02.007]
Abstract
When we catch a moving object in mid-flight, our eyes and hands are directed toward the object. Yet, the functional role of eye movements in guiding interceptive hand movements is not yet well understood. This review synthesizes emergent views on the importance of eye movements during manual interception with an emphasis on laboratory studies published since 2015. We discuss the role of eye movements in forming visual predictions about a moving object, and for enhancing the accuracy of interceptive hand movements through feedforward (extraretinal) and feedback (retinal) signals. We conclude by proposing a framework that defines the role of human eye movements for manual interception accuracy as a function of visual certainty and object motion predictability.
Affiliation(s)
- Jolande Fooken
- Department of Psychology and Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada.
- Philipp Kreyenmeier
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada.
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada; Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, Canada; Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, Canada
25
Cámara C, López-Moliner J, Brenner E, de la Malla C. Looking away from a moving target does not disrupt the way in which the movement toward the target is guided. J Vis 2020; 20:5. [PMID: 32407436 PMCID: PMC7409596 DOI: 10.1167/jov.20.5.5]
Abstract
People usually follow a moving object with their gaze if they intend to interact with it. What would happen if they did not? We recorded eye and finger movements while participants moved a cursor toward a moving target. An unpredictable delay in updating the position of the cursor on the basis of that of the invisible finger made it essential to use visual information to guide the finger's ongoing movement. Decreasing the contrast between the cursor and the background from trial to trial made it difficult to see the cursor without looking at it. In separate experiments, either participants were free to hit the target anywhere along its trajectory or they had to move along a specified path. In the two experiments, participants tracked the cursor rather than the target with their gaze on 13% and 32% of the trials, respectively. They hit fewer targets when the contrast was low or a path was imposed. Not looking at the target did not disrupt the visual guidance that was required to deal with the delays that we imposed. Our results suggest that peripheral vision can be used to guide one item to another, irrespective of which item one is looking at.
26
de Brouwer AJ, Flanagan JR, Spering M. Functional Use of Eye Movements for an Acting System. Trends Cogn Sci 2021; 25:252-263. [PMID: 33436307 DOI: 10.1016/j.tics.2020.12.006]
Abstract
Movements of the eyes assist vision and support hand and body movements in a cooperative way. Despite their strong functional coupling, different types of movements are usually studied independently. We integrate knowledge from behavioral, neurophysiological, and clinical studies on how eye movements are coordinated with goal-directed hand movements and how they facilitate motor learning. Understanding the coordinated control of eye and hand movements can provide important insights into brain functions that are essential for performing or learning daily tasks in health and disease. This knowledge can also inform applications such as robotic manipulation and clinical rehabilitation.
Affiliation(s)
- Anouk J de Brouwer
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada.
- J Randall Flanagan
- Centre for Neuroscience Studies, Queen's University, Kingston, Canada; Department of Psychology, Queen's University, Kingston, Canada
- Miriam Spering
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, Canada
27
Schiatti L, Cappagli G, Martolini C, Maviglia A, Signorini S, Gori M, Crepaldi M. A Novel Wearable and Wireless Device to Investigate Perception in Interactive Scenarios. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:3252-3255. [PMID: 33018698 DOI: 10.1109/embc44109.2020.9176167]
Abstract
The aim of the present work is to introduce a novel wearable device suitable for investigating perception in interactive tasks in individuals with and without sensory disabilities. The system is composed of small units with embedded sensors and actuators that allow it to emit different kinds of stimuli (light, haptic, sound) and, thanks to a capacitive sensor, to record the user's response. We validated the system by implementing an interception task in three sensory modalities: visual, tactile, and auditory. Six subjects with normal sight were asked to tap either a static or a moving stimulus generated by six units placed on their forearm. Results suggest that the system can provide new insights into how perceptual principles vary when perceptual judgement occurs through different senses. This confirms the device's potential to contribute to the design of rehabilitation protocols rooted in neuroscientific findings for people with sensory impairments.
28
Barany DA, Gómez-Granados A, Schrayer M, Cutts SA, Singh T. Perceptual decisions about object shape bias visuomotor coordination during rapid interception movements. J Neurophysiol 2020; 123:2235-2248. [DOI: 10.1152/jn.00098.2020]
Abstract
Visual processing for perception and for action is thought to be mediated by two specialized neural pathways. Using a visuomotor decision-making task, we show that participants differentially utilized online perceptual decision-making in reaching and interception and that eye movements necessary for perception influenced motor decision strategies. These results provide evidence that task complexity modulates how pathways processing perception versus action information interact during the visual control of movement.
Affiliation(s)
- Sarah A. Cutts
- Department of Kinesiology, University of Georgia, Athens, Georgia
- Tarkeshwar Singh
- Department of Kinesiology, University of Georgia, Athens, Georgia
29
Langridge RW, Marotta JJ. Grasping a 2D virtual target: The influence of target position and movement on gaze and digit placement. Hum Mov Sci 2020; 71:102625. [PMID: 32452441 DOI: 10.1016/j.humov.2020.102625]
Abstract
While much has been learned about the visual pursuit and motor strategies used to intercept a moving object, less research has focused on the coordination of gaze and digit placement when grasping moving stimuli. Participants grasped 2D computer generated square targets that either encouraged placement of the index finger and thumb along the horizontal midline (Control targets) or had narrow "notches" in the top and bottom surfaces of the target, intended to discourage digit placement near the midline (Experimental targets). In Experiment 1, targets remained stationary at the left, middle, or right side of the screen. Gaze and digit placement were biased toward the closest side of non-central targets, and toward the midline of center targets. These locations were shifted rightward when grasping Experimental targets, suggesting participants prioritized visibility of the target. In Experiment 2, participants grasped horizontally translating targets at early, middle, or late stages of travel. Average gaze and digit placement were consistently positioned behind the moving target's horizontal midline when grasping. Gaze was directed farther behind the midline of Experimental targets, suggesting the absence of a flat central grasp location pulled participants' gaze toward the trailing edge. Participants placed their digits at positions closer to the horizontal midline of leftward moving targets, suggesting participants were compensating for the added mechanical constraints associated with grasping targets moving in a direction contralateral to the grasping hand. These results suggest participants minimize the effort associated with reaching to non-central targets by grasping the nearest side when the target is stationary, but grasp the trailing side of moving targets, even if this means placing the digits at locations on the far side of the target, potentially limiting visibility of the target.
Affiliation(s)
- Ryan W Langridge
- Perception and Action Lab, Department of Psychology, 190 Dysart Rd, University of Manitoba, Winnipeg, MB R3T-2N2, Canada.
- Jonathan J Marotta
- Perception and Action Lab, Department of Psychology, 190 Dysart Rd, University of Manitoba, Winnipeg, MB R3T-2N2, Canada.

30
Smeets JBJ, van der Kooij K, Brenner E. A review of grasping as the movements of digits in space. J Neurophysiol 2019; 122:1578-1597. [DOI: 10.1152/jn.00123.2019]
Abstract
It is tempting to describe human reach-to-grasp movements in terms of two, more or less independent visuomotor channels, one relating hand transport to the object’s location and the other relating grip aperture to the object’s size. Our review of experimental work questions this framework for reasons that go beyond noting the dependence between the two channels. Both the lack of effect of size illusions on grip aperture and the finding that the variability in grip aperture does not depend on the object’s size indicate that size information is not used to control grip aperture. An alternative is to describe grip formation as emerging from controlling the movements of the digits in space. Each digit’s trajectory when grasping an object is remarkably similar to its trajectory when moving to tap the same position on its own. The similarity is also evident in the fast responses when the object is displaced. This review develops a new description of the speed-accuracy trade-off for multiple effectors that is applied to grasping. The most direct support for the digit-in-space framework is that prism-induced adaptation of each digit’s tapping movements transfers to that digit’s movements when grasping, leading to changes in grip aperture for adaptation in opposite directions for the two digits. We conclude that although grip aperture and hand transport are convenient variables to describe grasping, treating grasping as movements of the digits in space is a more suitable basis for understanding the neural control of grasping.
Affiliation(s)
- Jeroen B. J. Smeets
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Katinka van der Kooij
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Eli Brenner
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands

31
Affiliation(s)
- Katja Fiehler
- Department of Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), Universities of Marburg and Giessen, Germany
- Eli Brenner
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada

32
Abstract
When intercepting a moving target, we typically rely on vision to determine where the target is and where it will soon be. The accuracy of visually guided interception can be represented by a model that combines the perceived position and velocity of the target to estimate when and where to hit it and guides the finger accordingly with a short delay. We might expect the accuracy of interception to similarly depend on haptic judgments of position and velocity. To test this, we conducted separate experiments to measure the precision and any biases in tactile perception of position and velocity and used our findings to predict the precision and biases that would be present in an interception task if it were performed according to the principle described earlier. We then performed a tactile interception task to test our predictions. We found that interception of tactile targets is guided by similar principles as interception of visual targets.
Affiliation(s)
- J S Nelson
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands
- G Baud-Bovy
- Department of Robotics, Brain and Cognitive Sciences, Italian Institute of Technology, Genoa, Italy
- E Brenner
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands