1. Fooken J, Balalaie P, Park K, Flanagan JR, Scott SH. Rapid eye and hand responses in an interception task are differentially modulated by context-dependent predictability. J Vis 2024; 24(12):10. PMID: 39556082. DOI: 10.1167/jov.24.12.10.
Abstract
When catching a falling ball or avoiding a collision with traffic, humans can quickly generate eye and limb responses to unpredictable changes in their environment. Mechanisms of limb and oculomotor control when responding to sudden changes in the environment have mostly been investigated independently. Here, we investigated eye-hand coordination in a rapid interception task where human participants used a virtual paddle to intercept a moving target. The target moved vertically down a computer screen and could suddenly jump to the left or right. In high-certainty blocks, the target always jumped; in low-certainty blocks, the target only jumped in a portion of the trials. Further, we manipulated response urgency by varying the time of target jumps, with early jumps requiring less urgent responses and late jumps requiring more urgent responses. Our results highlight differential effects of certainty and urgency on eye-hand coordination. Participants initiated both eye and hand responses earlier for high-certainty compared with low-certainty blocks. Hand reaction times decreased and response vigor increased with increasing urgency levels. However, eye reaction times were lowest for medium-urgency levels and eye vigor was unaffected by urgency. Across all trials, we found a weak positive correlation between eye and hand responses. Taken together, these results suggest that the limb and oculomotor systems use similar early sensorimotor processing; however, rapid responses are modulated differentially to attain system-specific sensorimotor goals.
Affiliation(s)
- Jolande Fooken: Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada; Department of Psychology, Queen's University, Kingston, ON, Canada; Department of Psychology and Centre for Cognitive Science, Technical University of Darmstadt, Darmstadt, Germany
- Parsa Balalaie: Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Kayne Park: Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- J Randall Flanagan: Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada; Department of Psychology, Queen's University, Kingston, ON, Canada
- Stephen H Scott: Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, Canada; Department of Medicine, Queen's University, Kingston, ON, Canada
2. Naffrechoux M, Koun E, Volland F, Farnè A, Roy AC, Pélisson D. Eyes and hand are both reliable at localizing somatosensory targets. Exp Brain Res 2024; 242:2653-2664. PMID: 39340566. DOI: 10.1007/s00221-024-06922-8.
Abstract
Body representations (BR) for action are critical to perform accurate movements. Yet, behavioral measures suggest that BR are distorted even in healthy people. However, the upper limb has mostly been used as a probe so far, making it difficult to decide whether BR are truly distorted or whether this depends on the effector used as a readout. Here, we aimed to assess in healthy humans the accuracy of the eye and hand effectors in localizing somatosensory targets, to determine whether they probe BR similarly. Twenty-six participants completed two localization tasks in which they had to localize an unseen target (proprioceptive or tactile) with either their eyes or hand. A linear mixed model revealed, in both tasks, a larger horizontal (but not vertical) localization error for ocular than for manual localization. However, despite better mean accuracy of the hand, manual and ocular localization performance correlated positively with each other in both tasks. Moreover, target position also affected localization performance for both eye and hand responses: accuracy was higher for the more flexed elbow position in the proprioceptive task and for the thumb than for the index finger in the tactile task, confirming previous reports of better performance for the thumb. These findings indicate that the hand outperforms the eyes along the horizontal axis when localizing somatosensory targets, but the localization patterns revealed by the two effectors appear related and are characterized by the same target effects, opening the way to assessing BR with the eyes when upper limb motor control is disturbed.
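For illustration, a minimal Python sketch of the kind of linear mixed model described above (fixed effect of effector, random intercept per participant); the column names and toy data are assumptions, not the study's:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
effector = np.tile(["eye", "hand"], 10)  # two effectors per participant
df = pd.DataFrame({
    "participant": np.repeat([f"p{i:02d}" for i in range(10)], 2),
    "effector": effector,
    # Hypothetical horizontal localization errors (deg), larger for the eye.
    "horizontal_error_deg": np.where(effector == "eye", 2.0, 1.2)
                            + rng.normal(0, 0.3, 20),
})

# Fixed effect of effector with a random intercept per participant.
model = smf.mixedlm("horizontal_error_deg ~ effector", df, groups=df["participant"])
print(model.fit().summary())
```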
Affiliation(s)
- Marion Naffrechoux: Integrative Multisensory Perception Action and Cognition Team, Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, University Lyon 1, 16 avenue du Doyen Lépine, 69500 Lyon, France; Laboratoire Dynamique Du Langage, CNRS UMR 5596, University Lyon 2, Lyon, France
- Eric Koun: Integrative Multisensory Perception Action and Cognition Team, Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, University Lyon 1, Lyon, France
- Frederic Volland: Integrative Multisensory Perception Action and Cognition Team, Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, University Lyon 1, Lyon, France
- Alessandro Farnè: Integrative Multisensory Perception Action and Cognition Team, Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, University Lyon 1, Lyon, France
- Alice Catherine Roy: Laboratoire Dynamique Du Langage, CNRS UMR 5596, University Lyon 2, Lyon, France
- Denis Pélisson: Integrative Multisensory Perception Action and Cognition Team, Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, University Lyon 1, Lyon, France
3. Vrzáková H, Koskinen J, Andberg S, Lee A, Amon MJ. Towards Automatic Object Detection and Activity Recognition in Indoor Climbing. Sensors (Basel) 2024; 24(19):6479. PMID: 39409520. PMCID: PMC11479384. DOI: 10.3390/s24196479.
Abstract
Rock climbing has grown from a niche pursuit into a mainstream free-time activity and an Olympic sport. Moreover, climbing can be studied as an example of a high-stakes perception-action task. However, understanding what constitutes an expert climber is not simple or straightforward. As a dynamic and high-risk activity, climbing requires a precise interplay between cognition, perception, and action execution. While prior research has predominantly focused on the movement aspect of climbing (i.e., skeletal posture and individual limb movements), recent studies have also examined the climber's visual attention and its links to their performance. Associating the climber's attention with their actions, however, has traditionally required frame-by-frame manual coding of the recorded eye-tracking videos. To overcome this challenge and automatically contextualize the analysis of eye movements in indoor climbing, we present deep-learning-driven (YOLOv5) hold detection that facilitates automatic grasp recognition. To demonstrate the framework, we examined an expert climber's eye movements and egocentric perspective acquired from eye-tracking glasses (SMI and Tobii Glasses 2). Using the framework, we observed that the expert climber's grasping duration was positively correlated with total fixation duration (r = 0.807) and fixation count (r = 0.864), but negatively correlated with fixation rate (r = -0.402) and saccade rate (r = -0.344). The findings indicate the moments of cognitive processing and visual search that occurred during decision making and route prospecting. Our work contributes to research on eye-body performance and coordination in high-stakes contexts, informs sport science, and expands applications in, e.g., training optimization, injury prevention, and coaching.
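A hedged sketch of the two stages of such a pipeline (hold detection with YOLOv5, then correlating grasp durations with gaze metrics); the custom weights file and the per-hold numbers are hypothetical stand-ins, not the authors' model or data:

```python
import torch
from scipy.stats import pearsonr

# Load YOLOv5 via torch.hub with custom hold-detection weights
# ("holds_yolov5.pt" is a hypothetical file name).
model = torch.hub.load("ultralytics/yolov5", "custom", path="holds_yolov5.pt")
results = model("frame_0001.jpg")  # one egocentric frame from the glasses
holds = results.pandas().xyxy[0]   # detected holds as bounding boxes

# Toy per-hold values standing in for the extracted metrics.
grasp_duration_s = [1.2, 0.8, 2.5, 1.9, 0.6]
total_fixation_s = [0.9, 0.7, 2.1, 1.5, 0.4]
r, p = pearsonr(grasp_duration_s, total_fixation_s)
print(f"grasp duration vs. total fixation: r = {r:.2f}, p = {p:.3f}")
```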
Affiliation(s)
- Hana Vrzáková: School of Computing, University of Eastern Finland, FI-80101 Joensuu, Finland
- Jani Koskinen: School of Computing, University of Eastern Finland, FI-80101 Joensuu, Finland
- Sami Andberg: School of Computing, University of Eastern Finland, FI-80101 Joensuu, Finland
- Ahreum Lee: Samsung Electronics, Suwon 16677, Republic of Korea
- Mary Jean Amon: Department of Informatics, Indiana University Bloomington, Bloomington, IN 47408, USA
4. Bloch C, Tepest R, Koeroglu S, Feikes K, Jording M, Vogeley K, Falter-Wagner CM. Interacting with autistic virtual characters: intrapersonal synchrony of nonverbal behavior affects participants' perception. Eur Arch Psychiatry Clin Neurosci 2024; 274:1585-1599. PMID: 38270620. PMCID: PMC11422267. DOI: 10.1007/s00406-023-01750-3.
Abstract
Temporal coordination of communicative behavior is located not only between but also within interaction partners (e.g., gaze and gestures). This intrapersonal synchrony (IaPS) is assumed to constitute interpersonal alignment. Studies show systematic variations in IaPS in individuals with autism, which may affect the degree of interpersonal temporal coordination. In the current study, we reversed the approach and mapped the measured nonverbal behavior of interactants with and without ASD from a previous study onto virtual characters to study the effects of the differential IaPS on observers (N = 68), both with and without ASD (crossed design). During a communication task with both characters, who indicated targets with gaze and delayed pointing gestures, we measured response times, gaze behavior, and post hoc impression formation. Results show that character behavior indicative of ASD resulted in overall enlarged decoding times in observers, and this effect was even more pronounced in observers with ASD. A classification of observers' gaze types indicated differentiated decoding strategies. Whereas non-autistic observers presented with a rather consistent eyes-focused strategy associated with efficient and fast responses, observers with ASD presented with highly variable decoding strategies. In contrast to communication efficiency, impression formation was not influenced by IaPS. The results underline the importance of timing differences in both production and perception processes during multimodal nonverbal communication in interactants with and without ASD. In essence, the current findings locate the manifestation of reduced reciprocity in autism not merely in the person, but in the interactional dynamics of dyads.
Affiliation(s)
- Carola Bloch: Department of Psychiatry and Psychotherapy, Medical Faculty, LMU Clinic, Ludwig-Maximilians-University, 80336 Munich, Germany; Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital Cologne, University of Cologne, 50937 Cologne, Germany
- Ralf Tepest: Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital Cologne, University of Cologne, 50937 Cologne, Germany
- Sevim Koeroglu: Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital Cologne, University of Cologne, 50937 Cologne, Germany
- Kyra Feikes: Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital Cologne, University of Cologne, 50937 Cologne, Germany
- Mathis Jording: Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Forschungszentrum Jülich, 52425 Jülich, Germany
- Kai Vogeley: Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital Cologne, University of Cologne, 50937 Cologne, Germany; Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Forschungszentrum Jülich, 52425 Jülich, Germany
- Christine M Falter-Wagner: Department of Psychiatry and Psychotherapy, Medical Faculty, LMU Clinic, Ludwig-Maximilians-University, 80336 Munich, Germany
5. Illamperuma NH, Fooken J. Towards a functional understanding of gaze in goal-directed action. J Neurophysiol 2024; 132:767-769. PMID: 39110515. DOI: 10.1152/jn.00342.2024.
6. Brand TK, Schütz AC, Müller H, Maurer H, Hegele M, Maurer LK. Sensorimotor prediction is used to direct gaze toward task-relevant locations in a goal-directed throwing task. J Neurophysiol 2024; 132:485-500. PMID: 38919149. DOI: 10.1152/jn.00052.2024.
Abstract
Previous research has shown that action effects of self-generated movements are internally predicted before outcome feedback becomes available. To test whether these sensorimotor predictions are used to facilitate visual information uptake for feedback processing, we measured eye movements during the execution of a goal-directed throwing task. Participants could fully observe the effects of their throwing actions (ball trajectory and either hitting or missing a target) in most of the trials. In a portion of the trials, the ball trajectory was not visible, and participants only received static information about the outcome. We observed a large proportion of predictive saccades, shifting gaze toward the goal region before the ball arrived and outcome feedback became available. Fixation locations after predictive saccades systematically covaried with future ball positions in trials with continuous ball-flight information, but notably also in trials with static outcome feedback, in which only efferent and proprioceptive information about the movement could be used for predictions. Fixation durations at the chosen positions after feedback onset were modulated by action outcome (longer durations for misses than for hits) and outcome uncertainty (longer durations for narrow vs. clear outcomes). Combining both effects, durations were longest for narrow errors and shortest for clear hits, indicating that the chosen locations offer informational value for feedback processing. Thus, humans are able to use sensorimotor predictions to direct their gaze toward task-relevant feedback locations. Outcome-dependent saccade latency differences (miss vs. hit) indicate that predictive valuation processes are also involved in planning predictive saccades.

NEW & NOTEWORTHY: We elucidate the potential benefits of sensorimotor predictions, focusing on how the system actually uses this information to optimize feedback processing in goal-directed actions. Sensorimotor information is used to predict spatial parameters of movement outcomes, guiding predictive saccades toward future action effects. Saccade latencies and fixation durations are modulated by outcome quality, indicating that predictive valuation processes are involved and that the locations chosen are of high informational value for feedback processing.
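A minimal sketch of how such saccades can be labeled predictive, assuming per-trial event times (all values hypothetical): gaze must reach the goal region before the ball, and hence before outcome feedback, arrives:

```python
import numpy as np

# Assumed per-trial event times (s) for when gaze entered the goal region
# and when the ball arrived there.
gaze_at_goal = np.array([0.62, 0.81, 0.55, 1.10])
ball_at_goal = np.array([0.90, 0.90, 0.90, 0.90])

predictive = gaze_at_goal < ball_at_goal  # gaze led the ball
print(f"{predictive.mean():.0%} predictive saccades")
```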
Affiliation(s)
- Theresa K Brand: Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), Universities of Marburg and Giessen, Giessen, Germany
- Alexander C Schütz: General and Biological Psychology, Department of Psychology, Philipps University Marburg, Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), Universities of Marburg and Giessen, Giessen, Germany
- Hermann Müller: Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), Universities of Marburg and Giessen, Giessen, Germany
- Heiko Maurer: Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany
- Mathias Hegele: Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), Universities of Marburg and Giessen, Giessen, Germany
- Lisa K Maurer: Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), Universities of Marburg and Giessen, Giessen, Germany
7. Intoy J, Li YH, Bowers NR, Victor JD, Poletti M, Rucci M. Consequences of eye movements for spatial selectivity. Curr Biol 2024; 34:3265-3272.e4. PMID: 38981478. PMCID: PMC11348862. DOI: 10.1016/j.cub.2024.06.016.
Abstract
What determines spatial tuning in the visual system? Standard views rely on the assumption that spatial information is directly inherited from the relative position of photoreceptors and shaped by neuronal connectivity.1,2 However, human eyes are always in motion during fixation,3,4,5,6 so retinal neurons receive temporal modulations that depend on the interaction of the spatial structure of the stimulus with eye movements. It has long been hypothesized that these modulations might contribute to spatial encoding,7,8,9,10,11,12 a proposal supported by several recent observations.13,14,15,16 A fundamental, yet untested, consequence of this encoding strategy is that spatial tuning is not hard-wired in the visual system but critically depends on how the fixational motion of the eye shapes the temporal structure of the signals impinging onto the retina. Here we used high-resolution techniques for eye-tracking17 and gaze-contingent display control18 to quantitatively test this distinctive prediction. We examined how contrast sensitivity, a hallmark of spatial vision, is influenced by fixational motion, both during normal active fixation and when the spatiotemporal stimulus on the retina is altered to mimic changes in fixational control. We showed that visual sensitivity closely follows the strength of the luminance modulations delivered within a narrow temporal bandwidth, so changes in fixational motion have opposite visual effects at low and high spatial frequencies. By identifying a key role for oculomotor activity in spatial selectivity, these findings have important implications for the perceptual consequences of abnormal eye movements, the sources of perceptual variability, and the function of oculomotor control.
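A worked example of the encoding idea, with an illustrative (assumed) drift speed: fixational drift converts spatial structure into temporal modulations at a frequency given by spatial frequency times retinal speed:

```python
# Ocular drift at speed v (deg/s) sweeps a static grating of spatial
# frequency f_s (cycles/deg) across the retina, producing luminance
# modulations at roughly f_t = f_s * v (Hz).
f_s = 8.0  # spatial frequency, cycles/deg
v = 0.5    # drift speed, deg/s (illustrative value)
f_t = f_s * v
print(f"{f_t:.1f} Hz temporal modulation on the retina")
```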
Affiliation(s)
- Janis Intoy: Center for Visual Science, University of Rochester, Rochester, NY, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Y Howard Li: Center for Visual Science, University of Rochester, Rochester, NY, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Norick R Bowers: Department of Psychology, Justus-Liebig University, Giessen, Germany
- Jonathan D Victor: Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York City, NY, USA
- Martina Poletti: Center for Visual Science, University of Rochester, Rochester, NY, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Michele Rucci: Center for Visual Science, University of Rochester, Rochester, NY, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
8. Kreyenmeier P, Spering M. A unifying framework for studying discrete and continuous human movements. J Neurophysiol 2024; 131:1112-1114. PMID: 38718413. DOI: 10.1152/jn.00186.2024.
Affiliation(s)
- Philipp Kreyenmeier: Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada; Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Miriam Spering: Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada; Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada; Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, British Columbia, Canada; Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, British Columbia, Canada; Edwin S.H. Leong Centre for Healthy Aging, University of British Columbia, Vancouver, British Columbia, Canada
9. Heins F, Lappe M. Oculomotor behavior can be adjusted on the basis of artificial feedback signals indicating externally caused errors. PLoS One 2024; 19:e0302872. PMID: 38768134. PMCID: PMC11104623. DOI: 10.1371/journal.pone.0302872.
Abstract
Whether a saccade is accurate and has reached the target cannot be evaluated during its execution; this evaluation relies on post-saccadic feedback. If the eye has missed the target object, a secondary corrective saccade has to be made to align the fovea with the target. If a systematic post-saccadic error occurs, adaptive changes to the oculomotor behavior are made, such as shortening or lengthening the saccade amplitude. Systematic post-saccadic errors are typically attributed internally to erroneous motor commands. The corresponding adaptive changes to the motor command reduce the error and the need for secondary corrective saccades, and, in doing so, restore accuracy and efficiency. However, adaptive changes to oculomotor behavior also occur if a change in saccade amplitude is beneficial for task performance, or if it is rewarded. Oculomotor learning is thus more complex than reducing a post-saccadic position error. In the current study, we used a novel oculomotor learning paradigm and investigated whether human participants are able to adapt their oculomotor behavior to improve task performance even when they attribute the error externally. The task was to indicate the intended target object among several objects to a simulated human-machine interface by making eye movements. The participants were informed that the system itself could make errors. The decoding process depended on a distorted landing point of the saccade, resulting in decoding errors. Two different types of visual feedback were added to the post-saccadic scene, and we compared how participants used the different feedback types to adjust their oculomotor behavior to avoid errors. We found that task performance improved over time, regardless of the type of feedback. Thus, error feedback from the simulated human-machine interface was used for post-saccadic error evaluation. This indicates that 1) artificial visual feedback signals and 2) externally caused errors might drive adaptive changes to oculomotor behavior.
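A toy sketch of such a simulated interface (all positions and the distortion are assumptions): the intended object is decoded from the saccade landing point after a systematic distortion, so selection errors can arise even for accurate saccades:

```python
import numpy as np

object_positions = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])  # object locations (deg)
intended = 5.0             # the object the participant meant to select
landing = 4.2              # actual saccade endpoint (deg)
distorted = landing + 3.5  # interface applies a systematic distortion
decoded = object_positions[np.argmin(np.abs(object_positions - distorted))]
print("decoding error" if decoded != intended else "correct selection")
```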
Affiliation(s)
- Frauke Heins: Institute for Psychology and Otto-Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Markus Lappe: Institute for Psychology and Otto-Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
10. Coudiere A, Danion FR. Eye-hand coordination all the way: from discrete to continuous hand movements. J Neurophysiol 2024; 131:652-667. PMID: 38381528. DOI: 10.1152/jn.00314.2023.
Abstract
The differentiation between continuous and discrete actions is key for behavioral neuroscience. Although many studies have characterized eye-hand coordination during discrete (e.g., reaching) and continuous (e.g., pursuit tracking) actions, all these studies were conducted separately, using different setups and participants. In addition, how eye-hand coordination operates at the frontier between discrete and continuous movements remains unexplored. Here we filled these gaps by means of a task that could elicit different movement dynamics. Twenty-eight participants were asked to simultaneously track, with their eyes and a joystick, a visual target that followed an unpredictable trajectory and whose position was updated at different rates (from 1.5 to 240 Hz). This procedure allowed us to examine actions ranging from discrete point-to-point movements (low refresh rate) to continuous pursuit (high refresh rate). For comparison, we also tested a manual tracking condition with the eyes fixed and a pure eye-tracking condition (hand fixed). The results showed an abrupt transition between discrete and continuous hand movements around 3 Hz, contrasting with a smooth trade-off between fixations and smooth pursuit. Nevertheless, hand and eye tracking accuracy remained strongly correlated, with each of these depending on whether the other effector was recruited. Moreover, gaze-cursor distance and lag were smaller when the eye and hand performed the task conjointly than separately. Altogether, despite some dissimilarities in eye and hand dynamics when transitioning between discrete and continuous movements, our results emphasize that eye-hand coordination continues to operate smoothly and support the notion of synergies across eye movement types.

NEW & NOTEWORTHY: The differentiation between continuous and discrete actions is key for behavioral neuroscience. By using a visuomotor task in which we manipulate the target refresh rate to trigger different movement dynamics, we explored eye-hand coordination all the way from discrete to continuous actions. Despite abrupt changes in hand dynamics, eye-hand coordination continues to operate via a gradual trade-off between fixations and smooth pursuit, an observation confirming the notion of synergies across eye movement types.
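A sketch of the display manipulation described above, under assumed parameters: a smooth two-dimensional trajectory is re-rendered with a zero-order hold so that the on-screen target position updates at a chosen rate:

```python
import numpy as np

def zero_order_hold(xy: np.ndarray, frame_rate: float, update_rate: float) -> np.ndarray:
    """Hold each position until the next update (xy sampled at frame_rate)."""
    t = np.arange(len(xy)) / frame_rate
    last_update_t = np.floor(t * update_rate) / update_rate
    src = np.clip((last_update_t * frame_rate).astype(int), 0, len(xy) - 1)
    return xy[src]

rng = np.random.default_rng(1)
frame_rate = 240.0
n = int(10 * frame_rate)                                 # one 10-s trial
smooth = np.cumsum(rng.standard_normal((n, 2)), axis=0)  # unpredictable 2-D path
discrete = zero_order_hold(smooth, frame_rate, 1.5)      # point-to-point regime
continuous = zero_order_hold(smooth, frame_rate, 240.0)  # pursuit regime
```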
Affiliation(s)
- Adrien Coudiere: CNRS, Université de Poitiers, Université de Tours, CeRCA, Poitiers, France
- Frederic R Danion: CNRS, Université de Poitiers, Université de Tours, CeRCA, Poitiers, France
11. Lavoie E, Hebert JS, Chapman CS. Comparing eye-hand coordination between controller-mediated virtual reality, and a real-world object interaction task. J Vis 2024; 24(2):9. PMID: 38393742. PMCID: PMC10905649. DOI: 10.1167/jov.24.2.9.
Abstract
Virtual reality (VR) technology has advanced significantly in recent years, with many potential applications. However, it is unclear how well VR simulations mimic real-world experiences, particularly in terms of eye-hand coordination. This study compares eye-hand coordination from a previously validated real-world object interaction task to the same task re-created in controller-mediated VR. We recorded eye and body movements and segmented participants' gaze data using the movement data. In the real-world condition, participants wore a head-mounted eye tracker and motion capture markers and moved a pasta box into and out of a set of shelves. In the VR condition, participants wore a VR headset and moved a virtual box using handheld controllers. Unsurprisingly, VR participants took longer to complete the task. Before picking up or dropping off the box, participants in the real world visually fixated the box about half a second before their hand arrived at the area of action. This 500-ms minimum fixation time before the hand arrived was preserved in VR. Real-world participants disengaged their eyes from the box almost immediately after their hand initiated or terminated the interaction, but VR participants stayed fixated on the box for much longer after it was picked up or dropped off. We speculate that the limited haptic feedback during object interactions in VR forces users to maintain visual fixation on objects longer than in the real world, altering eye-hand coordination. These findings suggest that current VR technology does not replicate real-world experience in terms of eye-hand coordination.
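The reported coordination measures reduce to differences between gaze and hand event times around each pickup and dropoff; a toy sketch with assumed timestamps:

```python
# Assumed event timestamps (s) for one pickup.
eye_on_box = 3.10       # gaze fixates the box
hand_at_box = 3.62      # hand (or controller) arrives at the box
interaction_end = 3.80  # pickup completes
eye_off_box = 4.40      # gaze disengages from the box

eye_lead_s = hand_at_box - eye_on_box      # ~0.5 s in both conditions
eye_lag_s = eye_off_box - interaction_end  # near zero in the real world, prolonged in VR
print(eye_lead_s, eye_lag_s)
```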
Affiliation(s)
- Ewen Lavoie: Faculty of Kinesiology, Sport, and Recreation, Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB, Canada
- Jacqueline S Hebert: Division of Physical Medicine and Rehabilitation, Department of Biomedical Engineering, University of Alberta, Edmonton, AB, Canada; Glenrose Rehabilitation Hospital, Alberta Health Services, Edmonton, AB, Canada
- Craig S Chapman: Faculty of Kinesiology, Sport, and Recreation, Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB, Canada
12. Fooken J, Baltaretu BR, Barany DA, Diaz G, Semrau JA, Singh T, Crawford JD. Perceptual-Cognitive Integration for Goal-Directed Action in Naturalistic Environments. J Neurosci 2023; 43:7511-7522. PMID: 37940592. PMCID: PMC10634571. DOI: 10.1523/jneurosci.1373-23.2023.
Abstract
Real-world actions require one to simultaneously perceive, think, and act on the surrounding world, and thus demand the integration of (bottom-up) sensory information with (top-down) cognitive and motor signals. Studying these processes involves the intellectual challenge of cutting across traditional neuroscience silos and the technical challenge of recording data in uncontrolled natural environments. However, recent advances in techniques such as neuroimaging, virtual reality, and motion tracking allow one to address these issues in naturalistic environments for both healthy participants and clinical populations. In this review, we survey six topics in which naturalistic approaches have advanced both our fundamental understanding of brain function and how neurologic deficits influence goal-directed, coordinated action in naturalistic environments. The first part conveys fundamental neuroscience mechanisms related to visuospatial coding for action, adaptive eye-hand coordination, and visuomotor integration for manual interception. The second part discusses applications of such knowledge to neurologic deficits, specifically, steering in the presence of cortical blindness, the impact of stroke on visual-proprioceptive integration, and the impact of visual search and working memory deficits. This translational approach of extending knowledge from lab to rehab provides new insights into the complex interplay between perceptual, motor, and cognitive control in naturalistic tasks that are relevant for both basic and clinical research.
Affiliation(s)
- Jolande Fooken: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario K7L 3N6, Canada
- Bianca R Baltaretu: Department of Psychology, Justus Liebig University, Giessen 35394, Germany
- Deborah A Barany: Department of Kinesiology, University of Georgia, and Augusta University/University of Georgia Medical Partnership, Athens, Georgia 30602
- Gabriel Diaz: Center for Imaging Science, Rochester Institute of Technology, Rochester, New York 14623
- Jennifer A Semrau: Department of Kinesiology and Applied Physiology, University of Delaware, Newark, Delaware 19713
- Tarkeshwar Singh: Department of Kinesiology, Pennsylvania State University, University Park, Pennsylvania 16802
- J Douglas Crawford: Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
13. Stewart EEM, Fleming RW. The eyes anticipate where objects will move based on their shape. Curr Biol 2023; 33:R894-R895. PMID: 37699342. DOI: 10.1016/j.cub.2023.07.028.
Abstract
Imagine staring into a clear river, starving, desperately searching for a fish to spear and cook. You see a dark shape lurking beneath the surface. It doesn't resemble any sort of fish you've encountered before - but you're hungry. To catch it, you need to anticipate which way it will move when you lunge for it, to compensate for your own sensory and motor processing delays1,2,3. Yet you know nothing about the behaviour of this creature, and do not know in which direction it will try to escape. What cues do you then use to drive such anticipatory responses? Fortunately, many species4, including humans, have the remarkable ability to predict the directionality of objects based on their shape - even if they are unfamiliar and so we cannot rely on semantic knowledge about their movements5. While it is known that such directional inferences can guide attention5, we do not yet fully understand how such causal inferences are made, or the extent to which they enable anticipatory behaviours. Does the oculomotor system, which moves our eyes to optimise visual input, use directional inferences from shape to anticipate upcoming motion direction? Such anticipation is necessary to stabilise the moving object on the high-resolution fovea of the retina while tracking the shape, a primary goal of the oculomotor system6, and to guide any future interactions7,8. Here, we leveraged a well-known behaviour of the oculomotor system: anticipatory smooth eye movements (ASEM), where an increase in eye velocity is observed in the direction of a stimulus' expected motion, before the stimulus actually moves3, to show that the oculomotor system extracts directional information from shape, and uses this inference to predict and anticipate upcoming motion.
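ASEM is typically quantified as the smooth rise in eye velocity just before motion onset; a synthetic sketch under assumed numbers:

```python
import numpy as np

fs = 1000.0                       # eye-tracker sampling rate, Hz
t = np.arange(-0.5, 0.2, 1 / fs)  # stimulus motion starts at t = 0
eye_pos = np.where(t > -0.15, 2.0 * (t + 0.15) ** 2, 0.0)  # pre-motion drift (deg)
eye_vel = np.gradient(eye_pos, 1 / fs)                     # deg/s
window = (t >= -0.05) & (t < 0.0)  # last 50 ms before motion onset
print(f"anticipatory velocity: {eye_vel[window].mean():.2f} deg/s")
```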
Affiliation(s)
- Emma E M Stewart: Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10 F, D-35394 Giessen, Germany; Centre for Mind, Brain and Behaviour (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
- Roland W Fleming: Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10 F, D-35394 Giessen, Germany; Centre for Mind, Brain and Behaviour (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
14. Bloch C, Viswanathan S, Tepest R, Jording M, Falter-Wagner CM, Vogeley K. Differentiated, rather than shared, strategies for time-coordinated action in social and non-social domains in autistic individuals. Cortex 2023; 166:207-232. DOI: 10.1016/j.cortex.2023.05.008.
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental condition with a highly heterogeneous adult phenotype that includes social and non-social behavioral characteristics. The link between the characteristics assignable to the different domains remains unresolved. One possibility is that social and non-social behaviors in autism are modulated by a common underlying deficit. However, here we report evidence supporting an alternative concept that is individual-centered rather than deficit-centered. Individuals are assumed to have a distinctive style in the strategies they adopt to perform social and non-social tasks with these styles presumably being structured differently between autistic individuals and typically-developed (TD) individuals. We tested this hypothesis for the execution of time-coordinated (synchronized) actions. Participants performed (i) a social task that required synchronized gaze and pointing actions to interact with another person, and (ii) a non-social task that required finger-tapping actions synchronized to periodic stimuli at different time-scales and sensory modalities. In both tasks, synchronization behavior differed between ASD and TD groups. However, a principal component analysis of individual behaviors across tasks revealed associations between social and non-social features for the TD persons but such cross-domain associations were strikingly absent for autistic individuals. The highly differentiated strategies between domains in ASD are inconsistent with a general synchronization deficit and instead highlight the individualized developmental heterogeneity in the acquisition of domain-specific behaviors. We propose a cognitive model to help disentangle individual-centered from deficit-centered effects in other domains. Our findings reinforce the importance to identify individually differentiated phenotypes to personalize autism therapies.
Affiliation(s)
- Carola Bloch: Department of Psychiatry and Psychotherapy, LMU University Hospital, LMU Munich, Munich, Germany; Department of Psychiatry, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Shivakumar Viswanathan: Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Forschungszentrum Jülich, Jülich, Germany
- Ralf Tepest: Department of Psychiatry, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Mathis Jording: Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Forschungszentrum Jülich, Jülich, Germany
- Kai Vogeley: Department of Psychiatry, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany; Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Forschungszentrum Jülich, Jülich, Germany
15. de la Malla C, Goettker A. The effect of impaired velocity signals on goal-directed eye and hand movements. Sci Rep 2023; 13:13646. PMID: 37607970. PMCID: PMC10444871. DOI: 10.1038/s41598-023-40394-0.
Abstract
Information about position and velocity is essential to predict where moving targets will be in the future, and to accurately move towards them. But how are the two signals combined over time to complete goal-directed movements? We show that when velocity information is impaired due to using second-order motion stimuli, saccades directed towards moving targets land at positions where targets were ~ 100 ms before saccade initiation, but hand movements are accurate. Importantly, the longer latencies of hand movements allow for additional time to process the sensory information available. When increasing the period of time one sees the moving target before making the saccade, saccades become accurate. In line with that, hand movements with short latencies show higher curvature, indicating corrections based on an update of incoming sensory information. These results suggest that movements are controlled by an independent and evolving combination of sensory information about the target's position and velocity.
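A sketch of how the ~100-ms figure can be recovered, with assumed values: find the time at which the moving target occupied the saccade landing position:

```python
import numpy as np

t = np.arange(0.0, 1.0, 0.001)  # time (s), 1 kHz
target_x = 10.0 * t             # target moving rightward at 10 deg/s
saccade_onset = 0.400           # s
landing_x = 3.0                 # observed landing position (deg)

t_match = t[np.argmin(np.abs(target_x - landing_x))]
lag_ms = (saccade_onset - t_match) * 1000.0
print(f"landing reflects the target position {lag_ms:.0f} ms before saccade onset")
```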
Affiliation(s)
- Cristina de la Malla: Vision and Control of Action Group, Department of Cognition, Development, and Psychology of Education, Institute of Neurosciences, Universitat de Barcelona, Barcelona, Catalonia, Spain
- Alexander Goettker: Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University, Giessen, Germany
16. Pawlowsky C, Thénault F, Bernier PM. Implicit Sensorimotor Adaptation Proceeds in Absence of Movement Execution. eNeuro 2023; 10:ENEURO.0508-22.2023. PMID: 37463743. PMCID: PMC10405882. DOI: 10.1523/eneuro.0508-22.2023.
Abstract
In implicit sensorimotor adaptation, a mismatch between the predicted and actual sensory feedback results in a sensory prediction error (SPE). Sensory predictions have long been thought to be linked to descending motor commands, implying a necessary contribution of movement execution to adaptation. However, recent work has shown that mere motor imagery (MI) also engages predictive mechanisms, opening up the possibility that MI might be sufficient to drive implicit adaptation. In a within-subject design in humans (n = 30), implicit adaptation was assessed in a center-out reaching task, following a single exposure to a visuomotor rotation. It was hypothesized that performing MI of a reaching movement while being provided with an animation of rotated visual feedback (MI condition) would lead to postrotation biases (PRBs) similar to those observed when the movement is executed (Execution condition). Results revealed that both the MI and Execution conditions led to significant directional biases following rotated trials. Yet the magnitude of these biases was significantly larger in the Execution condition. To further probe the contribution of MI to adaptation, a Control condition was conducted in which participants were presented with the same rotated visual animation as in the MI condition, but in which they were prevented from performing MI. Surprisingly, significant biases were also observed in the Control condition, suggesting that MI per se may not have accounted for adaptation. Overall, these results suggest that implicit adaptation can be partially supported by processes other than those that strictly pertain to generating motor commands, although movement execution does potentiate it.
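The visuomotor rotation itself is a plain coordinate rotation of cursor feedback about the start position; a minimal sketch (the 30-degree angle is an assumption, not necessarily the study's):

```python
import numpy as np

theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
hand_xy = np.array([0.0, 10.0])  # true hand position relative to start (cm)
cursor_xy = R @ hand_xy          # rotated cursor shown to the participant
print(cursor_xy)
```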
Affiliation(s)
- Constance Pawlowsky: Département de kinanthropologie, Faculté des Sciences de l'Activité Physique, Université de Sherbrooke, Sherbrooke, Québec J1K 2R1, Canada
- François Thénault: Département de kinanthropologie, Faculté des Sciences de l'Activité Physique, Université de Sherbrooke, Sherbrooke, Québec J1K 2R1, Canada
- Pierre-Michel Bernier: Département de kinanthropologie, Faculté des Sciences de l'Activité Physique, Université de Sherbrooke, Sherbrooke, Québec J1K 2R1, Canada
17. Cataldo A, Di Luca M, Deroy O, Hayward V. Touching with the eyes: Oculomotor self-touch induces illusory body ownership. iScience 2023; 26:106180. PMID: 36895648. PMCID: PMC9988563. DOI: 10.1016/j.isci.2023.106180.
Abstract
Self-touch plays a central role in the construction and plasticity of the bodily self. But which mechanisms support this role? Previous accounts emphasize the convergence of proprioceptive and tactile signals from the touching and the touched body parts. Here, we hypothesise that proprioceptive information is not necessary for self-touch modulation of body-ownership. Because eye movements do not rely on proprioceptive signals as limb movements do, we developed a novel oculomotor self-touch paradigm where voluntary eye movements generated corresponding tactile sensations. We then compared the effectiveness of eye versus hand self-touch movements in generating an illusion of owning a rubber hand. Voluntary oculomotor self-touch was as effective as hand-driven self-touch, suggesting that proprioception does not contribute to body ownership during self-touch. Self-touch may contribute to a unified sense of bodily self by binding voluntary actions toward our own body with their tactile consequences.
Affiliation(s)
- Antonio Cataldo: Institute of Philosophy, School of Advanced Study, University of London, Senate House, London WC1E 7HU, UK; Cognition, Values and Behaviour, Ludwig Maximilian University, 80333 München, Germany; Institute of Cognitive Neuroscience, University College London, Alexandra House, 17 Queen Square, London WC1N 3AZ, UK
- Massimiliano Di Luca: Formerly with Facebook Reality Labs, Redmond, WA, USA; School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- Ophelia Deroy: Institute of Philosophy, School of Advanced Study, University of London, Senate House, London WC1E 7HU, UK; Cognition, Values and Behaviour, Ludwig Maximilian University, 80333 München, Germany
- Vincent Hayward: Institute of Philosophy, School of Advanced Study, University of London, Senate House, London WC1E 7HU, UK; Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, 75005 Paris, France
18. Niederhauser L, Gunser S, Waser M, Mast FW, Caversaccio M, Anschuetz L. Training and proficiency level in endoscopic sinus surgery change residents' eye movements. Sci Rep 2023; 13:79. PMID: 36596830. PMCID: PMC9810736. DOI: 10.1038/s41598-022-25518-2.
Abstract
Nasal surgery is challenging and requires extensive training for safe and efficient treatment. Eye tracking can provide an objective measure of residents' learning curves. The aim of the current study was to assess residents' fixation duration and other dependent variables over the course of a dedicated training in functional endoscopic sinus surgery (FESS). Sixteen residents performed FESS training over 18 sessions, split into three surgical steps. Eye movements, in terms of percent fixation on the screen and average fixation duration, were measured in addition to residents' completion time, cognitive load, and surgical performance. Results indicated improvements in completion time and surgical performance. Cognitive load and average fixation duration showed a significant change within the last step of training. Percent fixation on screen increased within the first step and then plateaued. Results showed that eye movements and cognitive load differed between residents of different proficiency levels. In conclusion, eye tracking is a helpful objective measurement tool in FESS. It provides additional insight into the training level and changes with increasing performance. Expert-like gaze was obtained after half of the training sessions, and increased proficiency in FESS was associated with increased fixation duration.
Affiliation(s)
- Laura Niederhauser: Department of Psychology, University of Bern, Bern, Switzerland
- Sandra Gunser: Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, University Hospital and University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- Manuel Waser: Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, University Hospital and University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- Fred W. Mast: Department of Psychology, University of Bern, Bern, Switzerland
- Marco Caversaccio: Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, University Hospital and University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- Lukas Anschuetz: Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, University Hospital and University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
19. Bosco A, Sanz Diez P, Filippini M, Fattori P. The influence of action on perception spans different effectors. Front Syst Neurosci 2023; 17:1145643. PMID: 37205054. PMCID: PMC10185787. DOI: 10.3389/fnsys.2023.1145643.
Abstract
Perception and action are fundamental processes that characterize our life and our possibility to modify the world around us. Several pieces of evidence have shown an intimate and reciprocal interaction between perception and action, leading us to believe that these processes rely on a common set of representations. The present review focuses on one particular aspect of this interaction: the influence of action on perception from a motor effector perspective during two phases, action planning and the phase following execution of the action. The movements performed by eyes, hands, and legs have a different impact on object and space perception; studies that use different approaches and paradigms have formed an interesting general picture that demonstrates the existence of an action effect on perception, before as well as after its execution. Although the mechanisms of this effect are still being debated, different studies have demonstrated that most of the time this effect pragmatically shapes and primes perception of relevant features of the object or environment which calls for action; at other times it improves our perception through motor experience and learning. Finally, a future perspective is provided, in which we suggest that these mechanisms can be exploited to increase trust in artificial intelligence systems that are able to interact with humans.
Affiliation(s)
- Annalisa Bosco (corresponding author): Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy; Alma Mater Research Institute for Human-Centered Artificial Intelligence (Alma Human AI), University of Bologna, Bologna, Italy
- Pablo Sanz Diez: Carl Zeiss Vision International GmbH, Aalen, Germany; Institute for Ophthalmic Research, Eberhard Karls University Tübingen, Tübingen, Germany
- Matteo Filippini: Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Patrizia Fattori: Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy; Alma Mater Research Institute for Human-Centered Artificial Intelligence (Alma Human AI), University of Bologna, Bologna, Italy
20. Bloch C, Tepest R, Jording M, Vogeley K, Falter-Wagner CM. Intrapersonal synchrony analysis reveals a weaker temporal coherence between gaze and gestures in adults with autism spectrum disorder. Sci Rep 2022; 12:20417. PMID: 36437262. PMCID: PMC9701674. DOI: 10.1038/s41598-022-24605-8.
Abstract
The temporal encoding of nonverbal signals within individuals, referred to as intrapersonal synchrony (IaPS), is an implicit process and essential feature of human communication. Based on existing evidence, IaPS is thought to be a marker of nonverbal behavior characteristics in autism spectrum disorders (ASD), but there is a lack of empirical evidence. The aim of this study was to quantify IaPS in adults during an experimentally controlled real-life interaction task. A sample of adults with a confirmed ASD diagnosis and a matched sample of typically-developed adults were tested (N = 48). Participants were required to indicate the appearance of a target invisible to their interaction partner nonverbally through gaze and pointing gestures. Special eye-tracking software allowed automated extraction of temporal delays between nonverbal signals and their intrapersonal variability with millisecond temporal resolution as indices for IaPS. Likelihood ratio tests of multilevel models showed enlarged delays between nonverbal signals in ASD. Larger delays were associated with greater intrapersonal variability in delays. The results provide a quantitative constraint on nonverbal temporality in typically-developed adults and suggest weaker temporal coherence between nonverbal signals in adults with ASD. The results provide a potential diagnostic marker and inspire predictive coding theories about the role of IaPS in interpersonal synchronization processes.
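A minimal sketch of the IaPS indices, assuming per-trial onset times extracted by the eye-tracking software (values hypothetical): the delay between gaze and gesture, and its within-person variability:

```python
import numpy as np

gaze_onset = np.array([1.02, 0.98, 1.10, 1.05, 1.21])   # gaze shift to target (s)
point_onset = np.array([1.35, 1.29, 1.58, 1.41, 1.69])  # pointing gesture onset (s)

delay_ms = (point_onset - gaze_onset) * 1000.0
print(f"mean delay {delay_ms.mean():.0f} ms, "
      f"intrapersonal variability (SD) {delay_ms.std(ddof=1):.0f} ms")
```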
Affiliation(s)
- Carola Bloch: Department of Psychiatry and Psychotherapy, Medical Faculty, LMU Clinic, Ludwig-Maximilians-University, Nussbaumstraße 7, 80336 Munich, Germany; Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Ralf Tepest: Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Mathis Jording: Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Forschungszentrum Jülich, Jülich, Germany
- Kai Vogeley: Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany; Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Forschungszentrum Jülich, Jülich, Germany
- Christine M Falter-Wagner: Department of Psychiatry and Psychotherapy, Medical Faculty, LMU Clinic, Ludwig-Maximilians-University, Nussbaumstraße 7, 80336 Munich, Germany
21. Kelly KR, Norouzi DM, Nouredanesh M, Jost RM, Cheng-Patel CS, Beauchamp CL, Dao LM, Luu BA, Stager DR, Tung JY, Niechwiej-Szwedo E. Temporal Eye–Hand Coordination During Visually Guided Reaching in 7- to 12-Year-Old Children With Strabismus. Invest Ophthalmol Vis Sci 2022; 63(12):10. DOI: 10.1167/iovs.63.12.10.
Affiliation(s)
- Krista R. Kelly: Retina Foundation of the Southwest, Dallas, TX, United States; Department of Ophthalmology, UT Southwestern Medical Center, Dallas, TX, United States
- Mina Nouredanesh: School of Rehabilitation Science, McMaster University, Hamilton, Ontario, Canada
- Reed M. Jost: Retina Foundation of the Southwest, Dallas, TX, United States
- Lori M. Dao: ABC Eyes Pediatric Ophthalmology, PA, Dallas, TX, United States
- Becky A. Luu: Pediatric Ophthalmology & Adult Strabismus, PA, Plano, TX, United States
- David R. Stager: Pediatric Ophthalmology & Adult Strabismus, PA, Plano, TX, United States
- James Y. Tung: Department of Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, Ontario, Canada
22
|
Lukashova-Sanz O, Agarwala R, Wahl S. Context matters during pick-and-place in VR: Impact on search and transport phases. Front Psychol 2022; 13:881269. [PMID: 36160516 PMCID: PMC9493493 DOI: 10.3389/fpsyg.2022.881269] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Accepted: 08/19/2022] [Indexed: 11/13/2022] Open
Abstract
When considering external assistive systems for people with motor impairments, gaze has been shown to be a powerful tool, as it anticipates motor actions and is promising for inferring an individual's intentions even before the action. To date, the vast majority of studies investigating coordinated eye and hand movements in grasping tasks have focused on single-object manipulation without placing the objects in a meaningful scene. Very little is known about the impact of scene context on how we manipulate objects in an interactive task. The present study investigated how scene context affects human object manipulation in a pick-and-place task in a realistic scenario implemented in VR. During the experiment, participants were instructed to find the target object in a room, pick it up, and transport it to a predefined final location. The impact of the scene context on the different stages of the task was then examined using head and hand movement as well as eye tracking. As the main result, the scene context had a significant effect on the search and transport phases, but not on the reach phase of the task. The present work provides insights into the development of potential intention-predicting support systems, revealing the dynamics of pick-and-place behavior once the task is realized in a realistic, context-rich scenario.
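The phase structure of such a trial can be recovered from a few event timestamps. A minimal sketch, assuming pickup and release times are available and using a placeholder heuristic (not the authors' criterion) for reach onset:

```python
import numpy as np

def segment_trial(t, pickup_time, release_time):
    """Split a pick-and-place trial into the three phases analyzed here.

    t            : sample timestamps (s), monotonically increasing
    pickup_time  : time the target object was grasped (s)
    release_time : time it was placed at the final location (s)
    The pre-pickup span is split by a hypothetical reach onset at 80% of
    its duration; a real pipeline would use a hand-velocity criterion.
    """
    reach_onset = 0.8 * pickup_time
    search = t < reach_onset                        # visual search for the target
    reach = (t >= reach_onset) & (t < pickup_time)  # hand moves to the object
    transport = (t >= pickup_time) & (t <= release_time)  # carry to goal
    return search, reach, transport

t = np.linspace(0.0, 6.0, 601)                      # 100 Hz toy recording
search, reach, transport = segment_trial(t, pickup_time=3.0, release_time=5.5)
print([m.sum() / 100 for m in (search, reach, transport)], "s per phase")
```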
Collapse
Affiliation(s)
- Olga Lukashova-Sanz
- Zeiss Vision Science Lab, Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Carl Zeiss Vision International Gesellschaft mit beschränkter Haftung (GmbH), Aalen, Germany
| | - Rajat Agarwala
- Zeiss Vision Science Lab, Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
| | - Siegfried Wahl
- Zeiss Vision Science Lab, Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Carl Zeiss Vision International Gesellschaft mit beschränkter Haftung (GmbH), Aalen, Germany
| |
Collapse
|
23
|
D'Aquino A, Frank C, Hagan JE, Schack T. Imagining interceptions: Eye movements as an online indicator of covert motor processes during motor imagery. Front Neurosci 2022; 16:940772. [PMID: 35968367 PMCID: PMC9372347 DOI: 10.3389/fnins.2022.940772] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2022] [Accepted: 07/13/2022] [Indexed: 11/21/2022] Open
Abstract
The analysis of eye movements during motor imagery has been used to understand the influence of covert motor processes on visual-perceptual activity. There is evidence that gaze metrics are affected by motor planning, often depending on the spatial and temporal characteristics of a task. However, previous research has focused on simulated actions toward static targets, with limited empirical evidence of how eye movements change in more dynamic environments. This study examined the characteristics of eye movements during motor imagery for an interception task. Twenty-four participants were asked to track a moving target on a computer display and either mentally simulate an interception or rest. The results showed that smooth pursuit variables, such as duration and gain, were lower during motor imagery than during passive observation. These findings indicate that motor plans integrate visual-perceptual information based on task demands and that eye movements during imagery reflect this constraint.
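Smooth pursuit gain, one of the variables reported here, is conventionally computed as eye velocity divided by target velocity over a saccade-free pursuit segment. A minimal sketch on synthetic traces; the sampling rate and velocities are illustrative:

```python
import numpy as np

def pursuit_gain(eye_pos, target_pos, dt):
    """Mean pursuit gain: eye velocity divided by target velocity.

    eye_pos, target_pos : horizontal position traces (deg)
    dt                  : sample interval (s)
    Saccades should be removed beforehand; this sketch assumes a clean
    pursuit segment.
    """
    eye_vel = np.gradient(eye_pos, dt)       # deg/s
    target_vel = np.gradient(target_pos, dt)
    return np.mean(eye_vel) / np.mean(target_vel)

dt = 0.002                                   # 500 Hz eye tracker
t = np.arange(0, 1, dt)
target = 10.0 * t                            # target moving at 10 deg/s
eye = 8.5 * t + np.random.normal(0, 0.05, t.size)  # slightly slower pursuit
print(f"pursuit gain ~ {pursuit_gain(eye, target, dt):.2f}")  # ~0.85
```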
Collapse
Affiliation(s)
- Alessio D'Aquino
- Faculty of Psychology and Sports Science, Neurocognition and Action Biomechanics Group, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
| | - Cornelia Frank
- Institute for Sport and Movement Science, Osnabrück University, Osnabrück, Germany
| | - John Elvis Hagan
- Faculty of Psychology and Sports Science, Neurocognition and Action Biomechanics Group, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
| | - Thomas Schack
- Faculty of Psychology and Sports Science, Neurocognition and Action Biomechanics Group, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Research Institute for Cognition and Robotics (CoR-Lab), Bielefeld University, Bielefeld, Germany
| |
Collapse
|
24
|
Test of Gross Motor Development-3: Item Difficulty and Item Differential Functioning by Gender and Age with Rasch Analysis. Int J Environ Res Public Health 2022; 19:8667. [PMID: 35886518 PMCID: PMC9322710 DOI: 10.3390/ijerph19148667] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Revised: 07/09/2022] [Accepted: 07/13/2022] [Indexed: 12/10/2022]
Abstract
The assessment of motor proficiency is essential across childhood to identify children's strengths and difficulties and to provide adequate instruction and opportunities; assessment is a powerful tool to promote children's development. This study aimed to investigate the hierarchical order of the Test of Gross Motor Development-Third Edition (TGMD-3) items with regard to difficulty level and differential item functioning across gender and age group (3 to 5, 6 to 8, and 9 to 10 years old). Participants were 989 children (3 to 10.9 years; girls n = 491) who were assessed with the TGMD-3. For locomotor skills, appropriate reliability (alpha = 1.0), infit (M = 0.99; SD = 0.17), outfit (M = 1.18; SD = 0.64), and point-biserial correlations (rpb values from 0.14 to 0.58) were found; the trend was similar for ball skills: reliability (alpha = 1.0), infit (M = 0.99; SD = 0.13), outfit (M = 1.08; SD = 0.52), and point-biserial correlations (rpb values from 0.06 to 0.59). Two motor criteria, gallop item 1 and one-hand forehand strike item 4, were the most difficult items; in contrast, run item 2 and two-hand catch item 2 were the easiest. Differential item functioning for age was observed in nine locomotor and ten ball skills items; these items were easier for older children than for younger ones. The TGMD-3 has items with a range of difficulty levels that are capable of differential functioning across age groups.
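For a dichotomous Rasch model, the infit and outfit mean-square statistics reported here follow directly from the standardized residuals. A minimal sketch with simulated abilities and responses; the values are illustrative, not the study's data:

```python
import numpy as np

def rasch_fit(theta, b, X):
    """Outfit and infit mean-squares for one dichotomous Rasch item.

    theta : person ability estimates (logits), shape (n_persons,)
    b     : difficulty of the item (logits)
    X     : observed 0/1 responses, shape (n_persons,)
    """
    P = 1.0 / (1.0 + np.exp(-(theta - b)))   # Rasch success probability
    W = P * (1.0 - P)                        # binomial information (variance)
    z2 = (X - P) ** 2 / W                    # squared standardized residuals
    outfit = z2.mean()                       # unweighted mean-square
    infit = np.sum(W * z2) / np.sum(W)       # information-weighted mean-square
    return infit, outfit

rng = np.random.default_rng(0)
theta = rng.normal(0, 1, 989)                # abilities for 989 children
b = 0.5                                      # a moderately difficult item
X = (rng.random(989) < 1 / (1 + np.exp(-(theta - b)))).astype(float)
print(rasch_fit(theta, b, X))                # both near 1.0 for a fitting item
```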
Collapse
|
25
|
Ayala N, Zafar A, Niechwiej-Szwedo E. Gaze behaviour: A window into distinct cognitive processes revealed by the Tower of London test. Vision Res 2022; 199:108072. [PMID: 35623185 DOI: 10.1016/j.visres.2022.108072] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Revised: 04/20/2022] [Accepted: 05/07/2022] [Indexed: 10/18/2022]
Abstract
The analysis of gaze behaviour during complex tasks provides a promising non-invasive method to examine how specific eye movement patterns relate to various aspects of cognition and action. Notably, the association between aspects of gaze behaviour and subsequent goal-directed action during high-level visuospatial problem solving remains elusive. Therefore, the current study comprehensively examined gaze behaviour using traditional and entropy-based gaze analyses in healthy adults (N = 27) while they performed the Freiburg version of the Tower of London task. Results demonstrated that both gaze analyses provided crucial temporal and spatial information related to planning, solution elaboration and execution. Specifically, gaze biases toward task-relevant areas (i.e., the work space) and an increase in gaze complexity (i.e., gaze transition entropy) during optimal performance reflected changes in cognitive demands as task difficulty increased. A comparison between optimal and non-optimal performance revealed sub-optimal gaze patterns that occurred in the early stages of planning, which were taken to reflect poor information extraction from the task environment and impaired maintenance of information in visuospatial working memory. Gaze behaviour during movement execution indicated an increased need to extract and process information from the goal space. Consequently, movement execution time increased in order to reverse erroneous movements and re-sequence the problem solution. Taken together, the traditional and entropy-based gaze analyses applied in the present study provide a promising approach to identify eye movement patterns that support neurocognitive performance on tasks relying on visuospatial planning and problem solving.
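Gaze transition entropy, the complexity measure used here, is the Shannon entropy of the fixation transition matrix over areas of interest (AOIs), weighted by the stationary AOI probabilities. A minimal sketch on a toy scanpath:

```python
import numpy as np

def transition_entropy(aoi_sequence, n_aoi):
    """Gaze transition entropy over areas of interest (AOIs).

    aoi_sequence : fixation-by-fixation AOI labels, e.g. [0, 2, 1, 2, ...]
    n_aoi        : number of distinct AOIs
    H = -sum_i p(i) * sum_j p(j|i) * log2 p(j|i); higher values indicate
    more complex, less stereotyped scanning.
    """
    counts = np.zeros((n_aoi, n_aoi))
    for a, b in zip(aoi_sequence[:-1], aoi_sequence[1:]):
        counts[a, b] += 1                            # tally observed transitions
    p_i = counts.sum(axis=1) / counts.sum()          # stationary AOI probabilities
    with np.errstate(divide="ignore", invalid="ignore"):
        p_ij = counts / counts.sum(axis=1, keepdims=True)  # conditional transitions
        h_rows = np.nansum(-p_ij * np.log2(np.where(p_ij > 0, p_ij, np.nan)),
                           axis=1)                   # row-wise entropies
    return float(np.nansum(p_i * h_rows))

seq = [0, 1, 0, 2, 1, 2, 0, 1, 2, 2, 0]              # toy scanpath over 3 AOIs
print(f"gaze transition entropy: {transition_entropy(seq, 3):.2f} bits")
```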
Collapse
Affiliation(s)
- Naila Ayala
- Department of Kinesiology and Health Sciences, University of Waterloo, Canada
| | - Abdullah Zafar
- Department of Kinesiology and Health Sciences, University of Waterloo, Canada
| | | |
Collapse
|
26
|
Abekawa N, Ito S, Gomi H. Gaze-specific motor memories for hand-reaching. Curr Biol 2022; 32:2747-2753.e6. [DOI: 10.1016/j.cub.2022.04.065] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Revised: 03/23/2022] [Accepted: 04/22/2022] [Indexed: 10/18/2022]
|
27
|
Making a saccade enhances Stroop and Simon conflict control. Atten Percept Psychophys 2022; 84:795-814. [PMID: 35304699 DOI: 10.3758/s13414-022-02458-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/09/2022] [Indexed: 11/08/2022]
Abstract
Cognitive control is an important ability instantiated in many situations such as conflict control (e.g., Stroop/Simon task) and the control of eye movements (e.g., saccades). However, it is unclear whether eye movement control shares a common cognitive control system with the conflict control. In Experiment 1, we asked participants to make a prosaccade or antisaccade and then to identify the color of a lateralized color word (i.e., a Stroop-Simon stimulus). The stimulus onset asynchrony (SOA) between the saccadic cue and the Stroop-Simon stimulus was manipulated to be either short (200 ms) or long (600 ms). Results showed that the Stroop effect at the response level and the (negative) Simon effect were smaller when the SOA was short than long, demonstrating a decline of response control over time after making a saccade. Moreover, this temporal change of the Simon effect was more pronounced in the antisaccade session than in the prosaccade session. Furthermore, individuals who had better performance in the antisaccade task performed better in the response control of Stroop interference. When the saccade task was removed in Experiment 2, the temporal declines of the response control observed in Experiment 1 were absent. Experiment 3 replicated the key results of Experiment 1 by replacing the Stroop-Simon task with a typical Simon task and separately testing the typical Stroop and Simon tasks. Overall, our findings suggest that a common system is shared between the control of eye movements and the conflict control at the response level.
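The conflict-control measures in such a design reduce to condition-wise reaction-time contrasts computed at each SOA. A minimal sketch with illustrative numbers, not the study's data:

```python
import numpy as np

# Toy reaction times (ms): the conflict effect at each saccade-to-stimulus
# SOA is mean RT(incongruent) minus mean RT(congruent).
rt = {
    (200, "congruent"): np.array([512, 498, 530, 505]),
    (200, "incongruent"): np.array([548, 561, 539, 552]),
    (600, "congruent"): np.array([501, 495, 517, 509]),
    (600, "incongruent"): np.array([571, 588, 566, 579]),
}

for soa in (200, 600):
    effect = rt[(soa, "incongruent")].mean() - rt[(soa, "congruent")].mean()
    print(f"SOA {soa} ms: conflict effect = {effect:.0f} ms")
# A smaller effect at the short SOA mirrors the reported post-saccadic
# enhancement of conflict control that has faded by 600 ms.
```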
Collapse
|
28
|
de Brouwer AJ, Spering M. Eye-hand coordination during online reach corrections is task dependent. J Neurophysiol 2022; 127:885-895. [PMID: 35294273 DOI: 10.1152/jn.00270.2021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
To produce accurate movements, the human motor system needs to deal with errors that can occur due to inherent noise, changes in the body, or disturbances in the environment. Here, we investigated the temporal coupling of rapid corrections of the eye and hand in response to a change in visual target location during the movement. In addition to a "classic" double-step task in which the target stepped to a new position, participants performed a set of modified double-step tasks in which the change in movement goal was indicated by the appearance of an additional target, or by a spatial or symbolic cue. We found that both the absolute correction latencies of the eye and hand and the relative eye-hand correction latencies were dependent on the visual characteristics of the target change, with increasingly longer latencies in tasks that required more visual and cognitive processing. Typically, the hand started correcting slightly earlier than the eye, especially when the target change was indicated by a symbolic cue, and in conditions where visual feedback of the hand position was provided during the reach. Our results indicate that the oculomotor and limb-motor systems can be differentially influenced by processing requirements of the task and emphasize that temporal eye-hand coupling is flexible rather than rigid. NEW & NOTEWORTHY: Eye movements support hand movements in many situations. Here, we used variations of a double-step task to investigate temporal coupling of corrective hand and eye movements in response to target displacements. Correction latency coupling depended on the visual and cognitive processing demands of the task. The hand started correcting before the eye, especially when the task required decoding a symbolic cue. These findings highlight the flexibility and task dependency of eye-hand coordination.
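Correction latencies in double-step tasks are commonly estimated from the hand's velocity component orthogonal to the initial reach direction. A minimal sketch using a sustained-threshold criterion; the authors' exact detection method may differ:

```python
import numpy as np

def correction_latency(t, lateral_vel, jump_time, thresh=5.0, min_dur=0.02):
    """Estimate hand correction latency after a lateral target jump.

    t           : timestamps (s)
    lateral_vel : hand velocity orthogonal to the main reach direction (cm/s)
    jump_time   : time of the target displacement (s)
    Latency is the first post-jump time at which |lateral velocity| exceeds
    `thresh` and stays above it for `min_dur` seconds (one common criterion).
    """
    dt = np.median(np.diff(t))
    need = max(1, int(round(min_dur / dt)))      # samples the run must last
    above = (np.abs(lateral_vel) > thresh) & (t >= jump_time)
    run = 0
    for i, flag in enumerate(above):
        run = run + 1 if flag else 0
        if run >= need:                          # sustained crossing found
            return t[i - need + 1] - jump_time
    return np.nan                                # no correction detected

t = np.arange(0, 0.8, 0.001)
vel = np.where(t > 0.45, 30 * (t - 0.45) / 0.1, 0)   # correction ramps up ~450 ms
print(f"correction latency ~ {correction_latency(t, vel, jump_time=0.3) * 1000:.0f} ms")
```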
Collapse
Affiliation(s)
- Anouk J de Brouwer
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
| | - Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada; Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, British Columbia, Canada; Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia, Canada
| |
Collapse
|
29
|
Koskinen J, Torkamani-Azar M, Hussein A, Huotarinen A, Bednarik R. Automated tool detection with deep learning for monitoring kinematics and eye-hand coordination in microsurgery. Comput Biol Med 2021; 141:105121. [PMID: 34968859 DOI: 10.1016/j.compbiomed.2021.105121] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Revised: 11/30/2021] [Accepted: 12/03/2021] [Indexed: 11/03/2022]
Abstract
In microsurgical procedures, surgeons use micro-instruments under high magnification to handle delicate tissues. These procedures require highly skilled attentional and motor control for planning and implementing eye-hand coordination strategies. Eye-hand coordination in surgery has mostly been studied in open, laparoscopic, and robot-assisted surgeries, as there are no available tools to perform automatic tool detection in microsurgery. We introduce and investigate a method for simultaneous detection and processing of micro-instruments and gaze during microsurgery. We train and evaluate a convolutional neural network for detecting 17 microsurgical tools with a dataset of 7500 frames from 20 videos of simulated and real surgical procedures. Model evaluation yields a mean average precision at the 0.5 threshold of 89.5-91.4% for validation and 69.7-73.2% for testing on partially unseen surgical settings, with an average inference speed of 39.90 ± 1.2 frames per second. While prior research has mostly evaluated surgical tool detection on homogeneous datasets with a limited number of tools, we demonstrate the feasibility of transfer learning and conclude that detectors that generalize reliably to new settings require data from several different surgical procedures. In a case study, we apply the detector with a microscope eye tracker to investigate tool use and eye-hand coordination during an intracranial vessel dissection task. The results show that tool kinematics differentiate microsurgical actions. The gaze-to-microscissors distances are also smaller during dissection than during other actions, when the surgeon has more space to maneuver. The presented detection pipeline provides the clinical and research communities with a valuable resource for automatic content extraction and objective skill assessment in various microsurgical environments.
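With per-frame detections in hand, the gaze-to-tool distance used in the case study is a straightforward computation. A minimal sketch, assuming the detector outputs axis-aligned bounding boxes in image coordinates:

```python
import numpy as np

def gaze_tool_distance(gaze_xy, boxes):
    """Per-frame Euclidean distance between gaze and a detected tool.

    gaze_xy : (n_frames, 2) gaze positions in image pixels
    boxes   : (n_frames, 4) tool bounding boxes as (x1, y1, x2, y2) from the
              detector; NaN rows mark frames where the tool was not detected
    """
    centers = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)
    return np.linalg.norm(gaze_xy - centers, axis=1)   # NaN where undetected

gaze = np.array([[640.0, 360.0], [650.0, 380.0]])       # two toy video frames
scissors = np.array([[600, 320, 680, 400], [700, 300, 780, 380]], float)
print(gaze_tool_distance(gaze, scissors))               # px per frame
```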
Collapse
Affiliation(s)
- Jani Koskinen
- School of Computing, University of Eastern Finland, Länsikatu 15, Joensuu, 80100, Pohjois-Karjala, Finland.
| | - Mastaneh Torkamani-Azar
- School of Computing, University of Eastern Finland, Länsikatu 15, Joensuu, 80100, Pohjois-Karjala, Finland
| | - Ahmed Hussein
- Microsurgery Center, Kuopio University Hospital, Kuopio, 70211, Pohjois-Savo, Finland; Department of Neurosurgery, Faculty of Medicine, Assiut University, Assiut, 71111, Egypt
| | - Antti Huotarinen
- Microsurgery Center, Kuopio University Hospital, Kuopio, 70211, Pohjois-Savo, Finland; Department of Neurosurgery, Institute of Clinical Medicine, Kuopio University Hospital, Kuopio, 70211, Pohjois-Savo, Finland
| | - Roman Bednarik
- School of Computing, University of Eastern Finland, Länsikatu 15, Joensuu, 80100, Pohjois-Karjala, Finland
| |
Collapse
|
30
|
Willemse C, Abubshait A, Wykowska A. Motor behaviour mimics the gaze response in establishing joint attention, but is moderated by individual differences in adopting the intentional stance towards a robot avatar. Vis Cogn 2021. [DOI: 10.1080/13506285.2021.1994494] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Cesco Willemse
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia, via Enrico Melen 83, Genova 16152, Italy
| | - Abdulaziz Abubshait
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia, via Enrico Melen 83, Genova 16152, Italy
| | - Agnieszka Wykowska
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia, via Enrico Melen 83, Genova 16152, Italy
| |
Collapse
|
31
|
Arthur T, Harris DJ. Predictive eye movements are adjusted in a Bayes-optimal fashion in response to unexpectedly changing environmental probabilities. Cortex 2021; 145:212-225. [PMID: 34749190 DOI: 10.1016/j.cortex.2021.09.017] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Revised: 08/18/2021] [Accepted: 09/27/2021] [Indexed: 11/30/2022]
Abstract
This study examined the application of active inference to dynamic visuomotor control. Active inference proposes that actions are dynamically planned according to uncertainty about sensory information, prior expectations, and the environment, with motor adjustments serving to minimise future prediction errors. We investigated whether predictive gaze behaviours are indeed adjusted in this Bayes-optimal fashion during a virtual racquetball task. In this task, participants intercepted bouncing balls with varying levels of elasticity, under conditions of higher or lower environmental volatility. Participants' gaze patterns differed between stable and volatile conditions in a manner consistent with generative models of Bayes-optimal behaviour. Partially observable Markov models also revealed an increased rate of associative learning in response to unpredictable shifts in environmental probabilities, although there was no overall effect of volatility on this parameter. These findings extend active inference frameworks to complex and unconstrained visuomotor tasks and carry important implications for a neurocomputational understanding of the visual guidance of action.
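The faster belief updating expected under volatility can be illustrated with a toy Beta-Bernoulli learner with exponential forgetting; this is a stand-in sketch, not the partially observable Markov model fitted in the study:

```python
import numpy as np

def beta_update(prior_a, prior_b, outcomes, decay=1.0):
    """Sequential Beta-Bernoulli estimate of an environmental probability.

    outcomes : sequence of 0/1 events (e.g., ball bounces high vs. low)
    decay    : exponential forgetting in (0, 1]; values < 1 discount old
               evidence, yielding faster belief updating under volatility.
    """
    a, b, estimates = prior_a, prior_b, []
    for x in outcomes:
        a = decay * a + x                    # evidence for the event
        b = decay * b + (1 - x)              # evidence against it
        estimates.append(a / (a + b))        # posterior mean after each trial
    return np.array(estimates)

outcomes = [1, 1, 1, 1, 0, 0, 0, 0]          # probability shifts mid-block
print(np.round(beta_update(1, 1, outcomes, decay=1.0), 2))   # stable observer
print(np.round(beta_update(1, 1, outcomes, decay=0.7), 2))   # volatile observer
```

With decay below 1, the estimate tracks the mid-block shift more quickly, which is the qualitative signature of an increased learning rate under volatility.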
Collapse
Affiliation(s)
- Tom Arthur
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, Exeter, EX1 2LU, UK; Centre for Applied Autism Research, Department of Psychology, University of Bath, Bath, BA2 7AY, UK
| | - David J Harris
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, Exeter, EX1 2LU, UK.
| |
Collapse
|
32
|
Niechwiej-Szwedo E, Wu S, Nouredanesh M, Tung J, Christian LW. Development of eye-hand coordination in typically developing children and adolescents assessed using a reach-to-grasp sequencing task. Hum Mov Sci 2021; 80:102868. [PMID: 34509902 DOI: 10.1016/j.humov.2021.102868] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2021] [Revised: 06/11/2021] [Accepted: 08/31/2021] [Indexed: 11/18/2022]
Abstract
Eye-hand coordination is required to accurately perform daily activities that involve reaching, grasping and manipulating objects. Studies using aiming, grasping or sequencing tasks have shown a stereotypical temporal coupling pattern where the eyes are directed to the object in advance of the hand movement, which may facilitate the planning and execution required for reaching. While the temporal coordination between the ocular and manual systems has been extensively investigated in adults, relatively little is known about the typical development of eye-hand coordination. Therefore, the current study addressed an important knowledge gap by characterizing the profile of eye-hand coupling in typically developing school-age children (n = 57) and in a cohort of adults (n = 30). Eye and hand movements were recorded concurrently during the performance of a bead-threading task, which consists of four distinct movements: reach to bead, grasp, reach to needle, and thread. Results showed a moderate to high correlation between eye and hand latencies in children and adults, suggesting that both movements were planned in parallel. Eye and reach latencies, latency differences, and dwell time during grasping and threading showed significant age-related differences, suggesting that eye-hand coupling becomes more efficient in adolescence. Furthermore, visual acuity, stereoacuity and accommodative facility were found to be associated with the efficiency of eye-hand coordination in children. Results from this study can serve as reference values when examining eye and hand movements during the performance of fine motor skills in children with neurodevelopmental disorders.
Collapse
Affiliation(s)
- Ewa Niechwiej-Szwedo
- Kinesiology, University of Waterloo, 200 University Ave W, Waterloo ON N2L 3G1, Canada.
| | - Susana Wu
- Kinesiology, University of Waterloo, 200 University Ave W, Waterloo ON N2L 3G1, Canada
| | - Mina Nouredanesh
- Mechanical and Mechatronics Engineering, University of Waterloo, 200 University Ave W, Waterloo ON N2L 3G1, Canada
| | - James Tung
- Mechanical and Mechatronics Engineering, University of Waterloo, 200 University Ave W, Waterloo ON N2L 3G1, Canada
| | - Lisa W Christian
- School of Optometry and Vision Science, University of Waterloo, 200 University Ave W, Waterloo ON N2L 3G1, Canada
| |
Collapse
|
33
|
Topical Review: Perceptual-cognitive Skills, Methods, and Skill-based Comparisons in Interceptive Sports. Optom Vis Sci 2021; 98:681-695. [PMID: 34328450 DOI: 10.1097/opx.0000000000001727] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
SIGNIFICANCE: We give a comprehensive picture of perceptual-cognitive (PC) skills that could contribute to performance in interceptive sports. Both low-level visual skills that are unlikely to be influenced by experience and higher-level cognitive-attentional skills are considered, informing practitioners for identification and training and alerting researchers to gaps in the literature. Perceptual-cognitive skills and abilities are keys to success in interceptive sports. Interest in identifying which skills and abilities underpin success, and hence should be selected and developed, is likely to grow as technologies for skill testing and training continue to advance. Many different methods and measures have been applied to the study of PC skills in the laboratory and in the field, and research findings across studies have often been inconsistent. In this article, we provide definitional clarity regarding whether a skill is primarily visual-attentional (ranging from fundamental, low-level skills to high-level skills) or cognitive. We review skills that have been studied using sport-specific stimuli or tests, such as postural cue anticipation in baseball, as well as those that are largely devoid of sport context and considered general skills, such as dynamic visual acuity. In addition to detailing the PC skills and associated methods, we provide an accompanying table of published research since 1995, highlighting studies (for various skills and sports) that have and have not differentiated across skill groups.
Collapse
|
34
|
Jana S, Gopal A, Murthy A. Computational Mechanisms Mediating Inhibitory Control of Coordinated Eye-Hand Movements. Brain Sci 2021; 11:607. [PMID: 34068477 PMCID: PMC8150398 DOI: 10.3390/brainsci11050607] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2021] [Revised: 05/02/2021] [Accepted: 05/04/2021] [Indexed: 11/17/2022] Open
Abstract
Significant progress has been made in understanding the computational and neural mechanisms that mediate eye and hand movements made in isolation. However, less is known about the mechanisms that control these movements when they are coordinated. Here, we outline our computational approaches using accumulation-to-threshold and race-to-threshold models to elucidate the mechanisms that initiate and inhibit these movements. We suggest that, depending on the behavioral context, the initiation and inhibition of coordinated eye-hand movements can operate in two modes: coupled and decoupled. The coupled mode operates when the task context requires a tight coupling between the effectors; a common command initiates both effectors, and a unitary inhibitory process is responsible for stopping them. Conversely, the decoupled mode operates when the task context demands weaker coupling between the effectors; separate commands initiate the eye and hand, and separate inhibitory processes are responsible for stopping them. We hypothesize that the higher-order control processes assess the behavioral context and choose the most appropriate mode. This computational mechanism can explain the heterogeneous results observed across many studies that have investigated the control of coordinated eye-hand movements and may also serve as a general framework to understand the control of complex multi-effector movements.
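The race-to-threshold logic described here can be sketched as two noisy accumulators, with inhibition succeeding when the STOP process reaches threshold before the GO process. A minimal simulation with illustrative parameters:

```python
import numpy as np

def race_trial(rng, go_rate, stop_rate, stop_delay, threshold=1.0,
               dt=0.001, noise=0.05, t_max=1.0):
    """One trial of a stochastic GO vs. STOP race to threshold.

    The GO accumulator starts at t = 0; the STOP accumulator starts at
    `stop_delay` (the stop-signal delay). The movement is inhibited if
    STOP reaches threshold first. Rates are in threshold units per second.
    """
    go = stop = 0.0
    for step in range(int(t_max / dt)):
        t = step * dt
        go += go_rate * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if t >= stop_delay:
            stop += stop_rate * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if stop >= threshold:
            return "inhibited", t
        if go >= threshold:
            return "executed", t
    return "no decision", t_max

rng = np.random.default_rng(1)
results = [race_trial(rng, go_rate=4.0, stop_rate=8.0, stop_delay=0.15)[0]
           for _ in range(1000)]
print("P(inhibit) ~", results.count("inhibited") / 1000)
```

In a coupled mode, a single such race would gate both effectors; in a decoupled mode, the eye and hand would each run their own race.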
Collapse
Affiliation(s)
- Sumitash Jana
- Department of Psychology, University of California San Diego, La Jolla, CA 92093, USA
| | - Atul Gopal
- Laboratory of Sensorimotor Research, National Eye Institute, Bethesda, MD 20814, USA
| | - Aditya Murthy
- Centre for Neuroscience, Indian Institute of Science, Bangalore, Karnataka 560012, India
| |
Collapse
|
35
|
Abstract
Motor adaptation maintains movement accuracy over the lifetime. Saccadic eye movements have been used successfully to study the mechanisms and neural basis of adaptation. Using behaviorally irrelevant targets, it has been shown that saccade adaptation is driven by errors only in a brief temporal interval after movement completion. However, under natural conditions, eye movements are used to extract information from behaviorally relevant objects and to guide actions manipulating these objects. In this case, the action outcome often becomes apparent only long after movement completion, outside the supposed temporal window of error evaluation. Here, we show that saccade adaptation can be driven by error signals long after the movement when using behaviorally relevant targets. Adaptation occurred when a task-relevant target appeared two seconds after the saccade, or when a retro-cue indicated which of two targets, stored in visual working memory, was task-relevant. Our results emphasize the important role of visual working memory for optimal movement control.
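The error-driven adaptation described here is often summarized with a delta rule: each post-saccadic error nudges the saccadic gain by a fraction of that error, whether the error is seen immediately or retrieved later from working memory. A toy sketch with illustrative values, not a fit to the reported data:

```python
# Minimal delta-rule sketch of saccadic gain adaptation. The learning rate,
# step size, and trial count are illustrative placeholders.
gain, lr, n_trials = 1.0, 0.05, 60
target_step = -0.25                          # intra-saccadic back-step (25%)
gains = []
for trial in range(n_trials):
    landing_error = (1 + target_step) - gain  # error revealed after the saccade
    gain += lr * landing_error                # error-driven gain update
    gains.append(gain)

print(f"gain after {n_trials} trials: {gains[-1]:.2f}  (adapts toward 0.75)")
```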
Collapse
|
36
|
Fooken J, Kreyenmeier P, Spering M. The role of eye movements in manual interception: A mini-review. Vision Res 2021; 183:81-90. [PMID: 33743442 DOI: 10.1016/j.visres.2021.02.007] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Revised: 01/28/2021] [Accepted: 02/04/2021] [Indexed: 10/21/2022]
Abstract
When we catch a moving object in mid-flight, our eyes and hands are directed toward the object. Yet, the functional role of eye movements in guiding interceptive hand movements is not yet well understood. This review synthesizes emergent views on the importance of eye movements during manual interception with an emphasis on laboratory studies published since 2015. We discuss the role of eye movements in forming visual predictions about a moving object, and for enhancing the accuracy of interceptive hand movements through feedforward (extraretinal) and feedback (retinal) signals. We conclude by proposing a framework that defines the role of human eye movements for manual interception accuracy as a function of visual certainty and object motion predictability.
Collapse
Affiliation(s)
- Jolande Fooken
- Department of Psychology and Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada.
| | - Philipp Kreyenmeier
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada.
| | - Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada; Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, Canada; Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, Canada
| |
Collapse
|