1. Kang JU, Mooshagian E, Snyder LH. Functional organization of posterior parietal cortex circuitry based on inferred information flow. Cell Rep 2024; 43:114028. PMID: 38581681. DOI: 10.1016/j.celrep.2024.114028.
Abstract
Many studies infer the role of neurons by asking what information can be decoded from their activity or by observing the consequences of perturbing their activity. An alternative approach is to consider information flow between neurons. We applied this approach to the parietal reach region (PRR) and the lateral intraparietal area (LIP) in posterior parietal cortex. Two complementary methods imply that across a range of reaching tasks, information flows primarily from PRR to LIP. This indicates that during a coordinated reach task, LIP has minimal influence on PRR and rules out the idea that LIP forms a general purpose spatial processing hub for action and cognition. Instead, we conclude that PRR and LIP operate in parallel to plan arm and eye movements, respectively, with asymmetric interactions that likely support eye-hand coordination. Similar methods can be applied to other areas to infer their functional relationships based on inferred information flow.
Affiliation(s)
- Jung Uk Kang, Department of Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, USA
- Eric Mooshagian, Department of Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, USA
- Lawrence H Snyder, Department of Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, USA
2. Irons JY, Williams A, Holland J, Jones J. An Exploration of People Living with Parkinson's Experience of Cardio-Drumming; Parkinson's Beats: A Qualitative Phenomenological Study. Int J Environ Res Public Health 2024; 21:514. PMID: 38673425. PMCID: PMC11050379. DOI: 10.3390/ijerph21040514.
Abstract
Research has shown that physical activity has a range of benefits for people living with Parkinson's (PLwP), improving muscle strength, balance, flexibility, and walking, as well as non-motor symptoms such as mood. Parkinson's Beats is a form of cardio-drumming specifically adapted for PLwP and requires no previous experience or skills. Nineteen PLwP (aged between 55 and 80) took part in regular Parkinson's Beats sessions in person or online. Focus group discussions took place after twelve weeks to understand the impact of Parkinson's Beats. Through framework analysis, six themes and fifteen subthemes were generated. Participants reported a range of benefits of cardio-drumming, including improved fitness and movement, positive mood, the flow experience, and enhanced social wellbeing. A few barriers to participation were also reported. Future research is justified, and best practice guidelines are needed to inform healthcare professionals, PLwP, and their caregivers.
Affiliation(s)
- J. Yoon Irons, School of Psychology, College of Health, Psychology and Social Care, University of Derby, Derby DE22 1GB, UK
- Alison Williams, Parkinson’s Scotland Office, 1/14 King James VI Business Centre, Friarton Road, Perth PH2 8DY, UK
- Jo Holland, Parkinson’s Scotland Office, 1/14 King James VI Business Centre, Friarton Road, Perth PH2 8DY, UK
- Julie Jones, School of Health Sciences, Robert Gordon University, Garthdee Road, Aberdeen AB10 7QG, UK
3. Coudiere A, Danion FR. Eye-hand coordination all the way: from discrete to continuous hand movements. J Neurophysiol 2024; 131:652-667. PMID: 38381528. DOI: 10.1152/jn.00314.2023.
Abstract
The differentiation between continuous and discrete actions is key for behavioral neuroscience. Although many studies have characterized eye-hand coordination during discrete (e.g., reaching) and continuous (e.g., pursuit tracking) actions, all these studies were conducted separately, using different setups and participants. In addition, how eye-hand coordination might operate at the frontier between discrete and continuous movements remains unexplored. Here we filled these gaps by means of a task that could elicit different movement dynamics. Twenty-eight participants were asked to simultaneously track with their eyes and a joystick a visual target that followed an unpredictable trajectory and whose position was updated at different rates (from 1.5 to 240 Hz). This procedure allowed us to examine actions ranging from discrete point-to-point movements (low refresh rate) to continuous pursuit (high refresh rate). For comparison, we also tested a manual tracking condition with the eyes fixed and a pure eye tracking condition (hand fixed). The results showed an abrupt transition between discrete and continuous hand movements around 3 Hz contrasting with a smooth trade-off between fixations and smooth pursuit. Nevertheless, hand and eye tracking accuracy remained strongly correlated, with each of these depending on whether the other effector was recruited. Moreover, gaze-cursor distance and lag were smaller when eye and hand performed the task conjointly than separately. Altogether, despite some dissimilarities in eye and hand dynamics when transitioning between discrete and continuous movements, our results emphasize that eye-hand coordination continues to smoothly operate and support the notion of synergies across eye movement types.

NEW & NOTEWORTHY: The differentiation between continuous and discrete actions is key for behavioral neuroscience. By using a visuomotor task in which we manipulate the target refresh rate to trigger different movement dynamics, we explored eye-hand coordination all the way from discrete to continuous actions. Despite abrupt changes in hand dynamics, eye-hand coordination continues to operate via a gradual trade-off between fixations and smooth pursuit, an observation confirming the notion of synergies across eye movement types.
Affiliation(s)
- Adrien Coudiere, CNRS, Université de Poitiers, Université de Tours, CeRCA, Poitiers, France
- Frederic R Danion, CNRS, Université de Poitiers, Université de Tours, CeRCA, Poitiers, France
4. Alrubaye Z, Hudhud Mughrabi M, Manav B, Batmaz AU. Effects of color cues on eye-hand coordination training with a mirror drawing task in virtual environment. Front Psychol 2024; 14:1307590. PMID: 38288362. PMCID: PMC10823539. DOI: 10.3389/fpsyg.2023.1307590.
Abstract
Mirror drawing is a motor learning task that is used to evaluate and improve the eye-hand coordination of users and can be implemented in immersive Virtual Reality (VR) Head-Mounted Displays (HMDs) for training purposes. In this paper, we investigated the effect of color cues on user motor performance in a mirror-drawing task in a Virtual Environment (VE) and in the Real World (RW), using three different colors. We conducted a five-day user study with twelve participants. The results showed that participants made fewer errors in RW than in VR, except during pre-training, indicating that hardware and software limitations have detrimental effects on participants' motor learning across different realities. Furthermore, participants made fewer errors with colors close to green, which is usually associated with serenity, contentment, and relaxation. According to our findings, VR headsets can be used to assess participants' eye-hand coordination and motor learning in mirror-drawing tasks. VE and RW training applications could benefit from these findings to enhance their effectiveness.
Affiliation(s)
- Zainab Alrubaye, Architecture Department, Art and Design Faculty, Kadir Has University, Istanbul, Türkiye
- Moaaz Hudhud Mughrabi, Mechatronics Engineering Department, Faculty of Engineering and Natural Sciences, Kadir Has University, Istanbul, Türkiye
- Banu Manav, Interior Architecture and Environmental Design Department, Art and Design Faculty, Kadir Has University, Istanbul, Türkiye
- Anil Ufuk Batmaz, Computer Science and Software Engineering Department, Gina Cody School of Engineering and Computer Science, Concordia University, Montreal, QC, Canada
5. Wong AMF. Vision Beyond Vision: Lessons Learned from Amblyopia. J Binocul Vis Ocul Motil 2023; 73:29-39. PMID: 36947429.
Abstract
Amblyopia is characterized by spatiotemporal uncertainty in the visual system. In addition to its effects on vision, amblyopia also exerts a widespread impact on other systems. Many of these changes are observed not only during amblyopic eye viewing but also during fellow eye and binocular viewing. They generally correlate with the severity of visual acuity and stereo acuity loss. The affected systems include: (1) oculomotor control manifested as abnormal fixation, saccades, smooth pursuit, and saccadic adaptation; (2) motor control with altered programming, execution, and temporal dynamics of eye-hand coordination, and decreased ability of the sensorimotor system to adapt to changes in the visual environment; (3) balance control with decreased postural stability; (4) multisensory integration characterized by reduced McGurk effect and altered cross-modal interactions in audiovisual perception; and (5) auditory localization manifested as impaired spatial hearing as a result of abnormal developmental calibration of the auditory map. To detect amblyopia early, a targeted approach is required to identify children from low-income families through in-school visual screening, supplemented by follow-up care and free eyeglasses in high-needs schools.
Affiliation(s)
- Agnes M F Wong, Department of Ophthalmology and Vision Sciences, The Hospital for Sick Children and University of Toronto, Toronto, Canada
6. Kreyenmeier P, Kämmer L, Fooken J, Spering M. Humans Can Track But Fail to Predict Accelerating Objects. eNeuro 2022; 9:ENEURO.0185-22.2022. PMID: 36635938. DOI: 10.1523/ENEURO.0185-22.2022.
Abstract
Objects in our visual environment often move unpredictably and can suddenly speed up or slow down. The ability to account for acceleration when interacting with moving objects can be critical for survival. Here, we investigate how human observers track an accelerating target with their eyes and predict its time of reappearance after a temporal occlusion by making an interceptive hand movement. Before occlusion, observers smoothly tracked the accelerating target with their eyes. At the time of occlusion, observers made a predictive saccade to the location where they subsequently intercepted the target with a quick pointing movement. We tested how observers integrated target motion information by comparing three alternative models that describe time-to-contact (TTC) based on the (1) final target velocity sample before occlusion, (2) average target velocity before occlusion, or (3) final target velocity and the rate of target acceleration. We show that observers were able to accurately track the accelerating target with visually guided smooth pursuit eye movements. However, the timing of the predictive saccade and manual interception revealed an inability to act on target acceleration when predicting TTC. Instead, interception timing was best described by the final velocity model that relies on extrapolating the last available target velocity sample before occlusion. Moreover, predictive saccades and manual interception showed similar insensitivity to target acceleration and were correlated on a trial-by-trial basis. These findings provide compelling evidence for the failure of integrating target acceleration into predictive models of target motion that drive both interceptive eye and hand movements.
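The three TTC models compared in this abstract can be sketched numerically. The function names and the sample trajectory below are illustrative assumptions, not the authors' code; each function predicts the time for a target to cover the occluded distance under one extrapolation rule.

```python
def ttc_final_velocity(d_occluded, v_final):
    """Model 1: extrapolate the last velocity sample before occlusion."""
    return d_occluded / v_final

def ttc_average_velocity(d_occluded, v_samples):
    """Model 2: extrapolate the mean pre-occlusion velocity."""
    return d_occluded / (sum(v_samples) / len(v_samples))

def ttc_with_acceleration(d_occluded, v_final, a):
    """Model 3: solve d = v*t + a*t^2/2 for t (positive root)."""
    if a == 0:
        return d_occluded / v_final
    disc = v_final ** 2 + 2 * a * d_occluded
    return (-v_final + disc ** 0.5) / a

# Hypothetical numbers: target accelerates from 4 m/s at 2 m/s^2 and is
# occluded over the final 2 m before the interception point.
v_samples = [4.0, 4.5, 5.0]  # illustrative pre-occlusion velocity samples
print(ttc_final_velocity(2.0, 5.0))          # 0.4 s
print(ttc_average_velocity(2.0, v_samples))  # ~0.444 s (overestimates TTC)
print(ttc_with_acceleration(2.0, 5.0, 2.0))  # ~0.372 s (accounts for speed-up)
```

For an accelerating target, model 3 predicts the shortest TTC; the finding that interception timing followed model 1 means observers behaved as if the speed-up after occlusion never happened.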
7. Wijesundera C, Crewther SG, Wijeratne T, Vingrys AJ. Vision and Visuomotor Performance Following Acute Ischemic Stroke. Front Neurol 2022; 13:757431. PMID: 35250804. PMCID: PMC8889933. DOI: 10.3389/fneur.2022.757431.
Abstract
Background: As measurable sensory and motor deficits are key to the diagnosis of stroke, we investigated the value of objective tablet-based vision and visuomotor capacity assessment in acute mild-to-moderate ischemic stroke (AIS) patients.

Methods: Sixty AIS patients (65 ± 14 years, 33 males) without pre-existing visual/neurological disorders and with acuity better than 6/12 were tested at their bedside during the first week post-stroke and were compared to 40 controls (64 ± 11 years, 15 males). Visual field sensitivity, quantified as mean deviation (dB), and visual acuity (with and without luminance noise) were tested on the MRFn (Melbourne Rapid Field-Neural) iPad application. Visuomotor capacity was assessed with the Lee-Ryan Eye-Hand Coordination (EHC) iPad application, using a capacitive stylus held in the preferred hand. Time to trace three shapes and displacement errors (deviations of >3.5 mm from the shape) were recorded. Diagnostic capacity was considered with Receiver Operating Characteristics. Vision test outcomes were correlated with the National Institutes of Health Stroke Scale (NIHSS) score at admission.

Results: Of the 60 AIS patients, 58 grasped the iPad stylus in their preferred right hand even though 31 had left hemisphere lesions. Forty-one patients (68%) with better than 6/12 visual acuity (19 right, 19 left hemisphere, and 3 multi-territorial lesions) returned significantly abnormal visual fields. The stroke group took significantly longer (AIS: 93.4 ± 60.1 s; controls: 33.1 ± 11.5 s, p < 0.01) to complete EHC tracing and made larger displacements (AIS: 16,388 ± 36,367 mm; controls: 2,620 ± 1,359 mm, p < 0.01), although both groups made similar numbers of errors. EHC time was not significantly different between participants with right (n = 26, 84.3 ± 55.3 s) and left (n = 31, 101.3 ± 64.7 s) hemisphere lesions. NIHSS scores and EHC measures showed low correlations (Spearman R: −0.15, L: 0.17). ROC analysis of the EHC and vision tests found high diagnostic sensitivity and specificity for a fail on EHC time, visual field, or acuity-in-noise (sensitivity: 93%, specificity: 83%) that shows little relationship to NIHSS scores.

Conclusions: EHC time and vision test outcomes provide an easy and rapid bedside measure that complements existing clinical assessments in AIS. The low correlation between visual function, NIHSS scores, and lesion site offers an expanded clinical view of changes following stroke.
Affiliation(s)
- Chamini Wijesundera, School of Psychology and Public Health, La Trobe University, Melbourne, VIC, Australia; Department of Neurology, Sunshine Hospital, The University of Melbourne, Parkville, VIC, Australia
- Sheila G Crewther, School of Psychology and Public Health, La Trobe University, Melbourne, VIC, Australia; Department of Neurology, Sunshine Hospital, The University of Melbourne, Parkville, VIC, Australia
- Tissa Wijeratne, School of Psychology and Public Health, La Trobe University, Melbourne, VIC, Australia; Department of Neurology, Sunshine Hospital, The University of Melbourne, Parkville, VIC, Australia
- Algis J Vingrys, Department of Optometry and Vision Sciences, The University of Melbourne, Parkville, VIC, Australia
8. Olson S, Abd M, Engeberg ED. Human-Inspired Robotic Eye-Hand Coordination Enables New Communication Channels Between Humans and Robots. Int J Soc Robot 2021; 13:1033-1046. PMID: 34659586. DOI: 10.1007/s12369-020-00693-2.
Abstract
This paper concerns human-inspired robotic eye-hand coordination algorithms using custom built robotic eyes that were interfaced with a Baxter robot. Eye movement was programmed anthropomorphically based on previously reported research on human eye-hand coordination during grasped object transportation. Robotic eye tests were first performed on a component level where accurate position and temporal control were achieved. Next, 11 human subjects were recruited to observe the novel robotic system to quantify the ability of robotic eye-hand coordination algorithms to convey two kinds of information to people during object transportation tasks: first, the transported object's delivery location and second, the level of care exerted by the robot to transport the object. Most subjects correlated decreased frequency in gaze fixations on an object's target location with increased care of transporting an object, although these results were somewhat mixed among the 11 human subjects. Additionally, the human subjects were able to reliably infer the delivery location of the transported object purely by the robotic eye-hand coordination algorithm with an overall success rate of 91.4%. These results suggest that anthropomorphic eye-hand coordination of robotic entities could be useful in pedagogical or industrial settings.
Affiliation(s)
- Stephanie Olson, Department of Ocean and Mechanical Engineering, Florida Atlantic University, Boca Raton, Florida, USA
- Moaed Abd, Department of Ocean and Mechanical Engineering, Florida Atlantic University, Boca Raton, Florida, USA
- Erik D Engeberg, Department of Ocean and Mechanical Engineering, Florida Atlantic University, Boca Raton, Florida, USA
9. Gordon-Murer C, Stöckel T, Sera M, Hughes CML. Developmental Differences in the Relationships Between Sensorimotor and Executive Functions. Front Hum Neurosci 2021; 15:714828. PMID: 34456700. PMCID: PMC8387672. DOI: 10.3389/fnhum.2021.714828.
Abstract
Background: There is evidence that sensorimotor and executive functions are inherently intertwined, but that the relationship between these functions differs depending on an individual's stage in development (e.g., childhood, adolescence, adulthood).

Objective: In this study, sensorimotor and executive function performance was examined in a group of children (n = 40; 8–12 years), adolescents (n = 39; 13–17 years), and young adults (n = 83; 18–24 years) to investigate the maturation of these functions and how the relationships between these functions differ between groups.

Results: Adults and adolescents outperformed children on all sensorimotor and executive functions. Adults and adolescents exhibited similar levels of executive functioning, but adults outperformed adolescents on two sensorimotor functioning measures (eye-hand coordination spatial precision and proprioceptive variability). Regression analysis demonstrated that executive functions contribute to children's sensorimotor performance, but do not contribute to adolescents' sensorimotor performance.

Conclusion: These findings highlight the key role that developmental stage plays in the relationship between sensorimotor and executive functions. Specifically, executive functions appear to contribute to more successful sensorimotor performance in childhood, but not during adolescence. It is likely that sensorimotor functions begin to develop independently of executive functions during adolescence, so that executive functions no longer contribute to sensorimotor performance. This change in the relationship between sensorimotor and executive functions is important to take into consideration when developing sensorimotor and executive function interventions.
Affiliation(s)
- Chloe Gordon-Murer, Health Equity Institute, San Francisco, CA, United States; Department of Kinesiology, San Francisco State University, San Francisco, CA, United States; Sport & Exercise Psychology Unit, Department of Sport Science, University of Rostock, Rostock, Germany
- Tino Stöckel, Sport & Exercise Psychology Unit, Department of Sport Science, University of Rostock, Rostock, Germany
- Michael Sera, Health Equity Institute, San Francisco, CA, United States; Department of Kinesiology, San Francisco State University, San Francisco, CA, United States
- Charmayne M L Hughes, Health Equity Institute, San Francisco, CA, United States; Department of Kinesiology, San Francisco State University, San Francisco, CA, United States
10.
Abstract
When reaching for an object with the hand, the gaze is usually directed at the target. In a laboratory setting, fixation is strongly maintained at the reach target until the reach is completed, a phenomenon known as "gaze anchoring." While conventional accounts of such tight eye-hand coordination have often emphasized an internal synergetic linkage between the two motor systems, more recent optimal control theories regard motor coordination as the adaptive solution to task requirements. Here we investigated to what degree gaze control during reaching is modulated by task demands. We adopted a gaze-anchoring paradigm in which participants had to reach for a target location. During the reach, they additionally had to make a saccadic eye movement to a salient visual cue presented at locations other than the target. We manipulated task demands by independently changing reward contingencies for saccade reaction time (RT) and reaching accuracy. On average, both saccade RTs and reach error varied systematically with reward condition, with reach accuracy improving when the saccade was delayed. The distribution of saccade RTs showed two types of eye movements: fast saccades with short RTs, and voluntary saccades with longer RTs. Increased reward for high reach accuracy reduced the probability of fast saccades but left their latency unchanged. The results suggest that gaze anchoring acts through a suppression of fast saccades, a mechanism that can be adaptively adjusted to current task demands.

NEW & NOTEWORTHY: During visually guided reaching, our eyes usually fixate the target, and saccades elsewhere are delayed ("gaze anchoring"). We here show that the degree of gaze anchoring is flexibly modulated by the reward contingencies of saccade latency and reach accuracy. Reach error became larger when saccades occurred earlier. These results suggest that early saccades are costly for reaching and that the brain modulates inhibitory online coordination from the hand to the eye system depending on task requirements.
Affiliation(s)
- Naotoshi Abekawa, NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa, Japan; Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Hiroaki Gomi, NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa, Japan
- Jörn Diedrichsen, The Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; Institute of Cognitive Neuroscience, University College London, London, United Kingdom
11. Staal J, Mattace-Raso F, Daniels HAM, van der Steen J, Pel JJM. To Explore the Predictive Power of Visuomotor Network Dysfunctions in Mild Cognitive Impairment and Alzheimer's Disease. Front Neurosci 2021; 15:654003. PMID: 34262424. PMCID: PMC8273577. DOI: 10.3389/fnins.2021.654003.
Abstract
Background: Research into Alzheimer's disease has shifted toward the identification of minimally invasive and less time-consuming modalities to define preclinical stages of Alzheimer's disease.

Method: Here, we propose visuomotor network dysfunctions as a potential biomarker in AD and its prodromal stage, mild cognitive impairment with underlying Alzheimer's disease pathology. The functionality of this network was tested in terms of timing, accuracy, and speed with goal-directed eye-hand tasks. The predictive power was determined by comparing the classification performance of a zero-rule algorithm (baseline), a decision tree, a support vector machine, and a neural network using functional parameters to classify controls without cognitive disorders, patients with mild cognitive impairment, and patients with Alzheimer's disease.

Results: Fair to good classification was achieved between controls and patients, between controls and patients with mild cognitive impairment, and between controls and patients with Alzheimer's disease with the support vector machine (77–82% accuracy, 57–93% sensitivity, 63–90% specificity, 0.74–0.78 area under the curve). Classification between patients with mild cognitive impairment and patients with Alzheimer's disease was poor, as no algorithm outperformed the baseline (63% accuracy, 0% sensitivity, 100% specificity, 0.50 area under the curve).

Comparison with Existing Method(s): The classification performance found in the present study is comparable to that of existing CSF and MRI biomarkers.

Conclusion: The data suggest that visuomotor network dysfunctions have potential in biomarker research, and the proposed eye-hand tasks could add to existing tests to form a clear definition of the preclinical phenotype of AD.
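The zero-rule baseline this abstract refers to simply predicts the majority class of the training set, which is why it can score high accuracy with 0% sensitivity. A minimal sketch, with illustrative data and helper names of our own (not the authors' code):

```python
from collections import Counter

def zero_rule_predict(train_labels, n_test):
    """Zero-rule baseline: always predict the training set's majority class."""
    majority = Counter(train_labels).most_common(1)[0][0]
    return [majority] * n_test

def sensitivity_specificity(y_true, y_pred, positive):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical class balance: the majority class ("MCI") makes up 63% of cases,
# so the baseline reaches 63% accuracy while detecting no "AD" cases at all.
train = ["MCI"] * 63 + ["AD"] * 37
test_true = ["MCI"] * 19 + ["AD"] * 11
pred = zero_rule_predict(train, len(test_true))
sens, spec = sensitivity_specificity(test_true, pred, positive="AD")
print(sens, spec)  # 0.0 1.0, mirroring the reported 0% sensitivity / 100% specificity
```

A classifier only "outperforms the baseline" when it beats this majority-class accuracy while achieving nonzero sensitivity, which none of the tested algorithms did for the MCI-versus-AD contrast.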
Affiliation(s)
- Justine Staal, Vestibular and Ocular Motor Research Group, Department of Neuroscience, Erasmus MC, Rotterdam, Netherlands; Section of Geriatric Medicine, Department of Internal Medicine, Erasmus MC, Rotterdam, Netherlands
- Francesco Mattace-Raso, Section of Geriatric Medicine, Department of Internal Medicine, Erasmus MC, Rotterdam, Netherlands
- Hennie A M Daniels, Center for Economic Research, Tilburg University, Tilburg, Netherlands; Department of Technology and Operations Management, Rotterdam School of Management, Erasmus University, Rotterdam, Netherlands
- Johannes van der Steen, Vestibular and Ocular Motor Research Group, Department of Neuroscience, Erasmus MC, Rotterdam, Netherlands; Royal Dutch Visio, Huizen, Netherlands
- Johan J M Pel, Vestibular and Ocular Motor Research Group, Department of Neuroscience, Erasmus MC, Rotterdam, Netherlands
12.
Abstract
Shared autonomy aims at combining robotic and human control in the execution of remote, teleoperated tasks. This cooperative interaction cannot be brought about without the robot first recognizing the current human intention in a fast and reliable way so that a suitable assisting plan can be quickly instantiated and executed. Eye movements have long been known to be highly predictive of the cognitive agenda unfolding during manual tasks and constitute, hence, the earliest and most reliable behavioral cues for intention estimation. In this study, we present an experiment aimed at analyzing human behavior in simple teleoperated pick-and-place tasks in a simulated scenario and at devising a suitable model for early estimation of the current proximal intention. We show that scan paths are, as expected, heavily shaped by the current intention and that two types of Gaussian Hidden Markov Models, one more scene-specific and one more action-specific, achieve a very good prediction performance, while also generalizing to new users and spatial arrangements. We finally discuss how behavioral and model results suggest that eye movements reflect to some extent the invariance and generality of higher-level planning across object configurations, which can be leveraged by cooperative robotic systems.
Affiliation(s)
- Stefan Fuchs, Honda Research Institute Europe, Offenbach, Germany
13. Bernard-Espina J, Beraneck M, Maier MA, Tagliabue M. Multisensory Integration in Stroke Patients: A Theoretical Approach to Reinterpret Upper-Limb Proprioceptive Deficits and Visual Compensation. Front Neurosci 2021; 15:646698. PMID: 33897359. PMCID: PMC8058201. DOI: 10.3389/fnins.2021.646698.
Abstract
For reaching and grasping, as well as for manipulating objects, optimal hand motor control arises from the integration of multiple sources of sensory information, such as proprioception and vision. For this reason, the proprioceptive deficits often observed in stroke patients have a significant impact on the integrity of motor functions. The present targeted review reanalyzes previous findings about proprioceptive upper-limb deficits in stroke patients, as well as their ability to compensate for these deficits using vision. Our theoretical approach is based on two concepts: first, the description of multisensory integration using statistical optimization models; second, the insight that sensory information is encoded not only in the reference frame of origin (e.g., retinal and joint space for vision and proprioception, respectively) but also in higher-order sensory spaces. Combining these two concepts within a single framework appears to account for the heterogeneity of experimental findings reported in the literature. The present analysis suggests that functional upper-limb post-stroke deficits could be due not only to an impairment of the proprioceptive system per se but also to deficient cross-reference processing, that is, to a reduced ability to encode proprioceptive information in a non-joint space. The distinction between purely proprioceptive and cross-reference-related deficits can account for two experimental observations: first, one and the same patient can perform differently depending on the specific proprioceptive assessment; second, a given behavioral assessment yields large variability across patients. The distinction between sensory and cross-reference deficits is also supported by a targeted literature review on the relation between cerebral structure and proprioceptive function. This theoretical framework has the potential to lead to a new stratification of patients with proprioceptive deficits and may offer a novel approach to post-stroke rehabilitation.
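The statistical optimization models invoked above are typically maximum-likelihood cue-combination schemes, in which each sensory estimate is weighted by its inverse variance. A minimal sketch of that computation (the function name and the numeric reliabilities below are illustrative, not taken from the paper):

```python
# Maximum-likelihood (inverse-variance weighted) fusion of two sensory cues.
# The sigma values are illustrative assumptions, not measured reliabilities.

def fuse(mu_vis, sigma_vis, mu_prop, sigma_prop):
    """Optimally combine visual and proprioceptive estimates of hand position."""
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_prop**2)
    w_prop = 1 - w_vis
    mu = w_vis * mu_vis + w_prop * mu_prop
    # The fused variance is smaller than either single-cue variance.
    var = 1 / (1 / sigma_vis**2 + 1 / sigma_prop**2)
    return mu, var

# The fused estimate lies closer to the more reliable (here, visual) cue.
mu, var = fuse(mu_vis=10.0, sigma_vis=1.0, mu_prop=14.0, sigma_prop=2.0)
```

Under this scheme, a purely proprioceptive deficit corresponds to an inflated proprioceptive sigma, whereas a cross-reference deficit corresponds to a failure to express the cues in a common space before fusion.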
Affiliation(s)
- Marc A Maier
- Université de Paris, INCC UMR 8002, CNRS, Paris, France

14
Abstract
Clinical relevance: The development of region-specific norms for the Developmental Test of Visual Perception, third edition (DTVP-3), from a group of children from South India will contribute to the assessment of visual-perceptual skills in children. Background: Visual-perceptual skills are crucial for children to understand their environment, perform activities of daily living, and cope with their learning environment. These perceptual skills also influence children's behavioural characteristics. Well-constructed, norm-referenced standardised tools are vital for assessing visual-perceptual skills. Since ethnicity and cultural background may influence the performance of perceptual tasks, the proposed norms for any assessment tool need to be validated for specific populations. Hence, the current study aimed to develop norms in the Indian context for the DTVP-3 and compare the obtained norms with the norms established in the United States of America. Methods: One hundred and thirty-seven healthy children (mean age: 9.5 ± 1.7 years, range: 7-12 years, 67 females) participated in the study. Visual-perceptual functions, including eye-hand coordination, copying, figure-ground, visual closure, and form constancy, were assessed and compared with age-matched norms provided in the test manual. Internal consistency of the DTVP-3 was evaluated using Cronbach's alpha correlation coefficients. Results: Significant differences were observed between the study group and the given norms for eye-hand coordination, copying skills, and visual figure-ground (p < 0.05). No significant difference was found for the visual closure and form constancy subtests. Cronbach's alpha coefficients for the five subtests ranged from 0.70 to 0.91, while the composite indexes had coefficients from 0.90 to 0.93. Conclusion: The DTVP-3 showed acceptable limits of internal consistency when tested in a group of children from South India. Region-specific norms need to be used for each of the subtests.
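Cronbach's alpha, used above as the internal-consistency measure, can be computed directly from an item-score table. A minimal sketch with synthetic scores (not the study's data; `cronbach_alpha` is a hypothetical helper name):

```python
# Cronbach's alpha from a respondents-by-items score table (synthetic data).

def cronbach_alpha(scores):
    """scores: list of per-respondent lists, one score per item."""
    k = len(scores[0])                      # number of items
    def var(xs):                            # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Four respondents, three items; values >= 0.70 are conventionally acceptable.
alpha = cronbach_alpha([[2, 3, 3], [4, 4, 5], [3, 4, 4], [5, 5, 5]])
```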
Affiliation(s)
- A Valarmathi
- Department of Optometry, Sri Ramachandra Institute of Higher Education and Research, Chennai, India
- Lakshmi Venkatesh
- Department of Speech, Language and Hearing Sciences, Sri Ramachandra Institute of Higher Education and Research, Chennai, India
- Santhanam T
- SDS Institute of Behavioral Sciences, Chennai, India

15
Mena-Garcia L, Pastor-Jimeno JC, Maldonado MJ, Coco-Martin MB, Fernandez I, Arenillas JF. Multitasking Compensatory Saccadic Training Program for Hemianopia Patients: A New Approach With 3-Dimensional Real-World Objects. Transl Vis Sci Technol 2021; 10:3. PMID: 34003888; PMCID: PMC7873505; DOI: 10.1167/tvst.10.2.3
Abstract
Purpose: To examine whether a noncomputerized multitasking compensatory saccadic training program (MCSTP) for patients with hemianopia, based on a reading regimen and eight exercises that recreate everyday visuomotor activities using three-dimensional (3D) real-world objects, improves visual ability/function, quality of life (QL), and functional independence (FI). Methods: The 3D-MCSTP included four in-office visits and two customized home-based daily training sessions over 12 weeks. A quasiexperimental, pretest/posttest study design was carried out with an intervention group (IG) (n = 20) and a no-training group (NTG) (n = 20) matched for age, hemianopia type, and brain injury duration. Results: The groups were comparable for the main baseline variables, and all participants (n = 40) completed the study. The IG mainly showed significant improvements in visual-processing speed (57.34% ± 19.28%; P < 0.0001) and visual attention/retention ability (26.67% ± 19.21%; P < 0.0001), which also were significantly greater (P < 0.05) than in the NTG. Moreover, the IG showed large effect sizes (Cohen's d) in 75% of the total QL and FI dimensions analyzed, in contrast to the NTG, which showed negligible mean effect sizes in 96% of these dimensions. Conclusions: The customized 3D-MCSTP was associated with a satisfactory response in the IG for improving complex visual processing, QL, and FI. Translational Relevance: Neurovisual rehabilitation of patients with hemianopia seems more efficient when programs combine in-office visits and customized home-based training sessions based on real objects simulating real-life conditions than no treatment or previously reported computer-screen approaches, probably because they better stimulate patients' motivation and visual-processing-speed brain mechanisms.
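The Cohen's d effect sizes reported above compare group means in units of the pooled standard deviation. A minimal sketch with hypothetical group scores (not the study's data):

```python
import math

# Cohen's d for two independent groups, using the pooled standard deviation.
def cohens_d(group1, group2):
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (ddof = 1), pooled across the two groups.
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical post-training scores: intervention vs. no-training group.
# By convention, |d| >= 0.8 is considered a "large" effect.
d = cohens_d([72, 75, 78, 80, 74], [60, 63, 65, 61, 66])
```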
Affiliation(s)
- Laura Mena-Garcia
- Instituto Universitario de Oftalmobiología Aplicada (IOBA), Eye Institute, Universidad de Valladolid, Valladolid, Spain
- Universidad de Valladolid, Valladolid, Spain
- Jose C. Pastor-Jimeno
- Instituto Universitario de Oftalmobiología Aplicada (IOBA), Eye Institute, Universidad de Valladolid, Valladolid, Spain
- Universidad de Valladolid, Valladolid, Spain
- Department of Ophthalmology, Hospital Clínico Universitario de Valladolid, Valladolid, Spain
- Red Temática de Investigación Colaborativa en Oftalmología (OftaRed), Instituto de Salud Carlos III, Madrid, Spain
- Miguel J. Maldonado
- Instituto Universitario de Oftalmobiología Aplicada (IOBA), Eye Institute, Universidad de Valladolid, Valladolid, Spain
- Universidad de Valladolid, Valladolid, Spain
- Red Temática de Investigación Colaborativa en Oftalmología (OftaRed), Instituto de Salud Carlos III, Madrid, Spain
- Maria B. Coco-Martin
- Universidad de Valladolid, Valladolid, Spain
- Department of Neurology, Hospital Clínico Universitario de Valladolid, Valladolid, Spain
- Itziar Fernandez
- Universidad de Valladolid, Valladolid, Spain
- Biomedical Research Networking Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Valladolid, Spain
- Juan F. Arenillas
- Universidad de Valladolid, Valladolid, Spain
- Department of Neurology, Hospital Clínico Universitario de Valladolid, Valladolid, Spain

16
Sergio LE, Gorbet DJ, Adams MS, Dobney DM. The Effects of Mild Traumatic Brain Injury on Cognitive-Motor Integration for Skilled Performance. Front Neurol 2020; 11:541630. PMID: 33041992; PMCID: PMC7525090; DOI: 10.3389/fneur.2020.541630
Abstract
Adults exposed to blast and blunt impact often experience mild traumatic brain injury, affecting neural functions related to sensory, cognitive, and motor processing. In this perspective article, we review the effects of impact and blast exposure on functional performance that requires the integration of these sensory, cognitive, and motor control systems. We describe cognitive-motor integration and how it relates to successfully navigating skilled activities crucial for work, duty, sport, and even daily life. We review our research on the behavioral effects of traumatic impact and blast exposure on cognitive-motor integration in both younger and older adults, and the neural networks that are involved in these types of skills. Overall, we have observed impairments in rule-based skilled performance as a function of both physical impact and blast exposure. The extent of these impairments depended on the age at injury and the sex of the individual. It appears, however, that cognitive-motor integration deficits can be mitigated by the level of skill expertise of the affected individual, suggesting that such experience imparts resiliency in the brain networks that underlie the control of complex visuomotor performance. Finally, we discuss the next steps needed to comprehensively understand the impact of trauma and blast exposure on functional movement control.
Affiliation(s)
- Lauren E. Sergio
- School of Kinesiology and Health Science, York University, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Diana J. Gorbet
- School of Kinesiology and Health Science, York University, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Meaghan S. Adams
- School of Kinesiology and Health Science, York University, Toronto, ON, Canada
- Vision-Science to Application (VISTA) Program, York University, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Danielle M. Dobney
- School of Kinesiology and Health Science, York University, Toronto, ON, Canada
- Vision-Science to Application (VISTA) Program, York University, Toronto, ON, Canada

17
Hadjidimitrakis K. Coupling of head and hand movements during eye-head-hand coordination: there is more to reaching than meets the eye. J Neurophysiol 2020; 123:1579-1582. PMID: 32233904; DOI: 10.1152/jn.00099.2020
Abstract
Does arm reaching affect eye-head shifts? Does the head alter eye-hand coordinated movements? Sensorimotor research has focused on either eye-head or eye-hand coordination, with only occasional studies examining all these effectors together. Arora et al. (Arora HK, Bharmauria V, Yan X, Sun S, Wang H, Crawford JD. J Neurophysiol 122: 1946-1961, 2019) examined eye-head-hand coordination for the first time in nonhuman primates and provided evidence suggesting that head and hand movements are more coupled than traditionally considered.

18
Rosa HA, Adrián AC, Beatriz IS, María-José LC, Miguel-Ángel S. Psychomotor, Psychosocial and Reading Skills in Children with Amblyopia and the Effect of Different Treatments. J Mot Behav 2020; 53:176-184. PMID: 32281918; DOI: 10.1080/00222895.2020.1747384
Abstract
Amblyopia influences psychomotor and psychosocial skills, although studies are not unanimous. Different treatments coexist, but their effect on these variables is unclear. This study aims to probe whether children with amblyopia have impairments in these areas and whether different optometric treatments reduce them effectively. Fifty children diagnosed with amblyopia and 33 children without amblyopia participated in this study. Eye-hand coordination, psychosocial skills, and reading abilities were measured before and after three months of different treatments (patching, patching plus near-vision activities, and perceptual learning). Results revealed lower scores in eye-hand coordination and some reading issues in children with amblyopia, without differences in psychosocial skills relative to the control group. Moreover, the optometric treatments improved eye-hand coordination.
Affiliation(s)
- Hernández-Andrés Rosa
- Dpto. de Óptica y Optometría y Ciencias de la Visión, Facultad de Físicas. Universitat de València
- Luque-Cobija María-José
- Dpto. de Óptica y Optometría y Ciencias de la Visión, Facultad de Físicas, Universitat de València
- Dpto. de Psicobiología, Facultad de Psicología, Universitat de València

19
Gregori V, Cognolato M, Saetta G, Atzori M, Gijsberts A. On the Visuomotor Behavior of Amputees and Able-Bodied People During Grasping. Front Bioeng Biotechnol 2019; 7:316. PMID: 31799243; PMCID: PMC6874164; DOI: 10.3389/fbioe.2019.00316
Abstract
Visual attention is often predictive for future actions in humans. In manipulation tasks, the eyes tend to fixate an object of interest even before the reach-to-grasp is initiated. Some recent studies have proposed to exploit this anticipatory gaze behavior to improve the control of dexterous upper limb prostheses. This requires a detailed understanding of visuomotor coordination to determine in which temporal window gaze may provide helpful information. In this paper, we verify and quantify the gaze and motor behavior of 14 transradial amputees who were asked to grasp and manipulate common household objects with their missing limb. For comparison, we also include data from 30 able-bodied subjects who executed the same protocol with their right arm. The dataset contains gaze, first person video, angular velocities of the head, and electromyography and accelerometry of the forearm. To analyze the large amount of video, we developed a procedure based on recent deep learning methods to automatically detect and segment all objects of interest. This allowed us to accurately determine the pixel distances between the gaze point, the target object, and the limb in each individual frame. Our analysis shows a clear coordination between the eyes and the limb in the reach-to-grasp phase, confirming that both intact and amputated subjects precede the grasp with their eyes by more than 500 ms. Furthermore, we note that the gaze behavior of amputees was remarkably similar to that of the able-bodied control group, despite their inability to physically manipulate the objects.
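The per-frame pixel distance between the gaze point and a detected object, as described above, reduces to a point-to-region computation once the detector yields a bounding box or mask. A minimal bounding-box sketch (coordinates are made up for illustration):

```python
import math

# Pixel distance from a gaze point to a detected object's bounding box.
# In practice the box would come from the detection/segmentation model.
def gaze_to_box_distance(gaze, box):
    """gaze: (x, y) in pixels; box: (x_min, y_min, x_max, y_max).
    Returns 0 when the gaze point falls inside the box."""
    gx, gy = gaze
    x_min, y_min, x_max, y_max = box
    dx = max(x_min - gx, 0, gx - x_max)   # horizontal gap, 0 if inside
    dy = max(y_min - gy, 0, gy - y_max)   # vertical gap, 0 if inside
    return math.hypot(dx, dy)

d_inside = gaze_to_box_distance((120, 90), (100, 80, 160, 130))  # gaze on object
d_outside = gaze_to_box_distance((50, 90), (100, 80, 160, 130))  # 50 px to the left
```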
Affiliation(s)
- Valentina Gregori
- Department of Computer, Control, and Management Engineering, University of Rome La Sapienza, Rome, Italy
- VANDAL Laboratory, Istituto Italiano di Tecnologia, Genoa, Italy
- Matteo Cognolato
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Gianluca Saetta
- Department of Neurology, University Hospital of Zurich, Zurich, Switzerland
- Manfredo Atzori
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Arjan Gijsberts
- VANDAL Laboratory, Istituto Italiano di Tecnologia, Genoa, Italy

20
Arora HK, Bharmauria V, Yan X, Sun S, Wang H, Crawford JD. Eye-head-hand coordination during visually guided reaches in head-unrestrained macaques. J Neurophysiol 2019; 122:1946-1961. PMID: 31533015; DOI: 10.1152/jn.00072.2019
Abstract
Nonhuman primates have been used extensively to study eye-head coordination and eye-hand coordination, but the combination, eye-head-hand coordination, has not been studied. Our goal was to determine whether reaching influences eye-head coordination (and vice versa) in rhesus macaques. Eye, head, and hand motion were recorded in two animals with search coil and touch screen technology, respectively. Animals were seated in a customized "chair" that allowed unencumbered head motion and reaching in depth. In the reach condition, animals were trained to touch a central LED at waist level while maintaining central gaze and were then rewarded if they touched a target appearing at 1 of 15 locations in a 40° × 20° (visual angle) array. In other variants, initial hand or gaze position was varied in the horizontal plane. In similar control tasks, animals were rewarded for gaze accuracy in the absence of reach. In the reach task, animals made eye-head gaze shifts toward the target followed by reaches that were accompanied by prolonged head motion toward the target. This resulted in significantly higher head velocities and amplitudes (and lower eye-in-head ranges) compared with the gaze control condition. Gaze shifts had shorter latencies and higher velocities and were more precise, despite the lack of gaze reward. Initial hand position did not influence gaze, but initial gaze position influenced reach latency. These results suggest that eye-head coordination is optimized for visually guided reach, first by quickly and accurately placing gaze at the target to guide reach transport and then by centering the eyes in the head, likely to improve depth vision as the hand approaches the target. NEW & NOTEWORTHY Eye-head and eye-hand coordination have been studied in nonhuman primates, but not the combination of all three effectors. Here we examined the timing and kinematics of eye-head-hand coordination in rhesus macaques during a simple reach-to-touch task. Our most novel finding was that (compared with hand-restrained gaze shifts) reaching produced prolonged, increased head rotation toward the target, tending to center the binocular field of view on the target/hand.
Affiliation(s)
- Harbandhan Kaur Arora
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Vishal Bharmauria
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Xiaogang Yan
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Saihong Sun
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Hongying Wang
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- John Douglas Crawford
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
- School of Kinesiology and Health Science, York University, Toronto, Ontario, Canada

21
Junghans BM, Khuu SK. Populations Norms for "SLURP"-An iPad App for Quantification of Visuomotor Coordination Testing. Front Neurosci 2019; 13:711. PMID: 31354420; PMCID: PMC6636550; DOI: 10.3389/fnins.2019.00711
Abstract
Currently, the integrity of brain function that drives behavior is predominantly measured in terms of pure motor function, yet most human behavior is visually driven. A means of easily quantifying such visually driven brain function for comparison against population norms is lacking. Analysis of eye-hand coordination (EHC) using a digital game-like situation with downloadable spatio-temporal details has potential for clinicians and researchers. A simplified protocol for the Lee-Ryan EHC (Slurp) Test app for iPad® has been developed to monitor EHC. The two subtests selected, each comprising six quickly completed items with appeal to all ages, were found equivalent in terms of total errors/time and sensitive to developmental and aging milestones known to affect EHC. The sensitivity of outcomes to the type of stylus used during testing was also explored. Population norms for 221 participants aged 5 to 80+ years are presented for each test item according to two commonly used stylus types. The Slurp app uses two-dimensional space and is suited to clinicians for pre/post-intervention testing and to researchers in psychological, medical, and educational domains who are interested in understanding brain function.
Affiliation(s)
- Barbara M Junghans
- School of Optometry and Vision Science, University of New South Wales Sydney, Sydney, NSW, Australia
- Sieu K Khuu
- School of Optometry and Vision Science, University of New South Wales Sydney, Sydney, NSW, Australia

22
Koppelaar H, Kordestani Moghadam P, Khan K, Kouhkani S, Segers G, Warmerdam MV. Reaction Time Improvements by Neural Bistability. Behav Sci (Basel) 2019; 9:E28. PMID: 30889937; DOI: 10.3390/bs9030028
Abstract
The often-reported reduction of reaction time (RT) by vision training is successfully replicated in 81 athletes across sports, yielding a mean reduction in athletes' eye-hand coordination RTs of more than 10%, with high statistical significance. Via a proof of principle, we explain how this effect of sensorimotor plasticity can persist in practice for multiple days and even weeks: the corresponding mathematical neural model can be forced out of a previously stable (but long) RT state into a new, again stable, neural state with reduced eye-hand coordination RT.

23
Mathew J, Bernier PM, Danion FR. Asymmetrical Relationship between Prediction and Control during Visuomotor Adaptation. eNeuro 2018; 5:ENEURO.0280-18.2018. PMID: 30627629; DOI: 10.1523/ENEURO.0280-18.2018
Abstract
Current theories suggest that the ability to control the body and to predict its associated sensory consequences is key for skilled motor behavior. It is also suggested that these abilities need to be updated when the mapping between motor commands and sensory consequences is altered. Here we challenge this view by investigating the transfer of adaptation to rotated visual feedback between one task in which human participants had to control a cursor with their hand in order to track a moving target, and another in which they had to predict with their eyes the visual consequences of their hand movement on the cursor. Hand and eye tracking performances were evaluated respectively through cursor–target and eye–cursor distance. Results reveal a striking dissociation: although prior adaptation of hand tracking greatly facilitates eye tracking, the adaptation of eye tracking does not transfer to hand tracking. We conclude that although the update of control is associated with the update of prediction, prediction can be updated independently of control. To account for this pattern of results, we propose that task demands mediate the update of prediction and control. Although a joint update of prediction and control seemed mandatory for success in our hand tracking task, the update of control was only facultative for success in our eye tracking task. More generally, those results promote the view that prediction and control are mediated by separate neural processes and suggest that people can learn to predict movement consequences without necessarily promoting their ability to control these movements.
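The rotated visual feedback used in this adaptation paradigm can be sketched as a fixed rotation applied to the hand position before it drives the cursor; participants must learn to counter-rotate their movements. A minimal sketch (the 90° rotation below is illustrative; the paper's exact rotation angle is not specified here):

```python
import math

# Rotated visual feedback: the cursor position is the hand position
# rotated by a fixed angle about the workspace origin.
def rotate_feedback(hand_xy, angle_deg):
    a = math.radians(angle_deg)
    x, y = hand_xy
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# A rightward hand movement produces an upward cursor movement under a
# 90° rotation, which participants must learn to compensate for.
cursor = rotate_feedback((1.0, 0.0), angle_deg=90.0)
```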

24
Abstract
Endoscopic surgery procedures require specific skills, such as eye-hand coordination, to be developed. Current education programs face problems in providing appropriate skill-improvement and assessment methods in this field. This study aims to propose objective metrics for hand-movement skills and to assess eye-hand coordination. An experimental study was conducted with 15 surgical residents to test the newly proposed measures. Two computer-based, both-handed endoscopic surgery practice scenarios were developed in a simulation environment to gather the participants' eye-gaze data with the help of an eye tracker, as well as the related hand-movement data through haptic interfaces. Additionally, participants' eye-hand coordination skills were analyzed. The results indicate higher correlations in the intermediates' eye-hand movements compared with the novices'. An increase in the intermediates' visual concentration leads to smoother hand movements, whereas the novices' hand movements tend to remain at a standstill. After the first round of practice, all participants' eye-hand coordination skills improved on the specific task targeted in this study. According to these results, the proposed metrics can potentially provide additional insights into trainees' eye-hand coordination skills and help instructional system designers better address training requirements.
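Eye-hand coupling of this kind is commonly quantified as a correlation between gaze and instrument trajectories over a trial; the study's exact metric definitions are not reproduced here, so the following is only a generic sketch with synthetic traces:

```python
# Pearson correlation between gaze and instrument x-coordinates over a trial.
# Traces are synthetic; real data would be time-aligned tracker samples.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

gaze_x = [0.0, 0.2, 0.5, 0.9, 1.3]
tool_x = [0.1, 0.3, 0.6, 1.0, 1.4]   # tool lags gaze but tracks it closely
r = pearson(gaze_x, tool_x)          # near 1 for tightly coupled traces
```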

25
de Haan MJ, Brochier T, Grün S, Riehle A, Barthélemy FV. Real-time visuomotor behavior and electrophysiology recording setup for use with humans and monkeys. J Neurophysiol 2018; 120:539-552. PMID: 29718806; PMCID: PMC6139457; DOI: 10.1152/jn.00262.2017
Abstract
Large-scale network dynamics in multiple visuomotor areas are of great interest in the study of eye-hand coordination in both human and monkey. To explore this, it is essential to develop a setup that allows for precise tracking of eye and hand movements, ideally one that can also generate mechanical or visual perturbations of hand trajectories so that eye-hand coordination can be studied in a variety of conditions. There are simple solutions that satisfy these requirements for hand movements performed in the horizontal plane while visual stimuli and hand feedback are presented in the vertical plane. However, this spatial dissociation requires cognitive rules for eye-hand coordination different from those for eye-hand movements performed in the same space, as is the case in most natural conditions. Here we present an innovative solution for the precise tracking of eye and hand movements in a single reference frame. Importantly, our solution allows behavioral explorations under normal and perturbed conditions in both humans and monkeys. It is based on the integration of two noninvasive commercially available systems to achieve online control and synchronous recording of eye (EyeLink) and hand (KINARM) positions during interactive visuomotor tasks. We also present an eye calibration method, compatible with different eye trackers, that compensates for nonlinearities caused by the system's geometry. Our setup monitors the two effectors in real time with high spatial and temporal resolution and simultaneously outputs behavioral and neuronal data to an external data acquisition system using a common data format. NEW & NOTEWORTHY We developed a new setup for studying eye-hand coordination in humans and monkeys that monitors the two effectors in real time in a common reference frame. Our eye calibration method allows us to track gaze positions relative to visual stimuli presented in the horizontal workspace of the hand movements. This method compensates for nonlinearities caused by the system's geometry and transforms kinematic signals from the eye tracker into the same coordinate system as the hand and targets.
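Calibration methods that compensate for geometric nonlinearities of this kind are commonly implemented as a polynomial regression from raw tracker coordinates to workspace coordinates, fit on known calibration targets. A generic least-squares sketch of that approach (an illustration of the general technique, not the authors' exact procedure):

```python
import numpy as np

# Map raw eye-tracker coordinates to workspace coordinates with a
# second-order polynomial fit to known calibration target positions.
def poly_basis(raw):
    x, y = raw[:, 0], raw[:, 1]
    # Second-order terms capture smooth geometric nonlinearities.
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_calibration(raw, target):
    """raw, target: (N, 2) arrays of tracker and workspace coordinates."""
    coef, *_ = np.linalg.lstsq(poly_basis(raw), target, rcond=None)
    return coef

def apply_calibration(coef, raw):
    return poly_basis(raw) @ coef

# Synthetic check: a known nonlinear distortion recovered from 9 targets.
rng = np.random.default_rng(0)
raw = rng.uniform(-1, 1, size=(9, 2))
true = np.column_stack([raw[:, 0] + 0.1 * raw[:, 0]**2,
                        raw[:, 1] - 0.05 * raw[:, 0] * raw[:, 1]])
coef = fit_calibration(raw, true)
pred = apply_calibration(coef, raw)
```

Because the synthetic distortion lies within the polynomial basis, the fit recovers it essentially exactly; real tracker data would leave a small residual.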
Affiliation(s)
- Marcel Jan de Haan
- Institut de Neurosciences de la Timone, Centre National de la Recherche Scientifique-Aix-Marseille Université, UMR7289, Marseille, France
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Brain Institute I (INM-10), Forschungszentrum Jülich, Jülich, Germany
- Thomas Brochier
- Institut de Neurosciences de la Timone, Centre National de la Recherche Scientifique-Aix-Marseille Université, UMR7289, Marseille, France
- Sonja Grün
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Brain Institute I (INM-10), Forschungszentrum Jülich, Jülich, Germany
- RIKEN Brain Science Institute, Hirosawa, Wako-Shi, Saitama, Japan
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Alexa Riehle
- Institut de Neurosciences de la Timone, Centre National de la Recherche Scientifique-Aix-Marseille Université, UMR7289, Marseille, France
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Brain Institute I (INM-10), Forschungszentrum Jülich, Jülich, Germany
- Frédéric V Barthélemy
- Institut de Neurosciences de la Timone, Centre National de la Recherche Scientifique-Aix-Marseille Université, UMR7289, Marseille, France
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Brain Institute I (INM-10), Forschungszentrum Jülich, Jülich, Germany

26
Abstract
The present article shows that infant and dyad differences in hand-eye coordination predict dyad differences in joint attention (JA). In the study reported here, 51 toddlers ranging in age from 11 to 24 months and their parents wore head-mounted eye trackers as they played with objects together. We found that physically active toddlers aligned their looking behavior with their parent and achieved a substantial proportion of time spent jointly attending to the same object. However, JA did not arise through gaze following but rather through the coordination of gaze with manual actions on objects as both infants and parents attended to their partner's object manipulations. Moreover, dyad differences in JA were associated with dyad differences in hand following.

27
Abstract
The aim of this study was to provide a detailed account of the spatial and temporal disruptions to eye-hand coordination when using a prosthetic hand during a sequential fine motor skill. Twenty-one able-bodied participants performed 15 trials of the picking up coins task derived from the Southampton Hand Assessment Procedure with their anatomic hand and with a prosthesis simulator while wearing eye-tracking equipment. Gaze behavior results revealed that when using the prosthesis, performance detriments were accompanied by significantly greater hand-focused gaze and a significantly longer time to disengage gaze from manipulations to plan upcoming movements. The study findings highlight key metrics that distinguish disruptions to eye-hand coordination that may have implications for the training of prosthesis use.
Affiliation(s)
- J V V Parr
- School of Health Sciences, Liverpool Hope University, United Kingdom
- S J Vine
- College of Life & Environmental Sciences, University of Exeter, United Kingdom
- N R Harrison
- Department of Psychology, Liverpool Hope University, United Kingdom
- G Wood
- Centre for Health, Exercise and Active Living, Manchester Metropolitan University, United Kingdom

28
Mathew J, Eusebio A, Danion F. Limited Contribution of Primary Motor Cortex in Eye-Hand Coordination: A TMS Study. J Neurosci 2017; 37:9730-40. [PMID: 28893926] [DOI: 10.1523/JNEUROSCI.0564-17.2017]
Abstract
The ability to track a moving target with the eye is substantially improved when the target is self-moved compared with when it is moved by an external agent. To account for this observation, it has been postulated that the oculomotor system has access to a hand efference copy, thereby allowing it to predict the motion of the visual target. In line with this scheme, we tested the effect of transcranial magnetic stimulation (TMS) over the hand area of the primary motor cortex (M1) when human participants (50% females) were asked to track with their eyes a visual target whose horizontal motion was driven by their grip force. We reasoned that, if the output of M1 is used by the oculomotor system to keep track of the target, then on top of inducing a short-latency disturbance of grip force, single-pulse TMS should also quickly disrupt ongoing eye motion. For comparison purposes, the effect of TMS over M1 was also monitored when subjects tracked an externally moved target (while keeping their hand at rest or not). In both cases, results showed no alteration in smooth pursuit: its velocity was unaffected within the 25-125 ms epoch that followed TMS. Overall, our results imply that the output of M1 makes a limited contribution to driving eye motion during our eye-hand coordination task. This study suggests that, if hand motor signals are accessed by the oculomotor system, this occurs upstream of M1.
SIGNIFICANCE STATEMENT: The ability to coordinate eye and hand actions is central to everyday activity. However, the neural mechanisms underlying this coordination remain to be clarified. A leading hypothesis is that the oculomotor system has access to hand motor signals. Here we explored this possibility by means of transcranial magnetic stimulation (TMS) over the hand area of the primary motor cortex (M1) while humans tracked with their eyes a visual target that was moved by their hand. As expected, ongoing hand action was perturbed 25-30 ms after TMS, but our results failed to show any disruption of eye motion, smooth pursuit velocity being unaffected. This work suggests that, if hand motor signals are accessed by the oculomotor system, this occurs upstream of M1.
29
Vazquez Y, Federici L, Pesaran B. Multiple spatial representations interact to increase reach accuracy when coordinating a saccade with a reach. J Neurophysiol 2017; 118:2328-2343. [PMID: 28768742] [DOI: 10.1152/jn.00408.2017]
Abstract
Reaching is an essential behavior that allows primates to interact with the environment. Precise reaching to visual targets depends on our ability to localize and foveate the target. Despite this, how the saccade system contributes to improvements in reach accuracy remains poorly understood. To assess spatial contributions of eye movements to reach accuracy, we performed a series of behavioral psychophysics experiments in nonhuman primates (Macaca mulatta). We found that a coordinated saccade with a reach to a remembered target location increases reach accuracy without target foveation. The improvement in reach accuracy was similar to that obtained when the subject had visual information about the location of the current target in the visual periphery and executed the reach while maintaining central fixation. Moreover, we found that the increase in reach accuracy elicited by a coordinated movement involved a spatial coupling mechanism between the saccade and reach movements. We observed significant correlations between the saccade and reach errors for coordinated movements. In contrast, when the eye and arm movements were made to targets in different spatial locations, the magnitude of the error and the degree of correlation between the saccade and reach direction were determined by the spatial location of the eye and the hand targets. Hence, we propose that coordinated movements improve reach accuracy without target foveation due to spatial coupling between the reach and saccade systems. Spatial coupling could arise from a neural mechanism for coordinated visual behavior that involves interacting spatial representations.
NEW & NOTEWORTHY: How the visual spatial representations that guide reach movements involve coordinated saccadic eye movements is unknown. Temporal coupling between the reach and saccade systems during coordinated movements improves reach performance. However, the role of spatial coupling is unclear. Using behavioral psychophysics, we found that spatial coupling increases reach accuracy in addition to temporal coupling and visual acuity. These results suggest that a spatial mechanism coupling the reach and saccade systems increases the accuracy of coordinated movements.
Affiliation(s)
- Yuriria Vazquez
- Center for Neural Science, New York University, New York, New York
- Laura Federici
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Bijan Pesaran
- Center for Neural Science, New York University, New York, New York

30
Danion F, Mathew J, Flanagan JR. Eye Tracking of Occluded Self-Moved Targets: Role of Haptic Feedback and Hand-Target Dynamics. eNeuro 2017; 4. [PMID: 28680964] [DOI: 10.1523/ENEURO.0101-17.2017]
Abstract
Previous studies on smooth pursuit eye movements have shown that humans can continue to track the position of their hand, or a target controlled by the hand, after it is occluded, thereby demonstrating that arm motor commands contribute to the prediction of target motion driving pursuit eye movements. Here, we investigated this predictive mechanism by manipulating both the complexity of the hand-target mapping and the provision of haptic feedback. Two hand-target mappings were used, either a rigid (simple) one in which hand and target motion matched perfectly or a nonrigid (complex) one in which the target behaved as a mass attached to the hand by means of a spring. Target animation was obtained by asking participants to oscillate a lightweight robotic device that provided (or not) haptic feedback consistent with the target dynamics. Results showed that as long as 7 s after target occlusion, smooth pursuit continued to be the main contributor to total eye displacement (∼60%). However, the accuracy of eye-tracking varied substantially across experimental conditions. In general, eye-tracking was less accurate under the nonrigid mapping, as reflected by higher positional and velocity errors. Interestingly, haptic feedback helped to reduce the detrimental effects of target occlusion when participants used the nonrigid mapping, but not when they used the rigid one. Overall, we conclude that the ability to maintain smooth pursuit in the absence of visual information can extend to complex hand-target mappings, but the provision of haptic feedback is critical for the maintenance of accurate eye-tracking performance.
31
Neromyliotis E, Moschovakis AK. Response Properties of Motor Equivalence Neurons of the Primate Premotor Cortex. Front Behav Neurosci 2017; 11:61. [PMID: 28446867] [PMCID: PMC5388740] [DOI: 10.3389/fnbeh.2017.00061]
Abstract
To study the response properties of cells that could participate in eye-hand coordination we trained two macaque monkeys to perform center-out saccades and pointing movements with their right or left forelimb toward visual targets presented on a video display. We analyzed the phasic movement related discharges of neurons of the periarcuate cortex that fire before and during saccades and movements of the hand whether accompanied by movements of the other effector or not. Because such cells could encode an abstract form of the desired displacement vector without regard to the effector that would execute the movement we refer to such cells as motor equivalence neurons (Meq). Most of them (75%) were found in or near the smooth pursuit region and the grasp related region in the caudal bank of the arcuate sulcus. The onset of their phasic discharges preceded saccades by about 70 ms and hand movements by about 150 ms and was often correlated to both the onset of saccades and the onset of hand movements. The on-direction of Meq cells was uniformly distributed without preference for ipsiversive or contraversive movements. In about half of the Meq cells the preferred direction for saccades was the preferred direction for hand movements as well. In the remaining cells the difference was considerable (>90 deg), and the on-direction for eye-hand movements resembled that for isolated saccades in some cells and for isolated hand movements in others. A three layer neural network model that used Meq cells as its input layer showed that the combination of effector invariant discharges with non-invariant discharges could help reduce the number of decoding errors when the network attempts to compute the correct movement metrics of the right effector.
Affiliation(s)
- Eleftherios Neromyliotis
- Institute of Applied and Computational Mathematics, Foundation for Research and Technology, Heraklion, Greece; Department of Basic Sciences, Faculty of Medicine, University of Crete, Heraklion, Greece
- A K Moschovakis
- Institute of Applied and Computational Mathematics, Foundation for Research and Technology, Heraklion, Greece; Department of Basic Sciences, Faculty of Medicine, University of Crete, Heraklion, Greece

32
Abstract
Visual-motor development and executive functions were investigated with the Bender Test at age 5½ years in 175 children born preterm and 125 full-term controls, within the longitudinal Stockholm Neonatal Project. Assessment also included WPPSI-R and NEPSY neuropsychological battery for ages 4-7 (Korkman, 1990). Bender protocols were scored according to Brannigan & Decker (2003), Koppitz (1963) and a complementary neuropsychological scoring system (ABC), aimed at executive functions and developed for this study. Bender results by all three scoring systems were strongly related to overall cognitive level (Performance IQ), in both groups. The preterm group displayed inferior visual-motor skills compared to controls also when controlling for IQ. The largest group differences were found on the ABC scoring, which shared unique variance with NEPSY tests of executive function. Multiple regression analyses showed that hyperactive behavior and inattention increased the risk for visual-motor deficits in children born preterm, whereas no added risk was seen among hyperactive term children. Gender differences favoring girls were strongest within the preterm group, presumably reflecting the specific vulnerability of preterm boys. The results indicate that preterm children develop a different neurobehavioral organization from children born at term, and that the Bender test with a neuropsychological scoring is a useful tool in developmental screening around school start.
Affiliation(s)
- Birgitta Böhm
- Karolinska Institutet, Department of Women's and Children's Health, Stockholm, Sweden; Department of Psychology, Stockholm University, Sweden
- Aiko Lundequist
- Karolinska Institutet, Department of Women's and Children's Health, Stockholm, Sweden; Department of Psychology, Stockholm University, Sweden
- Ann-Charlotte Smedler
- Karolinska Institutet, Department of Women's and Children's Health, Stockholm, Sweden; Department of Psychology, Stockholm University, Sweden

33
Terao Y, Fukuda H, Tokushige SI, Inomata-Terada S, Ugawa Y. How Saccade Intrusions Affect Subsequent Motor and Oculomotor Actions. Front Neurosci 2017; 10:608. [PMID: 28127274] [PMCID: PMC5226964] [DOI: 10.3389/fnins.2016.00608]
Abstract
In daily activities, there is a close spatial and temporal coupling between eye and hand movements that enables human beings to perform actions smoothly and accurately. If this coupling is disrupted by inadvertent saccade intrusions, subsequent motor actions suffer from delays and a lack of coordination. To examine how saccade intrusions affect subsequent voluntary actions, we used two tasks that require subjects to make motor/oculomotor actions in response to a visual cue. One was the memory guided saccade (MGS) task, and the other the hand reaction time (RT) task. The MGS task required subjects to initiate a voluntary saccade to a memorized target location, which is indicated shortly before by a briefly presented cue. The RT task required subjects to release a button on detection of a visual target, while foveating on a central fixation point. In normal subjects of various ages, inadvertent saccade intrusions delayed subsequent voluntary motor and oculomotor actions. We also studied patients with Parkinson's disease (PD), who are impaired not only in initiating voluntary saccades but also in suppressing unwanted reflexive saccades. Saccade intrusions also delayed hand RT in PD patients. However, MGS was affected by saccade intrusions differently. Saccade intrusions did not delay MGS latency in PD patients who could perform MGS with a relatively normal latency. In contrast, in PD patients who were unable to initiate MGS within the normal time range, we observed slightly decreased MGS latency after saccade intrusions. What explains this paradoxical phenomenon? It is known that motor actions slow down when switching between controlled and automatic behavior. We discuss how the effect of saccade intrusions on subsequent voluntary motor/oculomotor actions may reflect a similar switching cost between automatic and controlled behavior and a cost for switching between different motor effectors. In contrast, PD patients who were unable to initiate an internally guided MGS in the absence of a visual target could perform only automatic visually guided saccades, so they did not have to switch between automatic and controlled behavior. This lack of switching may explain the shortening of MGS latency after saccade intrusions in PD patients.
Affiliation(s)
- Yasuo Terao
- Department of Neurology, University of Tokyo, Tokyo, Japan; Department of Cell Physiology, Kyorin University, Tokyo, Japan
- Hideki Fukuda
- Segawa Neurological Clinic for Children, Tokyo, Japan
- Yoshikazu Ugawa
- Department of Neurology, School of Medicine, Fukushima Medical University, Fukushima, Japan; Fukushima Global Medical Science Center, Advanced Clinical Research Center, Fukushima Medical University, Fukushima, Japan

34
Hurtubise J, Gorbet D, Hamandi Y, Macpherson A, Sergio L. The effect of concussion history on cognitive-motor integration in elite hockey players. Concussion 2016; 1:CNC17. [PMID: 30202559] [PMCID: PMC6093836] [DOI: 10.2217/cnc-2016-0006]
Abstract
AIM: To observe the effects of concussion history on cognitive-motor integration in elite-level athletes.
METHODS: The study included 102 National Hockey League draft prospects (n = 51 with a concussion history [CH]; n = 51 with no history [NC]). Participants completed two computer-based visuomotor tasks: one involved 'standard' visuomotor mapping, and one involved 'nonstandard' mapping in which vision and action were decoupled.
RESULTS: We observed a significant effect of group on reaction time (CH slower) and accuracy (CH worse), but a group-by-condition interaction only for reaction time (p < 0.05). There were no other deficits found. We discuss these findings in comparison to our previous work with non-elite athletes.
CONCLUSION: Previously concussed elite-level athletes may have lingering neurological deficits that are not detected using standard clinical assessments.
Affiliation(s)
- Johanna Hurtubise
- School of Kinesiology & Health Science, York University, Toronto, ON, M3J 1P3, Canada
- York University Sports Medicine Team, York University Department of Athletics and Recreation, York University, Toronto, ON, M3J 1P3, Canada
- Diana Gorbet
- School of Kinesiology & Health Science, York University, Toronto, ON, M3J 1P3, Canada
- Center for Vision Research, York University, Toronto, ON, M3J 1P3, Canada
- Yehyah Hamandi
- School of Kinesiology & Health Science, York University, Toronto, ON, M3J 1P3, Canada
- Alison Macpherson
- School of Kinesiology & Health Science, York University, Toronto, ON, M3J 1P3, Canada
- York University Sports Medicine Team, York University Department of Athletics and Recreation, York University, Toronto, ON, M3J 1P3, Canada
- Lauren Sergio
- School of Kinesiology & Health Science, York University, Toronto, ON, M3J 1P3, Canada
- York University Sports Medicine Team, York University Department of Athletics and Recreation, York University, Toronto, ON, M3J 1P3, Canada
- Center for Vision Research, York University, Toronto, ON, M3J 1P3, Canada

35
Chen J, Valsecchi M, Gegenfurtner KR. Role of motor execution in the ocular tracking of self-generated movements. J Neurophysiol 2016; 116:2586-2593. [PMID: 27628207] [DOI: 10.1152/jn.00574.2016]
Abstract
When human observers track the movements of their own hand with their gaze, the eyes can start moving before the finger (i.e., anticipatory smooth pursuit). The signals driving anticipation could come from motor commands during finger motor execution or from motor intention and decision processes associated with self-initiated movements. For the present study, we built a mechanical device that could move a visual target either in the same direction as the participant's hand or in the opposite direction. Gaze pursuit of the target showed stronger anticipation if it moved in the same direction as the hand compared with the opposite direction, as evidenced by decreased pursuit latency, increased positional lead of the eye relative to target, increased pursuit gain, decreased saccade rate, and decreased delay at the movement reversal. Some degree of anticipation occurred for incongruent pursuit, indicating that there is a role for higher-level movement prediction in pursuit anticipation. The fact that anticipation was larger when target and finger moved in the same direction provides evidence for a direct coupling between finger and eye motor commands.
Affiliation(s)
- Jing Chen
- Abteilung Allgemeine Psychologie, Justus-Liebig-Universität Giessen, Giessen, Germany
- Matteo Valsecchi
- Abteilung Allgemeine Psychologie, Justus-Liebig-Universität Giessen, Giessen, Germany
- Karl R Gegenfurtner
- Abteilung Allgemeine Psychologie, Justus-Liebig-Universität Giessen, Giessen, Germany

36
Landelle C, Montagnini A, Madelain L, Danion F. Eye tracking a self-moved target with complex hand-target dynamics. J Neurophysiol 2016; 116:1859-1870. [PMID: 27466129] [DOI: 10.1152/jn.00007.2016]
Abstract
Previous work has shown that the ability to track with the eye a moving target is substantially improved when the target is self-moved by the subject's hand compared with when being externally moved. Here, we explored a situation in which the mapping between hand movement and target motion was perturbed by simulating an elastic relationship between the hand and target. Our objective was to determine whether the predictive mechanisms driving eye-hand coordination could be updated to accommodate this complex hand-target dynamics. To fully appreciate the behavioral effects of this perturbation, we compared eye tracking performance when self-moving a target with a rigid mapping (simple) and a spring mapping as well as when the subject tracked target trajectories that he/she had previously generated when using the rigid or spring mapping. Concerning the rigid mapping, our results confirmed that smooth pursuit was more accurate when the target was self-moved than externally moved. In contrast, with the spring mapping, eye tracking had initially similar low spatial accuracy (though shorter temporal lag) in the self versus externally moved conditions. However, within ∼5 min of practice, smooth pursuit improved in the self-moved spring condition, up to a level similar to the self-moved rigid condition. Subsequently, when the mapping unexpectedly switched from spring to rigid, the eye initially followed the expected target trajectory and not the real one, thereby suggesting that subjects used an internal representation of the new hand-target dynamics. Overall, these results emphasize the stunning adaptability of smooth pursuit when self-maneuvering objects with complex dynamics.
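The nonrigid "spring" mapping described in this abstract (the target behaving as a mass attached to the hand by a spring) can be sketched as a small forward simulation. This is an illustrative reconstruction, not the authors' experimental code; all parameter values (mass, stiffness, damping, the sinusoidal hand motion) are assumptions chosen only to show how the target's trajectory diverges from the hand's under such dynamics.

```python
import math

def simulate_spring_target(duration=5.0, dt=0.001,
                           mass=0.5, stiffness=20.0, damping=1.0,
                           hand_freq=0.5, hand_amp=0.1):
    """Semi-implicit Euler simulation of a target tied to the hand by a spring.

    The hand oscillates sinusoidally; the target (a point mass) is pulled
    toward the hand by a damped spring, so its motion lags and overshoots
    the hand's -- the 'complex hand-target dynamics' of the task.
    Parameter values are illustrative assumptions.
    """
    x, v = 0.0, 0.0                      # target position (m) and velocity (m/s)
    hand_xs, target_xs = [], []
    steps = int(duration / dt)
    for i in range(steps):
        t = i * dt
        hand_x = hand_amp * math.sin(2 * math.pi * hand_freq * t)
        force = stiffness * (hand_x - x) - damping * v
        v += (force / mass) * dt         # update velocity from spring force
        x += v * dt                      # then position from new velocity
        hand_xs.append(hand_x)
        target_xs.append(x)
    return hand_xs, target_xs

hand, target = simulate_spring_target()
# Under a rigid mapping the target would equal the hand at every step;
# with the spring, the two trajectories differ even with the same drive.
peak_gap = max(abs(h - x) for h, x in zip(hand, target))
```

With these assumed parameters the hand-target gap is clearly nonzero, which is the property that makes pursuit of the target harder until an internal model of the new dynamics is learned.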
Affiliation(s)
- Caroline Landelle
- Institut de Neurosciences de la Timone UMR 7289, Aix Marseille Université, Centre National de la Recherche Scientifique (CNRS), Marseille, France
- Anna Montagnini
- Institut de Neurosciences de la Timone UMR 7289, Aix Marseille Université, Centre National de la Recherche Scientifique (CNRS), Marseille, France
- Frederic Danion
- Institut de Neurosciences de la Timone UMR 7289, Aix Marseille Université, Centre National de la Recherche Scientifique (CNRS), Marseille, France

37
Chujo Y, Jono Y, Tani K, Nomura Y, Hiraoka K. Corticospinal Excitability in the Hand Muscles is Decreased During Eye Movement with Visual Occlusion. Percept Mot Skills 2016; 122:238-55. [PMID: 27420319] [DOI: 10.1177/0031512515625331]
Abstract
Corticospinal excitability in the hand muscles decreases during smooth pursuit eye movement. The present study tested the hypothesis that the decrease in corticospinal excitability in the hand muscles at rest during eye movement is caused not by visual feedback but by motor commands to the eye muscles. Healthy men (M age = 28.4 yr., SD = 5.2) moved their eyes to the right with visual occlusion (dark goggles) while their arms and hands remained at rest. The motor-evoked potential in the hand muscles was suppressed by 19% in the third quarter of the eye-movement period, supporting the view that motor commands to the eye muscles cause the decrease in corticospinal excitability in the hand muscles. The amount of suppression was not significantly different among the muscles, indicating that the modulation of corticospinal excitability in a given muscle induced by eye movement does not depend on whether the direction of the eye movement matches the direction of the finger movement produced when that muscle contracts. Thus, the findings failed to support the hypothetical view that motor commands to the eye muscles concomitantly produce motor commands to the hand muscles. Moreover, the amount of suppression was not significantly different between forearm positions, indicating that the suppression was not affected by proprioception from the forearm muscles when visual feedback is absent.
Affiliation(s)
- Yuta Chujo
- Graduate School of Comprehensive Rehabilitation, Osaka Prefecture University, Japan
- Yasutomo Jono
- Graduate School of Comprehensive Rehabilitation, Osaka Prefecture University, Japan
- Keisuke Tani
- Graduate School of Comprehensive Rehabilitation, Osaka Prefecture University, Japan
- Yoshifumi Nomura
- Graduate School of Comprehensive Rehabilitation, Osaka Prefecture University, Japan
- Koichi Hiraoka
- College of Health and Human Sciences, Osaka Prefecture University, Japan

38
Chen J, Valsecchi M, Gegenfurtner KR. LRP predicts smooth pursuit eye movement onset during the ocular tracking of self-generated movements. J Neurophysiol 2016; 116:18-29. [PMID: 27009159] [DOI: 10.1152/jn.00184.2016]
Abstract
Several studies have indicated that human observers are very efficient at tracking self-generated hand movements with their gaze, yet it is not clear whether this is simply a by-product of the predictability of self-generated actions or if it results from a deeper coupling of the somatomotor and oculomotor systems. In a first behavioral experiment we compared pursuit performance as observers either followed their own finger or tracked a dot whose motion was externally generated but mimicked their finger motion. We found that even when the dot motion was completely predictable in terms of both onset time and kinematics, pursuit was not identical to that produced as the observers tracked their finger, as evidenced by increased rate of catch-up saccades and by the fact that in the initial phase of the movement gaze was lagging behind the dot, whereas it was ahead of the finger. In a second experiment we recorded EEG in the attempt to find a direct link between the finger motor preparation, indexed by the lateralized readiness potential (LRP) and the latency of smooth pursuit. After taking into account finger movement onset variability, we observed larger LRP amplitudes associated with earlier smooth pursuit onset across trials. The same held across subjects, where average LRP onset correlated with average eye latency. The evidence from both experiments concurs to indicate that a strong coupling exists between the motor systems leading to eye and finger movements and that simple top-down predictive signals are unlikely to support optimal coordination.
Affiliation(s)
- Jing Chen
- Abteilung Allgemeine Psychologie, Justus-Liebig-Universität Giessen, Giessen, Germany
- Matteo Valsecchi
- Abteilung Allgemeine Psychologie, Justus-Liebig-Universität Giessen, Giessen, Germany
- Karl R Gegenfurtner
- Abteilung Allgemeine Psychologie, Justus-Liebig-Universität Giessen, Giessen, Germany

39
Gopal A, Murthy A. A common control signal and a ballistic stage can explain the control of coordinated eye-hand movements. J Neurophysiol 2016; 115:2470-84. [PMID: 26888104] [DOI: 10.1152/jn.00910.2015]
Abstract
Voluntary control has been extensively studied in the context of eye and hand movements made in isolation, yet little is known about the nature of control during eye-hand coordination. We probed this with a redirect task. Here subjects had to make reaching/pointing movements accompanied by coordinated eye movements but had to change their plans when the target occasionally changed its position during some trials. Using a race model framework, we found that separate effector-specific mechanisms may be recruited to control eye and hand movements when executed in isolation but when the same effectors are coordinated a unitary mechanism to control coordinated eye-hand movements is employed. Specifically, we found that performance curves were distinct for the eye and hand when these movements were executed in isolation but were comparable when they were executed together. Second, the time to switch motor plans, called the target step reaction time, was different in the eye-alone and hand-alone conditions but was similar in the coordinated condition under assumption of a ballistic stage of ∼40 ms, on average. Interestingly, the existence of this ballistic stage could predict the extent of eye-hand dissociations seen in individual subjects. Finally, when subjects were explicitly instructed to control specifically a single effector (eye or hand), redirecting one effector had a strong effect on the performance of the other effector. Taken together, these results suggest that a common control signal and a ballistic stage are recruited when coordinated eye-hand movement plans require alteration.
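The race-model framework with a ballistic stage summarized in this abstract can be sketched as a small Monte Carlo simulation: a GO process races against a redirect process triggered by the target step, and the plan can only be changed if the redirect signal arrives before the final ballistic portion of the first movement. This is an illustrative sketch, not the authors' model code; the latency distributions and means are assumptions, and only the ~40 ms ballistic-stage value comes from the abstract.

```python
import random

def p_redirect(step_delay, n_trials=20000, ballistic=0.040, seed=1):
    """Monte Carlo estimate of the probability of successfully redirecting.

    GO latency ~ Normal(0.250, 0.040) s from trial start (assumed values);
    the redirect process starts at the target step and takes
    Normal(0.120, 0.030) s (assumed values). The movement plan can only be
    altered if the redirect signal lands before the ballistic stage that
    precedes movement onset.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        go = rng.gauss(0.250, 0.040)              # onset of the first movement
        redirect = step_delay + rng.gauss(0.120, 0.030)
        if redirect < go - ballistic:             # beats the ballistic stage
            wins += 1
    return wins / n_trials

# Performance curve: later target steps leave less time to re-plan,
# so the probability of redirecting falls with target step delay.
curve = [p_redirect(d) for d in (0.0, 0.05, 0.10, 0.15)]
```

Under a common control signal, one such curve describes both effectors in the coordinated condition, whereas effector-specific races would allow the eye and hand curves to differ.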
Affiliation(s)
- Atul Gopal
- National Brain Research Centre, Nainwal More, Manesar, Haryana, India
- Aditya Murthy
- Centre for Neuroscience, Indian Institute of Science, Bangalore, Karnataka, India

40
Hadjidimitrakis K, Dal Bo' G, Breveglieri R, Galletti C, Fattori P. Overlapping representations for reach depth and direction in caudal superior parietal lobule of macaques. J Neurophysiol 2015; 114:2340-52. [PMID: 26269557] [DOI: 10.1152/jn.00486.2015]
Abstract
Reaching movements in the real world typically have a direction and a depth component. Despite numerous behavioral studies, there is no consensus on whether reach coordinates are processed in separate or common visuomotor channels. Furthermore, the neural substrates of reach depth in parietal cortex have been ignored in most neurophysiological studies. In the medial posterior parietal area V6A, we recently demonstrated the strong presence of depth signals and the extensive convergence of depth and direction information on single neurons during all phases of a fixate-to-reach task in 3-dimensional (3D) space. Using the same task, in the present work we examined the processing of direction and depth information in area PEc of the caudal superior parietal lobule (SPL) in three Macaca fascicularis monkeys. Across the task, depth and direction had a similar, high incidence of modulatory effect. The effect of direction was stronger than depth during the initial fixation period. As the task progressed toward arm movement execution, depth tuning became more prominent than directional tuning and the number of cells modulated by both depth and direction increased significantly. Neurons tuned by depth showed a small bias for far peripersonal space. Cells with directional modulations were more frequently tuned toward contralateral spatial locations, but ipsilateral space was also represented. These findings, combined with results from neighboring areas V6A and PE, support a rostral-to-caudal gradient of overlapping representations for reach depth and direction in SPL. These findings also support a progressive change from visuospatial (vergence angle) to somatomotor representations of 3D space in SPL.
Collapse
Affiliation(s)
- Kostas Hadjidimitrakis
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy; and Department of Physiology, Monash University, Clayton, Victoria, Australia
| | - Giulia Dal Bo'
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
| | - Rossella Breveglieri
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
| | - Claudio Galletti
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
| | - Patrizia Fattori
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
| |
Collapse
|
41
|
Granek JA, Sergio LE. Evidence for distinct brain networks in the control of rule-based motor behavior. J Neurophysiol 2015; 114:1298-309. [PMID: 26133796 DOI: 10.1152/jn.00233.2014] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2014] [Accepted: 06/30/2015] [Indexed: 11/22/2022] Open
Abstract
Reach guidance when the spatial location of the viewed target and hand movement are incongruent (i.e., decoupled) necessitates the use of explicit cognitive rules (strategic control) or implicit recalibration of gaze and limb position (sensorimotor recalibration). In a patient with optic ataxia (OA) and bilateral superior parietal lobule damage, we recently demonstrated an increased reliance on strategic control when the patient performed a decoupled reach (Granek JA, Pisella L, Stemberger J, Vighetto A, Rossetti Y, Sergio LE. PLoS One 8: e86138, 2013). To more generally understand the fundamental mechanisms of decoupled visuomotor control and to more specifically test whether we could distinguish these two modes of movement control, we tested healthy participants in a cognitively demanding dual task. Participants continuously counted backward while simultaneously reaching toward horizontal (left or right) or diagonal (equivalent to top-left or top-right) targets with either veridical or rotated (90°) cursor feedback. By increasing the overall neural load and selectively compromising potentially overlapping neural circuits responsible for strategic control, the complex dual task served as a noninvasive means to disrupt the integration of a cognitive rule into a motor action. Complementary to our previous results observed in patients with optic ataxia, here our dual task led to greater performance deficits during movements that required an explicit rule, implying a selective disruption of strategic control in decoupled reaching. Our results suggest that distinct neural processing is required to control these two types of reaching: considered together with the previous patient results, the two classes of movement can be differentiated by the type of interference each is susceptible to.
Collapse
Affiliation(s)
- Joshua A Granek
- School of Kinesiology and Health Science, Centre for Vision Research, York University, Toronto, Ontario, Canada
| | - Lauren E Sergio
- School of Kinesiology and Health Science, Centre for Vision Research, York University, Toronto, Ontario, Canada
| |
Collapse
|
42
|
Gopal A, Murthy A. Eye-hand coordination during a double-step task: evidence for a common stochastic accumulator. J Neurophysiol 2015; 114:1438-54. [PMID: 26084906 DOI: 10.1152/jn.00276.2015] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2015] [Accepted: 06/15/2015] [Indexed: 11/22/2022] Open
Abstract
Many studies of reaching and pointing have shown significant spatial and temporal correlations between eye and hand movements. Nevertheless, it remains unclear whether these correlations are incidental, arising from common inputs (independent model); whether they represent an interaction between otherwise independent eye and hand systems (interactive model); or whether they arise from a single dedicated eye-hand system (common command model). Subjects were instructed to redirect gaze and pointing movements in a double-step task in an attempt to decouple eye-hand movements and causally distinguish between the three architectures. We used a drift-diffusion framework in the context of a race model, which has previously been used to explain redirect behavior for eye and hand movements separately, to predict the pattern of eye-hand decoupling. We found that the common command architecture best explained the observed frequency of different eye and hand response patterns to the target step. A common stochastic accumulator for eye-hand coordination also predicts comparable variances despite a significant difference in the means of the eye and hand reaction time (RT) distributions, a prediction we then tested. Consistent with this prediction, we observed that the variances of the eye and hand RTs were similar despite much larger hand RTs (∼90 ms). Moreover, changes in mean eye RTs, which also increased eye RT variance, produced a similar increase in the mean and variance of the associated hand RT. Taken together, these data suggest that a dedicated circuit underlies coordinated eye-hand planning.
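The key prediction (a shared accumulator followed by fixed effector-specific delays yields a constant mean offset but identical RT variances) can be sketched with a toy simulation. The diffusion parameters and delay values below are assumptions, not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_rts(n=5_000, drift=5.0, noise=0.8, thresh=1.0, dt=0.001,
                 eye_delay=0.05, hand_delay=0.14):
    """One diffusion-to-bound per trial (the shared decision stage),
    followed by fixed effector-specific delays. Parameters are assumed."""
    steps = int(1.0 / dt)                  # 1-s simulation horizon
    incr = drift * dt + noise * np.sqrt(dt) * rng.standard_normal((n, steps))
    crossed = np.cumsum(incr, axis=1) >= thresh
    valid = crossed.any(axis=1)            # discard rare non-crossing trials
    t_cross = crossed[valid].argmax(axis=1) * dt
    return t_cross + eye_delay, t_cross + hand_delay

eye, hand = simulate_rts()
print(f"mean offset: {(hand.mean() - eye.mean())*1000:.0f} ms")   # 90 ms by construction
print(f"eye var = {eye.var():.5f}, hand var = {hand.var():.5f}")  # identical
```

Because both RTs inherit the same threshold-crossing time, the hand distribution is simply the eye distribution shifted by a constant, exactly the equal-variance pattern the abstract reports.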
Collapse
Affiliation(s)
- Atul Gopal
- National Brain Research Centre, Manesar, Haryana, India
| | - Aditya Murthy
- Centre for Neuroscience, Indian Institute of Science, Bangalore, Karnataka, India
| |
Collapse
|
43
|
Hu C, Huang K, Hu X, Liu Y, Yuan F, Wang Q, Fu G. Measuring the cognitive resources consumed per second for real-time lie-production and recollection: a dual-tasking paradigm. Front Psychol 2015; 6:596. [PMID: 25999903 PMCID: PMC4423307 DOI: 10.3389/fpsyg.2015.00596] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2014] [Accepted: 04/22/2015] [Indexed: 11/13/2022] Open
Abstract
This research report presents a novel method of dual-tasking lie-detection. The novel software "Follow Me" was developed to provide a concurrent eye-hand coordination task during truth-telling/lying. Undergraduate participants were instructed to answer questions about their undergraduate school truthfully, whereas they were instructed to lie on interview questions about graduate school, pretending they were graduate students. Throughout the experiment, they operated the "Follow Me" software, moving the mouse pointer to follow a randomly moving dot on a computer screen. The distance between the mouse pointer tip and the dot center was measured by the software every 50 ms. The frequency of distance fluctuation was analyzed as an index of the cognitive effort consumed per second (i.e., "degree of cognitive effort"). The results revealed that the dominant frequency of distance fluctuation was significantly lower during encoding than during retrieving responses, and lower during lying than during truth-telling. Thus, the dominant frequency of distance fluctuation may be an effective index of cognitive effort. Moreover, both encoding and retrieving bald-faced lies were more cognitively effortful than truth-telling. This novel definition and measurement of degree of cognitive effort may contribute to the research field of deception as well as to many other fields in social cognition.
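The paper's measure, the dominant frequency of the pointer-to-dot distance sampled every 50 ms, can be sketched with a simple FFT. This is an illustrative reconstruction; the function name and the synthetic signal are assumptions, not the authors' code:

```python
import numpy as np

def dominant_frequency(distances, fs=20.0):
    """Dominant frequency (Hz) of a distance signal sampled at fs
    (50-ms sampling = 20 Hz, as in the paper). DC is removed first."""
    x = np.asarray(distances, float)
    x = x - x.mean()                      # drop the DC offset
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return freqs[spec[1:].argmax() + 1]   # skip the 0-Hz bin

# synthetic check: a 3-Hz wobble buried in noise is recovered
t = np.arange(0, 10, 0.05)
sig = np.sin(2 * np.pi * 3 * t) + 0.3 * np.random.default_rng(2).standard_normal(t.size)
print(dominant_frequency(sig))  # ≈ 3.0
```

With 20-Hz sampling the analysis can resolve fluctuation frequencies up to the 10-Hz Nyquist limit, ample for the slow tracking corrections the task elicits.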
Collapse
Affiliation(s)
- Chao Hu
- Department of Psychology, Zhejiang Normal University, Jinhua, China; Applied Psychology and Human Development Department, University of Toronto, Toronto, ON, Canada
| | - Kun Huang
- State Key Laboratory of Precision Spectroscopy, East China Normal University, Shanghai, China
| | - Xiaoqing Hu
- Department of Psychology, Northwestern University, Evanston, IL, USA
| | - Yanshuo Liu
- Department of Psychology, Zhejiang Normal University, Jinhua, China
| | - Fang Yuan
- Department of Psychology, Zhejiang Normal University, Jinhua, China
| | - Qiandong Wang
- Department of Psychology, Zhejiang Normal University, Jinhua, China
| | - Genyue Fu
- Department of Psychology, Zhejiang Normal University, Jinhua, China; Department of Psychology, Hangzhou Normal University, Hangzhou, China
| |
Collapse
|
44
|
Gopal A, Viswanathan P, Murthy A. A common stochastic accumulator with effector-dependent noise can explain eye-hand coordination. J Neurophysiol 2015; 113:2033-48. [PMID: 25568161 DOI: 10.1152/jn.00802.2014] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2014] [Accepted: 01/06/2015] [Indexed: 11/22/2022] Open
Abstract
The computational architecture that enables the flexible coupling between otherwise independent eye and hand effector systems is not understood. By using a drift-diffusion framework, in which the variability of the reaction time (RT) distribution scales with mean RT, we tested the ability of a common stochastic accumulator to explain eye-hand coordination. Using a combination of behavior, computational modeling, and electromyography, we show how a single stochastic accumulation to threshold, followed by noisy effector-dependent delays, explains eye-hand RT distributions and their correlation, whereas an alternative model with independent, interacting eye and hand accumulators does not. Interestingly, the common accumulator model did not explain the RT distributions of the same subjects when they made eye and hand movements in isolation. Taken together, these data suggest that a dedicated circuit underlies coordinated eye-hand planning.
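The contrast between the common-command and independent architectures can be sketched by comparing the eye-hand RT correlations each predicts. This is a toy simulation; the gamma and normal parameters below are assumptions, not the authors' fitted values:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# shared decision stage: one accumulator finish time per trial (~180 ms mean)
decision = rng.gamma(shape=9.0, scale=0.02, size=n)

# common-command model: shared decision + noisy effector-specific delays
eye_common = decision + rng.normal(0.05, 0.01, n)
hand_common = decision + rng.normal(0.14, 0.02, n)

# independent model: each effector runs its own accumulator
eye_indep = rng.gamma(9.0, 0.02, n) + rng.normal(0.05, 0.01, n)
hand_indep = rng.gamma(9.0, 0.02, n) + rng.normal(0.14, 0.02, n)

print(np.corrcoef(eye_common, hand_common)[0, 1])  # high (~0.9)
print(np.corrcoef(eye_indep, hand_indep)[0, 1])    # near zero
```

Only the shared decision stage produces the strong trial-by-trial RT coupling observed in coordinated movements; fully independent accumulators predict essentially uncorrelated RTs.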
Collapse
Affiliation(s)
- Atul Gopal
- National Brain Research Centre, Manesar, Haryana, India
| | | | - Aditya Murthy
- Centre for Neuroscience, Indian Institute of Science, Bangalore, Karnataka, India
| |
Collapse
|
45
|
Gnanaseelan R, Gonzalez DA, Niechwiej-Szwedo E. Binocular advantage for prehension movements performed in visually enriched environments requiring visual search. Front Hum Neurosci 2014; 8:959. [PMID: 25506323 PMCID: PMC4246685 DOI: 10.3389/fnhum.2014.00959] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2014] [Accepted: 11/11/2014] [Indexed: 11/13/2022] Open
Abstract
The purpose of this study was to examine the role of binocular vision during a prehension task performed in a visually enriched environment where the target object was surrounded by distractors/obstacles. Fifteen adults reached toward and grasped a cylindrical peg while eye movements and upper limb kinematics were recorded. The complexity of the visual environment was manipulated by varying the number of distractors and by varying the saliency of the target. Gaze behavior (i.e., the latency of the primary gaze shift and frequency of gaze shifts prior to reach initiation) was comparable between viewing conditions. In contrast, a binocular advantage was evident in performance accuracy. Specifically, participants picked up the wrong object twice as often during monocular viewing when the complexity of the environment increased. Reach performance was more efficient during binocular viewing, which was demonstrated by shorter reach reaction time and overall movement time. Reaching movements during the approach phase had higher peak velocity during binocular viewing. During monocular viewing, reach trajectories exhibited a direction bias during the acceleration phase, which was leftward during left eye viewing and rightward during right eye viewing. This bias can be explained by the presence of esophoria in the covered eye. The grasping interval was also extended by ~20% during monocular viewing; however, the duration of the return phase after the target was picked up was comparable across viewing conditions. In conclusion, binocular vision provides important input for planning and execution of prehension movements in visually enriched environments. A binocular advantage was evident regardless of set size or target saliency, indicating that adults plan their movements more cautiously during monocular viewing, even in relatively simple environments with a highly salient target. Nevertheless, in visually-normal adults monocular input provides sufficient information to engage in online control to correct the initial errors in movement planning.
Collapse
Affiliation(s)
- Roshani Gnanaseelan
- Visuomotor Neuroscience Lab, Department of Kinesiology, University of Waterloo, Waterloo, ON, Canada
| | - Dave A Gonzalez
- Visuomotor Neuroscience Lab, Department of Kinesiology, University of Waterloo, Waterloo, ON, Canada
| | - Ewa Niechwiej-Szwedo
- Visuomotor Neuroscience Lab, Department of Kinesiology, University of Waterloo, Waterloo, ON, Canada
| |
Collapse
|
46
|
Abstract
To capture objects by hand, online motor corrections are required to compensate for self-body movements. Recent studies have shown that background visual motion, usually caused by body movement, plays a significant role in such online corrections. Visual motion applied during a reaching movement induces a rapid and automatic manual following response (MFR) in the direction of the visual motion. Importantly, the MFR amplitude is modulated by the gaze direction relative to the reach target location (i.e., foveal or peripheral reaching). That is, the brain specifies the adequate visuomotor gain for an online controller based on gaze-reach coordination. However, the time or state point at which the brain specifies this visuomotor gain remains unclear. More specifically, does the gain change occur even during the execution of reaching? In the present study, we measured MFR amplitudes during a task in which the participant performed a saccadic eye movement that altered the gaze-reach coordination during reaching. The results indicate that the MFR amplitude immediately after the saccade termination changed according to the new gaze-reach coordination, suggesting a flexible online updating of the MFR gain during reaching. An additional experiment showed that this gain updating mostly started before the saccade terminated. Therefore, the MFR gain updating process would be triggered by an ocular command related to saccade planning or execution based on forthcoming changes in the gaze-reach coordination. Our findings suggest that the brain flexibly updates the visuomotor gain for an online controller even during reaching movements based on continuous monitoring of the gaze-reach coordination.
Collapse
Affiliation(s)
- Naotoshi Abekawa
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Wakamiya, Morinosato, Atsugi, Kanagawa, Japan
| | - Hiroaki Gomi
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Wakamiya, Morinosato, Atsugi, Kanagawa, Japan; and CREST, Japan Science and Technology Agency, Kawaguchi, Saitama, Japan
| |
Collapse
|
47
|
Gaveau V, Prablanc C, Laurent D, Rossetti Y, Priot AE. Visuomotor adaptation needs a validation of prediction error by feedback error. Front Hum Neurosci 2014; 8:880. [PMID: 25408644 PMCID: PMC4219430 DOI: 10.3389/fnhum.2014.00880] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2014] [Accepted: 10/13/2014] [Indexed: 11/13/2022] Open
Abstract
The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In the “terminal feedback error” condition, subjects were allowed to view their hand only at movement end, simultaneously with viewing of the target. In the “movement prediction error” condition, viewing of the hand was limited to movement duration, in the absence of any visual target, and error signals arose solely from comparisons between predicted and actual reafferences of the hand. In order to prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the “terminal feedback error” condition only. As long as subjects remained unaware of the optical deviation and attributed pointing errors to themselves, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are eliminated or strongly attenuated.
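The role of the feedback-error signal can be illustrated with a generic trial-by-trial state-space model of adaptation. This is a textbook sketch, not the authors' analysis; the retention and learning-rate values are assumptions:

```python
import numpy as np

def simulate_adaptation(n_trials=100, retention=0.98, learn_rate=0.1,
                        gate_feedback=True):
    """Trial-by-trial state-space model of prism adaptation (a generic
    textbook model, not the authors'). The internal estimate x is updated
    only when a hand-to-target feedback error is available; with feedback
    gated off, prediction error alone drives no update here, mirroring
    the absence of an aftereffect. Parameter values are assumptions."""
    x = 0.0
    prism = np.linspace(0, 10, n_trials)   # gradual, subthreshold ramp (deg)
    history = []
    for p in prism:
        error = (p - x) if gate_feedback else 0.0  # terminal feedback error
        x = retention * x + learn_rate * error
        history.append(x)
    return np.array(history)

print(simulate_adaptation(gate_feedback=True)[-1])   # ≈ 7.7: aftereffect builds up
print(simulate_adaptation(gate_feedback=False)[-1])  # 0.0: no adaptation
```

The gradual ramp keeps the per-trial error small, the same logic the paper uses to keep the deviation below awareness thresholds.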
Collapse
Affiliation(s)
- Valérie Gaveau
- INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center, Bron, France
| | - Claude Prablanc
- INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center, Bron, France; Université Claude Bernard Lyon 1, Villeurbanne, France
| | - Damien Laurent
- INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center, Bron, France
| | - Yves Rossetti
- INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center, Bron, France; Université Claude Bernard Lyon 1, Villeurbanne, France; Mouvement et Handicap, Hôpital Neurologique Pierre Wertheimer, Hospices Civils de Lyon, Bron, France
| | - Anne-Emmanuelle Priot
- INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center, Bron, France; Institut de Recherche Biomédicale des Armées (IRBA), Brétigny-sur-Orge cedex, France
| |
Collapse
|
48
|
Berret B, Bisio A, Jacono M, Pozzo T. Reach endpoint formation during the visuomotor planning of free arm pointing. Eur J Neurosci 2014; 40:3491-503. [PMID: 25209101 DOI: 10.1111/ejn.12721] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2014] [Revised: 08/09/2014] [Accepted: 08/14/2014] [Indexed: 11/30/2022]
Abstract
Volitional motor control generally involves deciding 'where to go' and 'how to go there'. Understanding how these two constituents of motor decision-making are coordinated is an important issue in neuroscience. Although the two processes could be intertwined, they are generally thought to occur in series, whereby visuomotor planning begins with the knowledge of a final hand position to attain. However, daily activities are often compatible with an infinity of final hand positions. The purpose of the present study was to test whether the reach endpoint ('where') is an input of arm motor planning ('how') in such ecological settings. To this end, we considered a free pointing task, namely arm pointing to a long horizontal line, and investigated the formation of the reach endpoint through eye-hand coordination. Although eye movement always preceded hand movement, our results showed that the saccade initiation was delayed by ~120 ms on average when the line was being pointed to as compared with a single target dot; the hand reaction time was identical in the two conditions. When the latency of saccade initiation was relatively brief, subjects often performed double, or even triple, saccades before hand movement onset. The number of saccades triggered was found to significantly increase as a function of the primary saccade latency and accuracy. These results suggest that knowledge about the reach endpoint built up gradually along with the arm motor planning process, and that the oculomotor system delayed the primary reach-related saccade in order to gain more information about the final hand position.
Collapse
|
49
|
Romano G, Viggiano D. Interception of moving objects in karate: an experimental, marker-free benchmark. Muscles Ligaments Tendons J 2014; 4:101-105. [PMID: 25332918 PMCID: PMC4187587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
BACKGROUND: Karate requires optimal interception of the opponent's attack. Particularly in unconstrained situations, normal, untrained subjects solve this problem by adopting rather different solutions. It is currently unknown whether karatekas show a more uniform selection of interception schemes due to their special training. METHODS: Here we applied a 3D scanner to study the movement reproducibility of skilled karatekas in a natural setup, using an unconstrained interception task. Six right-handed karatekas and six controls participated in the study. 3D motion tracking data of the upper limbs were obtained using the Microsoft Kinect sensor, a real-time 3D scanner. The interception task consisted of intercepting and stopping a moving stick directed toward the side of the subject in two different positions (upper and lower). RESULTS: The analysis of hand trajectories showed that the movement strategy differed remarkably between control subjects, whereas it was more uniform among karatekas. Moreover, we observed a significant difference in the variability of the interception point between control subjects and karatekas. CONCLUSION: The results confirm the presence of individual idiosyncratic behavior in interception tasks even in ecologically realistic situations, and show that experience and training (as in karatekas) play an important role in shaping trajectories in interceptive tasks.
Collapse
Affiliation(s)
| | - Davide Viggiano
- Corresponding author: Davide Viggiano, Medicine and Health Sciences, University of Molise, Via De Sanctis, 86100 Campobasso, Italy
| |
Collapse
|
50
|
Sayegh PF, Hawkins KM, Neagu B, Crawford JD, Hoffman KL, Sergio LE. Decoupling the actions of the eyes from the hand alters beta and gamma synchrony within SPL. J Neurophysiol 2014; 111:2210-21. [PMID: 24598517 DOI: 10.1152/jn.00793.2013] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Eye-hand coordination is crucial for our ability to interact with the world around us. However, many of the visually guided reaches that we perform require a spatial decoupling between gaze direction and hand orientation. These complex decoupled reaching movements are in contrast to more standard eye and hand reaching movements in which the eyes and the hand are coupled. The superior parietal lobule (SPL) receives converging eye and hand signals; however, what is yet to be understood is how the activity within this region is modulated during decoupled eye and hand reaches. To address this, we recorded local field potentials within SPL from two rhesus macaques during coupled vs. decoupled eye and hand movements. Overall we observed a distinct separation in synchrony within the lower 10- to 20-Hz beta range from that in the higher 30- to 40-Hz gamma range. Specifically, within the early planning phase, beta synchrony dominated; however, the onset of this sustained beta oscillation occurred later during eye-hand decoupled vs. coupled reaches. As the task progressed, there was a switch to low-frequency and gamma-dominated responses, specifically for decoupled reaches. More importantly, we observed local field potential activity to be a stronger predictor of task (coupled vs. decoupled) and state (planning vs. execution) than single-unit activity alone. Our results provide further insight into the computations of SPL for visuomotor transformations and highlight the necessity of accounting for the decoupled eye-hand nature of a motor task when interpreting movement control research data.
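Band-limited LFP power of the kind compared here (10-20 Hz beta vs. 30-40 Hz gamma) can be estimated with a simple FFT periodogram. This is an illustrative sketch on synthetic data, not the authors' synchrony analysis:

```python
import numpy as np

def band_power(lfp, fs, lo, hi):
    """Mean periodogram power of an LFP trace in the [lo, hi] Hz band.
    Illustrative only; the paper's synchrony analysis is more involved."""
    x = np.asarray(lfp, float)
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x)) ** 2 / x.size
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spec[band].mean()

# synthetic "planning-phase" LFP: strong 15-Hz beta plus weak 35-Hz gamma
fs = 1000
t = np.arange(0, 2, 1 / fs)
lfp = (np.sin(2 * np.pi * 15 * t) + 0.2 * np.sin(2 * np.pi * 35 * t)
       + 0.1 * np.random.default_rng(4).standard_normal(t.size))

print(band_power(lfp, fs, 10, 20) > band_power(lfp, fs, 30, 40))  # True
```

Tracking these two band powers across task epochs is the simplest way to see the beta-to-gamma handoff the abstract describes.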
Collapse
Affiliation(s)
- Patricia F Sayegh
- School of Kinesiology and Health Science, York University, Toronto, Ontario, Canada; Centre for Vision Research, York University, Toronto, Ontario, Canada; Canadian Action and Perception Network, Toronto, Ontario, Canada
| | - Kara M Hawkins
- School of Kinesiology and Health Science, York University, Toronto, Ontario, Canada; Centre for Vision Research, York University, Toronto, Ontario, Canada; Canadian Action and Perception Network, Toronto, Ontario, Canada
| | - Bogdan Neagu
- Canadian Action and Perception Network, Toronto, Ontario, Canada; and Division of Neurology, University of Toronto, Toronto, Ontario, Canada
| | - J Douglas Crawford
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Department of Psychology, York University, Toronto, Ontario, Canada; Canadian Action and Perception Network, Toronto, Ontario, Canada
| | - Kari L Hoffman
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Department of Psychology, York University, Toronto, Ontario, Canada; Canadian Action and Perception Network, Toronto, Ontario, Canada
| | - Lauren E Sergio
- School of Kinesiology and Health Science, York University, Toronto, Ontario, Canada; Centre for Vision Research, York University, Toronto, Ontario, Canada; Canadian Action and Perception Network, Toronto, Ontario, Canada
| |
Collapse
|