1. Sulpizio V, Fattori P, Pitzalis S, Galletti C. Functional organization of the caudal part of the human superior parietal lobule. Neurosci Biobehav Rev 2023;153:105357. [PMID: 37572972] [DOI: 10.1016/j.neubiorev.2023.105357]
Abstract
As in the macaque, the caudal portion of the human superior parietal lobule (SPL) plays a key role in a series of perceptual, visuomotor, and somatosensory processes. Here, we review the functional properties of three separate portions of the caudal SPL: the posterior parieto-occipital sulcus (POs), the anterior POs, and the anterior part of the caudal SPL. We propose that the posterior POs is mainly dedicated to the analysis of visual motion cues useful for detecting object motion during self-motion and for spatial navigation, while the more anterior parts are implicated in the visuomotor control of limb actions. The anterior POs is mainly involved in using the spotlight of attention to guide reach-to-grasp hand movements, especially in dynamic environments. The anterior part of the caudal SPL plays a central role in visually guided locomotion: it is implicated in controlling leg-related movements and the interaction of all four limbs with the environment, and in encoding egomotion-compatible optic flow. Together, these functions reveal how strongly the caudal SPL is implicated in skilled, visually guided behavior.
Affiliation(s)
- Valentina Sulpizio
- Department of Psychology, Sapienza University, Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Sabrina Pitzalis
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
- Claudio Galletti
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
2. Hadjidimitrakis K, De Vitis M, Ghodrati M, Filippini M, Fattori P. Anterior-posterior gradient in the integrated processing of forelimb movement direction and distance in macaque parietal cortex. Cell Rep 2022;41:111608. [DOI: 10.1016/j.celrep.2022.111608]
3. Bosco A, Bertini C, Filippini M, Foglino C, Fattori P. Machine learning methods detect arm movement impairments in a patient with parieto-occipital lesion using only early kinematic information. J Vis 2022;22:3. [PMID: 36069943] [PMCID: PMC9465938] [DOI: 10.1167/jov.22.10.3]
Abstract
Patients with lesions of the parieto-occipital cortex typically misreach visual targets that they correctly perceive (optic ataxia). Although optic ataxia was described more than 30 years ago, distinguishing this condition from physiological behavior using kinematic data remains an open challenge. Here, combining kinematic analysis with machine learning methods, we compared the reaching performance of a patient with bilateral occipitoparietal damage with that of 10 healthy controls, who performed visually guided reaches toward targets located at different depths and directions. Using the horizontal, sagittal, and vertical deviations of the trajectories, we computed the classification accuracy in discriminating the reaching performance of the patient from that of the controls. Accurate predictions of the patient's deviations were obtained after only 20% of movement execution in all the spatial positions tested. This classification based on initial trajectory decoding was possible for both the directional and the depth components of the movement, suggesting that the method could be applied to characterize pathological motor behavior in wider frameworks.
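The idea of classifying a trial from early-trajectory deviations can be sketched with a simple nearest-centroid rule. This is an illustrative toy example, not the authors' actual pipeline or data: the feature values (horizontal, sagittal, vertical deviations at 20% of movement execution) are hypothetical.

```python
# Toy sketch: label a reach trial as "patient"-like or "control"-like from
# early-trajectory deviation features using a nearest-centroid classifier.
from statistics import mean

def centroid(rows):
    # Component-wise mean of a list of feature vectors.
    return [mean(col) for col in zip(*rows)]

def classify(trial, centroids):
    # Assign the label whose centroid is closest in Euclidean distance.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda lab: dist(trial, centroids[lab]))

# Hypothetical [horizontal, sagittal, vertical] deviations (cm)
# measured at 20% of movement execution.
control_trials = [[0.4, 0.3, 0.2], [0.5, 0.2, 0.3], [0.3, 0.4, 0.2]]
patient_trials = [[1.8, 1.2, 0.9], [2.0, 1.5, 1.1], [1.7, 1.3, 1.0]]

cents = {"control": centroid(control_trials),
         "patient": centroid(patient_trials)}

print(classify([1.9, 1.4, 1.0], cents))  # prints "patient"
```

In the study itself, classification accuracy would be estimated with held-out trials (e.g., leave-one-out), but the core step, comparing an early-deviation feature vector against group prototypes, is the same.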
Affiliation(s)
- Annalisa Bosco
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Alma Mater Research Institute For Human-Centered Artificial Intelligence (Alma Human AI), University of Bologna, Bologna, Italy
- Caterina Bertini
- Department of Psychology, University of Bologna, Bologna, Italy
- CsrNC, Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Bologna, Italy
- Matteo Filippini
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Caterina Foglino
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Alma Mater Research Institute For Human-Centered Artificial Intelligence (Alma Human AI), University of Bologna, Bologna, Italy
4. Dunn JA, Taylor CE, Wong B, Henninger HB, Bachus KN, Foreman KB. Testing Precision and Accuracy of an Upper Extremity Proprioceptive Targeting Task Assessment. Arch Rehabil Res Clin Transl 2022;4:100202. [PMID: 36123975] [PMCID: PMC9482043] [DOI: 10.1016/j.arrct.2022.100202]
Abstract
Objective: To develop and test an assessment measuring extended physiological proprioception (EPP). EPP is a learned skill that allows one to extend proprioception to an external tool, which is important for controlling prosthetic devices. The current study examines the ability of this assessment to measure EPP in a nonamputee population for translation into the affected population.
Design: Measurement of precision and accuracy in an upper extremity (UE) proprioceptive targeting task. Participants completed 2 sessions of a targeting task while seated at a table, targeting with the dominant and nondominant hand and with eyes open and eyes closed. The 2 sessions were separated by a 1-week washout period to simulate a reasonable time between clinical visits.
Setting: Research laboratory.
Participants: Twenty right-handed participants (N=20; 10 men, 10 women) with no neurologic or orthopedic deficits that would interfere with proprioception, median age 25 years (range, 19-33 years), completed the assessment.
Interventions: Not applicable.
Main Outcome Measures: Precision (consistency in targeting) and accuracy (distance between the intended target and the participant's result) in a UE targeting task using EPP; test-retest repeatability between sessions.
Results: Both precision and accuracy were significantly decreased in the eyes-closed condition compared with the eyes-open condition, regardless of whether targeting was performed with the dominant or nondominant hand (all P<.001). In the eyes-open condition there was a dominance effect on accuracy; in the eyes-closed condition, however, accuracy was statistically equivalent between dominant and nondominant hands. Based on the minimum detectable change with 95% confidence, neither metric changed between the first and second sessions.
Conclusions: The results of this study support the feasibility of using this assessment to measure EPP (defined as a learned skill that indicates control over an external, simple tool) because they demonstrate reliance on proprioception in the eyes-closed condition, symmetry in proprioceptive accuracy between hands for within-participant control, and test-retest reliability for longitudinal measurements. The results also establish normative values for this assessment in young, healthy adults. Further research in a clinical population is required to evaluate the UE proprioceptive targeting task assessment further and to collect objective data on EPP.
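The two outcome measures defined in this abstract can be made concrete with a short sketch: accuracy as the mean distance of endpoints from the intended target, and precision as the dispersion of endpoints around their own centroid. The endpoint coordinates below are hypothetical, chosen only to illustrate the eyes-open versus eyes-closed contrast; this is not the study's data or analysis code.

```python
# Illustrative computation of targeting accuracy and precision from 2-D
# endpoint coordinates (units arbitrary; data are hypothetical).
from statistics import mean

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def accuracy(endpoints, target):
    # Mean endpoint-to-target distance; lower = more accurate.
    return mean(dist(p, target) for p in endpoints)

def precision(endpoints):
    # Mean distance of endpoints from their own centroid; lower = more precise.
    c = [mean(col) for col in zip(*endpoints)]
    return mean(dist(p, c) for p in endpoints)

target = (0.0, 0.0)
eyes_open = [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1)]
eyes_closed = [(1.0, 0.5), (1.4, 0.9), (0.6, 0.7), (1.2, 0.3)]

print(accuracy(eyes_open, target) < accuracy(eyes_closed, target))  # True
print(precision(eyes_open) < precision(eyes_closed))                # True
```

Note that the two measures can dissociate: endpoints can cluster tightly (high precision) far from the target (low accuracy), which is why the study reports them separately.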
Affiliation(s)
- Julia A. Dunn
- Department of Orthopedics, University of Utah, Salt Lake City, UT
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT
- Carolyn E. Taylor
- Department of Orthopedics, University of Utah, Salt Lake City, UT
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT
- Bob Wong
- College of Nursing, University of Utah, Salt Lake City, UT
- Heath B. Henninger
- Department of Orthopedics, University of Utah, Salt Lake City, UT
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT
- Kent N. Bachus
- Department of Orthopedics, University of Utah, Salt Lake City, UT
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT
- Department of Veterans Affairs, Salt Lake City, UT
- Kenneth B. Foreman
- Department of Orthopedics, University of Utah, Salt Lake City, UT
- Department of Veterans Affairs, Salt Lake City, UT
- Department of Physical Therapy and Athletic Training, University of Utah, Salt Lake City, UT
5. The posterior parietal area V6A: an attentionally-modulated visuomotor region involved in the control of reach-to-grasp action. Neurosci Biobehav Rev 2022;141:104823. [PMID: 35961383] [DOI: 10.1016/j.neubiorev.2022.104823]
Abstract
In the macaque, the posterior parietal area V6A is involved in the control of all phases of reach-to-grasp actions: the transport phase, given that reaching neurons are sensitive to the direction and amplitude of arm movement, and the grasping phase, since reaching neurons are also sensitive to wrist orientation and hand shaping. Reaching and grasping activity are corollary discharges which, together with the somatosensory and visual signals related to the same movement, allow V6A to act as a state estimator that signals discrepancies during the motor act in order to maintain consistency between the ongoing movement and the desired one. Area V6A is also able to encode the target of an action because of gaze-dependent visual neurons and real-position cells. Here, we advance the hypothesis that V6A also uses the spotlight of attention to guide goal-directed movements of the hand, and hosts a priority map that is specific for the guidance of reaching arm movement, combining bottom-up inputs such as visual responses with top-down signals such as reaching plans.
6. High proprioceptive acuity in slow and fast hand movements. Exp Brain Res 2022;240:1791-1800. [DOI: 10.1007/s00221-022-06362-2]
7. Savaki HE, Kavroulakis E, Papadaki E, Maris TG, Simos PG. Action Observation Responses Are Influenced by Movement Kinematics and Target Identity. Cereb Cortex 2021;32:490-503. [PMID: 34259867] [DOI: 10.1093/cercor/bhab225]
Abstract
In order to inform the debate whether cortical areas related to action observation provide a pragmatic or a semantic representation of goal-directed actions, we performed 2 functional magnetic resonance imaging (fMRI) experiments in humans. The first experiment, involving observation of aimless arm movements, resulted in activation of most of the components known to support action execution and action observation. Given the absence of a target/goal in this experiment and the activation of parieto-premotor cortical areas, which were associated in the past with direction, amplitude, and velocity of movement of biological effectors, our findings suggest that during action observation we could be monitoring movement kinematics. With the second, double dissociation fMRI experiment, we revealed the components of the observation-related cortical network affected by 1) actions that have the same target/goal but different reaching and grasping kinematics and 2) actions that have very similar kinematics but different targets/goals. We found that certain areas related to action observation, including the mirror neuron ones, are informed about movement kinematics and/or target identity, hence providing a pragmatic rather than a semantic representation of goal-directed actions. Overall, our findings support a process-driven simulation-like mechanism of action understanding, in agreement with the theory of motor cognition, and question motor theories of action concept processing.
Affiliation(s)
- Helen E Savaki
- Institute of Applied and Computational Mathematics, Foundation for Research and Technology Hellas, Iraklion, Crete 70013, Greece; Faculty of Medicine, School of Health Sciences, University of Crete, Iraklion, Crete 70013, Greece
- Eleftherios Kavroulakis
- Faculty of Medicine, School of Health Sciences, University of Crete, Iraklion, Crete 70013, Greece
- Efrosini Papadaki
- Faculty of Medicine, School of Health Sciences, University of Crete, Iraklion, Crete 70013, Greece; Computational Bio-Medicine Laboratory, Institute of Computer Science, Foundation for Research and Technology Hellas, Iraklion, Crete 70013, Greece
- Thomas G Maris
- Faculty of Medicine, School of Health Sciences, University of Crete, Iraklion, Crete 70013, Greece; Computational Bio-Medicine Laboratory, Institute of Computer Science, Foundation for Research and Technology Hellas, Iraklion, Crete 70013, Greece
- Panagiotis G Simos
- Faculty of Medicine, School of Health Sciences, University of Crete, Iraklion, Crete 70013, Greece; Computational Bio-Medicine Laboratory, Institute of Computer Science, Foundation for Research and Technology Hellas, Iraklion, Crete 70013, Greece
8. A brief glimpse at a haptic target is sufficient for multisensory integration in reaching movements. Vision Res 2021;185:50-57. [PMID: 33895647] [DOI: 10.1016/j.visres.2021.03.012]
Abstract
Goal-directed aiming movements toward visuo-haptic targets (i.e., seen and handheld targets) are generally more precise than those toward visual only or haptic only targets. This multisensory advantage stems from a continuous inflow of haptic and visual target information during the movement planning and execution phases. However, in everyday life, multisensory movements often occur without the support of continuous visual information. Here we investigated whether and to what extent limiting visual information to the initial stage of the action still leads to a multisensory advantage. Participants were asked to reach a handheld target while vision was briefly provided during the movement planning phase (50 ms, 100 ms, 200 ms of vision before movement onset), or during the planning and early execution phases (400 ms of vision), or during the entire movement. Additional conditions were performed in which only haptic target information was provided, or, only vision was provided either briefly (50 ms, 100 ms, 200 ms, 400 ms) or throughout the entire movement. Results showed that 50 ms of vision before movement onset were sufficient to trigger a direction-specific visuo-haptic integration process that increased endpoint precision. We conclude that, when a continuous support of vision is not available, endpoint precision is determined by the less recent, but most reliable multisensory information rather than by the latest unisensory (haptic) inputs.
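The multisensory precision advantage described in this abstract is commonly modeled with the standard minimum-variance (maximum-likelihood) cue-combination rule, in which each cue is weighted by its reliability (inverse variance). The sketch below is a generic textbook illustration of that rule, not the analysis used in the study; the position estimates and variances are hypothetical.

```python
# Reliability-weighted combination of two independent cues (e.g., a visual
# and a haptic estimate of target position). Values are hypothetical.

def combine(est_a, var_a, est_b, var_b):
    # Weight of cue A is its relative reliability (inverse variance).
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    combined = w_a * est_a + (1 - w_a) * est_b
    # The combined variance is lower than either cue's variance alone,
    # which is the multisensory precision advantage.
    var_combined = 1 / (1 / var_a + 1 / var_b)
    return combined, var_combined

# Hypothetical target-position estimates (cm) from vision and haptics.
pos, var = combine(est_a=10.0, var_a=1.0, est_b=12.0, var_b=4.0)
print(pos, var)  # combined estimate ~10.4, combined variance ~0.8
```

Under this rule, even a brief glimpse that yields a usable visual estimate can reduce endpoint variance below what haptics alone would give, consistent with the 50-ms result reported above.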
9. Arikan BE, Voudouris D, Voudouri-Gertz H, Sommer J, Fiehler K. Reach-relevant somatosensory signals modulate activity in the tactile suppression network. Neuroimage 2021;236:118000. [PMID: 33864902] [DOI: 10.1016/j.neuroimage.2021.118000]
Abstract
Somatosensory signals on a moving limb are typically suppressed. This results mainly from a predictive mechanism that generates an efference copy, and attenuates the predicted sensory consequences of that movement. Sensory feedback is, however, important for movement control. Behavioral studies show that the strength of suppression on a moving limb increases during somatosensory reaching, when reach-relevant somatosensory signals from the target limb can be additionally used to plan and guide the movement, leading to increased reliability of sensorimotor predictions. It is still unknown how this suppression is neurally implemented. In this fMRI study, participants reached to a somatosensory (static finger) or an external target (touch-screen) without vision. To probe suppression, participants detected brief vibrotactile stimuli on their moving finger shortly before reach onset. As expected, sensitivity to probes was reduced during reaching compared to baseline (resting), and this suppression was stronger during somatosensory than external reaching. BOLD activation associated with suppression was also modulated by the reach target: relative to baseline, processing of probes during somatosensory reaching led to distinct BOLD deactivations in somatosensory regions (postcentral gyrus, supramarginal gyrus-SMG) whereas probes during external reaching led to deactivations in the cerebellum. In line with the behavioral results, we also found additional deactivations during somatosensory relative to external reaching in the supplementary motor area, a region linked with sensorimotor prediction. Somatosensory reaching was also linked with increased functional connectivity between the left SMG and the right parietal operculum along with the right anterior insula. 
We show that somatosensory processing on a moving limb is reduced when additional reach-relevant feedback signals from the target limb contribute to the movement, by down-regulating activation in regions associated with predictive and feedback processing.
Affiliation(s)
- Belkis Ezgi Arikan
- Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel Str. 10F, D-35394 Giessen, Germany
- Dimitris Voudouris
- Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel Str. 10F, D-35394 Giessen, Germany
- Hanna Voudouri-Gertz
- Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel Str. 10F, D-35394 Giessen, Germany
- Jens Sommer
- Core Facility Brain Imaging, Faculty of Medicine, Philipps University Marburg, Rudolf-Bultmann-Str. 9, 35039 Marburg, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel Str. 10F, D-35394 Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
10. Bernard-Espina J, Beraneck M, Maier MA, Tagliabue M. Multisensory Integration in Stroke Patients: A Theoretical Approach to Reinterpret Upper-Limb Proprioceptive Deficits and Visual Compensation. Front Neurosci 2021;15:646698. [PMID: 33897359] [PMCID: PMC8058201] [DOI: 10.3389/fnins.2021.646698]
Abstract
For reaching and grasping, as well as for manipulating objects, optimal hand motor control arises from the integration of multiple sources of sensory information, such as proprioception and vision. For this reason, the proprioceptive deficits often observed in stroke patients have a significant impact on the integrity of motor functions. The present targeted review reanalyzes previous findings about proprioceptive upper-limb deficits in stroke patients, as well as their ability to compensate for these deficits using vision. Our theoretical approach is based on two concepts: first, the description of multisensory integration using statistical optimization models; second, the insight that sensory information is not only encoded in the reference frame of origin (e.g., retinal and joint space for vision and proprioception, respectively), but also in higher-order sensory spaces. Combining these two concepts within a single framework appears to account for the heterogeneity of experimental findings reported in the literature. The present analysis suggests that functional upper-limb post-stroke deficits may be due not only to an impairment of the proprioceptive system per se, but also to deficiencies in cross-reference processing, that is, in the ability to encode proprioceptive information in a non-joint space. The distinction between purely proprioceptive and cross-reference-related deficits can account for two experimental observations: first, one and the same patient can perform differently depending on the specific proprioceptive assessment; second, a given behavioral assessment yields large variability across patients. The distinction between sensory and cross-reference deficits is also supported by a targeted literature review on the relation between cerebral structure and proprioceptive function.
This theoretical framework has the potential to lead to a new stratification of patients with proprioceptive deficits, and may offer a novel approach to post-stroke rehabilitation.
Affiliation(s)
- Marc A Maier
- Université de Paris, INCC UMR 8002, CNRS, Paris, France
11. Kisiel-Sajewicz K, Marusiak J, Rojas-Martínez M, Janecki D, Chomiak S, Kamiński Ł, Mencel J, Mañanas MÁ, Jaskólski A, Jaskólska A. High-density surface electromyography maps after computer-aided training in individual with congenital transverse deficiency: a case study. BMC Musculoskelet Disord 2020;21:682. [PMID: 33059684] [PMCID: PMC7566138] [DOI: 10.1186/s12891-020-03694-4]
Abstract
BACKGROUND: The aim of this study was to determine whether computer-aided training (CAT) of motor tasks would increase muscle activity and change its spatial distribution in a patient with a bilateral upper-limb congenital transverse deficiency. We believe that our study makes a significant contribution to the literature because it demonstrates the usefulness of CAT in promoting neuromuscular adaptation in people with congenital limb deficiencies and altered body image.
CASE PRESENTATION: The patient with bilateral upper-limb congenital transverse deficiency and a healthy control subject performed 12 weeks of CAT. The subjects' task was to imagine reaching for and grasping a book with the hand. Subjects were provided with a visual animation of that movement and with sensory feedback to facilitate the mental engagement needed to accomplish the task. High-density electromyography (HD-EMG; 64 electrodes) was recorded from the trapezius muscle during an isometric shrug contraction before and after 4, 8, and 12 weeks of training. After training, we observed in our patient changes in the spatial distribution of activation, together with increases in the average intensity of the EMG maps and in maximal force.
CONCLUSIONS: These results, although from only one patient, suggest that mental training supported by computer-generated visual and sensory stimuli leads to beneficial changes in muscle strength and activity. The increased muscle activation and changed spatial distribution of EMG activity after mental training may indicate training-induced functional plasticity of the motor activation strategy within the trapezius muscle in individuals with bilateral upper-limb congenital transverse deficiency. Marked changes in spatial distribution during the submaximal contraction after training could be associated with changes in the neural drive to the muscle, corresponding to this specific (unfamiliar to the patient) motor task. These findings are relevant to neuromuscular functional rehabilitation in patients with a bilateral upper-limb congenital transverse deficiency, especially before and after upper-limb transplantation, and to the development of EMG-based prostheses.
Affiliation(s)
- Katarzyna Kisiel-Sajewicz
- Department of Kinesiology, Faculty of Physiotherapy, University School of Physical Education in Wrocław, Al. I.J. Paderewskiego 35, P4, 51-612, Wrocław, Poland
- Jarosław Marusiak
- Department of Kinesiology, Faculty of Physiotherapy, University School of Physical Education in Wrocław, Al. I.J. Paderewskiego 35, P4, 51-612, Wrocław, Poland
- Mónica Rojas-Martínez
- Department of Bioengineering, Faculty of Engineering, Universidad El Bosque, No 131 A, Ak. 9 #131a2, Bogotá, Colombia
- Damian Janecki
- Department of Kinesiology, Faculty of Physiotherapy, University School of Physical Education in Wrocław, Al. I.J. Paderewskiego 35, P4, 51-612, Wrocław, Poland
- Sławomir Chomiak
- Department of Kinesiology, Faculty of Physiotherapy, University School of Physical Education in Wrocław, Al. I.J. Paderewskiego 35, P4, 51-612, Wrocław, Poland
- Łukasz Kamiński
- Department of Kinesiology, Faculty of Physiotherapy, University School of Physical Education in Wrocław, Al. I.J. Paderewskiego 35, P4, 51-612, Wrocław, Poland
- Joanna Mencel
- Department of Kinesiology, Faculty of Physiotherapy, University School of Physical Education in Wrocław, Al. I.J. Paderewskiego 35, P4, 51-612, Wrocław, Poland
- Miguel Ángel Mañanas
- Biomedical Engineering Research Centre and Biomedical Research Networking Center in Bioengineering, Biomaterials and Nanomedicine, Universitat Politècnica de Catalunya, Avinguda Diagonal, 647, 08028, Barcelona, Spain
- Artur Jaskólski
- Department of Kinesiology, Faculty of Physiotherapy, University School of Physical Education in Wrocław, Al. I.J. Paderewskiego 35, P4, 51-612, Wrocław, Poland
- Anna Jaskólska
- Department of Kinesiology, Faculty of Physiotherapy, University School of Physical Education in Wrocław, Al. I.J. Paderewskiego 35, P4, 51-612, Wrocław, Poland
12. Goettker A, Fiehler K, Voudouris D. Somatosensory target information is used for reaching but not for saccadic eye movements. J Neurophysiol 2020;124:1092-1102. [DOI: 10.1152/jn.00258.2020]
Abstract
A systematic investigation of contributions of different somatosensory modalities (proprioception, kinesthesia, tactile) for goal-directed movements is missing. Here we demonstrate that while eye movements are not affected by different types of somatosensory information, reach precision improves when two different types of information are available. Moreover, reach accuracy and gaze precision to unseen somatosensory targets improve when performing coordinated eye-hand movements, suggesting bidirectional contributions of efferent information in reach and eye movement control.
Affiliation(s)
- Alexander Goettker
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
- Dimitris Voudouris
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
13. Parietal Cortex Integrates Saccade and Object Orientation Signals to Update Grasp Plans. J Neurosci 2020;40:4525-4535. [PMID: 32354854] [DOI: 10.1523/jneurosci.0300-20.2020]
Abstract
Coordinated reach-to-grasp movements are often accompanied by rapid eye movements (saccades) that displace the desired object image relative to the retina. Parietal cortex compensates for this by updating reach goals relative to current gaze direction, but its role in the integration of oculomotor and visual orientation signals for updating grasp plans is unknown. Based on a recent perceptual experiment, we hypothesized that inferior parietal cortex (specifically supramarginal gyrus [SMG]) integrates saccade and visual signals to update grasp plans in additional intraparietal/superior parietal regions. To test this hypothesis in humans (7 females, 6 males), we used a functional magnetic resonance imaging paradigm in which saccades sometimes interrupted grasp preparation toward a briefly presented object that later reappeared (with the same or a different orientation) just before movement. Right SMG and several parietal grasp regions, namely, left anterior intraparietal sulcus and bilateral superior parietal lobule, met our criteria for transsaccadic orientation integration: they showed task-dependent saccade modulations and, during grasp execution, they were specifically sensitive to changes in object orientation that followed saccades. Finally, SMG showed enhanced functional connectivity with both prefrontal saccade regions (consistent with oculomotor input) and anterior intraparietal sulcus/superior parietal lobule (consistent with sensorimotor output). These results support the general role of parietal cortex in the integration of visuospatial perturbations, and identify specific cortical modules for the integration of oculomotor and visual signals during grasp updating.
SIGNIFICANCE STATEMENT: How does the brain simultaneously compensate for both externally and internally driven changes in visual input? For example, how do we grasp an unstable object while eye movements are simultaneously changing its retinal location?
Here, we used fMRI to identify a group of inferior parietal (supramarginal gyrus) and superior parietal (intraparietal and superior parietal) regions that show saccade-specific modulations during unexpected changes in object/grasp orientation, and functional connectivity with frontal cortex saccade centers. This provides a network, complementary to the reach goal updater, that integrates visuospatial updating into grasp plans, and may help to explain some of the more complex symptoms associated with parietal damage, such as constructional ataxia.
14
Dandu B, Kuling IA, Visell Y. Proprioceptive Localization of the Fingers: Coarse, Biased, and Context-Sensitive. IEEE Trans Haptics 2020; 13:259-269. [PMID: 30762567] [DOI: 10.1109/toh.2019.2899302]
Abstract
The proprioceptive sense provides somatosensory information about the positions of parts of the body, information that is essential for guiding behavior and monitoring the body. Few studies have investigated the perceptual localization of individual fingers, despite their importance for tactile exploration and fine manipulation. We present two experiments assessing the performance of proprioceptive localization of multiple fingers, either alone or in combination with visual cues. In the first experiment, we used a virtual reality paradigm to assess localization of multiple fingers. Surprisingly, the errors averaged 3.7 cm per digit, a significant fraction of the range of motion of any finger. Both random and systematic errors were large; the latter included participant-specific biases and participant-independent distortions that echoed observations from prior studies of perceptual representations of hand shape. In a second experiment, we introduced visual cues about the positions of nearby fingers and observed that this contextual information could greatly decrease localization errors. The results suggest that only coarse proprioceptive information is available through somatosensation, and that finer information may not be necessary for fine motor behavior. These findings may help elucidate human hand function, and inform new applications in the design of human-computer interfaces or interactions in virtual reality.
15
Camponogara I, Volcic R. Grasping movements toward seen and handheld objects. Sci Rep 2019; 9:3665. [PMID: 30842478] [PMCID: PMC6403353] [DOI: 10.1038/s41598-018-38277-w]
Abstract
Grasping movements are typically performed toward visually sensed objects. However, planning and execution of grasping movements can also be supported by haptic information when we grasp objects held in the other hand. In the present study we investigated this sensorimotor integration process by comparing grasping movements towards objects sensed through visual, haptic or visuo-haptic signals. When movements were based on haptic information only, hand preshaping was initiated earlier, the digits closed on the object more slowly, and the final phase was more cautious compared to movements based on visual information only. Importantly, the simultaneous availability of vision and haptics led to faster movements and to an overall decrease of the grip aperture. Our findings also show that each modality contributes to a different extent in different phases of the movement, with haptics being more crucial in the initial phases and vision being more important for the final online control. Thus, vision and haptics can be flexibly combined to optimize the execution of grasping movements.
Affiliation(s)
- Ivan Camponogara
- Department of Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Robert Volcic
- Department of Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
16
Tugac N, Gonzalez D, Noguchi K, Niechwiej-Szwedo E. The role of somatosensory input in target localization during binocular and monocular viewing while performing a high precision reaching and placement task. Exp Eye Res 2018; 183:76-83. [PMID: 30125540] [DOI: 10.1016/j.exer.2018.08.013]
Abstract
Binocular vision provides the most accurate and precise depth information; however, many people have impairments in binocular visual function. It is possible that other sensory inputs could be used to obtain reliable depth information when binocular vision is not available. However, it is currently unknown whether depth information from another modality improves target localization in depth during action execution. Therefore, the goal of this study was to assess whether somatosensory input improves target localization during the performance of a precision placement task. Visually normal young adults (n = 15) performed a bead threading task during binocular and monocular viewing in two experimental conditions where needle location was specified by 1) vision only, or 2) vision and somatosensory input, which was provided by the non-dominant limb. Performance on the task was assessed using spatial and temporal kinematic measures. In accordance with the hypothesis, results showed that the interval spent placing the bead on the needle was significantly shorter during monocular viewing when somatosensory input was available in comparison to the vision-only condition. In contrast, results showed no evidence that somatosensory input about the needle location affects trajectory control. These findings demonstrate that the central nervous system relies predominantly on visual input during reach execution; however, somatosensory input can be used to facilitate performance of the precision placement task.
Affiliation(s)
- Naime Tugac
- Department of Kinesiology, University of Waterloo, Waterloo, Canada
- David Gonzalez
- Department of Kinesiology, University of Waterloo, Waterloo, Canada
- Kimihiro Noguchi
- Department of Mathematics, Western Washington University, Bellingham, USA
17
Brand J, Michels L, Bakker R, Hepp-Reymond MC, Kiper D, Morari M, Eng K. Neural correlates of visuomotor adjustments during scaling of human finger movements. Eur J Neurosci 2018; 46:1717-1729. [PMID: 28503804] [DOI: 10.1111/ejn.13606]
Abstract
Visually guided finger movements include online feedback of current effector position to guide target approach. This visual feedback may be scaled or otherwise distorted by unpredictable perturbations. Although adjustments to visual feedback scaling have been studied before, the underlying brain activation differences between upscaling (visual feedback larger than real movement) and downscaling (feedback smaller than real movement) are currently unknown. Brain activation differences between upscaling and downscaling might be expected because within-trial adjustments during upscaling require corrective backwards accelerations, whereas correcting for downscaling requires forward accelerations. In this behavioural and fMRI study we investigated adjustments during up- and downscaling in a target-directed finger flexion-extension task with real-time visual feedback. We found that subjects made longer and more complete within-trial corrections for downscaling perturbations than for upscaling perturbations. The finger task activated primary motor (M1) and somatosensory (S1) areas, premotor and parietal regions, basal ganglia, and cerebellum. General scaling effects were seen in the right pre-supplementary motor area, dorsal anterior cingulate cortex, inferior parietal lobule, and dorsolateral prefrontal cortex. Stronger activations for down- than for upscaling were observed in M1, supplementary motor area (SMA), S1 and anterior cingulate cortex. We argue that these activation differences may reflect differing online correction for upscaling vs. downscaling during finger flexion-extension.
Affiliation(s)
- Johannes Brand
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland; Automatic Control Laboratory, ETH Zurich, Zurich, Switzerland; Neuroscience Center Zurich, Zurich, Switzerland
- Lars Michels
- Clinic of Neuroradiology, University Hospital Zurich, Zurich, Switzerland; Centre for MR-Research, University Children's Hospital, Zurich, Switzerland
- Romy Bakker
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland; Neuroscience Center Zurich, Zurich, Switzerland; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Marie-Claude Hepp-Reymond
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland; Neuroscience Center Zurich, Zurich, Switzerland
- Daniel Kiper
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland; Neuroscience Center Zurich, Zurich, Switzerland
- Manfred Morari
- Automatic Control Laboratory, ETH Zurich, Zurich, Switzerland
- Kynan Eng
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland; Neuroscience Center Zurich, Zurich, Switzerland
18
Chen J, Sperandio I, Goodale MA. Proprioceptive Distance Cues Restore Perfect Size Constancy in Grasping, but Not Perception, When Vision Is Limited. Curr Biol 2018; 28:927-932.e4. [DOI: 10.1016/j.cub.2018.01.076]
19
Maij F, Wing AM, Medendorp WP. Afferent motor feedback determines the perceived location of tactile stimuli in the external space presented to the moving arm. J Neurophysiol 2017; 118:187-193. [PMID: 28356475] [DOI: 10.1152/jn.00286.2016]
Abstract
People make systematic errors when localizing a brief tactile stimulus in the external space presented on the index finger while moving the arm. Although these errors likely arise in the spatiotemporal integration of the tactile input and information about arm position, the underlying arm position information used in this process is not known. In this study, we tested the contributions of afferent proprioceptive feedback and predictive arm position signals by comparing localization errors during passive vs. active arm movements. In the active trials, participants were instructed to localize a tactile stimulus in the external space that was presented to the index finger near the time of a self-generated arm movement. In the passive trials, each of the active trials was passively replayed in randomized order, using a robotic device. Our results provide evidence that the localization error patterns of the passive trials are similar to the active trials and, moreover, did not lag but rather led the active trials, which suggests that proprioceptive feedback makes an important contribution to tactile localization. To further test which kinematic property of this afferent feedback signal drives the underlying computations, we examined the localization errors with movements that had differently skewed velocity profiles but overall the same displacement. This revealed a difference in the localization patterns, which we explain by a probabilistic model in which temporal uncertainty about the stimulus is converted into a spatial likelihood, depending on the actual velocity of the arm rather than involving an efferent, preprogrammed movement.
NEW & NOTEWORTHY: We show that proprioceptive feedback of arm motion rather than efferent motor signals contributes to tactile localization during an arm movement. Data further show that localization errors depend on arm velocity, not displacement per se, suggesting that instantaneous velocity feedback plays a role in the underlying computations. Model simulation using Bayesian inference suggests that these errors depend not only on spatial but also on temporal uncertainties of sensory and motor signals.
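The probabilistic model described in this abstract, in which temporal uncertainty about the stimulus is converted into a spatial likelihood via the arm's actual velocity, can be sketched as follows. This is an illustrative reconstruction under our own assumptions (the Gaussian temporal likelihood, variable names, and the toy velocity profile are not from the paper):

```python
import numpy as np

def perceived_location(t_stim, sigma_t, times, positions):
    """Convert temporal uncertainty about the stimulus time into a spatial
    likelihood: weight each sampled arm position by a Gaussian temporal
    likelihood centered on the stimulus time. The faster the arm moves
    around t_stim, the wider the resulting spatial spread."""
    w = np.exp(-0.5 * ((times - t_stim) / sigma_t) ** 2)
    w /= w.sum()
    mean = float(np.sum(w * positions))  # expected perceived location
    sd = float(np.sqrt(np.sum(w * (positions - mean) ** 2)))
    return mean, sd

# Toy reach: 30 cm displacement with a skewed (decelerating) velocity profile.
t = np.linspace(0.0, 1.0, 1001)        # time (s)
x = 0.30 * (1.0 - (1.0 - t) ** 3)      # position (m)
mu, sd = perceived_location(t_stim=0.2, sigma_t=0.05, times=t, positions=x)
```

With the same temporal uncertainty `sigma_t`, a stimulus delivered while the arm moves quickly yields a broader spatial likelihood than one delivered near the end of the movement, which is the velocity dependence the abstract describes.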
Affiliation(s)
- Femke Maij
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands; School of Psychology, University of Birmingham, Birmingham, United Kingdom
- Alan M Wing
- School of Psychology, University of Birmingham, Birmingham, United Kingdom
- W Pieter Medendorp
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
20
Planning Functional Grasps of Simple Tools Invokes the Hand-independent Praxis Representation Network: An fMRI Study. J Int Neuropsychol Soc 2017; 23:108-120. [PMID: 28205496] [DOI: 10.1017/s1355617716001120]
Abstract
OBJECTIVES: Neuropsychological and neuroimaging evidence indicates that tool use knowledge and abilities are represented in the praxis representation network (PRN) of the left cerebral hemisphere. We investigated whether PRN would also underlie the planning of function-appropriate grasps of tools, even though such an assumption is inconsistent with some neuropsychological evidence for independent representations of tool grasping and skilled tool use.
METHODS: Twenty right-handed participants were tested in an event-related functional magnetic resonance imaging (fMRI) study wherein they planned functionally appropriate grasps of tools versus grasps of non-tools matched for size and/or complexity, and later executed the pantomimed grasps of these objects. The dominant right and non-dominant left hands were used in two different sessions counterbalanced across participants. The tool and non-tool stimuli were presented at three different orientations, some requiring uncomfortable hand rotations for effective grips, with the difficulty matched for both hands.
RESULTS: Planning functional grasps of tools (vs. non-tools) was associated with significant asymmetrical increases of activity in the temporo/occipital-parieto-frontal networks. The greater involvement of the left-hemisphere PRN was particularly evident when hand movement kinematics (including wrist rotations) for grasping tools and non-tools were matched. The networks engaged in the task for the dominant and non-dominant hand were virtually identical. The differences in neural activity for the two object categories disappeared during grasp execution.
CONCLUSIONS: The greater hand-independent engagement of the left-hemisphere praxis representation network for planning functional grasps reveals a genuine effect of an early affordance/function-based visual processing of tools. (JINS, 2017, 23, 108-120).
21
Voudouris D, Goettker A, Mueller S, Fiehler K. Kinesthetic information facilitates saccades towards proprioceptive-tactile targets. Vision Res 2016; 122:73-80. [PMID: 27063362] [DOI: 10.1016/j.visres.2016.03.008]
Abstract
Saccades to somatosensory targets have longer latencies and are less accurate and precise than saccades to visual targets. Here we examined how different somatosensory information influences the planning and control of saccadic eye movements. Participants fixated a central cross and initiated a saccade as fast as possible in response to a tactile stimulus that was presented to either the index or the middle fingertip of their unseen left hand. In a static condition, the hand remained at a target location for the entire block of trials and the stimulus was presented at a fixed time after an auditory tone. Therefore, the target location was derived only from proprioceptive and tactile information. In a moving condition, the hand was first actively moved to the same target location and the stimulus was then presented immediately. Thus, in the moving condition additional kinesthetic information about the target location was available. We found shorter saccade latencies in the moving compared to the static condition, but no differences in accuracy or precision of saccadic endpoints. In a second experiment, we introduced variable delays after the auditory tone (static condition) or after the end of the hand movement (moving condition) in order to reduce the predictability of the moment of the stimulation and to allow more time to process the kinesthetic information. Again, we found shorter latencies in the moving compared to the static condition but no improvement in saccade accuracy or precision. In a third experiment, we showed that the shorter saccade latencies in the moving condition cannot be explained by the temporal proximity between the relevant event (auditory tone or end of hand movement) and the moment of the stimulation. Our findings suggest that kinesthetic information facilitates planning, but not control, of saccadic eye movements to proprioceptive-tactile targets.
Affiliation(s)
- Stefanie Mueller
- Experimental Psychology, Justus-Liebig University Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus-Liebig University Giessen, Germany
22
Marangon M, Kubiak A, Króliczak G. Haptically Guided Grasping. fMRI Shows Right-Hemisphere Parietal Stimulus Encoding, and Bilateral Dorso-Ventral Parietal Gradients of Object- and Action-Related Processing during Grasp Execution. Front Hum Neurosci 2016; 9:691. [PMID: 26779002] [PMCID: PMC4700263] [DOI: 10.3389/fnhum.2015.00691]
Abstract
The neural bases of haptically-guided grasp planning and execution are largely unknown, especially for stimuli having no visual representations. Therefore, we used functional magnetic resonance imaging (fMRI) to monitor brain activity during haptic exploration of novel 3D complex objects, subsequent grasp planning, and the execution of the pre-planned grasps. Haptic object exploration, involving extraction of shape, orientation, and length of the to-be-grasped targets, was associated with the fronto-parietal, temporo-occipital, and insular cortex activity. Yet, only the anterior divisions of the posterior parietal cortex (PPC) of the right hemisphere were significantly more engaged in exploration of complex objects (vs. simple control disks). None of these regions were re-recruited during the planning phase. Even more surprisingly, the left-hemisphere intraparietal, temporal, and occipital areas that were significantly invoked for grasp planning did not show sensitivity to object features. Finally, grasp execution, involving the re-recruitment of the critical right-hemisphere PPC clusters, was also significantly associated with two kinds of bilateral parieto-frontal processes. The first represents transformations of grasp-relevant target features and is linked to the dorso-dorsal (lateral and medial) parieto-frontal networks. The second monitors grasp kinematics and belongs to the ventro-dorsal networks. Indeed, signal modulations associated with these distinct functions follow dorso-ventral gradients, with left aIPS showing significant sensitivity to both target features and the characteristics of the required grasp. Thus, our results from the haptic domain are consistent with the notion that the parietal processing for action guidance reflects primarily transformations from object-related to effector-related coding, and these mechanisms are rather independent of sensory input modality.
Affiliation(s)
- Mattia Marangon
- Action and Cognition Laboratory, Department of Social Sciences, Institute of Psychology, Adam Mickiewicz University in Poznań, Poznań, Poland
- Agnieszka Kubiak
- Action and Cognition Laboratory, Department of Social Sciences, Institute of Psychology, Adam Mickiewicz University in Poznań, Poznań, Poland
- Gregory Króliczak
- Action and Cognition Laboratory, Department of Social Sciences, Institute of Psychology, Adam Mickiewicz University in Poznań, Poznań, Poland
23
Hadjidimitrakis K, Dal Bo' G, Breveglieri R, Galletti C, Fattori P. Overlapping representations for reach depth and direction in caudal superior parietal lobule of macaques. J Neurophysiol 2015; 114:2340-2352. [PMID: 26269557] [DOI: 10.1152/jn.00486.2015]
Abstract
Reaching movements in the real world typically have a direction and a depth component. Despite numerous behavioral studies, there is no consensus on whether reach coordinates are processed in separate or common visuomotor channels. Furthermore, the neural substrates of reach depth in parietal cortex have been ignored in most neurophysiological studies. In the medial posterior parietal area V6A, we recently demonstrated the strong presence of depth signals and the extensive convergence of depth and direction information on single neurons during all phases of a fixate-to-reach task in 3-dimensional (3D) space. Using the same task, in the present work we examined the processing of direction and depth information in area PEc of the caudal superior parietal lobule (SPL) in three Macaca fascicularis monkeys. Across the task, depth and direction had a similar, high incidence of modulatory effect. The effect of direction was stronger than depth during the initial fixation period. As the task progressed toward arm movement execution, depth tuning became more prominent than directional tuning and the number of cells modulated by both depth and direction increased significantly. Neurons tuned by depth showed a small bias for far peripersonal space. Cells with directional modulations were more frequently tuned toward contralateral spatial locations, but ipsilateral space was also represented. These findings, combined with results from neighboring areas V6A and PE, support a rostral-to-caudal gradient of overlapping representations for reach depth and direction in SPL. These findings also support a progressive change from visuospatial (vergence angle) to somatomotor representations of 3D space in SPL.
Affiliation(s)
- Kostas Hadjidimitrakis
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy; Department of Physiology, Monash University, Clayton, Victoria, Australia
- Giulia Dal Bo'
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Rossella Breveglieri
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Claudio Galletti
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Patrizia Fattori
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
24
Khanafer S, Cressman EK. Sensory integration during reaching: the effects of manipulating visual target availability. Exp Brain Res 2014; 232:3833-3846. [DOI: 10.1007/s00221-014-4064-0]
25
Cameron BD, López-Moliner J. Target modality affects visually guided online control of reaching. Vision Res 2014; 110:233-243. [PMID: 24997229] [DOI: 10.1016/j.visres.2014.06.010]
Abstract
The integration of vision and proprioception for estimating the hand's starting location prior to a reach has been shown to depend on the modality of the target towards which the reach is planned. Here we investigated whether the processing of online feedback is also influenced by target modality. Participants made reaching movements to a target that was defined by vision, proprioception, or both, and visual feedback about the unfolding movement was either present or absent. To measure online control we used the variability across trials; we examined the course of this variability for the different target modalities and effector conditions. Our results showed that the rate of decrease in variability in the later part of the movements (an indicator of online control) was minimally influenced by effector vision when participants reached towards a proprioceptive target, whereas the rate of decrease was clearly influenced by effector vision when participants reached towards a visual target. In other words, when participants reached towards a proprioceptively defined target they relied less on visual information about the moving hand than when they reached towards a visually defined target. These results suggest that target modality influences visual processing for online control.
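The across-trial variability measure described above (the rate at which positional variability decreases late in the movement, taken as an index of online control) can be sketched as follows. Synthetic trajectories and function names are our own illustration, not the study's actual kinematic pipeline:

```python
import numpy as np

def variability_profile(trajectories):
    """Across-trial standard deviation of position at each normalized
    time point; trajectories has shape (n_trials, n_timepoints)."""
    return np.std(trajectories, axis=0, ddof=1)

def late_slope(sd_profile, start_frac=0.6):
    """Least-squares slope of the variability curve over the late portion
    of the movement; a more negative slope indicates stronger online
    correction of early trajectory errors."""
    n = len(sd_profile)
    idx = np.arange(int(start_frac * n), n)
    slope, _intercept = np.polyfit(idx / n, sd_profile[idx], 1)
    return slope

rng = np.random.default_rng(0)
# Simulated reaches: trial-specific error accumulates early in the
# movement and is partially corrected late (as with visual feedback).
trial_error = rng.normal(0.0, 1.0, size=(40, 1))
growth = np.linspace(0.2, 1.0, 100)       # error grows early
correction = np.linspace(1.0, 0.4, 100)   # and shrinks late
traj = trial_error * growth * correction
slope = late_slope(variability_profile(traj))  # negative under correction
```

Comparing this late-movement slope across target modalities and feedback conditions is the kind of contrast the abstract reports.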
Affiliation(s)
- Brendan D Cameron
- Vision and Control of Action Group, Departament de Psicologia Bàsica, Universitat de Barcelona, Passeig de la Vall d'Hebron 171, 08035 Barcelona, Catalonia, Spain; Institute for Brain, Cognition, and Behaviour (IR3C), Universitat de Barcelona, Passeig de la Vall d'Hebron 171, 08035 Barcelona, Catalonia, Spain
- Joan López-Moliner
- Vision and Control of Action Group, Departament de Psicologia Bàsica, Universitat de Barcelona, Passeig de la Vall d'Hebron 171, 08035 Barcelona, Catalonia, Spain; Institute for Brain, Cognition, and Behaviour (IR3C), Universitat de Barcelona, Passeig de la Vall d'Hebron 171, 08035 Barcelona, Catalonia, Spain
26
Li L, Brockmeier AJ, Choi JS, Francis JT, Sanchez JC, Príncipe JC. A tensor-product-kernel framework for multiscale neural activity decoding and control. Comput Intell Neurosci 2014; 2014:870160. [PMID: 24829569] [PMCID: PMC4009155] [DOI: 10.1155/2014/870160]
Abstract
Brain-machine interfaces (BMIs) have attracted intense attention as a promising technology for directly interfacing computers or prostheses with the brain's motor and sensory areas, thereby bypassing the body. The availability of multiscale neural recordings, including spike trains and local field potentials (LFPs), brings potential opportunities to enhance computational modeling by enriching the characterization of the neural system state. However, heterogeneity in data type (spike timing versus continuous amplitude signals) and spatiotemporal scale complicates model integration of multiscale neural activity. In this paper, we propose a tensor-product-kernel-based framework to integrate the multiscale activity and exploit the complementary information available in multiscale neural activity. This provides a common mathematical framework for incorporating signals from different domains. The approach is applied to the problem of neural decoding and control. For neural decoding, the framework is able to identify the nonlinear functional relationship between the multiscale neural responses and the stimuli using general purpose kernel adaptive filtering. In a sensory stimulation experiment, the tensor-product-kernel decoder outperforms decoders that use only a single neural data type. In addition, an adaptive inverse controller for delivering electrical microstimulation patterns that utilizes the tensor-product kernel achieves promising results in emulating the responses to natural stimulation.
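The central construction named in this abstract, a tensor-product kernel that measures the similarity of joint spike/LFP observations as the product of per-modality kernels, can be sketched as follows. This is a minimal illustration with Gaussian kernels over binned spike counts and LFP feature vectors; the specific kernel choices and names are ours, not the paper's:

```python
import numpy as np

def rbf(x, y, sigma):
    """Gaussian (RBF) kernel between two feature vectors."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.exp(-(d @ d) / (2.0 * sigma ** 2)))

def tensor_product_kernel(sample_a, sample_b, sigma_spk=1.0, sigma_lfp=1.0):
    """Similarity of two multiscale samples (spike_counts, lfp_features).
    The product of valid kernels on each modality is itself a valid kernel
    on the joint (tensor-product) space, giving a single number that
    reflects agreement in both spiking and LFP activity."""
    spk_a, lfp_a = sample_a
    spk_b, lfp_b = sample_b
    return rbf(spk_a, spk_b, sigma_spk) * rbf(lfp_a, lfp_b, sigma_lfp)

# Two identical trials and one dissimilar trial: binned spike counts
# plus a couple of LFP band-power features per trial.
trial1 = ([3, 0, 2], [0.10, -0.40])
trial2 = ([3, 0, 2], [0.10, -0.40])
trial3 = ([0, 4, 1], [0.90, 0.20])
k_same = tensor_product_kernel(trial1, trial2)  # identical trials
k_diff = tensor_product_kernel(trial1, trial3)  # smaller than k_same
```

A decoder built on this joint kernel (e.g., kernel adaptive filtering, as in the paper) can then weight training samples by agreement in both modalities at once, rather than in either one alone.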
Affiliation(s)
- Lin Li
- Philips Research North America, Briarcliff Manor, NY 10510, USA
- Austin J. Brockmeier
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA
- John S. Choi
- Joint Program in Biomedical Engineering, NYU Polytechnic School of Engineering and SUNY Downstate, Brooklyn, NY 11203, USA
- Joseph T. Francis
- Department of Physiology and Pharmacology, State University of New York Downstate Medical Center, Joint Program in Biomedical Engineering, NYU Polytechnic School of Engineering and SUNY Downstate, Robert F. Furchgott Center for Neural & Behavioral Science, Brooklyn, NY 11203, USA
- Justin C. Sanchez
- Department of Biomedical Engineering, Department of Neuroscience, Miami Project to Cure Paralysis, University of Miami, Coral Gables, FL 33146, USA
- José C. Príncipe
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA
27
Perceived size change induced by nonvisual signals in darkness: the relative contribution of vergence and proprioception. J Neurosci 2013; 33:16915-16923. [PMID: 24155297] [DOI: 10.1523/jneurosci.0977-13.2013]
Abstract
Most of the time, the human visual system computes perceived size by scaling the size of an object on the retina with its perceived distance. There are instances, however, in which size-distance scaling is not based on visual inputs but on extraretinal cues. In the Taylor illusion, the perceived afterimage that is projected on an observer's hand will change in size depending on how far the limb is positioned from the eyes, even in complete darkness. In the dark, distance cues might derive from hand position signals either by an efference copy of the motor command to the moving hand or by proprioceptive input. Alternatively, there have been reports that vergence signals from the eyes might also be important. We performed a series of behavioral and eye-tracking experiments to tease apart how these different sources of distance information contribute to the Taylor illusion. We demonstrate that, with no visual information, perceived size changes mainly as a function of the vergence angle of the eyes, underscoring its importance in size-distance scaling. Interestingly, the strength of this relationship decreased when a mismatch between vergence and proprioception was introduced, indicating that proprioceptive feedback from the arm also affected size perception. By using afterimages, we provide strong evidence that the human visual system can benefit from sensory signals that originate from the hand when visual information about distance is unavailable.
Collapse
|
28
|
Capaday C, Darling WG, Stanek K, Van Vreeswijk C. Pointing to oneself: active versus passive proprioception revisited and implications for internal models of motor system function. Exp Brain Res 2013; 229:171-80. [PMID: 23756602 DOI: 10.1007/s00221-013-3603-4] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2013] [Accepted: 05/29/2013] [Indexed: 12/21/2022]
Abstract
We re-examined the issue of active versus passive proprioception to more fully characterize the accuracy afforded by proprioceptive information in natural, unconstrained, movements in 3-dimensions. Subjects made pointing movements with their non-dominant arm to various locations with eyes closed. They then proprioceptively localized the tip of its index finger with a prompt pointing movement of their dominant arm, thereby bringing the two indices in apposition. Subjects performed this task with remarkable accuracy. More remarkably, the same subjects were equally accurate at localizing the index finger when the arm was passively moved and maintained in its final position by an experimenter. Two subjects were also tested with eyes open, and they were no more accurate than with eyes closed. We also found that the magnitude of the error did not depend on movement duration, which is contrary to a key observation in support of the existence of an internal forward model-based state-reconstruction scheme. Three principal conclusions derive from this study. First, in unconstrained movements, proprioceptive information provides highly accurate estimates of limb position. Second, so-called active proprioception does not provide better estimates of limb position than passive proprioception. Lastly, in the active movement condition, an internal model-based estimation of limb position should, according to that hypothesis, have occurred throughout the movement. If so, it did not lead to a better estimate of final limb position, or lower variance of the estimate, casting doubt on the necessity to invoke this hypothetical construct.
Collapse
Affiliation(s)
- Charles Capaday
- Brain and Movement Laboratory, Department of Electrical Engineering, Technical University of Denmark, 2800, Kongens Lyngby, Denmark.
Collapse
|
29
|
Konen CS, Mruczek REB, Montoya JL, Kastner S. Functional organization of human posterior parietal cortex: grasping- and reaching-related activations relative to topographically organized cortex. J Neurophysiol 2013; 109:2897-908. [PMID: 23515795 DOI: 10.1152/jn.00657.2012] [Citation(s) in RCA: 86] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022] Open
Abstract
The act of reaching to grasp an object requires the coordination between transporting the arm and shaping the hand. Neurophysiological, neuroimaging, neuroanatomic, and neuropsychological studies in macaque monkeys and humans suggest that the neural networks underlying grasping and reaching acts are at least partially separable within the posterior parietal cortex (PPC). To better understand how these neural networks have evolved in primates, we characterized the relationship between grasping- and reaching-related responses and topographically organized areas of the human intraparietal sulcus (IPS) using functional MRI. Grasping-specific activation was localized to the left anterior IPS, partially overlapping with the most anterior topographic regions and extending into the postcentral sulcus. Reaching-specific activation was localized to the left precuneus and superior parietal lobule, partially overlapping with the medial aspects of the more posterior topographic regions. Although the majority of activity within the topographic regions of the IPS was nonspecific with respect to movement type, we found evidence for a functional gradient of specificity for reaching and grasping movements spanning posterior-medial to anterior-lateral PPC. In contrast to the macaque monkey, grasp- and reach-specific activations were largely located outside of the human IPS.
Collapse
Affiliation(s)
- Christina S Konen
- Department of Psychology, Princeton University, Princeton, NJ 08544, USA
Collapse
|
30
|
Hadjidimitrakis K, Bertozzi F, Breveglieri R, Bosco A, Galletti C, Fattori P. Common Neural Substrate for Processing Depth and Direction Signals for Reaching in the Monkey Medial Posterior Parietal Cortex. Cereb Cortex 2013; 24:1645-57. [DOI: 10.1093/cercor/bht021] [Citation(s) in RCA: 44] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
|
31
|
Byrne PA, Henriques DYP. When more is less: increasing allocentric visual information can switch visual-proprioceptive combination from an optimal to sub-optimal process. Neuropsychologia 2012; 51:26-37. [PMID: 23142707 DOI: 10.1016/j.neuropsychologia.2012.10.008] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2012] [Revised: 08/16/2012] [Accepted: 10/05/2012] [Indexed: 10/27/2022]
Abstract
When reaching for an object in the environment, the brain often has access to multiple independent estimates of that object's location. For example, if someone places their coffee cup on a table, then later they know where it is because they see it, but also because they remember how their reaching limb was oriented when they placed the cup. Intuitively, one would expect more accurate reaches if either of these estimates were improved (e.g., if a light were turned on so the cup were more visible). It is now well-established that the brain tends to combine two or more estimates about the same stimulus as a maximum-likelihood estimator (MLE), which is the best thing to do when estimates are unbiased. Even in the presence of small biases, relying on the MLE rule is still often better than choosing a single estimate. For this work, we designed a reaching task in which human subjects could integrate proprioceptive and allocentric (landmark-relative) visual information to reach for a remembered target. Even though both of these modalities contain some level of bias, we demonstrate via simulation that our subjects should use an MLE rule in preference to relying on one modality or the other in isolation. Furthermore, we show that when visual information is poor, subjects do, indeed, combine information in this way. However, when we improve the quality of visual information, subjects counter-intuitively switch to a sub-optimal strategy that occasionally includes reliance on a single modality.
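The maximum-likelihood (inverse-variance) cue-combination rule described in this abstract can be written compactly. A minimal sketch, assuming two unbiased Gaussian cue estimates (visual and proprioceptive) with known variances; the function names and values are illustrative, not taken from the paper:

```python
def mle_combine(x_v, var_v, x_p, var_p):
    """Reliability-weighted (MLE) combination of a visual estimate x_v
    and a proprioceptive estimate x_p with variances var_v and var_p."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_p)  # weight of the more reliable cue is larger
    w_p = 1 - w_v
    x_hat = w_v * x_v + w_p * x_p                # combined location estimate
    var_hat = 1 / (1 / var_v + 1 / var_p)        # combined variance is below either cue's
    return x_hat, var_hat
```

With equal reliabilities the estimates are averaged, and the combined variance is half that of either cue alone, which is why the MLE rule outperforms reliance on a single modality even under small biases.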
Collapse
Affiliation(s)
- Patrick A Byrne
- Centre for Vision Research, Science, York University, 4700 Keele Street, Toronto, ON, Canada M3J 1P3.
Collapse
|
32
|
Karl JM, Sacrey LAR, Doan JB, Whishaw IQ. Oral hapsis guides accurate hand preshaping for grasping food targets in the mouth. Exp Brain Res 2012; 221:223-40. [PMID: 22782480 DOI: 10.1007/s00221-012-3164-y] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2012] [Accepted: 06/21/2012] [Indexed: 10/28/2022]
Abstract
Preshaping the digits and orienting the hand when reaching to grasp a distal target is proposed to be optimal when guided by vision. A reach-to-grasp movement to an object in one's own mouth is a natural and commonly used movement, but there has been no previous description of how it is performed. The movement requires accuracy but likely depends upon haptic rather than visual guidance, leading to the question of whether the kinematics of this movement are similar to those with vision or whether the movement depends upon an alternate strategy. The present study used frame-by-frame video analysis and linear kinematics to analyze hand movements as participants reached for ethologically relevant food targets placed either at a distal location or in the mouth. When reaching for small and medium-sized food items (blueberries and donut balls) that had maximal lip-to-target contact, hand preshaping was equivalent to that used for visually guided reaching. When reaching for a large food item (orange slice) that extended beyond the edges of the mouth, hand preshaping was suboptimal compared to vision. Nevertheless, hapsis from the reaching hand was used to reshape and reorient the hand after first contact with the large target. The equally precise guidance of hand preshaping under oral hapsis is discussed in relation to the idea that hand preshaping, and its requisite neural circuitry, may have originated under somatosensory control, with secondary access by vision.
Collapse
Affiliation(s)
- Jenni M Karl
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, AB T1K 3M4, Canada.
Collapse
|
33
|
Jones SAH, Byrne PA, Fiehler K, Henriques DYP. Reach endpoint errors do not vary with movement path of the proprioceptive target. J Neurophysiol 2012; 107:3316-24. [DOI: 10.1152/jn.00901.2011] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
Abstract
Previous research has shown that reach endpoints vary with the starting position of the reaching hand and the location of the reach target in space. We examined the effect of movement direction of a proprioceptive target-hand, immediately preceding a reach, on reach endpoints to that target. Participants reached to visual, proprioceptive (left target-hand), or visual-proprioceptive targets (left target-hand illuminated for 1 s prior to reach onset) with their right hand. Six sites served as starting and final target locations (35 target movement directions in total). Reach endpoints do not vary with the movement direction of the proprioceptive target, but instead appear to be anchored to some other reference (e.g., body). We also compared reach endpoints across the single and dual modality conditions. Overall, the pattern of reaches for visual-proprioceptive targets resembled those for proprioceptive targets, while reach precision resembled those for the visual targets. We did not, however, find evidence for integration of vision and proprioception based on a maximum-likelihood estimator in these tasks.
Collapse
Affiliation(s)
- Stephanie A. H. Jones
- The School of Health and Human Performance, Dalhousie University, Halifax, Nova Scotia
- Patrick A. Byrne
- School of Kinesiology and Health Science, York University, Toronto, Canada
- Katja Fiehler
- Department of Psychology, Justus-Liebig University, Giessen, Germany
Collapse
|
34
|
Karl JM, Sacrey LAR, Doan JB, Whishaw IQ. Hand shaping using hapsis resembles visually guided hand shaping. Exp Brain Res 2012; 219:59-74. [DOI: 10.1007/s00221-012-3067-y] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2011] [Accepted: 03/04/2012] [Indexed: 11/28/2022]
|
35
|
Liu J, Khalil HK, Oweiss KG. Neural feedback for instantaneous spatiotemporal modulation of afferent pathways in bi-directional brain-machine interfaces. IEEE Trans Neural Syst Rehabil Eng 2011; 19:521-33. [PMID: 21859634 DOI: 10.1109/tnsre.2011.2162003] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
In bi-directional brain-machine interfaces (BMIs), precisely controlling the delivery of microstimulation, both in space and in time, is critical to continuously modulate the neural activity patterns that carry information about the state of the brain-actuated device to sensory areas in the brain. In this paper, we investigate the use of neural feedback to control the spatiotemporal firing patterns of neural ensembles in a model of the thalamocortical pathway. Control of pyramidal (PY) cells in the primary somatosensory cortex (S1) is achieved based on microstimulation of thalamic relay cells through multiple-input multiple-output (MIMO) feedback controllers. This closed loop feedback control mechanism is achieved by simultaneously varying the stimulation parameters across multiple stimulation electrodes in the thalamic circuit based on continuous monitoring of the difference between reference patterns and the evoked responses of the cortical PY cells. We demonstrate that it is feasible to achieve a desired level of performance by controlling the firing activity pattern of a few "key" neural elements in the network. Our results suggest that neural feedback could be an effective method to facilitate the delivery of information to the cortex to substitute lost sensory inputs in cortically controlled BMIs.
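The closed-loop idea in this abstract — adjusting stimulation from the difference between a reference firing pattern and the evoked cortical response — can be illustrated with a toy single-channel sketch. This is a plain proportional-feedback loop with a linear "plant" standing in for the thalamocortical circuit; it is not the paper's MIMO model, and all names and gains here are illustrative assumptions:

```python
def closed_loop(ref, rate0=0.0, plant_gain=1.0, k=0.5, steps=20):
    """Drive an evoked firing rate toward a reference value by
    proportional feedback on the tracking error."""
    rate = rate0
    for _ in range(steps):
        u = k * (ref - rate)          # stimulation update computed from the error
        rate = rate + plant_gain * u  # simplistic linear response of the circuit
    return rate
```

Because the error shrinks geometrically (by a factor of 1 - k * plant_gain per step for this linear plant), the evoked rate converges to the reference; the paper's contribution is doing the analogous thing simultaneously across multiple stimulation electrodes and recorded neurons.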
Collapse
Affiliation(s)
- Jianbo Liu
- Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823, USA.
Collapse
|