1
Bernard-Espina J, Dal Canto D, Beraneck M, McIntyre J, Tagliabue M. How Tilting the Head Interferes With Eye-Hand Coordination: The Role of Gravity in Visuo-Proprioceptive, Cross-Modal Sensory Transformations. Front Integr Neurosci 2022; 16:788905. PMID: 35359704; PMCID: PMC8961421; DOI: 10.3389/fnint.2022.788905.
Abstract
To correctly position the hand with respect to the spatial location and orientation of an object to be reached or grasped, visual information about the target and proprioceptive information from the hand must be compared. Since the visual and proprioceptive modalities are inherently encoded in a retinal and a musculo-skeletal reference frame, respectively, this comparison requires cross-modal sensory transformations. Previous studies have shown that lateral tilts of the head interfere with these visuo-proprioceptive transformations. It is unclear, however, whether this phenomenon is related to the neck flexion or to the head-gravity misalignment. To answer this question, we performed three virtual reality experiments in which we compared a grasping-like movement with lateral neck flexions executed in an upright seated position and while lying supine. In the main experiment, the task requires cross-modal transformations, because the target information is acquired visually and the hand is sensed through proprioception only. In the other two control experiments, the task is unimodal, because both target and hand are sensed through one and the same sensory channel (vision and proprioception, respectively), and hence cross-modal processing is unnecessary. The results show that lateral neck flexions have considerably different effects in the seated and supine postures, but only for the cross-modal task. More precisely, the subjects' response variability and the importance associated with the visual encoding of the information significantly increased when supine. We show that these findings are consistent with the idea that head-gravity misalignment interferes with visuo-proprioceptive cross-modal processing. Indeed, the principle of statistical optimality in multisensory integration predicts the observed results if the noise associated with the visuo-proprioceptive transformations is assumed to be affected by gravitational signals, and not by neck proprioceptive signals per se. This finding is also consistent with the observation of otolithic projections to the posterior parietal cortex, which is involved in visuo-proprioceptive processing. Altogether, these findings represent clear evidence for the theorized central role of gravity in spatial perception. More precisely, otolithic signals would contribute to reciprocally aligning the reference frames in which the available sensory information can be encoded.
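A minimal sketch of the statistical-optimality argument invoked above, assuming a simple maximum-likelihood (inverse-variance) combination in which the cross-modal transformation adds noise; all variance values and the larger supine transformation noise are illustrative assumptions, not values from the study. Increasing the transformation noise raises the predicted response variance and the weight given to the visually encoded comparison, matching the reported pattern.

# Minimal sketch (not the authors' model): maximum-likelihood cue combination in
# which the cross-modal transformation adds noise. All variances are illustrative.

def predicted_response(var_vis, var_prop, var_transform):
    """Weight of the visual encoding and variance of the optimal estimate when
    the proprioceptive signal must be transformed into visual coordinates."""
    var_prop_in_vis = var_prop + var_transform          # transformation inflates noise
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_prop_in_vis)
    var_combined = 1 / (1 / var_vis + 1 / var_prop_in_vis)
    return w_vis, var_combined

# Hypothetical values: transformation noise assumed larger when head and gravity
# are misaligned (supine) than when they are aligned (upright seated).
for posture, var_t in [("upright", 1.0), ("supine", 4.0)]:
    w, v = predicted_response(var_vis=2.0, var_prop=2.0, var_transform=var_t)
    print(f"{posture}: visual weight = {w:.2f}, response variance = {v:.2f}")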
Affiliation(s)
- Jules Bernard-Espina
- Université de Paris, CNRS, Integrative Neuroscience and Cognition Center, Paris, France
- Daniele Dal Canto
- Université de Paris, CNRS, Integrative Neuroscience and Cognition Center, Paris, France
- Mathieu Beraneck
- Université de Paris, CNRS, Integrative Neuroscience and Cognition Center, Paris, France
- Joseph McIntyre
- Université de Paris, CNRS, Integrative Neuroscience and Cognition Center, Paris, France
- Ikerbasque Science Foundation, Bilbao, Spain
- TECNALIA, Basque Research and Technology Alliance (BRTA), San Sebastian, Spain
- Michele Tagliabue
- Université de Paris, CNRS, Integrative Neuroscience and Cognition Center, Paris, France
- Correspondence: Michele Tagliabue
2
Bernard-Espina J, Beraneck M, Maier MA, Tagliabue M. Multisensory Integration in Stroke Patients: A Theoretical Approach to Reinterpret Upper-Limb Proprioceptive Deficits and Visual Compensation. Front Neurosci 2021; 15:646698. PMID: 33897359; PMCID: PMC8058201; DOI: 10.3389/fnins.2021.646698.
Abstract
For reaching and grasping, as well as for manipulating objects, optimal hand motor control arises from the integration of multiple sources of sensory information, such as proprioception and vision. For this reason, the proprioceptive deficits often observed in stroke patients have a significant impact on the integrity of motor functions. The present targeted review attempts to reanalyze previous findings about proprioceptive upper-limb deficits in stroke patients, as well as their ability to compensate for these deficits using vision. Our theoretical approach is based on two concepts: first, the description of multi-sensory integration using statistical optimization models; second, the insight that sensory information is not only encoded in its reference frame of origin (e.g., retinal and joint space for vision and proprioception, respectively), but also in higher-order sensory spaces. Combining these two concepts within a single framework appears to account for the heterogeneity of experimental findings reported in the literature. The present analysis suggests that functional upper-limb post-stroke deficits may be due not only to an impairment of the proprioceptive system per se, but also to deficiencies in cross-reference processing, that is, in the ability to encode proprioceptive information in a non-joint space. The distinction between purely proprioceptive and cross-reference-related deficits can account for two experimental observations: first, one and the same patient can perform differently depending on the specific proprioceptive assessment; second, a given behavioral assessment results in large variability across patients. The distinction between sensory and cross-reference deficits is also supported by a targeted literature review on the relation between cerebral structure and proprioceptive function. This theoretical framework has the potential to lead to a new stratification of patients with proprioceptive deficits and may offer a novel approach to post-stroke rehabilitation.
Affiliation(s)
- Marc A Maier
- Université de Paris, INCC UMR 8002, CNRS, Paris, France
3
Stahn AC, Riemer M, Wolbers T, Werner A, Brauns K, Besnard S, Denise P, Kühn S, Gunga HC. Spatial Updating Depends on Gravity. Front Neural Circuits 2020; 14:20. PMID: 32581724; PMCID: PMC7291770; DOI: 10.3389/fncir.2020.00020.
Abstract
As we move through an environment, the positions of surrounding objects relative to our body constantly change. Maintaining orientation requires spatial updating, the continuous monitoring of self-motion cues to update external locations. This ability critically depends on the integration of visual, proprioceptive, kinesthetic, and vestibular information. During weightlessness, gravity no longer acts as an essential reference, creating a discrepancy between vestibular, visual, and sensorimotor signals. Here, we explore the effects of repeated bouts of microgravity and hypergravity on spatial updating performance during parabolic flight. Ten healthy participants (four women, six men) took part in a parabolic flight campaign that comprised a total of 31 parabolas. Each parabola created about 20–25 s of 0 g, preceded and followed by about 20 s of hypergravity (1.8 g). Participants performed a visual-spatial updating task in a seated position during 15 parabolas. The task included two updating conditions simulating virtual forward movements of different lengths (short and long), and a static condition with no movement that served as a control. Two trials were performed during each phase of the parabola, i.e., at 1 g before the start of the parabola, at 1.8 g during the acceleration phase, and during 0 g. Our data demonstrate that 0 g and 1.8 g impaired pointing performance for long updating trials, as indicated by increased variability of pointing errors compared with 1 g. In contrast, we found no evidence of changes in the short updating and static conditions, suggesting that a certain degree of task complexity is required to affect pointing errors. These findings are important for operational requirements during spaceflight, because spatial updating is pivotal for navigation when vision is poor or unreliable and objects go out of sight, for example during extravehicular activities in space or the exploration of unfamiliar environments. Future studies should compare the effects on spatial updating during seated and free-floating conditions, and determine at which g-threshold decrements in spatial updating performance emerge.
Affiliation(s)
- Alexander Christoph Stahn
- Department of Psychiatry, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, United States; Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Institute of Physiology, Berlin, Germany
- Martin Riemer
- Aging and Cognition Research Group, German Center for Neurodegenerative Diseases (DZNE), Magdeburg, Germany
- Thomas Wolbers
- Aging and Cognition Research Group, German Center for Neurodegenerative Diseases (DZNE), Magdeburg, Germany
- Anika Werner
- Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Institute of Physiology, Berlin, Germany; Normandie Université, UNICAEN, INSERM, COMETE, Caen, France
- Katharina Brauns
- Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Institute of Physiology, Berlin, Germany
- Pierre Denise
- Normandie Université, UNICAEN, INSERM, COMETE, Caen, France
- Simone Kühn
- Department of Psychiatry and Psychotherapy, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Lise Meitner Group for Environmental Neuroscience, Max Planck Institute for Human Development, Berlin, Germany
- Hanns-Christian Gunga
- Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Institute of Physiology, Berlin, Germany
4
Seeing Your Foot Move Changes Muscle Proprioceptive Feedback. eNeuro 2019; 6:eN-NWR-0341-18. PMID: 30923738; PMCID: PMC6437656; DOI: 10.1523/eneuro.0341-18.2019.
Abstract
Multisensory effects arise when inputs from the single senses combine, and this has been well researched in the brain. Here, we examined in humans the potential impact of visuo-proprioceptive interactions at the peripheral level, using microneurography, and compared it with a similar behavioral task. We used a paradigm in which participants had either proprioceptive information only (no vision) or combined visual and proprioceptive signals (vision). We moved the foot to measure changes in the sensitivity of single muscle afferents, which can be altered by the descending fusimotor drive. Visual information interacted with proprioceptive information: for the same passive movement, the response of muscle afferents was larger when the proprioceptive channel was the only source of information than when visual cues were added, regardless of the attentional level. Behaviorally, when participants looked at their foot moving, they judged differences between movement amplitudes more accurately than in the absence of visual cues. These results affect our understanding of multisensory interactions throughout the nervous system, showing that information from different senses can modify the sensitivity of peripheral receptors. This has clinical implications: future strategies may modulate such visual signals during sensorimotor rehabilitation.
5
Arnoux L, Fromentin S, Farotto D, Beraneck M, McIntyre J, Tagliabue M. The visual encoding of purely proprioceptive intermanual tasks is due to the need of transforming joint signals, not to their interhemispheric transfer. J Neurophysiol 2017; 118:1598-1608. PMID: 28615330; DOI: 10.1152/jn.00140.2017.
Abstract
To perform goal-oriented hand movements, humans combine multiple sensory signals (e.g., vision and proprioception) that can be encoded in various reference frames (body-centered and/or exo-centered). In a previous study (Tagliabue M, McIntyre J. PLoS One 8: e68438, 2013), we showed that, when aligning a hand to a remembered target orientation, the brain encodes both target and response in visual space when the target is sensed by one hand and the response is performed by the other, even though both are sensed only through proprioception. Here we ask whether such visual encoding is due 1) to the necessity of transferring sensory information across the brain hemispheres, or 2) to the necessity, arising from the arms' anatomical mirror symmetry, of transforming the joint signals of one limb into the reference frame of the other. To answer this question, we asked subjects to perform purely proprioceptive tasks in different conditions: Intra, the same arm sensing the target and performing the movement; Inter/Parallel, one arm sensing the target and the other reproducing its orientation; and Inter/Mirror, one arm sensing the target and the other mirroring its orientation. Performance was very similar between Intra and Inter/Mirror (conditions not requiring joint-signal transformations), while both differed from Inter/Parallel. Manipulation of the visual scene in a virtual reality paradigm showed visual encoding of proprioceptive information only in the latter condition. These results suggest that the visual encoding of purely proprioceptive tasks is not due to interhemispheric transfer of the proprioceptive information per se, but to the necessity of transforming joint signals between mirror-symmetric limbs.
NEW & NOTEWORTHY Why does the brain encode goal-oriented, intermanual tasks in a visual space, even in the absence of visual feedback about the target and the hand? We show that the visual encoding is not due to the transfer of proprioceptive signals between brain hemispheres per se, but to the need, arising from the mirror symmetry of the two limbs, of transforming the joint angle signals of one arm into different joint signals of the other.
Affiliation(s)
- Léo Arnoux
- Center for Neurophysics, Physiology & Pathology, UMR 8119, CNRS Université Paris Descartes, Sorbonne Paris Cité, Paris, France
- Sebastien Fromentin
- Center for Neurophysics, Physiology & Pathology, UMR 8119, CNRS Université Paris Descartes, Sorbonne Paris Cité, Paris, France
- Dario Farotto
- Center for Neurophysics, Physiology & Pathology, UMR 8119, CNRS Université Paris Descartes, Sorbonne Paris Cité, Paris, France
- Mathieu Beraneck
- Center for Neurophysics, Physiology & Pathology, UMR 8119, CNRS Université Paris Descartes, Sorbonne Paris Cité, Paris, France
- Joseph McIntyre
- Center for Neurophysics, Physiology & Pathology, UMR 8119, CNRS Université Paris Descartes, Sorbonne Paris Cité, Paris, France; Ikerbasque Science Foundation, Bilbao, Spain; Health Division, Tecnalia Research & Development, San Sebastian, Spain
- Michele Tagliabue
- Center for Neurophysics, Physiology & Pathology, UMR 8119, CNRS Université Paris Descartes, Sorbonne Paris Cité, Paris, France
6
A New Neurocognitive Interpretation of Shoulder Position Sense during Reaching: Unexpected Competence in the Measurement of Extracorporeal Space. Biomed Res Int 2016; 2016:9065495. PMID: 28105438; PMCID: PMC5220422; DOI: 10.1155/2016/9065495.
Abstract
Background. The position sense of the shoulder joint is important during reaching. Objective. To examine, through a novel approach using the shoulder proprioceptive rehabilitation tool (SPRT), whether the shoulder has an additional competence: the ability to measure extracorporeal space during reaching. Design. Observational case-control study. Methods. We examined 50 subjects: 25 healthy and 25 with impingement syndrome, with mean ages of 64.52 ± 6.98 and 68.36 ± 6.54 years, respectively. Two parameters were evaluated using the SPRT: the integration of visual information and the proprioceptive afferents of the shoulder (Test 1), and the discriminative proprioceptive capacity of the shoulder with the subject blindfolded (Test 2). These tasks assessed the spatial error (in centimeters) made by the shoulder joint in reaching movements in the sagittal plane. Results. The shoulder had proprioceptive features that allowed it to memorize a reaching position and reproduce it (error of 1.22 cm to 1.55 cm in healthy subjects). This ability was lower in the impingement group, with a statistically significant difference compared to the healthy group (p < 0.05, Mann–Whitney test). Conclusions. The shoulder has specific expertise in the measurement of extracorporeal space during reaching movements that gradually decreases in impingement syndrome.
7
Chancel M, Blanchard C, Guerraz M, Montagnini A, Kavounoudias A. Optimal visuotactile integration for velocity discrimination of self-hand movements. J Neurophysiol 2016; 116:1522-1535. PMID: 27385802; DOI: 10.1152/jn.00883.2015.
Abstract
Illusory hand movements can be elicited by a textured disk or a visual pattern rotating under one's hand, while proprioceptive inputs convey immobility information (Blanchard C, Roll R, Roll JP, Kavounoudias A. PLoS One 8: e62475, 2013). Here, we investigated whether visuotactile integration can optimize velocity discrimination of illusory hand movements in line with Bayesian predictions. We induced illusory movements in 15 volunteers by visual and/or tactile stimulation delivered at six angular velocities. Participants had to compare hand illusion velocities with a 5°/s hand reference movement in an alternative forced-choice paradigm. Results showed that the discrimination threshold decreased in the visuotactile condition compared with the unimodal (visual or tactile) conditions, reflecting better bimodal discrimination. The perceptual strength (gain) of the illusions also increased: the stimulation required to give rise to a 5°/s illusory movement was slower in the visuotactile condition than in each of the two unimodal conditions. The maximum likelihood estimation model satisfactorily predicted the improved discrimination threshold but not the increase in gain. When we added a zero-centered prior, reflecting immobility information, the Bayesian model did predict the gain increase but systematically overestimated it. Interestingly, the predicted gains better fit the visuotactile performance when proprioceptive noise was generated by covibrating antagonist wrist muscles. These findings show that kinesthetic information of visual and tactile origin is optimally integrated to improve velocity discrimination of self-hand movements. However, a Bayesian model alone could not fully describe the illusory phenomenon, pointing to the crucial importance of the omnipresent muscle proprioceptive cues, relative to other sensory cues, for kinesthesia.
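A minimal sketch of the maximum-likelihood threshold prediction mentioned above, with a zero-centered (immobility) prior added to illustrate the gain effect; all sigma values are illustrative placeholders, not the paper's estimates.

import numpy as np

def mle_threshold(sigma_v, sigma_t):
    """Predicted visuotactile discrimination threshold from the unimodal thresholds."""
    return np.sqrt((sigma_v**2 * sigma_t**2) / (sigma_v**2 + sigma_t**2))

def perceived_velocity(stimulus_velocity, sigma_likelihood, sigma_prior):
    """Posterior mean with a zero-centered prior: a more reliable (bimodal)
    likelihood is shrunk less toward immobility, i.e., the perceptual gain rises."""
    w_lik = (1 / sigma_likelihood**2) / (1 / sigma_likelihood**2 + 1 / sigma_prior**2)
    return w_lik * stimulus_velocity

sigma_v, sigma_t, sigma_prior = 3.0, 4.0, 8.0      # hypothetical values (deg/s)
sigma_vt = mle_threshold(sigma_v, sigma_t)
print(f"predicted bimodal threshold: {sigma_vt:.2f} deg/s")
for label, sigma in [("tactile alone", sigma_t), ("visuotactile", sigma_vt)]:
    print(f"{label}: perceived velocity for a 5 deg/s stimulus = "
          f"{perceived_velocity(5.0, sigma, sigma_prior):.2f} deg/s")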
Affiliation(s)
- M Chancel
- LNIA UMR 7260, Aix Marseille Université-Centre National de la Recherche Scientifique (CNRS), Marseille, France; LPNC UMR 5105, Université Savoie Mont Blanc-CNRS, Chambéry, France
- C Blanchard
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
- M Guerraz
- LPNC UMR 5105, Université Savoie Mont Blanc-CNRS, Chambéry, France
- A Montagnini
- INT UMR 7289, Aix Marseille Université-CNRS, Marseille, France
- A Kavounoudias
- LNIA UMR 7260, Aix Marseille Université-Centre National de la Recherche Scientifique (CNRS), Marseille, France
8
Mixed body- and gaze-centered coding of proprioceptive reach targets after effector movement. Neuropsychologia 2016; 87:63-73. PMID: 27157885; DOI: 10.1016/j.neuropsychologia.2016.04.033.
Abstract
Previous studies demonstrated that an effector movement intervening between encoding and reaching to a proprioceptive target determines the underlying reference frame: proprioceptive reach targets are represented in a gaze-independent reference frame if no movement occurs, but are represented with respect to gaze after an effector movement (Mueller and Fiehler, 2014a). The present experiment explores whether an effector movement leads to a switch from a gaze-independent, body-centered reference frame to a gaze-dependent reference frame, or whether a gaze-dependent reference frame is employed in addition to a gaze-independent, body-centered one. Human participants were asked to reach in complete darkness to an unseen finger (proprioceptive target) of their left target hand indicated by a touch. They completed two conditions in which the target hand either remained stationary at the target location (stationary condition) or was actively moved to the target location, received a touch, and was moved back before the reach (moved condition). We dissociated the location of the movement vector relative to the body midline and to the gaze direction. Using correlation and regression analyses, we estimated the contribution of each reference frame based on horizontal reach errors in the stationary and moved conditions. Gaze-centered coding was found only in the moved condition, replicating our previous results. Body-centered coding dominated in the stationary condition, while body- and gaze-centered coding contributed equally strongly in the moved condition. Our results indicate a shift from body-centered to combined body- and gaze-centered coding due to an effector movement before reaching toward proprioceptive targets.
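A minimal sketch of the kind of regression analysis described above, run on synthetic data: horizontal reach errors are regressed on body-centered and gaze-centered predictors to estimate how strongly each reference frame contributes. Variable names and effect sizes are assumptions for illustration only, not the authors' pipeline.

import numpy as np

rng = np.random.default_rng(0)
n = 200
target_re_body = rng.uniform(-10, 10, n)   # target position relative to body midline (deg)
target_re_gaze = rng.uniform(-10, 10, n)   # target position relative to gaze direction (deg)

# Synthetic "moved condition" errors: equal body- and gaze-centered contributions plus noise.
errors = 0.15 * target_re_body + 0.15 * target_re_gaze + rng.normal(0.0, 1.0, n)

X = np.column_stack([target_re_body, target_re_gaze, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, errors, rcond=None)
print(f"body-centered weight: {beta[0]:.2f}, gaze-centered weight: {beta[1]:.2f}")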
9
Alikhanian H, de Carvalho SR, Blohm G. Quantifying effects of stochasticity in reference frame transformations on posterior distributions. Front Comput Neurosci 2015; 9:82. PMID: 26190998; PMCID: PMC4490245; DOI: 10.3389/fncom.2015.00082.
Abstract
Reference frame transformations are usually considered to be deterministic. However, translations, scaling factors, and rotation angles can be stochastic. Indeed, the variability of these entities often originates from noisy estimation processes. The impact of transformation noise on the statistics of the transformed signals is unknown, and quantifying these effects is the goal of this study. We first quantify, analytically and numerically, how stochastic reference frame transformations (SRFT) alter the posterior distribution of the transformed signals. We then propose a new empirical measure to quantify deviations from a given distribution when only limited data are available. We apply this empirical measure to an example in sensory-motor neuroscience, quantifying how different head roll angles change the distribution of reach endpoints away from the normal distribution.
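A minimal numerical sketch in the spirit of the study: 2-D reach endpoints are rotated by a head-roll angle that is itself noisy, which broadens the transformed distribution and can pull it away from a Gaussian. The noise levels and the crude deviation index (excess kurtosis) are illustrative choices, not the paper's measure.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000
endpoints = rng.normal(loc=[10.0, 0.0], scale=[1.0, 1.0], size=(n, 2))  # cm, pre-transformation

def rotate(points, angles):
    """Rotate each 2-D point by its own angle (radians)."""
    c, s = np.cos(angles), np.sin(angles)
    x, y = points[:, 0], points[:, 1]
    return np.column_stack([c * x - s * y, s * x + c * y])

for roll_sd_deg in (0.0, 5.0, 15.0):                 # hypothetical head-roll estimation noise
    angles = np.deg2rad(rng.normal(0.0, roll_sd_deg, n))
    y = rotate(endpoints, angles)[:, 1]
    y = y - y.mean()
    excess_kurtosis = np.mean(y**4) / np.mean(y**2) ** 2 - 3.0
    print(f"roll noise {roll_sd_deg:4.1f} deg: sd = {y.std():.2f} cm, "
          f"excess kurtosis = {excess_kurtosis:.2f}")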
Affiliation(s)
- Hooman Alikhanian
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada; Canadian Action and Perception Network, Kingston, ON, Canada
- Schubert R de Carvalho
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada; Canadian Action and Perception Network, Kingston, ON, Canada
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada; Canadian Action and Perception Network, Kingston, ON, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience, Kingston, ON, Canada
10
Blouin J, Saradjian AH, Lebar N, Guillaume A, Mouchnino L. Opposed optimal strategies of weighting somatosensory inputs for planning reaching movements toward visual and proprioceptive targets. J Neurophysiol 2014; 112:2290-2301. DOI: 10.1152/jn.00857.2013.
Abstract
Behavioral studies have suggested that the brain uses a visual estimate of the hand to plan reaching movements toward visual targets and somatosensory inputs in the case of somatosensory targets. However, neural correlates for distinct coding of the hand according to the sensory modality of the target have not yet been identified. Here we tested the twofold hypothesis that the somatosensory input from the reaching hand is facilitated and inhibited, respectively, when planning movements toward somatosensory (unseen fingers) or visual targets. The weight of the somatosensory inputs was assessed by measuring the amplitude of the somatosensory evoked potential (SEP) resulting from vibration of the reaching finger during movement planning. The target sensory modality had no significant effect on SEP amplitude. However, Spearman's analyses showed significant correlations between the SEPs and reaching errors. When planning movements toward proprioceptive targets without visual feedback of the reaching hand, participants showing the greater SEPs were those who produced the smaller directional errors. Inversely, participants showing the smaller SEPs when planning movements toward visual targets with visual feedback of the reaching hand were those who produced the smaller directional errors. No significant correlation was found between the SEPs and radial or amplitude errors. Our results indicate that the sensory strategy for planning movements is highly flexible among individuals and also for a given sensory context. Most importantly, they provide neural bases for the suggestion that optimization of movement planning requires the target and the reaching hand to both be represented in the same sensory modality.
Affiliation(s)
- Jean Blouin
- Laboratory of Cognitive Neuroscience, CNRS, Aix-Marseille University, FR 3C 3512, Marseille, France
- Anahid H. Saradjian
- Laboratory of Cognitive Neuroscience, CNRS, Aix-Marseille University, FR 3C 3512, Marseille, France
- Nicolas Lebar
- Laboratory of Cognitive Neuroscience, CNRS, Aix-Marseille University, FR 3C 3512, Marseille, France
- Alain Guillaume
- Laboratory of Cognitive Neuroscience, CNRS, Aix-Marseille University, FR 3C 3512, Marseille, France
- Laurence Mouchnino
- Laboratory of Cognitive Neuroscience, CNRS, Aix-Marseille University, FR 3C 3512, Marseille, France
11
Tagliabue M, McIntyre J. A modular theory of multisensory integration for motor control. Front Comput Neurosci 2014; 8:1. PMID: 24550816; PMCID: PMC3908447; DOI: 10.3389/fncom.2014.00001.
Abstract
To control targeted movements, such as reaching to grasp an object or hammering a nail, the brain can use diverse sources of sensory information, such as vision and proprioception. Although a variety of studies have shown that sensory signals are optimally combined according to principles of maximum likelihood, increasing evidence indicates that the CNS does not compute a single, optimal estimate of the target's position to be compared with a single, optimal estimate of the hand's. Rather, it employs a more modular approach in which the overall behavior is built by computing multiple concurrent comparisons carried out simultaneously in a number of different reference frames. The results of these individual comparisons are then optimally combined in order to drive the hand. In this article we examine, at a computational level, two formulations of concurrent models for sensory integration and compare them to the more conventional model of converging multi-sensory signals. Through a review of published studies, both our own and those performed by others, we produce evidence favoring the concurrent formulations. We then examine in detail the effects of additive signal noise as information flows through the sensorimotor system. By taking into account the noise added by sensorimotor transformations, one can explain why the CNS may shift its reliance on one sensory modality toward a greater reliance on another, and investigate under what conditions such sensory transformations occur. Careful consideration of how transformed signals co-vary with the original source also provides insight into how the CNS chooses one sensory modality over another. These concepts can be used to explain why the CNS might, for instance, create a visual representation of a task that is otherwise limited to the kinesthetic domain (e.g., pointing with one hand to a finger on the other) and why the CNS might choose to recode sensory information in an external reference frame.
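A toy sketch of the concurrent-comparison idea described above, under the simplifying assumption that the two comparisons are independent (the full model also accounts for how transformed signals co-vary with their source); all variances and the function name are illustrative assumptions.

# Toy version (assumed notation): the target/hand difference is computed once in
# visual space and once in proprioceptive space; each cross-modal transformation
# adds noise; the two comparisons are combined by inverse-variance weighting.

def concurrent_weights(var_vis, var_prop, var_t_prop_to_vis, var_t_vis_to_prop):
    """Weight of the comparison done in visual space and variance of the combined estimate."""
    var_cmp_vis = var_vis + (var_prop + var_t_prop_to_vis)    # hand recoded into visual space
    var_cmp_prop = var_prop + (var_vis + var_t_vis_to_prop)   # target recoded into proprioceptive space
    w_vis = (1 / var_cmp_vis) / (1 / var_cmp_vis + 1 / var_cmp_prop)
    var_combined = 1 / (1 / var_cmp_vis + 1 / var_cmp_prop)
    return w_vis, var_combined

# If recoding proprioception into visual space is the cheaper transformation, the
# visual comparison dominates, i.e., an otherwise kinesthetic task is encoded "visually".
w_vis, var_combined = concurrent_weights(1.0, 1.0, var_t_prop_to_vis=0.5, var_t_vis_to_prop=2.0)
print(f"visual-comparison weight: {w_vis:.2f}, combined variance: {var_combined:.2f}")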
Affiliation(s)
- Michele Tagliabue
- Centre d'Étude de la Sensorimotricité (CNRS UMR 8194), Institut des Neurosciences et de la Cognition, Université Paris Descartes, Sorbonne Paris Cité, Paris, France
- Joseph McIntyre
- Centre d'Étude de la Sensorimotricité (CNRS UMR 8194), Institut des Neurosciences et de la Cognition, Université Paris Descartes, Sorbonne Paris Cité, Paris, France