151. De Paepe AL, Crombez G, Legrain V. What's Coming Near? The Influence of Dynamical Visual Stimuli on Nociceptive Processing. PLoS One 2016; 11:e0155864. PMID: 27224421; PMCID: PMC4880339; DOI: 10.1371/journal.pone.0155864. Received 12/10/2015; accepted 05/05/2016. Open access.
Abstract
Objects approaching us may pose a threat and signal the need to initiate defensive behavior. Detecting these objects early is crucial for avoiding the object or preparing for contact as efficiently as possible. This requires a coherent representation of the body and of the space closely surrounding it, i.e., the peripersonal space. This study, with 27 healthy volunteers, investigated how the processing of nociceptive stimuli applied to the hand is influenced by dynamic visual stimuli approaching or receding from the hand. On each trial a visual stimulus approached or receded from the participant's left or right hand. At different delays from the onset of the visual stimulus, a nociceptive stimulus was applied to either the same or the opposite hand, so that it was presented while the visual stimulus was perceived at varying distances from the hand. Participants were asked to report as quickly as possible on which side they perceived the nociceptive stimulus. We found that reaction times were fastest when the visual stimulus appeared near the stimulated hand. Moreover, examining the influence of the visual stimuli along the continuous spatial range (from near to far) showed that approaching lights had a stronger spatially dependent effect on nociceptive processing than receding lights. These results suggest that the coding of nociceptive information in a peripersonal frame of reference may constitute a safety margin around the body, designed to protect it from potential physical threat.
Affiliation(s)
- Annick L. De Paepe
- Department of Experimental-Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Geert Crombez
- Department of Experimental-Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Centre for Pain Research, University of Bath, Bath, United Kingdom
- Valéry Legrain
- Institute of Neuroscience, Université catholique de Louvain, Brussels Woluwe, Belgium
152. Zimmermann E, Derichs C, Fink GR. The functional role of time compression. Sci Rep 2016; 6:25843. PMID: 27180810; PMCID: PMC4867590; DOI: 10.1038/srep25843. Received 11/04/2015; accepted 04/18/2016. Open access.
Abstract
Multisensory integration provides continuous and stable perception from separate sensory inputs. Here, we investigated the functional role of temporal binding between the visual and tactile senses. To this end, we used the compression paradigm, in which shifts in perceived time are induced when probe stimuli are degraded, e.g., by a visual mask (Zimmermann et al., 2014). Subjects had to estimate the duration of 500-ms temporal intervals bounded by a tactile stimulus and a masked visual stimulus. We observed a strong (~100 ms) underestimation of the temporal interval when the stimuli from both senses appeared to occur at the same position in space. In contrast, when the positions of the visual and tactile stimuli were spatially separate, interval perception was almost veridical. Temporal compression furthermore depended on the correspondence of probe features and was absent when the orientations of the tactile and visual probes were incongruent. An additional experiment revealed that temporal compression also occurs when objects are presented outside the attentional focus. In conclusion, these data support a role for spatiotemporal binding in temporal compression, which is at least in part selective for object features.
Affiliation(s)
- Eckart Zimmermann
- Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Centre Jülich, Germany
- Christina Derichs
- Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Centre Jülich, Germany
- Gereon R. Fink
- Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Centre Jülich, Germany
- Department of Neurology, University Hospital Cologne, Germany
153. Cheng Z, Gu Y. Distributed Representation of Curvilinear Self-Motion in the Macaque Parietal Cortex. Cell Rep 2016; 15:1013-1023. PMID: 27117412; DOI: 10.1016/j.celrep.2016.03.089. Received 10/07/2015; revised 12/10/2015; accepted 03/24/2016. Open access.
Abstract
Information about translations and rotations of the body is critical for complex self-motion perception during spatial navigation. However, little is known about the nature and function of their convergence in the cortex. We measured neural activity in multiple areas in the macaque parietal cortex in response to three different types of body motion applied through a motion platform: translation, rotation, and combined stimuli, i.e., curvilinear motion. We found a continuous representation of motion types in each area. In contrast to single-modality cells preferring either translation-only or rotation-only stimuli, convergent cells tend to be optimally tuned to curvilinear motion. A weighted summation model captured the data well, suggesting that translation and rotation signals are integrated subadditively in the cortex. Interestingly, variation in the activity of convergent cells parallels behavioral outputs reported in human psychophysical experiments. We conclude that representation of curvilinear self-motion perception is widely distributed in the primate sensory cortex.
Affiliation(s)
- Zhixian Cheng
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yong Gu
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China
154. Joint representation of translational and rotational components of optic flow in parietal cortex. Proc Natl Acad Sci U S A 2016; 113:5077-82. PMID: 27095846; DOI: 10.1073/pnas.1604818113. Open access.
Abstract
Terrestrial navigation naturally involves translations within the horizontal plane and eye rotations about a vertical (yaw) axis to track and fixate targets of interest. Neurons in the macaque ventral intraparietal (VIP) area are known to represent heading (the direction of self-translation) from optic flow in a manner that is tolerant to rotational visual cues generated during pursuit eye movements. Previous studies have also reported that eye rotations modulate the response gain of heading tuning curves in VIP neurons. We tested the hypothesis that VIP neurons simultaneously represent both heading and horizontal (yaw) eye rotation velocity by measuring heading tuning curves for a range of rotational velocities of either real or simulated eye movements. Three findings support the hypothesis of a joint representation. First, we show that rotation velocity selectivity based on gain modulations of visual heading tuning is similar to that measured during pure rotations. Second, gain modulations of heading tuning are similar for self-generated eye rotations and visually simulated rotations, indicating that the representation of rotation velocity in VIP is multimodal, driven by both visual and extraretinal signals. Third, we show that roughly one-half of VIP neurons jointly represent heading and rotation velocity in a multiplicatively separable manner. These results provide the first evidence, to our knowledge, for a joint representation of translation direction and rotation velocity in parietal cortex and show that rotation velocity can be represented based on visual cues, even in the absence of efference copy signals.
155. Cardini F, Longo MR. Congruency of body-related information induces somatosensory reorganization. Neuropsychologia 2016; 84:213-21. DOI: 10.1016/j.neuropsychologia.2016.02.013. Received 09/21/2015; revised 01/16/2016; accepted 02/18/2016.
156. Grasso PA, Benassi M, Làdavas E, Bertini C. Audio-visual multisensory training enhances visual processing of motion stimuli in healthy participants: an electrophysiological study. Eur J Neurosci 2016; 44:2748-2758. PMID: 26921844; DOI: 10.1111/ejn.13221. Received 09/08/2015; revised 01/29/2016; accepted 02/19/2016.
Abstract
Evidence from electrophysiological and imaging studies suggests that audio-visual (AV) stimuli presented in spatial coincidence enhance activity in the subcortical colliculo-dorsal extrastriate pathway. To test whether repetitive AV stimulation might specifically activate this neural circuit underlying multisensory integrative processes, electroencephalographic data were recorded before and after 2 h of AV training, during the execution of two lateralized visual tasks: a motion discrimination task, relying on activity in the colliculo-dorsal MT pathway, and an orientation discrimination task, relying on activity in the striate and early ventral extrastriate cortices. During training, participants were asked to detect and perform a saccade towards AV stimuli that were disproportionally allocated to one hemifield (the trained hemifield). Half of the participants underwent a training in which AV stimuli were presented in spatial coincidence, while the remaining half underwent a training in which AV stimuli were presented in spatial disparity (32°). Participants who received AV training with stimuli in spatial coincidence had a post-training enhancement of the anterior N1 component in the motion discrimination task, but only in response to stimuli presented in the trained hemifield. However, no effect was found in the orientation discrimination task. In contrast, participants who received AV training with stimuli in spatial disparity showed no effects on either task. The observed N1 enhancement might reflect enhanced discrimination for motion stimuli, probably due to increased activity in the colliculo-dorsal MT pathway induced by multisensory training.
Affiliation(s)
- Paolo A Grasso
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, Bologna 40127, Italy; CsrNC, Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Viale Europa 980, Cesena 47521, Italy
- Mariagrazia Benassi
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, Bologna 40127, Italy
- Elisabetta Làdavas
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, Bologna 40127, Italy; CsrNC, Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Viale Europa 980, Cesena 47521, Italy
- Caterina Bertini
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, Bologna 40127, Italy; CsrNC, Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Viale Europa 980, Cesena 47521, Italy
157. Watching what's coming near increases tactile sensitivity: An experimental investigation. Behav Brain Res 2016; 297:307-14. DOI: 10.1016/j.bbr.2015.10.028. Received 08/13/2015; revised 10/05/2015; accepted 10/09/2015.
158. Greenlee M, Frank S, Kaliuzhna M, Blanke O, Bremmer F, Churan J, Cuturi LF, MacNeilage P, Smith A. Multisensory Integration in Self Motion Perception. Multisens Res 2016. DOI: 10.1163/22134808-00002527.
Abstract
Self motion perception involves the integration of visual, vestibular, somatosensory and motor signals. This article reviews the findings from single unit electrophysiology, functional and structural magnetic resonance imaging and psychophysics to present an update on how the human and non-human primate brain integrates multisensory information to estimate one’s position and motion in space. The results indicate that there is a network of regions in the non-human primate and human brain that processes self motion cues from the different sense modalities.
Affiliation(s)
- Mark W. Greenlee
- Institute of Experimental Psychology, University of Regensburg, Regensburg, Germany
- Sebastian M. Frank
- Institute of Experimental Psychology, University of Regensburg, Regensburg, Germany
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Mariia Kaliuzhna
- Center for Neuroprosthetics, Laboratory of Cognitive Neuroscience, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Olaf Blanke
- Center for Neuroprosthetics, Laboratory of Cognitive Neuroscience, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Frank Bremmer
- Department of Neurophysics, University of Marburg, Marburg, Germany
- Jan Churan
- Department of Neurophysics, University of Marburg, Marburg, Germany
- Luigi F. Cuturi
- German Center for Vertigo, University Hospital of Munich, LMU, Munich, Germany
- Paul R. MacNeilage
- German Center for Vertigo, University Hospital of Munich, LMU, Munich, Germany
- Andrew T. Smith
- Department of Psychology, Royal Holloway, University of London, UK
159. Miller LE, Longo MR, Saygin AP. Mental body representations retain homuncular shape distortions: Evidence from Weber's illusion. Conscious Cogn 2015; 40:17-25. PMID: 26741857; DOI: 10.1016/j.concog.2015.12.008. Received 10/07/2015; revised 12/16/2015; accepted 12/17/2015.
Abstract
Mental body representations underlying tactile perception do not accurately reflect the body's true morphology. For example, perceived tactile distance depends on both the body part being touched and the stimulus orientation, a phenomenon called Weber's illusion. These findings suggest the presence of size and shape distortions, respectively. However, whereas each morphological feature is typically measured in isolation, a complete morphological characterization requires measuring size and shape concurrently. We did so in three experiments, manipulating both the stimulated body part (hand or forearm) and stimulus orientation while participants made tactile distance judgments. We found that the forearm was significantly more distorted than the hand lengthwise but not widthwise. Effects of stimulus orientation are thought to reflect receptive field anisotropies in primary somatosensory cortex. The results of the present study therefore suggest that mental body representations retain homuncular shape distortions that characterize early stages of somatosensory processing.
Affiliation(s)
- Luke E Miller
- Department of Cognitive Science, University of California, San Diego, USA; Kavli Institute for Brain and Mind, University of California, San Diego, USA
- Matthew R Longo
- Department of Psychological Sciences, Birkbeck, University of London, United Kingdom
- Ayse P Saygin
- Department of Cognitive Science, University of California, San Diego, USA; Kavli Institute for Brain and Mind, University of California, San Diego, USA
160. Body part-centered and full body-centered peripersonal space representations. Sci Rep 2015; 5:18603. PMID: 26690698; PMCID: PMC4686995; DOI: 10.1038/srep18603. Received 07/07/2015; accepted 11/09/2015. Open access.
Abstract
Dedicated neural systems represent the space surrounding the body, termed peripersonal space (PPS), by integrating visual or auditory stimuli occurring near the body with somatosensory information. As a behavioral proxy for PPS, we measured participants' reaction times to tactile stimulation while task-irrelevant auditory or visual stimuli were presented at different distances from their body. In seven experiments we delineated the critical distance at which auditory or visual stimuli boosted tactile processing on the hand, face, and trunk as a proxy for the extent of PPS. Three main findings were obtained. First, the size of PPS varied with the stimulated body part, being smallest for the hand, larger for the face, and largest for the trunk. Second, while approaching stimuli always modulated tactile processing in a space-dependent manner, receding stimuli did so only for the hand. Finally, the extent of PPS around the hand and the face varied with their relative positioning and stimulus congruency, whereas trunk PPS was constant. These results suggest that at least three body-part-specific PPS representations exist, differing in extent and directional tuning. These distinct PPS representations are not fully independent of each other, however, but are referenced to the common reference frame of the trunk.
161. Dissociable routes for personal and interpersonal visual enhancement of touch. Cortex 2015; 73:289-97. DOI: 10.1016/j.cortex.2015.09.008. Received 02/18/2015; revised 05/26/2015; accepted 09/14/2015.
162. True and Perceived Synchrony are Preferentially Associated With Particular Sensory Pairings. Sci Rep 2015; 5:17467. PMID: 26621493; PMCID: PMC4664927; DOI: 10.1038/srep17467. Received 04/08/2015; accepted 10/29/2015. Open access.
Abstract
Perception and behavior are fundamentally shaped by the integration of different sensory modalities into unique multisensory representations, a process governed by spatio-temporal correspondence. Prior work has characterized temporal perception using the point of subjective simultaneity (PSS), the point in time at which subjects are most likely to judge multisensory stimuli to be simultaneous, and the temporal binding window (TBW) over which participants are likely to do so. Here we examine the relationship between the PSS and the TBW within and between individuals, and within and between three sensory combinations: audiovisual, audiotactile, and visuotactile. We demonstrate that TBWs correlate within individuals and across multisensory pairings, but PSSs do not. Further, we reveal that while the audiotactile and audiovisual pairings show tightly related TBWs, they also exhibit a differential relationship with respect to true and perceived multisensory synchrony. Thus, audiotactile and audiovisual temporal processing share mechanistic features yet are respectively functionally linked to objective and subjective synchrony.
163. Di Bono MG, Begliomini C, Castiello U, Zorzi M. Probing the reaching-grasping network in humans through multivoxel pattern decoding. Brain Behav 2015; 5:e00412. PMID: 26664793; PMCID: PMC4666323; DOI: 10.1002/brb3.412. Received 05/04/2015; revised 07/27/2015; accepted 09/13/2015. Open access.
Abstract
INTRODUCTION The quest for a putative human homolog of the reaching-grasping network identified in monkeys has been the focus of many neuropsychological and neuroimaging studies in recent years. These studies have shown that the network underlying reaching-only and reach-to-grasp movements includes the superior parieto-occipital cortex (SPOC), the anterior part of the human intraparietal sulcus (hAIP), the ventral and dorsal portions of the premotor cortex, and the primary motor cortex (M1). Recent evidence for a wider frontoparietal network coding for different aspects of reaching-only and reach-to-grasp actions calls for a more fine-grained assessment of the reaching-grasping network in humans by exploiting pattern decoding methods (multivoxel pattern analysis, MVPA). METHODS Here, we used MVPA on functional magnetic resonance imaging (fMRI) data to assess whether regions of the frontoparietal network discriminate between reaching-only and reach-to-grasp actions, natural and constrained grasping, different grasp types, and object sizes. Participants were required to perform either reaching-only movements or two reach-to-grasp types (precision or whole-hand grasp) upon spherical objects of different sizes. RESULTS Multivoxel pattern analysis highlighted that, independently of object size, all the selected regions of both hemispheres contribute to coding grasp type, with the exception of SPOC and the right hAIP. Consistent with recent neurophysiological findings in monkeys, there was no evidence for a clear-cut distinction between a dorsomedial and a dorsolateral pathway specialized for reaching-only and reach-to-grasp actions, respectively. Nevertheless, the comparison of decoding accuracy across brain areas highlighted their different contributions to reaching-only and grasping actions.
CONCLUSIONS Altogether, our findings enrich the current knowledge regarding the functional role of key brain areas involved in the cortical control of reaching-only and reach-to-grasp actions in humans, by revealing novel fine-grained distinctions among action types within a wide frontoparietal network.
Affiliation(s)
- Chiara Begliomini
- Department of General Psychology, University of Padova, Padova, Italy; Cognitive Neuroscience Center, University of Padova, Padova, Italy
- Umberto Castiello
- Department of General Psychology, University of Padova, Padova, Italy; Cognitive Neuroscience Center, University of Padova, Padova, Italy; Centro Interdisciplinare Beniamino Segre, Accademia dei Lincei, Roma, Italy
- Marco Zorzi
- Department of General Psychology, University of Padova, Padova, Italy; Cognitive Neuroscience Center, University of Padova, Padova, Italy; IRCCS San Camillo Hospital, Venice-Lido, Italy
164. An invisible touch: Body-related multisensory conflicts modulate visual consciousness. Neuropsychologia 2015; 88:131-139. PMID: 26519553; DOI: 10.1016/j.neuropsychologia.2015.10.034. Received 01/08/2015; revised 09/15/2015; accepted 10/26/2015.
Abstract
The majority of scientific studies on consciousness have focused on vision, exploring the cognitive and neural mechanisms of conscious access to visual stimuli. In parallel, studies on bodily consciousness have revealed that bodily (i.e., tactile, proprioceptive, visceral, vestibular) signals are the basis for the sense of self. However, the role of bodily signals in the formation of visual consciousness is not well understood. Here we investigated how body-related visuo-tactile stimulation modulates conscious access to visual stimuli. We used a robotic platform to apply controlled tactile stimulation to the participants' back while they viewed a dot moving either in synchrony or asynchrony with the touch on their back. Critically, the dot was rendered invisible through continuous flash suppression. Manipulating the visual context by presenting the dot moving on either a body form or a non-bodily object, we show that: (i) conflict induced by synchronous visuo-tactile stimulation in a body context is associated with delayed conscious access compared to asynchronous visuo-tactile stimulation; (ii) this effect occurs only in the context of a visual body form; and (iii) it is not due to detection or response biases. The results indicate that body-related visuo-tactile conflicts impact visual consciousness by facilitating access of non-conflicting visual information to awareness, and that these conflicts are sensitive to the visual context in which they are presented, highlighting the interplay between bodily signals and visual experience.
165. Hervais-Adelman A, Legrand LB, Zhan M, Tamietto M, de Gelder B, Pegna AJ. Looming sensitive cortical regions without V1 input: evidence from a patient with bilateral cortical blindness. Front Integr Neurosci 2015; 9:51. PMID: 26557059; PMCID: PMC4614319; DOI: 10.3389/fnint.2015.00051. Received 02/06/2015; accepted 09/25/2015. Open access.
Abstract
Fast and automatic behavioral responses are required to avoid collision with an approaching stimulus. Accordingly, looming stimuli have been found to be highly salient and efficient attractors of attention due to the implication of potential collision and potential threat. Here, we address the question of whether looming motion is processed in the absence of any functional primary visual cortex and consequently without awareness. For this, we investigated a patient (TN) suffering from complete, bilateral damage to his primary visual cortex. Using an fMRI paradigm, we measured TN's brain activation during the presentation of looming, receding, rotating, and static point lights, of which he was unaware. When contrasted with other conditions, looming was found to produce bilateral activation of the middle temporal areas, as well as the superior temporal sulcus and inferior parietal lobe (IPL). The latter are generally thought to be involved in multisensory processing of motion in extrapersonal space, as well as attentional capture and saliency. No activity was found close to the lesioned V1 area. This demonstrates that looming motion is processed in the absence of awareness through direct subcortical projections to areas involved in multisensory processing of motion and saliency that bypass V1.
Affiliation(s)
- Alexis Hervais-Adelman
- Laboratory of Experimental Neuropsychology, Neurology Clinic, Department of Clinical Neuroscience, University of Geneva, Geneva, Switzerland; Brain and Language Lab, Department of Clinical Neuroscience, University of Geneva, Geneva, Switzerland
- Lore B Legrand
- Laboratory of Experimental Neuropsychology, Neurology Clinic, Department of Clinical Neuroscience, University of Geneva, Geneva, Switzerland; Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Minye Zhan
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Marco Tamietto
- Department of Psychology, University of Torino, Torino, Italy; Cognitive and Affective Neuroscience Laboratory, Center of Research on Psychology in Somatic Diseases, Tilburg University, Tilburg, Netherlands; Department of Experimental Psychology, University of Oxford, Oxford, UK
- Beatrice de Gelder
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Alan J Pegna
- Laboratory of Experimental Neuropsychology, Neurology Clinic, Department of Clinical Neuroscience, University of Geneva, Geneva, Switzerland; Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland; School of Psychology, University of Queensland, Brisbane, QLD, Australia
166. Wardak C, Guipponi O, Pinède S, Ben Hamed S. Tactile representation of the head and shoulders assessed by fMRI in the nonhuman primate. J Neurophysiol 2015; 115:80-91. PMID: 26467517; DOI: 10.1152/jn.00633.2015. Received 06/24/2015; accepted 10/13/2015. Open access.
Abstract
In nonhuman primates, tactile representation at the cortical level has mostly been studied using single-cell recordings targeted to specific cortical areas. In this study, we explored the representation of tactile information delivered to the face or the shoulders at the whole brain level, using functional magnetic resonance imaging (fMRI) in the nonhuman primate. We used air puffs delivered to the center of the face, the periphery of the face, or the shoulders. These stimulations elicited activations in numerous cortical areas, encompassing the primary and secondary somatosensory areas, prefrontal and premotor areas, and parietal, temporal, and cingulate areas as well as low-level visual cortex. Importantly, a specific parieto-temporo-prefrontal network responded to the three stimulations but presented a marked preference for air puffs directed to the center of the face. This network corresponds to areas that are also involved in near-space representation, as well as in the multisensory integration of information at the interface between this near space and the skin of the face, and is probably involved in the construction of a peripersonal space representation around the head.
Affiliation(s)
- Claire Wardak
- Centre de Neuroscience Cognitive, UMR 5229, Centre National de la Recherche Scientifique, Université Claude Bernard Lyon 1, Bron, France
- Olivier Guipponi
- Centre de Neuroscience Cognitive, UMR 5229, Centre National de la Recherche Scientifique, Université Claude Bernard Lyon 1, Bron, France
- Serge Pinède
- Centre de Neuroscience Cognitive, UMR 5229, Centre National de la Recherche Scientifique, Université Claude Bernard Lyon 1, Bron, France
- Suliann Ben Hamed
- Centre de Neuroscience Cognitive, UMR 5229, Centre National de la Recherche Scientifique, Université Claude Bernard Lyon 1, Bron, France
167. Blanke O, Slater M, Serino A. Behavioral, Neural, and Computational Principles of Bodily Self-Consciousness. Neuron 2015; 88:145-66. PMID: 26447578; DOI: 10.1016/j.neuron.2015.09.029.
Affiliation(s)
- Olaf Blanke
- Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics and Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), 9 Chemin des Mines, 1202 Geneva, Switzerland; Department of Neurology, University of Geneva, 24 rue Micheli-du-Crest, 1211 Geneva, Switzerland.
- Mel Slater
- ICREA-University of Barcelona, Campus de Mundet, 08035 Barcelona, Spain; Department of Computer Science, University College London, Malet Place Engineering Building, Gower Street, London, WC1E 6BT, UK
- Andrea Serino
- Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics and Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), 9 Chemin des Mines, 1202 Geneva, Switzerland
168
Colon E, Legrain V, Huang G, Mouraux A. Frequency tagging of steady-state evoked potentials to explore the crossmodal links in spatial attention between vision and touch. Psychophysiology 2015; 52:1498-510. [PMID: 26329531] [DOI: 10.1111/psyp.12511]
Abstract
The sustained periodic modulation of a stimulus induces an entrainment of cortical neurons responding to the stimulus, appearing as a steady-state evoked potential (SS-EP) in the EEG frequency spectrum. Here, we used frequency tagging of SS-EPs to study the crossmodal links in spatial attention between touch and vision. We hypothesized that a visual stimulus approaching the left or right hand orients spatial attention toward the approached hand, and thereby enhances the processing of vibrotactile input originating from that hand. Twenty-five subjects took part in the experiment: 16-s trains of vibrotactile stimuli (4.2 and 7.2 Hz) were applied simultaneously to the left and right hand, concomitantly with a punctate visual stimulus blinking at 9.8 Hz. The visual stimulus approached the left or right hand. The hands were either uncrossed (left and right hands to the left and right of the participant) or crossed (left and right hands to the right and left of the participant). The vibrotactile stimuli elicited two distinct SS-EPs with scalp topographies compatible with activity in the contralateral primary somatosensory cortex. The visual stimulus elicited a third SS-EP with a topography compatible with activity in visual areas. When the visual stimulus was over one of the hands, the amplitude of the vibrotactile SS-EP elicited by stimulation of that hand was enhanced, regardless of whether the hands were uncrossed or crossed. This demonstrates a crossmodal effect of spatial attention between vision and touch, integrating proprioceptive and/or visual information to map the position of the limbs in external space.
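The frequency-tagging logic above — each stimulation stream "tagged" at its own frequency so its cortical response can be read out as a spectral peak — can be sketched in a few lines of Python. This is an illustrative analysis under assumed parameters (500 Hz sampling, the 4.2, 7.2 and 9.8 Hz tags and 16-s epochs from the paradigm), not the authors' pipeline:

```python
import numpy as np

FS = 500.0          # assumed EEG sampling rate (Hz)
DUR = 16.0          # epoch length from the paradigm (s)
TAGS = {"left_hand": 4.2, "right_hand": 7.2, "visual": 9.8}  # tagging frequencies

def ssep_amplitudes(eeg_epoch, fs=FS):
    """Return the single-sided spectral amplitude at each tagged frequency.

    The nearest FFT bin is taken; with a 16-s epoch (0.0625 Hz resolution)
    the tags sit slightly off-bin, so a small, equal leakage loss applies
    to all three and relative amplitudes are preserved.
    """
    n = eeg_epoch.size
    spectrum = np.abs(np.fft.rfft(eeg_epoch)) * 2.0 / n   # amplitude scaling
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return {name: spectrum[np.argmin(np.abs(freqs - f))] for name, f in TAGS.items()}

# Synthetic demo: one channel containing all three tags plus noise.
t = np.arange(0, DUR, 1.0 / FS)
rng = np.random.default_rng(0)
eeg = (2.0 * np.sin(2 * np.pi * 4.2 * t)
       + 1.0 * np.sin(2 * np.pi * 7.2 * t)
       + 0.5 * np.sin(2 * np.pi * 9.8 * t)
       + 0.2 * rng.standard_normal(t.size))

amps = ssep_amplitudes(eeg)
# The recovered amplitudes keep the injected 2 : 1 : 0.5 ratio across tags.
```

In the study, the attention effect would correspond to comparing the tactile-tag amplitudes between trials where the visual stimulus ends over the stimulated versus the opposite hand.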
Affiliation(s)
- Elisabeth Colon
- Institute of Neuroscience, Université catholique de Louvain, Brussels, Belgium
- Valéry Legrain
- Institute of Neuroscience, Université catholique de Louvain, Brussels, Belgium
- Gan Huang
- Institute of Neuroscience, Université catholique de Louvain, Brussels, Belgium
- André Mouraux
- Institute of Neuroscience, Université catholique de Louvain, Brussels, Belgium
169
Guipponi O, Cléry J, Odouard S, Wardak C, Ben Hamed S. Whole brain mapping of visual and tactile convergence in the macaque monkey. Neuroimage 2015; 117:93-102. [DOI: 10.1016/j.neuroimage.2015.05.022]
170
Dundon NM, Bertini C, Làdavas E, Sabel BA, Gall C. Visual rehabilitation: visual scanning, multisensory stimulation and vision restoration trainings. Front Behav Neurosci 2015; 9:192. [PMID: 26283935] [PMCID: PMC4515568] [DOI: 10.3389/fnbeh.2015.00192]
Abstract
Neuropsychological training methods of visual rehabilitation for homonymous vision loss caused by postchiasmatic damage fall into two fundamental paradigms: “compensation” and “restoration”. Existing methods can be classified into three groups: Visual Scanning Training (VST), Audio-Visual Scanning Training (AViST) and Vision Restoration Training (VRT). VST and AViST aim at compensating vision loss by training eye scanning movements, whereas VRT aims at improving lost vision by activating residual visual functions by training light detection and discrimination of visual stimuli. This review discusses the rationale underlying these paradigms and summarizes the available evidence with respect to treatment efficacy. The issues raised in our review should help guide clinical care and stimulate new ideas for future research uncovering the underlying neural correlates of the different treatment paradigms. We propose that both local “within-system” interactions (i.e., relying on plasticity within peri-lesional spared tissue) and changes in more global “between-system” networks (i.e., recruiting alternative visual pathways) contribute to both vision restoration and compensatory rehabilitation, which ultimately have implications for the rehabilitation of cognitive functions.
Affiliation(s)
- Neil M Dundon
- Department of Psychology, University of Bologna, Bologna, Italy; Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Cesena, Italy
- Caterina Bertini
- Department of Psychology, University of Bologna, Bologna, Italy; Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Cesena, Italy
- Elisabetta Làdavas
- Department of Psychology, University of Bologna, Bologna, Italy; Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Cesena, Italy
- Bernhard A Sabel
- Medical Faculty, Institute of Medical Psychology, Otto-von-Guericke University of Magdeburg, Magdeburg, Germany
- Carolin Gall
- Medical Faculty, Institute of Medical Psychology, Otto-von-Guericke University of Magdeburg, Magdeburg, Germany
171
Kaas JH, Stepniewska I. Evolution of posterior parietal cortex and parietal-frontal networks for specific actions in primates. J Comp Neurol 2015; 524:595-608. [PMID: 26101180] [DOI: 10.1002/cne.23838]
Abstract
Posterior parietal cortex (PPC) is an extensive region of the human brain that develops relatively late and is proportionally large compared with that of monkeys and prosimian primates. Our ongoing comparative studies have led to several conclusions about the evolution of this posterior parietal region. In early placental mammals, PPC likely was a small multisensory region much like PPC of extant rodents and tree shrews. In early primates, PPC likely resembled that of prosimian galagos, in which caudal PPC (PPCc) is visual and rostral PPC (PPCr) has eight or more multisensory domains where electrical stimulation evokes different complex motor behaviors, including reaching, hand-to-mouth, looking, protecting the face or body, and grasping. These evoked behaviors depend on connections with functionally matched domains in premotor cortex (PMC) and motor cortex (M1). Domains in each region compete with each other, and a serial arrangement of domains allows different factors to influence motor outcomes successively. Similar arrangements of domains have been retained in New and Old World monkeys, and humans appear to have at least some of these domains. The great expansion and prolonged development of PPC in humans suggest the addition of functionally distinct territories. We propose that, across primates, PMC and M1 domains are second and third levels in a number of parallel, interacting networks for mediating and selecting one type of action over others.
Affiliation(s)
- Jon H Kaas
- Department of Psychology, Vanderbilt University, Nashville, Tennessee, 37240
- Iwona Stepniewska
- Department of Psychology, Vanderbilt University, Nashville, Tennessee, 37240
172
Affiliation(s)
- Andrew Glennerster
- Department of Psychology, School of Psychology and Clinical Language Sciences, University of Reading Reading, UK
173
Rozzi S, Coudé G. Grasping actions and social interaction: neural bases and anatomical circuitry in the monkey. Front Psychol 2015; 6:973. [PMID: 26236258] [PMCID: PMC4500865] [DOI: 10.3389/fpsyg.2015.00973]
Abstract
The study of the neural mechanisms underlying grasping actions showed that cognitive functions are deeply embedded in motor organization. In the first part of this review, we describe the anatomical structure of the motor cortex in the monkey and the cortical and sub-cortical connections of the different motor areas. In the second part, we review the neurophysiological literature showing that motor neurons are not only involved in movement execution, but also in the transformation of object physical features into motor programs appropriate to grasp them (through visuo-motor transformations). We also discuss evidence indicating that motor neurons can encode the goal of motor acts and the intention behind action execution. Then, we describe one of these mechanisms, the mirror mechanism, considered to be at the basis of action understanding and intention reading, and describe the anatomo-functional pathways through which information about the social context can reach the areas containing mirror neurons. Finally, we briefly show that a clear similarity exists between monkey and human in the organization of the motor and mirror systems. Based on monkey and human literature, we conclude that the mirror mechanism relies on a more extended network than previously thought, and possibly subserves basic social functions. We propose that this mechanism is also involved in preparing an appropriate complementary response to observed actions, allowing two individuals to become attuned and cooperate in joint actions.
Affiliation(s)
- Stefano Rozzi
- Department of Neuroscience, University of Parma, Parma, Italy
- Gino Coudé
- Department of Neuroscience, University of Parma, Parma, Italy
174
Petroni A, Carbajal MJ, Sigman M. Proprioceptive body illusions modulate the visual perception of reaching distance. PLoS One 2015; 10:e0131087. [PMID: 26110274] [PMCID: PMC4482541] [DOI: 10.1371/journal.pone.0131087]
Abstract
The neurobiology of reaching has been extensively studied in human and non-human primates. However, the mechanisms that allow a subject to decide—without engaging in explicit action—whether an object is reachable are not fully understood. Some studies conclude that decisions near the reach limit depend on motor simulations of the reaching movement. Others have shown that the body schema plays a role in explicit and implicit distance estimation, especially after motor practice with a tool. In this study we evaluate the causal role of multisensory body representations in the perception of reachable space. We reasoned that if body schema is used to estimate reach, an illusion of the finger size induced by proprioceptive stimulation should propagate to the perception of reaching distances. To test this hypothesis we induced a proprioceptive illusion of extension or shrinkage of the right index finger while participants judged a series of LEDs as reachable or non-reachable without actual movement. Our results show that reach distance estimation depends on the illusory perceived size of the finger: illusory elongation produced a shift of reaching distance away from the body whereas illusory shrinkage produced the opposite effect. Combining these results with previous findings, we suggest that deciding if a target is reachable requires an integration of body inputs in high order multisensory parietal areas that engage in movement simulations through connections with frontal premotor areas.
Affiliation(s)
- Agustin Petroni
- Departamento de Física, FCEN-UBA, Ciudad Universitaria, (1428) Buenos Aires, Argentina
- M. Julia Carbajal
- Departamento de Física, FCEN-UBA, Ciudad Universitaria, (1428) Buenos Aires, Argentina
- Mariano Sigman
- Departamento de Física, FCEN-UBA, Ciudad Universitaria, (1428) Buenos Aires, Argentina
- Universidad Torcuato Di Tella, Sáenz Valiente 1010, (1428) Buenos Aires, Argentina
175
Caminiti R, Innocenti GM, Battaglia-Mayer A. Organization and evolution of parieto-frontal processing streams in macaque monkeys and humans. Neurosci Biobehav Rev 2015; 56:73-96. [PMID: 26112130] [DOI: 10.1016/j.neubiorev.2015.06.014]
Abstract
The functional organization of the parieto-frontal system is crucial for understanding cognitive-motor behavior and provides the basis for interpreting the consequences of parietal lesions in humans from a neurobiological perspective. The parieto-frontal connectivity defines some main information streams that, rather than being devoted to restricted functions, underlie a rich behavioral repertoire. Surprisingly, from macaques to humans, evolution has added only a few new functional streams, increasing however their complexity and encoding power. In fact, the characterization of the conduction times of parietal and frontal areas to different target structures has recently opened a new window on cortical dynamics, suggesting that evolution has amplified the probability of dynamic interactions between the nodes of the network, thanks to communication patterns based on temporally-dispersed conduction delays. This might allow the representation of sensory-motor signals within multiple neural assemblies and reference frames, so as to optimize sensory-motor remapping within an action space characterized by different and more complex demands across evolution.
Affiliation(s)
- Roberto Caminiti
- Department of Physiology and Pharmacology, University of Rome SAPIENZA, P.le Aldo Moro 5, 00185 Rome, Italy.
- Giorgio M Innocenti
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden; Brain and Mind Institute, Federal Institute of Technology, EPFL, Lausanne, Switzerland
- Alexandra Battaglia-Mayer
- Department of Physiology and Pharmacology, University of Rome SAPIENZA, P.le Aldo Moro 5, 00185 Rome, Italy
176
Uesaki M, Ashida H. Optic-flow selective cortical sensory regions associated with self-reported states of vection. Front Psychol 2015; 6:775. [PMID: 26106350] [PMCID: PMC4459088] [DOI: 10.3389/fpsyg.2015.00775]
Abstract
Optic flow is one of the most important visual cues to the estimation of self-motion. It has repeatedly been demonstrated that a cortical network including visual, multisensory, and vestibular areas is implicated in processing optic flow; namely, visual areas middle temporal cortex (MT+), V6; multisensory areas ventral intra-parietal area (VIP), cingulate sulcus visual area, precuneus motion area (PcM); and vestibular areas parieto-insular vestibular cortex (PIVC) and putative area 2v (p2v). However, few studies have investigated the roles of and interaction between the optic-flow selective sensory areas within the context of self-motion perception. When visual information (i.e., optic flow) is the sole cue to computing self-motion parameters, the discrepancy amongst the sensory signals may induce an illusion of self-motion referred to as ‘vection.’ This study aimed to identify optic-flow selective sensory areas that are involved in the processing of visual cues to self-motion, by introducing vection as an index and assessing activation in which of those areas reflect vection, using functional magnetic resonance imaging. The results showed that activity in visual areas MT+ and V6, multisensory area VIP and vestibular area PIVC was significantly greater while participants were experiencing vection, as compared to when they were experiencing no vection, which may indicate that activation in MT+, V6, VIP, and PIVC reflects vection. The results also place VIP in a good position to integrate visual cues related to self-motion and vestibular information.
Affiliation(s)
- Maiko Uesaki
- Department of Psychology, Graduate School of Letters, Kyoto University, Kyoto, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
- Hiroshi Ashida
- Department of Psychology, Graduate School of Letters, Kyoto University, Kyoto, Japan
177
Neural substrates underlying the passive observation and active control of translational egomotion. J Neurosci 2015; 35:4258-67. [PMID: 25762672] [DOI: 10.1523/jneurosci.2647-14.2015]
Abstract
Moving or static obstacles often get in the way while walking in daily life. Avoiding obstacles involves both perceptual processing of motion information and controlling appropriate defensive movements. Several higher-level motion areas, including the ventral intraparietal area (VIP), medial superior temporal area, parieto-insular vestibular cortex (PIVC), areas V6 and V6A, and cingulate sulcus visual area, have been identified in humans by passive viewing of optic flow patterns that simulate egomotion and object motion. However, the roles of these areas in the active control of egomotion in the real world remain unclear. Here, we used functional magnetic resonance imaging (fMRI) to map the neural substrates underlying the passive observation and active control of translational egomotion in humans. A wide-field virtual reality environment simulated a daily scenario where doors randomly swing outward while walking in a hallway. The stimuli of door-dodging events were essentially the same in two event-related fMRI experiments, which compared passive and active dodges in response to swinging doors. Passive dodges were controlled by a computer program, while active dodges were controlled by the subject. Passive dodges activated several higher-level areas distributed across three dorsal motion streams in the temporal, parietal, and cingulate cortex. Active dodges most strongly activated the temporal-vestibular stream, with peak activation located in the right PIVC. Other higher-level motion areas including VIP showed weaker to no activation in active dodges. These results suggest that PIVC plays an active role in sensing and guiding translational egomotion that moves an observer aside from impending obstacles.
178
Abstract
From an ecological point of view, approaching objects are potentially more harmful than receding objects. A predator, a dominant conspecific, or a mere branch coming up at high speed can all be dangerous if one does not detect them and produce the appropriate escape behavior fast enough. And indeed, looming stimuli trigger stereotyped defensive responses in both monkeys and human infants. However, while the heteromodal somatosensory consequences of visual looming stimuli can be fully predicted by their spatiotemporal dynamics, few studies if any have explored whether visual stimuli looming toward the face predictively enhance heteromodal tactile sensitivity around the expected time of impact and at its expected location on the body. In the present study, we report that, in addition to triggering a defensive motor repertoire, looming stimuli toward the face provide the nervous system with predictive cues that enhance tactile sensitivity on the face. Specifically, we describe an enhancement of tactile processes at the expected time and location of impact of the stimulus on the face. We additionally show that a looming stimulus that brushes past the face also enhances tactile sensitivity on the nearby cheek, suggesting that the space close to the face is incorporated into the subjects' body schema. We propose that this cross-modal predictive facilitation involves multisensory convergence areas subserving the representation of a peripersonal space and a safety boundary of self.
179
Heed T, Buchholz VN, Engel AK, Röder B. Tactile remapping: from coordinate transformation to integration in sensorimotor processing. Trends Cogn Sci 2015; 19:251-8. [DOI: 10.1016/j.tics.2015.03.001]
180
Cléry J, Guipponi O, Wardak C, Ben Hamed S. Neuronal bases of peripersonal and extrapersonal spaces, their plasticity and their dynamics: Knowns and unknowns. Neuropsychologia 2015; 70:313-26. [PMID: 25447371] [DOI: 10.1016/j.neuropsychologia.2014.10.022]
Affiliation(s)
- Justine Cléry
- Centre de Neuroscience Cognitive, UMR5229, CNRS-Université Claude Bernard Lyon I, 67 Boulevard Pinel, 69675 Bron, France
- Olivier Guipponi
- Centre de Neuroscience Cognitive, UMR5229, CNRS-Université Claude Bernard Lyon I, 67 Boulevard Pinel, 69675 Bron, France
- Claire Wardak
- Centre de Neuroscience Cognitive, UMR5229, CNRS-Université Claude Bernard Lyon I, 67 Boulevard Pinel, 69675 Bron, France
- Suliann Ben Hamed
- Centre de Neuroscience Cognitive, UMR5229, CNRS-Université Claude Bernard Lyon I, 67 Boulevard Pinel, 69675 Bron, France
181
Finisguerra A, Canzoneri E, Serino A, Pozzo T, Bassolino M. Moving sounds within the peripersonal space modulate the motor system. Neuropsychologia 2015; 70:421-8. [PMID: 25281311] [DOI: 10.1016/j.neuropsychologia.2014.09.043]
182
Kandula M, Hofman D, Dijkerman HC. Visuo-tactile interactions are dependent on the predictive value of the visual stimulus. Neuropsychologia 2015; 70:358-66. [DOI: 10.1016/j.neuropsychologia.2014.12.008]
183
Sclafani V, Simpson EA, Suomi SJ, Ferrari PF. Development of space perception in relation to the maturation of the motor system in infant rhesus macaques (Macaca mulatta). Neuropsychologia 2015; 70:429-41. [PMID: 25486636] [PMCID: PMC5100747] [DOI: 10.1016/j.neuropsychologia.2014.12.002]
Abstract
To act on the environment, organisms must perceive object locations in relation to their body. Several neuroscientific studies provide evidence of neural circuits that selectively represent space within reach (i.e., peripersonal) and space outside of reach (i.e., extrapersonal). However, the developmental emergence of these space representations remains largely unexplored. We investigated the development of space coding in infant macaques and found that they exhibit different motor strategies and hand configurations depending on the objects' size and location. Reaching-grasping improved from 2 to 4 weeks of age, suggesting a broadly defined perceptual body schema at birth, modified by the acquisition and refinement of motor skills through early sensorimotor experience, enabling the development of a mature capacity for coding space.
Affiliation(s)
- Valentina Sclafani
- Dipartimento di Neuroscienze, Università di Parma, Via Volturno 39 - 43100 Parma, Italy.
- Elizabeth A Simpson
- Dipartimento di Neuroscienze, Università di Parma, Via Volturno 39 - 43100 Parma, Italy; Eunice Kennedy Shriver National Institute of Child Health and Human Development, Laboratory of Comparative Ethology, Poolesville, MD, USA
- Stephen J Suomi
- Eunice Kennedy Shriver National Institute of Child Health and Human Development, Laboratory of Comparative Ethology, Poolesville, MD, USA
184
Kilteni K, Maselli A, Kording KP, Slater M. Over my fake body: body ownership illusions for studying the multisensory basis of own-body perception. Front Hum Neurosci 2015; 9:141. [PMID: 25852524] [PMCID: PMC4371812] [DOI: 10.3389/fnhum.2015.00141]
Abstract
Which is my body and how do I distinguish it from the bodies of others, or from objects in the surrounding environment? The perception of our own body and more particularly our sense of body ownership is taken for granted. Nevertheless, experimental findings from body ownership illusions (BOIs) show that under specific multisensory conditions, we can experience artificial body parts or fake bodies as our own body parts or body, respectively. The aim of the present paper is to discuss how and why BOIs are induced. We review several experimental findings concerning the spatial, temporal, and semantic principles of crossmodal stimuli that have been applied to induce BOIs. On the basis of these principles, we discuss theoretical approaches concerning the underlying mechanism of BOIs. We propose a conceptualization based on Bayesian causal inference for addressing how our nervous system could infer whether an object belongs to our own body, using multisensory, sensorimotor, and semantic information, and we discuss how this can account for several experimental findings. Finally, we point to neural network models as an implementational framework within which the computational problem behind BOIs could be addressed in the future.
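The Bayesian causal-inference conceptualization proposed here can be illustrated with a minimal numerical sketch: the nervous system weighs the probability that visual and proprioceptive hand-position signals share a common cause (one's own hand) against the probability that they arise from independent sources. The version below is a reduced, discrepancy-only model in the spirit of Körding-style causal inference, and every parameter value (sensory noise, spatial prior, prior on a common cause) is a hypothetical choice for illustration:

```python
import math

def p_common_cause(x_vis, x_prop, sigma_vis=1.0, sigma_prop=2.0,
                   sigma_prior=10.0, p_common=0.5):
    """Posterior probability that a visual and a proprioceptive position
    estimate (x_vis, x_prop, in cm) come from a single source.

    Reduced model: only the cue discrepancy is evaluated. Under a common
    cause the discrepancy is pure sensory noise; under independent causes
    the two latent positions each vary with the spatial prior as well.
    All sigmas and p_common are illustrative assumptions, not fitted values.
    """
    def gauss(d, var):
        return math.exp(-d * d / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

    d = x_vis - x_prop
    var_c = sigma_vis**2 + sigma_prop**2                       # common cause
    var_i = sigma_vis**2 + sigma_prop**2 + 2 * sigma_prior**2  # independent causes
    like_c, like_i = gauss(d, var_c), gauss(d, var_i)
    return like_c * p_common / (like_c * p_common + like_i * (1.0 - p_common))

# Congruent stimulation: seen and felt hand positions nearly agree.
near = p_common_cause(0.0, 1.0)
# A fake hand displaced far from the real hand: common cause becomes unlikely.
far = p_common_cause(0.0, 25.0)
```

With these assumed parameters the model assigns high ownership probability to the congruent case and near-zero probability to the displaced case, mirroring the spatial principle of BOIs reviewed above.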
Affiliation(s)
- Konstantina Kilteni
- Event Lab, Department of Personality, Evaluation and Psychological Treatment, University of Barcelona, Barcelona, Spain; IR3C Institute for Brain, Cognition, and Behaviour, University of Barcelona, Barcelona, Spain
- Antonella Maselli
- Event Lab, Department of Personality, Evaluation and Psychological Treatment, University of Barcelona, Barcelona, Spain
- Konrad P Kording
- Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, IL, USA; Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA; Department of Physiology, Northwestern University, Chicago, IL, USA
- Mel Slater
- Event Lab, Department of Personality, Evaluation and Psychological Treatment, University of Barcelona, Barcelona, Spain; IR3C Institute for Brain, Cognition, and Behaviour, University of Barcelona, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats, Passeig Lluís Companys 23, Barcelona, Spain
185
Sunkara A, DeAngelis GC, Angelaki DE. Role of visual and non-visual cues in constructing a rotation-invariant representation of heading in parietal cortex. eLife 2015; 4. [PMID: 25693417] [PMCID: PMC4337725] [DOI: 10.7554/elife.04693]
Abstract
As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues. DOI:http://dx.doi.org/10.7554/eLife.04693.001 When strolling along a path beside a busy street, we can look around without losing our stride. The things we see change as we walk forward, and our view also changes if we turn our head—for example, to look at a passing car. Nevertheless, we can still tell that we are walking in a straight-line because our brain is able to compute the direction in which we are heading by discounting the visual changes caused by rotating our head or eyes. It remains unclear how the brain gets the information about head and eye movements that it would need to be able to do this. Many researchers had proposed that the brain estimates these rotations by using a copy of the neural signals that are sent to the muscles to move the eyes or head. However, it is possible that the brain can estimate head and eye rotations by directly analyzing the visual information from the eyes. 
One region of the brain that may contribute to this process is the ventral intraparietal area or ‘area VIP’ for short. Sunkara et al. devised an experiment that can help distinguish the effects of visual cues from copies of neural signals sent to the muscles during eye rotations. This involved training monkeys to look at a 3D display of moving dots, which gives the impression of moving through space. Sunkara et al. then measured the electrical signals in area VIP either when the monkey moved its eyes (to follow a moving target), or when the display changed to give the monkey the same visual cues as if it had rotated its eyes, when in fact it had not. Sunkara et al. found that the electrical signals recorded in area VIP when the monkey was given the illusion of rotating its eyes were similar to the signals recorded when the monkey actually rotated its eyes. This suggests that visual cues play an important role in correcting for the effects of eye rotations and correctly estimating the direction in which we are heading. Further research into the mechanisms behind this neural process could lead to new vision-based treatments for medical disorders that cause people to have balance problems. Similar research could also help to identify ways to improve navigation in automated vehicles, such as driverless cars. DOI:http://dx.doi.org/10.7554/eLife.04693.002
Affiliation(s)
- Adhira Sunkara
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, United States
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, United States
- Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
|
186
|
Serino A, Canzoneri E, Marzolla M, di Pellegrino G, Magosso E. Extending peripersonal space representation without tool-use: evidence from a combined behavioral-computational approach. Front Behav Neurosci 2015; 9:4. [PMID: 25698947 PMCID: PMC4313698 DOI: 10.3389/fnbeh.2015.00004] [Citation(s) in RCA: 53] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2014] [Accepted: 01/08/2015] [Indexed: 11/16/2022] Open
Abstract
Stimuli from different sensory modalities occurring on or close to the body are integrated in a multisensory representation of the space surrounding the body, i.e., peripersonal space (PPS). PPS is dynamically modified by experience; for example, it extends after using a tool to reach far objects. However, the neural mechanism underlying PPS plasticity after tool use is largely unknown. Here we use a combined computational-behavioral approach to propose and test a possible mechanism accounting for PPS extension. We first present a neural network model simulating the audio-tactile representation of the PPS around one hand. Simulation experiments showed that our model reproduced the main property of PPS neurons, i.e., a selective multisensory response for stimuli occurring close to the hand. We then used the neural network model to simulate the effects of tool-use training. In terms of sensory inputs, tool use was conceptualized as concurrent tactile stimulation of the hand, due to holding the tool, and auditory stimulation from far space, due to tool-mediated action. Results showed that after exposure to those inputs, PPS neurons responded also to multisensory stimuli far from the hand. The model thus suggests that synchronous pairing of tactile hand stimulation and auditory stimulation from far space is sufficient to extend PPS, as after tool use. This prediction was confirmed by a behavioral experiment in which we used an audio-tactile interaction paradigm to measure the boundaries of PPS representation. We found that PPS extended after synchronous tactile-hand and auditory-far stimulation in a group of healthy volunteers. Control experiments, in both simulation and behavioral settings, showed that the same amount of tactile and auditory input administered out of synchrony did not change PPS representation. We conclude by proposing a simple, biologically plausible model that explains plasticity in PPS representation after tool use and is supported by computational and behavioral data.
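The synchrony-dependent extension described here can be caricatured with a small Hebbian sketch. This is my own toy illustration, not the authors' network, and every parameter is invented: auditory units tuned to distance bins feed a hand-centred multisensory unit, and far-space weights grow only when the far sound coincides with touch on the hand.

```python
import numpy as np

# Toy Hebbian sketch of PPS extension (invented parameters, not the paper's
# model): one auditory input per distance bin drives a hand-centred
# multisensory unit; weights start strong only near the hand.
n_bins = 10                                # bin 0 = at the hand, bin 9 = far space
w0 = np.exp(-np.arange(n_bins) / 1.5)      # initial near-hand weighting
eta = 0.2                                  # learning rate

def train(w, far_bin, tactile, steps=20):
    """Pair a sound at `far_bin` with (tactile=1) or without (tactile=0) touch."""
    w = w.copy()
    for _ in range(steps):
        audio = np.zeros(n_bins)
        audio[far_bin] = 1.0
        post = w @ audio + tactile         # multisensory unit activity
        w += eta * post * audio * tactile  # Hebb: requires coincident touch
    return np.clip(w, 0.0, 2.0)

w_sync = train(w0, far_bin=8, tactile=1.0)   # synchronous audio-tactile pairing
w_async = train(w0, far_bin=8, tactile=0.0)  # sound alone: no plasticity
# w_sync's far-space weight grows (PPS "extends"); w_async stays unchanged.
```

The asynchronous control falls out of the multiplicative Hebbian term: with no coincident touch the weight update is zero, mirroring the paper's behavioral control result.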
Affiliation(s)
- Andrea Serino
- Laboratory of Cognitive Neuroscience, Department of Life Science, Center for Neuroprosthetics, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Dipartimento di Psicologia, Alma Mater Studiorum, Università di Bologna, Bologna, Italy
- Elisa Canzoneri
- Laboratory of Cognitive Neuroscience, Department of Life Science, Center for Neuroprosthetics, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Dipartimento di Psicologia, Centro Studi e ricerche in Neuroscienze Cognitive, Polo Scientifico Didattico di Cesena, Alma Mater Studiorum, Università di Bologna, Bologna, Italy
- Marilena Marzolla
- Dipartimento di Psicologia, Centro Studi e ricerche in Neuroscienze Cognitive, Polo Scientifico Didattico di Cesena, Alma Mater Studiorum, Università di Bologna, Bologna, Italy
- Giuseppe di Pellegrino
- Dipartimento di Psicologia, Alma Mater Studiorum, Università di Bologna, Bologna, Italy; Dipartimento di Psicologia, Centro Studi e ricerche in Neuroscienze Cognitive, Polo Scientifico Didattico di Cesena, Alma Mater Studiorum, Università di Bologna, Bologna, Italy
- Elisa Magosso
- Interdepartmental Centre for Industrial Research in Health Sciences and Technologies, Alma Mater Studiorum, University of Bologna, Bologna, Italy; Department of Electrical, Electronic and Information Engineering "Guglielmo Marconi," Alma Mater Studiorum, University of Bologna, Bologna, Italy
|
187
|
Madl T, Chen K, Montaldi D, Trappl R. Computational cognitive models of spatial memory in navigation space: a review. Neural Netw 2015; 65:18-43. [PMID: 25659941 DOI: 10.1016/j.neunet.2015.01.002] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2014] [Revised: 12/15/2014] [Accepted: 01/12/2015] [Indexed: 10/24/2022]
Abstract
Spatial memory refers to the part of the memory system that encodes, stores, recognizes and recalls spatial information about the environment and the agent's orientation within it. Such information is required to be able to navigate to goal locations, and is vitally important for any embodied agent, or model thereof, for reaching goals in a spatially extended environment. In this paper, a number of computationally implemented cognitive models of spatial memory are reviewed and compared. Three categories of models are considered: symbolic models, neural network models, and models that are part of a systems-level cognitive architecture. Representative models from each category are described and compared in a number of dimensions along which simulation models can differ (level of modeling, types of representation, structural accuracy, generality and abstraction, environment complexity), including their possible mapping to the underlying neural substrate. Neural mappings are rarely explicated in the context of behaviorally validated models, but they could be useful to cognitive modeling research by providing a new approach for investigating a model's plausibility. Finally, suggested experimental neuroscience methods are described for verifying the biological plausibility of computational cognitive models of spatial memory, and open questions for the field of spatial memory modeling are outlined.
Affiliation(s)
- Tamas Madl
- School of Computer Science, University of Manchester, Manchester M13 9PL, UK; Austrian Research Institute for Artificial Intelligence, Vienna A-1010, Austria
- Ke Chen
- School of Computer Science, University of Manchester, Manchester M13 9PL, UK
- Daniela Montaldi
- School of Psychological Sciences, University of Manchester, Manchester M13 9PL, UK
- Robert Trappl
- Austrian Research Institute for Artificial Intelligence, Vienna A-1010, Austria
|
188
|
Zelinsky GJ, Bisley JW. The what, where, and why of priority maps and their interactions with visual working memory. Ann N Y Acad Sci 2015; 1339:154-64. [PMID: 25581477 DOI: 10.1111/nyas.12606] [Citation(s) in RCA: 115] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
Priority maps are winner-take-all neural mechanisms thought to guide the allocation of covert and overt attention. Here, we go beyond this standard definition and argue that priority maps play a much broader role in controlling goal-directed behavior. We start by defining what priority maps are and where they might be found in the brain; we then ask why they exist: the function that they serve. We propose that this function is to communicate a goal state to the different effector systems, thereby guiding behavior. Within this framework, we speculate on how priority maps interact with visual working memory and introduce our common source hypothesis: the suggestion that this goal state is maintained in visual working memory and used to construct all of the priority maps controlling the various motor systems. Finally, we look ahead and suggest questions about priority maps that should be asked next.
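The review's core construct, a winner-take-all map summing bottom-up salience with top-down relevance from a working-memory goal template, reduces to a few lines. This is an illustrative sketch, not the authors' formalism; the grid size and boost values are arbitrary:

```python
import numpy as np

# Minimal priority map: bottom-up salience plus a top-down boost at the
# goal-relevant location; the maximum (winner-take-all) selects the next
# target of attention. Values are arbitrary illustrations.
rng = np.random.default_rng(1)
salience = rng.random((8, 8))    # bottom-up conspicuity, each value < 1
relevance = np.zeros((8, 8))
relevance[5, 2] = 1.5            # goal template held in working memory
priority = salience + relevance
winner = np.unravel_index(np.argmax(priority), priority.shape)
# The goal-relevant location (5, 2) wins, since its boost exceeds any salience.
```

Under the common source hypothesis, one would build several such maps (eyes, hand, head) from the same `relevance` template, each with its own effector-specific `salience` input.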
Affiliation(s)
- Gregory J Zelinsky
- Department of Psychology and Department of Computer Science, Stony Brook University, Stony Brook, New York; Center for Interdisciplinary Research (ZiF), Bielefeld University, Bielefeld, Germany
|
189
|
di Pellegrino G, Làdavas E. Peripersonal space in the brain. Neuropsychologia 2015; 66:126-33. [DOI: 10.1016/j.neuropsychologia.2014.11.011] [Citation(s) in RCA: 161] [Impact Index Per Article: 17.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2014] [Revised: 11/03/2014] [Accepted: 11/07/2014] [Indexed: 11/26/2022]
|
190
|
Albouy P, Lévêque Y, Hyde KL, Bouchet P, Tillmann B, Caclin A. Boosting pitch encoding with audiovisual interactions in congenital amusia. Neuropsychologia 2014; 67:111-20. [PMID: 25499145 DOI: 10.1016/j.neuropsychologia.2014.12.006] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2014] [Revised: 12/03/2014] [Accepted: 12/05/2014] [Indexed: 11/19/2022]
Abstract
The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. 
These findings constitute a first step toward exploiting multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective within an appropriate range of unimodal performance.
Affiliation(s)
- Philippe Albouy
- Lyon Neuroscience Research Center, Brain Dynamics and Cognition Team & Auditory Cognition and Psychoacoustics Team, CRNL, INSERM U1028, CNRS UMR5292, Lyon F-69000, France; University Lyon 1, Lyon F-69000, France; Montreal Neurological Institute, McGill University, 3801 University Street, Montreal, QC, Canada H3A2B4; International Laboratory for Brain Music and Sound Research, University of Montreal and McGill University, Canada
- Yohana Lévêque
- Lyon Neuroscience Research Center, Brain Dynamics and Cognition Team & Auditory Cognition and Psychoacoustics Team, CRNL, INSERM U1028, CNRS UMR5292, Lyon F-69000, France; University Lyon 1, Lyon F-69000, France
- Krista L Hyde
- International Laboratory for Brain Music and Sound Research, University of Montreal and McGill University, Canada
- Patrick Bouchet
- Lyon Neuroscience Research Center, Brain Dynamics and Cognition Team & Auditory Cognition and Psychoacoustics Team, CRNL, INSERM U1028, CNRS UMR5292, Lyon F-69000, France; University Lyon 1, Lyon F-69000, France
- Barbara Tillmann
- University Lyon 1, Lyon F-69000, France
- Anne Caclin
- Lyon Neuroscience Research Center, Brain Dynamics and Cognition Team & Auditory Cognition and Psychoacoustics Team, CRNL, INSERM U1028, CNRS UMR5292, Lyon F-69000, France; University Lyon 1, Lyon F-69000, France
|
191
|
Ursino M, Cuppini C, Magosso E. Neurocomputational approaches to modelling multisensory integration in the brain: A review. Neural Netw 2014; 60:141-65. [DOI: 10.1016/j.neunet.2014.08.003] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2014] [Revised: 08/05/2014] [Accepted: 08/07/2014] [Indexed: 10/24/2022]
|
192
|
The spatial distance rule in the moving and classical rubber hand illusions. Conscious Cogn 2014; 30:118-32. [DOI: 10.1016/j.concog.2014.08.022] [Citation(s) in RCA: 77] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2014] [Revised: 08/26/2014] [Accepted: 08/31/2014] [Indexed: 11/23/2022]
|
193
|
Ishida H, Suzuki K, Grandi LC. Predictive coding accounts of shared representations in parieto-insular networks. Neuropsychologia 2014; 70:442-54. [PMID: 25447372 DOI: 10.1016/j.neuropsychologia.2014.10.020] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2014] [Revised: 10/07/2014] [Accepted: 10/14/2014] [Indexed: 12/15/2022]
Abstract
The discovery of mirror neurons in the ventral premotor cortex (area F5) and inferior parietal cortex (area PFG) of the macaque monkey brain has provided physiological evidence for direct matching between the intrinsic motor representations of the self and the visual image of the actions of others. The existence of mirror neurons implies that the brain has mechanisms reflecting shared self and other action representations. This may further imply that the neural basis of self-body representations also incorporates components that are shared with other-body representations. It is likely that such a mechanism is also involved in predicting others' touch sensations and emotions. However, the neural basis of shared body representations has remained unclear. Here, we propose a neural basis of body representation of the self and of others in both human and non-human primates. We review a series of behavioral and physiological findings which together paint a picture in which the systems underlying such shared representations require the integration of conscious exteroception and interoception, subserved by a cortical sensory-motor network involving parieto-inner perisylvian circuits (the ventral intraparietal area [VIP]/inferior parietal area [PFG]-secondary somatosensory cortex [SII]/posterior insular cortex [pIC]/anterior insular cortex [aIC]). Based on these findings, we propose a computational mechanism of the shared body representation in the predictive coding (PC) framework. Our mechanism proposes that processes emerging from generative models embedded in these specific neuronal circuits play a pivotal role in distinguishing a self-specific body representation from a shared one. The model successfully accounts for normal and abnormal shared body phenomena such as mirror-touch synesthesia and somatoparaphrenia. In addition, it generates a set of testable experimental predictions.
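The predictive-coding machinery invoked here can be reduced to its textbook core. The following is a generic sketch of that core, not the specific parieto-insular model proposed in the paper: a latent estimate is refined by gradient descent on precision-weighted prediction error until the generative prediction explains the input.

```python
# Generic predictive-coding update (textbook form, not the paper's model):
# refine a latent estimate mu so the generative prediction g(mu) matches
# the sensory input x, weighting the error by its precision pi.
def infer(x, mu0, g=lambda m: m, dg=lambda m: 1.0, pi=1.0, lr=0.1, steps=100):
    mu = mu0
    for _ in range(steps):
        err = x - g(mu)               # prediction error
        mu += lr * pi * err * dg(mu)  # gradient step reducing the error
    return mu

mu = infer(x=0.8, mu0=0.0)
# mu converges toward 0.8, the value at which prediction error vanishes.
```

In the paper's framing, whether a residual error remains after inference (input poorly explained by the self-model) is what would distinguish other-attributed from self-attributed bodily signals.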
Affiliation(s)
- Hiroaki Ishida
- Istituto Italiano di Tecnologia (IIT), Brain Center for Social and Motor Cognition (BCSMC), Parma, Italy; Frontal Lobe Function Project, Tokyo Metropolitan Institute of Medical Science, Tokyo, Japan
- Keisuke Suzuki
- Sackler Center for Consciousness Science, University of Sussex, Brighton, UK; School of Informatics and Engineering, University of Sussex, Brighton, UK
- Laura Clara Grandi
- Department of Neuroscience, Unit of Physiology, Parma University, Parma, Italy
|
194
|
Maselli A, Slater M. Sliding perspectives: dissociating ownership from self-location during full body illusions in virtual reality. Front Hum Neurosci 2014; 8:693. [PMID: 25309383 PMCID: PMC4161166 DOI: 10.3389/fnhum.2014.00693] [Citation(s) in RCA: 70] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2014] [Accepted: 08/19/2014] [Indexed: 11/13/2022] Open
Abstract
Bodily illusions have been used to study bodily self-consciousness and disentangle its various components, among them the sense of ownership and self-location. Congruent multimodal correlations between the real body and a fake humanoid body can in fact trigger the illusion that the fake body is one's own and/or disrupt the unity between the perceived self-location and the position of the physical body. However, the extent to which changes in self-location entail changes in ownership is still a matter of debate. Here we address this problem with the support of immersive virtual reality. Congruent visuotactile stimulation was delivered to healthy participants to trigger full body illusions from different visual perspectives, each resulting in a different degree of overlap between the real and virtual body. Changes in ownership and self-location were measured with novel self-posture assessment tasks and with an adapted version of the cross-modal congruency task. We found that, despite their strong coupling, self-location and ownership can be selectively altered: self-location was affected when participants had a third-person perspective over the virtual body, while ownership toward the virtual body was experienced only in the conditions with total or partial overlap. Thus, when the virtual body was seen in far extra-personal space, changes in self-location were not coupled with changes in ownership. When a partial spatial overlap was present, ownership was instead typically experienced, together with a boosted change in perceived self-location. We discuss these results in the context of current knowledge of the multisensory integration mechanisms contributing to self-body perception. We argue that changes in perceived self-location are associated with the dynamical representation of peripersonal space encoded by visuotactile neurons. On the other hand, our results speak in favor of visuo-proprioceptive neuronal populations being a driving trigger in full body ownership illusions.
Affiliation(s)
- Antonella Maselli
- EVENT Lab, Facultat de Psicologia, Universitat de Barcelona, Barcelona, Spain
- Mel Slater
- EVENT Lab, Facultat de Psicologia, Universitat de Barcelona, Barcelona, Spain; Institució Catalana Recerca i Estudis Avancats, Barcelona, Spain
|
195
|
Kaminiarz A, Schlack A, Hoffmann KP, Lappe M, Bremmer F. Visual selectivity for heading in the macaque ventral intraparietal area. J Neurophysiol 2014; 112:2470-80. [PMID: 25122709 DOI: 10.1152/jn.00410.2014] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The patterns of optic flow seen during self-motion can be used to determine the direction of one's own heading. Tracking eye movements, which typically occur during everyday life, complicate this task, since they add further retinal image motion and (predictably) distort the retinal flow pattern. Humans employ both visual and nonvisual (extraretinal) information to solve a heading task in such cases. Likewise, it has been shown that neurons in the monkey medial superior temporal area (area MST) use both signals during the processing of self-motion information. In this article we report that neurons in the macaque ventral intraparietal area (area VIP) use visual information derived from the distorted flow patterns to encode heading during (simulated) eye movements. We recorded responses of VIP neurons to simple radial flow fields and to distorted flow fields that simulated self-motion plus eye movements. In 59% of the cases, cell responses compensated for the distortion and kept the same heading selectivity irrespective of different simulated eye movements. In addition, response modulations during real compared with simulated eye movements were smaller, consistent with reafferent signaling involved in the processing of the visual consequences of eye movements in area VIP. We conclude that the motion selectivities found in area VIP, like those in area MST, provide a way to successfully analyze and use flow fields during self-motion and simultaneous tracking movements.
Affiliation(s)
- Anja Schlack
- Allgemeine Zoologie und Neurobiologie, University of Bochum, Bochum, Germany
- Klaus-Peter Hoffmann
- AG Neurophysik, University of Marburg, Marburg, Germany; Allgemeine Zoologie und Neurobiologie, University of Bochum, Bochum, Germany
- Markus Lappe
- Institut für Psychologie, University of Münster, Münster, Germany
- Frank Bremmer
- AG Neurophysik, University of Marburg, Marburg, Germany
|
196
|
Sharing Social Touch in the Primary Somatosensory Cortex. Curr Biol 2014; 24:1513-7. [DOI: 10.1016/j.cub.2014.05.025] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2013] [Revised: 03/20/2014] [Accepted: 05/12/2014] [Indexed: 12/23/2022]
|
197
|
Jacob S, Nieder A. Complementary Roles for Primate Frontal and Parietal Cortex in Guarding Working Memory from Distractor Stimuli. Neuron 2014; 83:226-37. [DOI: 10.1016/j.neuron.2014.05.009] [Citation(s) in RCA: 77] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/02/2014] [Indexed: 10/25/2022]
|
198
|
Fabbri S, Strnad L, Caramazza A, Lingnau A. Overlapping representations for grip type and reach direction. Neuroimage 2014; 94:138-146. [DOI: 10.1016/j.neuroimage.2014.03.017] [Citation(s) in RCA: 52] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2013] [Revised: 02/19/2014] [Accepted: 03/08/2014] [Indexed: 11/16/2022] Open
|
199
|
Strappini F, Pitzalis S, Snyder AZ, McAvoy MP, Sereno MI, Corbetta M, Shulman GL. Eye position modulates retinotopic responses in early visual areas: a bias for the straight-ahead direction. Brain Struct Funct 2014; 220:2587-601. [PMID: 24942135 PMCID: PMC4549389 DOI: 10.1007/s00429-014-0808-7] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2013] [Accepted: 05/21/2014] [Indexed: 11/30/2022]
Abstract
Even though the eyes constantly change position, the location of a stimulus can be accurately represented by a population of neurons with retinotopic receptive fields modulated by eye position gain fields. Recent electrophysiological studies, however, indicate that eye position gain fields may serve an additional function since they have a non-uniform spatial distribution that increases the neural response to stimuli in the straight-ahead direction. We used functional magnetic resonance imaging and a wide-field stimulus display to determine whether gaze modulations in early human visual cortex enhance the blood-oxygenation-level dependent (BOLD) response to stimuli that are straight-ahead. Subjects viewed rotating polar angle wedge stimuli centered straight-ahead or vertically displaced by ±20° eccentricity. Gaze position did not affect the topography of polar phase-angle maps, confirming that coding was retinotopic, but did affect the amplitude of the BOLD response, consistent with a gain field. In agreement with recent electrophysiological studies, BOLD responses in V1 and V2 to a wedge stimulus at a fixed retinal locus decreased when the wedge location in head-centered coordinates was farther from the straight-ahead direction. We conclude that stimulus-evoked BOLD signals are modulated by a systematic, non-uniform distribution of eye-position gain fields.
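The eye-position gain-field account tested here can be sketched in a few lines. This is an illustrative toy, not the study's analysis, and the tuning widths are invented: a retinotopic response is multiplied by a gain that peaks when the stimulus is straight ahead in head-centred coordinates, so identical retinal input yields a larger response near straight ahead without altering retinotopy.

```python
import numpy as np

# Toy gain-field model (invented widths): retinotopic Gaussian tuning times
# an eye-position gain that peaks for straight-ahead head-centred positions.
def response(retinal_pos, eye_pos, pref=0.0, sigma=10.0, gain_sigma=20.0):
    head_pos = retinal_pos + eye_pos   # head-centred stimulus location (deg)
    tuning = np.exp(-(retinal_pos - pref) ** 2 / (2 * sigma ** 2))
    gain = np.exp(-head_pos ** 2 / (2 * gain_sigma ** 2))  # max straight ahead
    return tuning * gain

near_ahead = response(retinal_pos=0.0, eye_pos=0.0)    # stimulus straight ahead
gazing_away = response(retinal_pos=0.0, eye_pos=20.0)  # same retinal stimulus
# Identical retinal input, but the straight-ahead condition responds more.
```

Because `eye_pos` enters only through the multiplicative `gain`, the preferred retinal location (the retinotopic map) is unchanged while the BOLD-like amplitude is modulated, matching the dissociation the study reports.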
Affiliation(s)
- Francesca Strappini
- Department of Neurology, Washington University School of Medicine, Saint Louis, MO 63110, USA
|
200
|
Abstract
Correctly localising sensory stimuli in space is a formidable challenge for the newborn brain. A new study provides a first glimpse into how human brain mechanisms for sensory remapping develop in the first year of life.
|