1. Cavicchioli M, Santoni A, Chiappetta F, Deodato M, Di Dona G, Scalabrini A, Galli F, Ronconi L. Psychological dissociation and temporal integration/segregation across the senses: An experimental study. Conscious Cogn 2024; 124:103731. PMID: 39096823. DOI: 10.1016/j.concog.2024.103731.
Abstract
No previous study has experimentally tested how temporal integration/segregation of sensory inputs might be linked to the emergence of dissociative experiences and alterations of emotional functioning. Thirty-six participants completed three sensory integration tasks, and psychometric thresholds were estimated as indexes of temporal integration/segregation processes. We collected self-report measures of pre-task trait levels of dissociation, as well as pre- to post-task changes in both dissociation and emotionality. An independent sample of 21 subjects completed a control experiment using the Attention Network Test (ANT). Results showed: (i) a significant increase in dissociative experiences after the completion of the sensory integration tasks, but not after the ANT; (ii) that subjective thresholds predicted the emergence of dissociative states; and (iii) that temporal integration efforts affected positive emotionality, an effect explained by the extent of task-dependent dissociative states. The present findings suggest that dissociation could be understood in terms of an imbalance between "hyper-segregation" and "hyper-integration" processes.
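Psychometric thresholds of the kind used here are typically read off a psychometric function relating the inter-stimulus interval to the proportion of "segregated" reports. A minimal sketch of one common estimator, linear interpolation at the criterion point (the ISI values, response proportions, and 0.5 criterion below are illustrative assumptions, not data from the study):

```python
def temporal_threshold(isis, p_segregated, criterion=0.5):
    """Estimate the temporal segregation threshold: the inter-stimulus
    interval (ISI, ms) at which the proportion of 'two events' reports
    first crosses the criterion, by linear interpolation between the
    two bracketing data points."""
    points = list(zip(isis, p_segregated))
    for (x0, p0), (x1, p1) in zip(points, points[1:]):
        if p0 <= criterion <= p1:
            return x0 + (criterion - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("criterion not crossed in the sampled ISI range")

# Hypothetical data: proportion of trials reported as 'two events' per ISI
isis = [10, 30, 50, 70, 90, 110]
p_seg = [0.05, 0.15, 0.40, 0.70, 0.90, 0.98]
print(round(temporal_threshold(isis, p_seg), 1))  # → 56.7
```

In practice a full psychometric-function fit (e.g., a cumulative Gaussian) is preferred, but the interpolated crossing point conveys the same idea: shorter thresholds index finer temporal segregation.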
Affiliation(s)
- Marco Cavicchioli: Department of Dynamic and Clinical Psychology, and Health Studies, Faculty of Medicine and Psychology, SAPIENZA University of Rome, Italy; Faculty of Psychology, Sigmund Freud University, Ripa di Porta Ticinese 77, Milan, Italy
- Alessia Santoni: School of Psychology, Vita-Salute San Raffaele University, Milan, Italy
- Michele Deodato: Psychology Program, Division of Science, New York University Abu Dhabi, United Arab Emirates
- Giuseppe Di Dona: School of Psychology, Vita-Salute San Raffaele University, Milan, Italy
- Andrea Scalabrini: Department of Human and Social Science, University of Bergamo, Mental Health, Bergamo, Italy
- Federica Galli: Department of Dynamic and Clinical Psychology, and Health Studies, Faculty of Medicine and Psychology, SAPIENZA University of Rome, Italy
- Luca Ronconi: School of Psychology, Vita-Salute San Raffaele University, Milan, Italy
2. Smyre SA, Bean NL, Stein BE, Rowland BA. The brain can develop conflicting multisensory principles to guide behavior. Cereb Cortex 2024; 34:bhae247. PMID: 38879756. PMCID: PMC11179994. DOI: 10.1093/cercor/bhae247.
Abstract
Midbrain multisensory neurons undergo a significant postnatal transition in how they process cross-modal (e.g. visual-auditory) signals. In early stages, signals derived from common events are processed competitively; however, at later stages they are processed cooperatively, such that their salience is enhanced. This transition reflects adaptation to cross-modal configurations that are consistently experienced and become informative about which configurations correspond to common events. Tested here was the assumption that overt behaviors follow a similar maturation. Cats were reared in omnidirectional sound, thereby compromising the experience needed for this developmental process. Animals were then repeatedly exposed to different configurations of visual and auditory stimuli (e.g. spatiotemporally congruent or spatially disparate) that varied on each side of space, and their behavior was assessed using a detection/localization task. Animals showed enhanced performance to stimuli consistent with the experience provided: congruent stimuli elicited enhanced behaviors where spatially congruent cross-modal experience had been provided, and spatially disparate stimuli elicited enhanced behaviors where spatially disparate cross-modal experience had been provided. Cross-modal configurations not consistent with experience did not enhance responses. The presumptive benefit of such flexibility in the multisensory developmental process is to sensitize neural circuits (and the behaviors they control) to the features of the environment in which they will function. These experiments reveal that these processes have a high degree of flexibility, such that two conflicting multisensory principles can be implemented by cross-modal experience on opposite sides of space, even within the same animal.
Affiliation(s)
- Scott A Smyre, Naomi L Bean, Barry E Stein, Benjamin A Rowland: Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston-Salem, NC 27157, United States
3. Bean NL, Stein BE, Rowland BA. Cross-modal exposure restores multisensory enhancement after hemianopia. Cereb Cortex 2023; 33:11036-11046. PMID: 37724427. PMCID: PMC10646694. DOI: 10.1093/cercor/bhad343.
Abstract
Hemianopia, a common consequence of unilateral damage to visual cortex, manifests as profound blindness in contralesional space. A noninvasive cross-modal (visual-auditory) exposure paradigm has been developed in an animal model to ameliorate this disorder: repeated presentation of a visual-auditory stimulus restores overt responses to visual stimuli in the blinded hemifield. It is believed to accomplish this by enhancing the visual sensitivity of circuits remaining after a lesion of visual cortex, in particular circuits involving the multisensory neurons of the superior colliculus. Neurons in this midbrain structure are known to integrate spatiotemporally congruent visual and auditory signals to amplify their responses, which, in turn, enhances behavioral performance. Here we evaluated the relationship between the rehabilitation of hemianopia and this process of multisensory integration. Induction of hemianopia also eliminated multisensory enhancement in the blinded hemifield. Both vision and multisensory enhancement rapidly recovered with the rehabilitative cross-modal exposures. However, although both reached pre-lesion levels at similar rates, they did so with different spatial patterns. The results suggest that the capability for multisensory integration and enhancement is not a prerequisite for visual recovery in hemianopia, and that the underlying mechanisms of recovery may be more complex than currently appreciated.
Affiliation(s)
- Naomi L Bean, Barry E Stein, Benjamin A Rowland: Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157, United States
4. Smyre SA, Bean NL, Stein BE, Rowland BA. Predictability alters multisensory responses by modulating unisensory inputs. Front Neurosci 2023; 17:1150168. PMID: 37065927. PMCID: PMC10090419. DOI: 10.3389/fnins.2023.1150168.
Abstract
The multisensory (deep) layers of the superior colliculus (SC) play an important role in detecting, localizing, and guiding orientation responses to salient events in the environment. Essential to this role is the ability of SC neurons to enhance their responses to events detected by more than one sensory modality and, via modulatory dynamics, to become desensitized (‘attenuated’ or ‘habituated’) or sensitized (‘potentiated’) to events that are predictable. To identify the nature of these modulatory dynamics, we examined how the repetition of different sensory stimuli affected the unisensory and multisensory responses of neurons in the cat SC. Neurons were presented with 2 Hz stimulus trains of three identical visual, auditory, or combined visual–auditory stimuli, followed by a fourth stimulus that was either the same or different (‘switch’). Modulatory dynamics proved to be sensory-specific: they did not transfer when the stimulus switched to another modality. However, they did transfer when switching from the visual–auditory stimulus train to either of its modality-specific component stimuli, and vice versa. These observations suggest that predictions, in the form of modulatory dynamics induced by stimulus repetition, are independently sourced from and applied to the modality-specific inputs to the multisensory neuron. This falsifies several plausible mechanisms for these modulatory dynamics: they neither produce general changes in the neuron's transform, nor are they dependent on the neuron's output.
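The channel-specific transfer pattern described above can be illustrated with a toy model (entirely illustrative: the drive values, adaptation gain, and update rule are assumptions, not the authors' model) in which each modality channel carries its own adaptation state, so repetition effects transfer between a cross-modal stimulus and its components but not between modalities:

```python
def run_trials(train, probe, gain=0.3):
    """Toy channel-specific adaptation: each modality channel keeps its
    own adaptation state that builds with repetition and scales down that
    channel's drive; the neuron's response sums the adapted drives."""
    adapt = {"V": 0.0, "A": 0.0}
    drive = {"V": 10.0, "A": 10.0}

    def respond(stim):  # stim is a string of channel labels, e.g. "VA"
        r = sum(drive[m] * (1.0 - adapt[m]) for m in stim)
        for m in stim:  # repetition deepens adaptation in the used channels
            adapt[m] = min(1.0, adapt[m] + gain * (1.0 - adapt[m]))
        return r

    for stim in train:
        respond(stim)
    return respond(probe)

# Adaptation built up by a visual train does not transfer to an auditory probe...
print(round(run_trials(["V", "V", "V"], "A"), 2))   # → 10.0
# ...but it does transfer to the visual component of a cross-modal probe.
print(round(run_trials(["V", "V", "V"], "VA"), 2))  # → 13.43
```

Because the adaptation states live on the inputs rather than on the neuron's output, the switch pattern falls out for free, which is the mechanistic point the abstract argues for.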
5. Subliminal audio-visual temporal congruency in music videos enhances perceptual pleasure. Neurosci Lett 2022; 779:136623. DOI: 10.1016/j.neulet.2022.136623.
6. Jiang H, Stanford TR, Rowland BA, Stein BE. Association Cortex Is Essential to Reverse Hemianopia by Multisensory Training. Cereb Cortex 2021; 31:5015-5023. PMID: 34056645. DOI: 10.1093/cercor/bhab138.
Abstract
Hemianopia induced by unilateral visual cortex lesions can be resolved by repeatedly exposing the blinded hemifield to auditory-visual stimuli. This rehabilitative "training" paradigm depends on mechanisms of multisensory plasticity that restore the lost visual responsiveness of multisensory neurons in the ipsilesional superior colliculus (SC) so that they can once again support vision in the blinded hemifield. These changes are thought to operate via the convergent visual and auditory signals relayed to the SC from association cortex (the anterior ectosylvian sulcus [AES], in cat). The present study tested this assumption by cryogenically deactivating ipsilesional AES in hemianopic, anesthetized cats during weekly multisensory training sessions. No signs of visual recovery were evident in this condition, even after providing animals with up to twice the number of training sessions required for effective rehabilitation. Subsequent training under the same conditions, but with AES active, reversed the hemianopia within the normal timeframe. These results indicate that the corticotectal circuit that is normally engaged in SC multisensory plasticity has to be operational for the brain to use visual-auditory experience to resolve hemianopia.
Affiliation(s)
- Huai Jiang, Terrence R Stanford, Benjamin A Rowland, Barry E Stein: Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
7. Rezaul Karim AKM, Proulx MJ, de Sousa AA, Likova LT. Neuroplasticity and Crossmodal Connectivity in the Normal, Healthy Brain. Psychol Neurosci 2021; 14:298-334. PMID: 36937077. PMCID: PMC10019101. DOI: 10.1037/pne0000258.
Abstract
Objective: Neuroplasticity enables the brain to establish new crossmodal connections, or reorganize old ones, which are essential to perceiving a multisensorial world. The intent of this review is to identify and summarize current developments in neuroplasticity and crossmodal connectivity, and to deepen understanding of how crossmodal connectivity develops in the normal, healthy brain, highlighting novel perspectives about the principles that guide this connectivity.
Methods: To the above end, a narrative review is carried out. The data documented in prior relevant studies in neuroscience, psychology, and other related fields, available in a wide range of prominent electronic databases, are critically assessed, synthesized, interpreted with qualitative rather than quantitative elements, and linked together to form new propositions and hypotheses about neuroplasticity and crossmodal connectivity.
Results: Three major themes are identified. First, neuroplasticity appears to operate by following eight fundamental principles, and crossmodal integration by following three. Second, two different forms of crossmodal connectivity, direct and indirect, are suggested to operate in both unisensory and multisensory perception. Third, three principles possibly guide the development of crossmodal connectivity into adulthood: the principle of innate crossmodality, the principle of evolution-driven 'neuromodular' reorganization, and the principle of multimodal experience. These principles are combined to develop a three-factor interaction model of crossmodal connectivity.
Conclusions: The hypothesized principles and the proposed model together advance understanding of neuroplasticity, the nature of crossmodal connectivity, and how such connectivity develops in the normal, healthy brain.
8. Miller LJ, Marco EJ, Chu RC, Camarata S. Editorial: Sensory Processing Across the Lifespan: A 25-Year Initiative to Understand Neurophysiology, Behaviors, and Treatment Effectiveness for Sensory Processing. Front Integr Neurosci 2021; 15:652218. PMID: 33897385. PMCID: PMC8063042. DOI: 10.3389/fnint.2021.652218.
Affiliation(s)
- Lucy Jane Miller: Department of Pediatrics (Emeritus), University of Colorado, Denver, CO, United States; Sensory Therapies and Research Institute for Sensory Processing Disorder, Centennial, CO, United States
- Elysa J Marco: Cortica, San Diego, CA, United States
- Robyn C Chu: Radiology & Biomedical Imaging, University of California San Francisco, San Francisco, CA, United States; Growing Healthy Children Therapy Services, Rescue, CA, United States
- Stephen Camarata: School of Medicine, Vanderbilt University, Nashville, TN, United States
9. Wang Z, Yu L, Xu J, Stein BE, Rowland BA. Experience Creates the Multisensory Transform in the Superior Colliculus. Front Integr Neurosci 2020; 14:18. PMID: 32425761. PMCID: PMC7212431. DOI: 10.3389/fnint.2020.00018.
Abstract
Although the ability to integrate information across the senses is compromised in some individuals for unknown reasons, similar defects have been observed when animals are reared without multisensory experience. The experience-dependent development of multisensory integration has been studied most extensively using the visual-auditory neuron of the cat superior colliculus (SC) as a neural model. In the normally-developed adult, SC neurons react to concordant visual-auditory stimuli by integrating their inputs in real-time to produce non-linearly amplified multisensory responses. However, when prevented from gathering visual-auditory experience, their multisensory responses are no more robust than their responses to the individual component stimuli. The mechanisms operating in this defective state are poorly understood. Here we examined the responses of SC neurons in “naïve” (i.e., dark-reared) and “neurotypic” (i.e., normally-reared) animals on a millisecond-by-millisecond basis to determine whether multisensory experience changes the operation by which unisensory signals are converted into multisensory outputs (the “multisensory transform”), or whether it changes the dynamics of the unisensory inputs to that transform (e.g., their synchronization and/or alignment). The results reveal that the major impact of experience was on the multisensory transform itself. Whereas neurotypic multisensory responses exhibited non-linear amplification near their onset followed by linear amplification thereafter, the naive responses showed no integration in the initial phase of the response and a computation consistent with competition in its later phases. The results suggest that multisensory experience creates an entirely new computation by which convergent unisensory inputs are used cooperatively to enhance the physiological salience of cross-modal events and thereby facilitate normal perception and behavior.
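The "non-linearly amplified" responses described here are conventionally quantified with the multisensory enhancement index and a comparison of the cross-modal response against the sum of the unisensory responses. A short sketch (the response values are hypothetical; only the standard formulas are assumed):

```python
def multisensory_enhancement(cm, best_unisensory):
    """Percent change of the cross-modal (CM) response relative to the
    most effective unisensory response: 100 * (CM - max(V, A)) / max(V, A)."""
    return 100.0 * (cm - best_unisensory) / best_unisensory

def additivity_ratio(cm, v, a):
    """CM response relative to the sum of the unisensory responses:
    > 1 indicates superadditive (non-linear) amplification."""
    return cm / (v + a)

# Hypothetical mean impulse counts per trial
v, a, cm = 4.0, 3.0, 10.0
print(multisensory_enhancement(cm, max(v, a)))  # → 150.0
print(round(additivity_ratio(cm, v, a), 2))     # → 1.43
```

On these measures the "neurotypic" profile in the abstract corresponds to enhancement well above zero with an early superadditive phase, while the "naive" profile shows enhancement near or below zero.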
Affiliation(s)
- Zhengyang Wang: Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, China
- Liping Yu, Jinghong Xu: Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
- Barry E Stein, Benjamin A Rowland: Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
10. Colonius H, Diederich A. Formal models and quantitative measures of multisensory integration: a selective overview. Eur J Neurosci 2020; 51:1161-1178. DOI: 10.1111/ejn.13813.
Affiliation(s)
- Hans Colonius: Department of Psychology, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA
- Adele Diederich: Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA; Life Sciences and Chemistry, Jacobs University Bremen, Bremen, Germany
11. Stein BE, Rowland BA. Using superior colliculus principles of multisensory integration to reverse hemianopia. Neuropsychologia 2020; 141:107413. PMID: 32113921. DOI: 10.1016/j.neuropsychologia.2020.107413.
Abstract
The diversity of our senses conveys many advantages: it enables them to compensate for one another when needed, and the information they provide about a common event can be integrated to facilitate its processing and, ultimately, adaptive responses. These cooperative interactions are produced by multisensory neurons. A well-studied model in this context is the multisensory neuron in the output layers of the superior colliculus (SC). These neurons integrate and amplify their cross-modal (e.g., visual-auditory) inputs, thereby enhancing the physiological salience of the initiating event and the probability that it will elicit SC-mediated detection, localization, and orientation behavior. Repeated experience with the same visual-auditory stimulus can also increase the neuron's sensitivity to these individual inputs. This observation raised the possibility that such plasticity could be engaged to restore visual responsiveness when compromised. For example, unilateral lesions of visual cortex compromise the visual responsiveness of neurons in the multisensory output layers of the ipsilesional SC and produce profound contralesional blindness (hemianopia). The possibility that multisensory plasticity could restore the visual responses of these neurons, and reverse blindness, was tested in the cat model of hemianopia. Hemianopic subjects were repeatedly presented with spatiotemporally congruent visual-auditory stimulus pairs in the blinded hemifield on a daily or weekly basis. After several weeks of this multisensory exposure paradigm, visual responsiveness was restored in SC neurons and behavioral responses were elicited by visual stimuli in the previously blind hemifield. The constraints on the effectiveness of this procedure proved to be the same as those constraining SC multisensory plasticity: whereas repeated exposure to a congruent visual-auditory stimulus was highly effective, exposure to its individual component stimuli, or to those stimuli in non-congruent configurations, was not. The restored visual responsiveness proved to be robust, highly competitive with that in the intact hemifield, and sufficient to support visual discrimination.
Affiliation(s)
- Barry E Stein, Benjamin A Rowland: Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157, USA
12. Electrophysiological correlates of emotional crossmodal processing in binge drinking. Cogn Affect Behav Neurosci 2019; 18:1076-1088. PMID: 30094563. DOI: 10.3758/s13415-018-0623-3.
Abstract
Emotional crossmodal integration (i.e., multisensorial decoding of emotions) is a crucial process that ensures adaptive social behaviors and responses to the environment. Recent evidence suggests that in binge drinking-an excessive alcohol consumption pattern associated with psychological and cerebral deficits-crossmodal integration is preserved at the behavioral level. Although some studies have suggested brain modifications during affective processing in binge drinking, nothing is known about the cerebral correlates of crossmodal integration. In the current study, we asked 53 university students (17 binge drinkers, 17 moderate drinkers, 19 nondrinkers) to perform an emotional crossmodal task while their behavioral and neurophysiological responses were recorded. Participants had to identify happiness and anger in three conditions (unimodal, crossmodal congruent, crossmodal incongruent) and two modalities (face and/or voice). Binge drinkers did not significantly differ from moderate drinkers and nondrinkers at the behavioral level. However, widespread cerebral modifications were found at perceptual (N100) and mainly at decisional (P3b) stages in binge drinkers, indexed by slower brain processing and stronger activity. These cerebral modifications were mostly related to anger processing and crossmodal integration. This study highlights higher electrophysiological activity in the absence of behavioral deficits, which could index a potential compensation process in binge drinkers. In line with results found in severe alcohol-use disorders, these electrophysiological findings show modified anger processing, which might have a deleterious impact on social functioning. Moreover, this study suggests impaired crossmodal integration at early stages of alcohol-related disorders.
13. Sensorimotor maps can be dynamically calibrated using an adaptive-filter model of the cerebellum. PLoS Comput Biol 2019; 15:e1007187. PMID: 31295248. PMCID: PMC6622474. DOI: 10.1371/journal.pcbi.1007187.
Abstract
Substantial experimental evidence suggests the cerebellum is involved in calibrating sensorimotor maps. Consistent with this involvement is the well-known, but little understood, massive cerebellar projection to maps in the superior colliculus. Map calibration would be a significant new role for the cerebellum given the ubiquity of map representations in the brain, but how it could perform such a task is unclear. Here we investigated a dynamic method for map calibration, based on electrophysiological recordings from the superior colliculus, that used a standard adaptive-filter cerebellar model. The method proved effective for complex distortions of both unimodal and bimodal maps, and also for predictive map-based tracking of moving targets. These results provide the first computational evidence for a novel role for the cerebellum in dynamic sensorimotor map calibration, of potential importance for coordinate alignment during ongoing motor control and for map calibration in future biomimetic systems. This computational evidence also provides testable experimental predictions concerning the role of the connections between the cerebellum and superior colliculus in previously observed dynamic coordinate transformations. The human brain contains a structure known as the cerebellum, which contains a vast number of neurons (around 80% of the total of ~90 billion). We believe the cerebellum is involved in learning motor skills, and so is vitally important for accurately controlling the movements of our body, amongst other things. However, like most regions of the brain, we still do not fully understand the role of the cerebellum, and evidence for new roles is appearing all the time. One such new role is in the calibration of sensorimotor maps in the brain that link our sensory perception to motor function, such as when a visual stimulus causes us to redirect our gaze.
We investigated this problem by connecting a mathematical model of the cerebellar cortical microcircuit to simulated sensory maps in the superior colliculus that are used to control orienting movements. We found the error signal generated by inaccurate orienting movements could be used to accurately calibrate sensorimotor maps, and to allow predictive tracking of moving targets. This finding points to a potentially widespread role for the cerebellum in calibrating the sensorimotor maps that are ubiquitous in the brain and could prove useful in controlling the movements of multi-joint robots.
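The core of an adaptive-filter scheme like the one described is a weight driven toward calibration by the post-movement error signal. A minimal single-weight LMS sketch (the learning rate, target gain, and training targets are illustrative assumptions, not the paper's cerebellar microcircuit model):

```python
def calibrate_gain(targets, true_gain=1.25, lr=0.1, epochs=200):
    """Single-weight LMS adaptive element: learn the gain that converts a
    target's map coordinate into an accurate orienting command. The error
    measured after each movement (the teaching signal, analogous to
    climbing-fiber input in adaptive-filter cerebellar models) drives the
    weight update."""
    w = 1.0  # initial, miscalibrated gain
    for _ in range(epochs):
        for x in targets:
            movement = w * x
            error = true_gain * x - movement  # post-movement motor error
            w += lr * error * x               # LMS weight update
    return w

print(round(calibrate_gain([0.2, 0.5, 0.8, 1.0]), 3))  # → 1.25
```

The paper's model applies the same error-driven update across a bank of filters spanning the collicular map, but the single-weight case already shows how movement error alone suffices to recalibrate a sensorimotor mapping.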
14. Shaikh D, Bodenhagen L, Manoonpong P. Concurrent intramodal learning enhances multisensory responses of symmetric crossmodal learning in robotic audio-visual tracking. Cogn Syst Res 2019. DOI: 10.1016/j.cogsys.2018.10.026.
15. Cross-Modal Competition: The Default Computation for Multisensory Processing. J Neurosci 2018; 39:1374-1385. PMID: 30573648. DOI: 10.1523/jneurosci.1806-18.2018.
Abstract
Mature multisensory superior colliculus (SC) neurons integrate information across the senses to enhance their responses to spatiotemporally congruent cross-modal stimuli. The development of this neurotypic feature of SC neurons requires experience with cross-modal cues. In the absence of such experience, the response of an SC neuron to congruent cross-modal cues is no more robust than its response to the most effective component cue. This "default" or "naive" state is believed to be one in which cross-modal signals do not interact. The present results challenge this characterization by identifying interactions between visual-auditory signals in male and female cats reared without visual-auditory experience. By manipulating the relative effectiveness of the visual and auditory cross-modal cues presented to each of these naive neurons, an active competition between cross-modal signals was revealed. Although contrary to current expectations, this result is explained by a neuro-computational model in which the default interaction is mutual inhibition. These findings suggest that multisensory neurons at all maturational stages are capable of some form of multisensory integration, and use experience with cross-modal stimuli to transition from their initial state of competition to their mature state of cooperation. By doing so, they develop the ability to enhance the physiological salience of cross-modal events, thereby increasing their impact on the sensorimotor circuitry of the SC and the likelihood that biologically significant events will elicit SC-mediated overt behaviors.

SIGNIFICANCE STATEMENT: The present results demonstrate that the default mode of multisensory processing in the superior colliculus is competition, not non-integration as previously characterized. A neuro-computational model explains how these competitive dynamics can be implemented via mutual inhibition, and how this default mode is superseded by the emergence of cooperative interactions during development.
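A toy version of such mutual inhibition (the drive values, inhibitory weight, and fixed-point iteration are illustrative assumptions, not the published model) shows why a naive neuron's cross-modal response can be no stronger than its best unisensory response:

```python
def response(drive_v, drive_a, w_inhib=1.0):
    """Steady-state output when the visual and auditory input channels
    mutually inhibit each other with weight w_inhib, found by iterating
    r_v = max(0, drive_v - w*r_a) and r_a = max(0, drive_a - w*r_v)."""
    r_v, r_a = drive_v, drive_a
    for _ in range(100):  # fixed-point iteration
        r_v = max(0.0, drive_v - w_inhib * r_a)
        r_a = max(0.0, drive_a - w_inhib * r_v)
    return r_v + r_a  # the neuron's summed output

# Under competition, the cross-modal response is no stronger than the
# response to the most effective component cue alone:
print(response(8.0, 0.0))  # visual alone → 8.0
print(response(8.0, 6.0))  # visual + auditory → 8.0 (no enhancement)
```

Weakening the inhibitory weight toward zero, as cross-modal experience is proposed to do, lets the two channels sum and the enhancement characteristic of the mature state emerges.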
16. Follmann R, Goldsmith CJ, Stein W. Multimodal sensory information is represented by a combinatorial code in a sensorimotor system. PLoS Biol 2018; 16:e2004527. PMID: 30321170. PMCID: PMC6201955. DOI: 10.1371/journal.pbio.2004527.
Abstract
A ubiquitous feature of the nervous system is the processing of simultaneously arriving sensory inputs from different modalities. Yet, because of the difficulties of monitoring large populations of neurons with the single-cell resolution required to determine their sensory responses, the cellular mechanisms of how populations of neurons encode different sensory modalities often remain enigmatic. We studied multimodal information encoding in a small sensorimotor system of the crustacean stomatogastric nervous system that drives rhythmic motor activity for the processing of food. This system is experimentally advantageous, as it produces a fictive behavioral output in vitro, and distinct sensory modalities can be selectively activated. It has the additional advantage that all sensory information is routed through a hub ganglion, the commissural ganglion, a structure with fewer than 220 neurons. Using optical imaging of a population of commissural neurons to track each individual neuron's response across sensory modalities, we provide evidence that multimodal information is encoded via a combinatorial code of recruited neurons. By selectively stimulating chemosensory and mechanosensory inputs that are functionally important for processing of food, we find that these two modalities were processed in a distributed network comprising the majority of commissural neurons imaged. In a total of 12 commissural ganglia, we show that 98% of all imaged neurons were involved in sensory processing, with the two modalities being processed by a highly overlapping set of neurons. Of these, 80% were multimodal, 18% were unimodal, and only 2% of the neurons did not respond to either modality. Differences between modalities were represented by the identities of the neurons participating in each sensory condition and by differences in response sign (excitation versus inhibition), with 46% changing their responses in the other modality.
Consistent with the hypothesis that the commissural network encodes different sensory conditions in the combination of activated neurons, a new combination of excitation and inhibition was found when both pathways were activated simultaneously. The responses to this bimodal condition were distinct from either unimodal condition, and for 30% of the neurons, they were not predictive from the individual unimodal responses. Thus, in a sensorimotor network, different sensory modalities are encoded using a combinatorial code of neurons that are activated or inhibited. This provides motor networks with the ability to differentially respond to categorically different sensory conditions and may serve as a model to understand higher-level processing of multimodal information.
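The combinatorial code this abstract describes can be illustrated with a small toy population. The neuron names and response patterns below are invented for the sketch; the point is only that conditions are distinguished by which neurons respond and with which sign, and that the bimodal pattern need not be predictable from the unimodal ones.

```python
# Toy combinatorial population code: each sensory condition is represented
# by the pattern of excited (+1), inhibited (-1), and silent (0) neurons.
# Neuron identities and patterns are invented for illustration only.

chemo   = {"n1": +1, "n2": +1, "n3": -1, "n4": 0}
mechano = {"n1": +1, "n2": -1, "n3": +1, "n4": +1}
# Bimodal stimulation yields its own combination, including responses
# (e.g., n4 inhibited) not seen in either unimodal condition:
bimodal = {"n1": +1, "n2": +1, "n3": +1, "n4": -1}

def pattern(code):
    """Hashable signature of a population response pattern."""
    return tuple(sorted(code.items()))

# All three conditions are distinguishable by recruitment and response sign:
distinct = len({pattern(chemo), pattern(mechano), pattern(bimodal)})
print(distinct)  # three distinct population-level codes
```

A downstream motor network reading out such patterns can therefore respond categorically differently to chemosensory, mechanosensory, and combined input, which is the functional claim of the abstract.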
Affiliation(s)
- Rosangela Follmann
- School of Biological Sciences, Illinois State University, Normal, Illinois, United States of America
- Wolfgang Stein
- School of Biological Sciences, Illinois State University, Normal, Illinois, United States of America
17
Development of the Mechanisms Governing Midbrain Multisensory Integration. J Neurosci 2018; 38:3453-3465. [PMID: 29496891 DOI: 10.1523/jneurosci.2631-17.2018] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2017] [Revised: 12/15/2017] [Accepted: 01/19/2018] [Indexed: 11/21/2022] Open
Abstract
The ability to integrate information across multiple senses enhances the brain's ability to detect, localize, and identify external events. This process has been well documented in single neurons in the superior colliculus (SC), which synthesize concordant combinations of visual, auditory, and/or somatosensory signals to enhance the vigor of their responses. This increases the physiological salience of crossmodal events and, in turn, the speed and accuracy of SC-mediated behavioral responses to them. However, this capability is not an innate feature of the circuit and only develops postnatally after the animal acquires sufficient experience with covariant crossmodal events to form links between their modality-specific components. Of critical importance in this process are tectopetal influences from association cortex. Recent findings suggest that, despite its intuitive appeal, a simple generic associative rule cannot explain how this circuit develops its ability to integrate those crossmodal inputs to produce enhanced multisensory responses. The present neurocomputational model explains how this development can be understood as a transition from a default state in which crossmodal SC inputs interact competitively to one in which they interact cooperatively. Crucial to this transition is the operation of a learning rule requiring coactivation among tectopetal afferents for engagement. The model successfully replicates findings of multisensory development in normal cats and in cats of either sex reared with special experience. In doing so, it explains how the cortico-SC projections can use crossmodal experience to craft the multisensory integration capabilities of the SC and adapt them to the environment in which they will be used.

SIGNIFICANCE STATEMENT The brain's remarkable ability to integrate information across the senses is not present at birth, but typically develops in early life as experience with crossmodal cues is acquired.
Recent empirical findings suggest that the mechanisms supporting this development must be more complex than previously believed. The present work integrates these data with what is already known about the underlying circuit in the midbrain to create and test a mechanistic model of multisensory development. This model represents a novel and comprehensive framework that explains how midbrain circuits acquire multisensory experience and reveals how disruptions in this neurotypic developmental trajectory yield divergent outcomes that will affect the multisensory processing capabilities of the mature brain.
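The coactivation-gated learning rule this abstract invokes can be sketched as a simple weight update. This is a minimal illustration under stated assumptions: the learning rate, bound, and initial inhibitory weight are invented, not taken from the model.

```python
# Hypothetical sketch of a learning rule requiring coactivation for
# engagement: a cross-modal connection weight moves from its inhibitory
# (competitive) default toward excitatory (cooperative) values only when
# both tectopetal afferents are active together. Parameters are assumptions.

def update_weight(w, vis_active, aud_active, rate=0.1, w_max=1.0):
    if vis_active and aud_active:      # coactivation required for engagement
        return min(w + rate, w_max)
    return w                           # no change without joint activity

w = -0.5  # default state: mutual inhibition between crossmodal inputs
for trial in range(20):               # repeated covariant crossmodal experience
    w = update_weight(w, vis_active=True, aud_active=True)
print(w)  # weight driven to its cooperative (excitatory) bound
```

Because the update engages only under joint activity, unimodal experience alone leaves the competitive default in place, which matches the abstract's claim that covariant crossmodal experience is what crafts the mature integrative capability.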
18
Parmiani P, Lucchetti C, Franchi G. Whisker and Nose Tactile Sense Guide Rat Behavior in a Skilled Reaching Task. Front Behav Neurosci 2018. [PMID: 29515377 PMCID: PMC5826357 DOI: 10.3389/fnbeh.2018.00024] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023] Open
Abstract
Skilled reaching is a complex movement in which a forelimb is extended to grasp food for eating. Analysis of video recordings of control rats enabled us to distinguish several components of skilled reaching: Orient, approaching the front wall of the reaching box and poking the nose into the slot to locate the food pellet; Transport, advancing the forelimb through the slot to reach-grasp the pellet; and Withdrawal of the grasped food to eat. Although food location and skilled reaching are guided by olfaction, the importance of the whisker/nose tactile sense in rats suggests that this too could play a role in reaching behavior. To test this hypothesis, we studied skilled reaching in rats trained in a single-pellet reaching task before and after bilateral whisker trimming and bilateral infraorbital nerve (ION) severing. During the task, bilaterally trimmed rats showed impaired Orient with respect to controls. Specifically, they detected the presence of the wall by hitting it with their nose (rather than their whiskers), and then located the slot through repetitive nose touches. The number of nose touches preceding poking was significantly higher in comparison to controls. On the other hand, macrovibrissae trimming resulted in no change in the reaching/grasping or withdrawal components of skilled reaching. Bilaterally ION-severed rats displayed a marked change in the structure of their skilled reaching. With respect to controls, in ION-severed rats: (a) approaches to the front wall were significantly reduced at 3–5 and 6–8 days; (b) nose pokes were significantly reduced at 3–5 days, and the slot was only located after many repetitive nose touches; (c) the reaching-grasping-retracting movement never appeared at 3–5 days; (d) explorative paw movements, equal to zero in controls, reached significance at 9–11 days; and (e) the restored reaching-grasping-retracting sequence was globally slower than in controls, but the success rate was the same.
These findings strongly indicate that whisker trimming affected Orient, but not the reaching-grasping movement, while ION severing impaired both Orient (persistently) and reaching-grasping-retracting (transiently, for 1–2 weeks) components of skilled reaching in rats.
Affiliation(s)
- Pierantonio Parmiani
- Department of Biomedical and Specialty Surgical Sciences, Section of Human Physiology, University of Ferrara, Ferrara, Italy; Center for Translational Neurophysiology, Istituto Italiano di Tecnologia, Ferrara, Italy
- Cristina Lucchetti
- Department of Biomedical, Metabolic and Neural Sciences, Section of Physiology and Neuroscience, University of Modena and Reggio Emilia, Modena, Italy
- Gianfranco Franchi
- Department of Biomedical and Specialty Surgical Sciences, Section of Human Physiology, University of Ferrara, Ferrara, Italy
19
Bach EC, Vaughan JW, Stein BE, Rowland BA. Pulsed Stimuli Elicit More Robust Multisensory Enhancement than Expected. Front Integr Neurosci 2018; 11:40. [PMID: 29354037 PMCID: PMC5758560 DOI: 10.3389/fnint.2017.00040] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2017] [Accepted: 12/15/2017] [Indexed: 11/28/2022] Open
Abstract
Neurons in the superior colliculus (SC) integrate cross-modal inputs to generate responses that are more robust than those to either input alone, and are frequently greater than their sum (superadditive enhancement). Previously, the principles of a real-time multisensory transform were identified and used to accurately predict a neuron's responses to combinations of brief flashes and noise bursts. However, environmental stimuli frequently have more complex temporal structures that elicit very different response dynamics than previously examined. The present study tested whether such stimuli (i.e., pulsed) would be treated similarly by the multisensory transform. Pulsed visual and auditory stimuli elicited responses composed of higher discharge rates with multiple peaks temporally aligned to the stimulus pulses. Combinations of pulsed cues elicited multiple peaks of superadditive enhancement within the response window. Measured over the entire response, this resulted in larger enhancements than expected given the enhancements elicited by non-pulsed ("sustained") stimuli. However, as with sustained stimuli, the dynamics of multisensory responses to pulsed stimuli were highly related to the temporal dynamics of the unisensory inputs. This suggests that the specific characteristics of the multisensory transform are not determined by the external features of the cross-modal stimulus configuration; rather, the temporal structure and alignment of the unisensory inputs are the dominant driving factors in the magnitude of the multisensory product.
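The enhancement and superadditivity measures underlying this abstract can be computed from response magnitudes. The spike counts below are made up for illustration; the enhancement formula (combined response versus best unisensory response) is the standard index used in this literature.

```python
# Quantifying multisensory enhancement from response magnitudes.
# The response values are invented for illustration.

def enhancement(multi, vis, aud):
    """Percent enhancement of the multisensory response over the most
    effective unisensory response."""
    best = max(vis, aud)
    return 100.0 * (multi - best) / best

vis, aud, multi = 12.0, 9.0, 28.0   # hypothetical mean spike counts
me = enhancement(multi, vis, aud)
print(f"enhancement = {me:.1f}%")            # positive: multisensory enhancement
print("superadditive:", multi > vis + aud)   # combined response exceeds the sum
```

Here the combined response exceeds not just the best unisensory response but the unisensory sum, i.e., the superadditive regime that the pulsed stimuli in this study elicited in multiple temporally aligned peaks.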
Affiliation(s)
- Eva C Bach
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
- John W Vaughan
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
20
Newlands SD, Abbatematteo B, Wei M, Carney LH, Luan H. Convergence of linear acceleration and yaw rotation signals on non-eye movement neurons in the vestibular nucleus of macaques. J Neurophysiol 2018; 119:73-83. [PMID: 28978765 DOI: 10.1152/jn.00382.2017] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023] Open
Abstract
Roughly half of all vestibular nucleus neurons without eye movement sensitivity respond to both angular rotation and linear acceleration. Linear acceleration signals arise from otolith organs, and rotation signals arise from semicircular canals. In the vestibular nerve, these signals are carried by different afferents. Vestibular nucleus neurons represent the first point of convergence for these distinct sensory signals. This study systematically evaluated how rotational and translational signals interact in single neurons in the vestibular nuclei: multisensory integration at the first opportunity for convergence between these two independent vestibular sensory signals. Single-unit recordings were made from the vestibular nuclei of awake macaques during yaw rotation, translation in the horizontal plane, and combinations of rotation and translation at different frequencies. The overall response magnitude of the combined translation and rotation was generally less than the sum of the magnitudes in responses to the stimuli applied independently. However, we found that under conditions in which the peaks of the rotational and translational responses were coincident these signals were approximately additive. With presentation of rotation and translation at different frequencies, rotation was attenuated more than translation, regardless of which was at a higher frequency. These data suggest a nonlinear interaction between these two sensory modalities in the vestibular nuclei, in which coincident peak responses are proportionally stronger than other, off-peak interactions. These results are similar to those reported for other forms of multisensory integration, such as audio-visual integration in the superior colliculus.

NEW & NOTEWORTHY This is the first study to systematically explore the interaction of rotational and translational signals in the vestibular nuclei through independent manipulation.
The results of this study demonstrate nonlinear integration leading to maximum response amplitude when the timing and direction of peak rotational and translational responses are coincident.
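The dependence on peak coincidence described here has a simple linear-summation analogue worth keeping in mind as a baseline: two sinusoidal response components sum to a large amplitude when in phase and nearly cancel when out of phase. The amplitudes below are arbitrary; this sketch shows only the phase effect, not the study's nonlinear interaction.

```python
# Baseline phase-dependence of summing two sinusoidal response components:
# amplitude of a_rot*sin(t) + a_trans*sin(t + phase). Values are arbitrary.
import math

def combined_amplitude(a_rot, a_trans, phase):
    """Closed-form amplitude of the sum of two equal-frequency sinusoids."""
    return math.sqrt(a_rot**2 + a_trans**2
                     + 2.0 * a_rot * a_trans * math.cos(phase))

aligned = combined_amplitude(1.0, 1.0, 0.0)       # peaks coincident
opposed = combined_amplitude(1.0, 1.0, math.pi)   # peaks opposed
print(aligned, opposed)
```

Against this linear baseline, the study's finding is that coincident-peak responses were approximately additive while off-peak combinations were attenuated more than linear summation alone would predict, which is what marks the interaction as nonlinear.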
Affiliation(s)
- Shawn D Newlands
- Department of Otolaryngology, University of Rochester Medical Center, Rochester, New York; Department of Neuroscience, University of Rochester Medical Center, Rochester, New York
- Ben Abbatematteo
- Department of Biomedical Engineering, University of Rochester, Rochester, New York
- Min Wei
- Department of Otolaryngology, University of Rochester Medical Center, Rochester, New York
- Laurel H Carney
- Department of Biomedical Engineering, University of Rochester, Rochester, New York; Department of Neuroscience, University of Rochester Medical Center, Rochester, New York
- Hongge Luan
- Department of Otolaryngology, University of Rochester Medical Center, Rochester, New York
21
Bremen P, Massoudi R, Van Wanrooij MM, Van Opstal AJ. Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man. Front Syst Neurosci 2017; 11:89. [PMID: 29238295 PMCID: PMC5712580 DOI: 10.3389/fnsys.2017.00089] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2017] [Accepted: 11/16/2017] [Indexed: 11/13/2022] Open
Abstract
The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects, we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species; (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical, synchrony); and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain.
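The finding that facilitation peaks at response synchrony can be illustrated with a minimal race-model simulation: the bimodal reaction time is the faster of two unimodal "racers", so the gain over the best unimodal condition is largest when the unimodal finishing-time distributions overlap. The distributions and parameters below are assumptions for illustration, not the study's data.

```python
# Minimal race-model sketch for the redundant-target paradigm.
# RT distributions and parameters are invented for illustration.
import random

random.seed(0)

def rt_samples(mean_ms, sd_ms, n=10000):
    """Draw hypothetical unimodal reaction times (Gaussian assumption)."""
    return [random.gauss(mean_ms, sd_ms) for _ in range(n)]

def race(rt_a, rt_v):
    """Bimodal RT = whichever unimodal process finishes first."""
    return [min(a, v) for a, v in zip(rt_a, rt_v)]

def mean(xs):
    return sum(xs) / len(xs)

# Overlapping unimodal RTs ("response synchrony"): strong facilitation.
rt_a, rt_v = rt_samples(300, 30), rt_samples(300, 30)
coincident_gain = min(mean(rt_a), mean(rt_v)) - mean(race(rt_a, rt_v))

# One modality much faster: the race adds almost nothing.
rt_a2, rt_v2 = rt_samples(300, 30), rt_samples(450, 30)
offset_gain = min(mean(rt_a2), mean(rt_v2)) - mean(race(rt_a2, rt_v2))

print(coincident_gain > offset_gain)  # facilitation largest at response synchrony
```

Even this purely statistical baseline predicts maximal facilitation when unimodal reactions would occur at the same time; the behavioral question in such studies is whether observed facilitation exceeds this race-model bound.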
Affiliation(s)
- Peter Bremen
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands; Department of Neuroscience, Erasmus Medical Center, Rotterdam, Netherlands
- Rooholla Massoudi
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands; Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, United Kingdom
- Marc M Van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- A J Van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands