1. Smyre SA, Bean NL, Stein BE, Rowland BA. The brain can develop conflicting multisensory principles to guide behavior. Cereb Cortex 2024; 34:bhae247. PMID: 38879756; PMCID: PMC11179994; DOI: 10.1093/cercor/bhae247.
Abstract
Midbrain multisensory neurons undergo a significant postnatal transition in how they process cross-modal (e.g. visual-auditory) signals. In early stages, signals derived from common events are processed competitively; however, at later stages they are processed cooperatively such that their salience is enhanced. This transition reflects adaptation to cross-modal configurations that are consistently experienced and thereby become informative about which signals correspond to common events. Tested here was the assumption that overt behaviors follow a similar maturation. Cats were reared in omnidirectional sound, thereby compromising the experience needed for this developmental process. Animals were then repeatedly exposed to different configurations of visual and auditory stimuli (e.g. spatiotemporally congruent or spatially disparate) that varied on each side of space, and their behavior was assessed using a detection/localization task. Animals showed enhanced performance for stimuli consistent with the experience provided: congruent stimuli elicited enhanced behaviors where spatially congruent cross-modal experience had been provided, and spatially disparate stimuli elicited enhanced behaviors where spatially disparate cross-modal experience had been provided. Cross-modal configurations not consistent with experience did not enhance responses. The presumptive benefit of such flexibility in the multisensory developmental process is to sensitize neural circuits (and the behaviors they control) to the features of the environment in which they will function. These experiments reveal that these processes have a high degree of flexibility, such that two (conflicting) multisensory principles can be implemented by cross-modal experience on opposite sides of space, even within the same animal.
Affiliation(s)
- Scott A Smyre, Naomi L Bean, Barry E Stein, Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston-Salem, NC 27157, United States
2. Stocke S, Samuelsen CL. Multisensory Integration Underlies the Distinct Representation of Odor-Taste Mixtures in the Gustatory Cortex of Behaving Rats. J Neurosci 2024; 44:e0071242024. PMID: 38548337; PMCID: PMC11097261; DOI: 10.1523/jneurosci.0071-24.2024.
Abstract
The perception of food relies on the integration of olfactory and gustatory signals originating from the mouth. This multisensory process generates robust associations between odors and tastes, significantly influencing the perceptual judgment of flavors. However, the specific neural substrates underlying this integrative process remain unclear. Previous electrophysiological studies identified the gustatory cortex as a site of convergent olfactory and gustatory signals, but whether neurons represent multimodal odor-taste mixtures as distinct from their unimodal odor and taste components is unknown. To investigate this, we recorded single-unit activity in the gustatory cortex of behaving female rats during the intraoral delivery of individual odors, individual tastes, and odor-taste mixtures. Our results demonstrate that chemoselective neurons in the gustatory cortex are broadly responsive to intraoral chemosensory stimuli, exhibiting time-varying multiphasic changes in activity. In a subset of these chemoselective neurons, odor-taste mixtures elicit nonlinear cross-modal responses that distinguish them from their olfactory and gustatory components. These findings provide novel insights into multimodal chemosensory processing by the gustatory cortex, highlighting the distinct representation of unimodal and multimodal intraoral chemosensory signals. Overall, our findings suggest that olfactory and gustatory signals interact nonlinearly in the gustatory cortex to enhance the identity coding of both unimodal and multimodal chemosensory stimuli.
Affiliation(s)
- Sanaya Stocke
- Department of Biology, University of Louisville, Louisville, Kentucky 40292
- Chad L Samuelsen
- Department of Anatomical Sciences and Neurobiology, University of Louisville, Louisville, Kentucky 40292
3. Bean NL, Stein BE, Rowland BA. Cross-modal exposure restores multisensory enhancement after hemianopia. Cereb Cortex 2023; 33:11036-11046. PMID: 37724427; PMCID: PMC10646694; DOI: 10.1093/cercor/bhad343.
Abstract
Hemianopia is a common consequence of unilateral damage to visual cortex that manifests as a profound blindness in contralesional space. A noninvasive cross-modal (visual-auditory) exposure paradigm has been developed in an animal model to ameliorate this disorder. Repeated presentation of a visual-auditory stimulus restores overt responses to visual stimuli in the blinded hemifield. It is believed to accomplish this by enhancing the visual sensitivity of circuits remaining after a lesion of visual cortex; in particular, circuits involving the multisensory neurons of the superior colliculus. Neurons in this midbrain structure are known to integrate spatiotemporally congruent visual and auditory signals to amplify their responses, which, in turn, enhances behavioral performance. Here we evaluated the relationship between the rehabilitation of hemianopia and this process of multisensory integration. Induction of hemianopia also eliminated multisensory enhancement in the blinded hemifield. Both vision and multisensory enhancement rapidly recovered with the rehabilitative cross-modal exposures. However, although both reached pre-lesion levels at similar rates, they did so with different spatial patterns. The results suggest that the capability for multisensory integration and enhancement is not a prerequisite for visual recovery in hemianopia, and that the underlying mechanisms for recovery may be more complex than currently appreciated.
Affiliation(s)
- Naomi L Bean, Barry E Stein, Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157, United States
4. Bean NL, Smyre SA, Stein BE, Rowland BA. Noise-rearing precludes the behavioral benefits of multisensory integration. Cereb Cortex 2023; 33:948-958. PMID: 35332919; PMCID: PMC9930622; DOI: 10.1093/cercor/bhac113.
Abstract
Concordant visual-auditory stimuli enhance the responses of individual superior colliculus (SC) neurons. This neuronal capacity for "multisensory integration" is not innate: it is acquired only after substantial cross-modal (e.g. auditory-visual) experience. Masking transient auditory cues by raising animals in omnidirectional sound ("noise-rearing") precludes their ability to obtain this experience and the ability of the SC to construct a normal multisensory (auditory-visual) transform. In noise-reared animals, SC responses to combinations of concordant visual-auditory stimuli are depressed rather than enhanced. The present experiments examined the behavioral consequences of this rearing condition in a simple detection/localization task. In the first experiment, the auditory component of the concordant cross-modal pair was novel, and only the visual stimulus was a target. In the second experiment, both component stimuli were targets. Noise-reared animals failed to show multisensory performance benefits in either experiment. These results reveal a close parallel between behavior and single-neuron physiology in the multisensory deficits that are induced when noise disrupts early visual-auditory experience.
Affiliation(s)
- Naomi L Bean (corresponding author)
- Wake Forest School of Medicine, Medical Center Blvd., Winston-Salem, NC 27157, United States
- Barry E Stein, Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston-Salem, NC 27157, United States
5. Jiang H, Stanford TR, Rowland BA, Stein BE. Association Cortex Is Essential to Reverse Hemianopia by Multisensory Training. Cereb Cortex 2021; 31:5015-5023. PMID: 34056645; DOI: 10.1093/cercor/bhab138.
Abstract
Hemianopia induced by unilateral visual cortex lesions can be resolved by repeatedly exposing the blinded hemifield to auditory-visual stimuli. This rehabilitative "training" paradigm depends on mechanisms of multisensory plasticity that restore the lost visual responsiveness of multisensory neurons in the ipsilesional superior colliculus (SC) so that they can once again support vision in the blinded hemifield. These changes are thought to operate via the convergent visual and auditory signals relayed to the SC from association cortex (the anterior ectosylvian sulcus [AES], in cat). The present study tested this assumption by cryogenically deactivating ipsilesional AES in hemianopic, anesthetized cats during weekly multisensory training sessions. No signs of visual recovery were evident in this condition, even after providing animals with up to twice the number of training sessions required for effective rehabilitation. Subsequent training under the same conditions, but with AES active, reversed the hemianopia within the normal timeframe. These results indicate that the corticotectal circuit that is normally engaged in SC multisensory plasticity has to be operational for the brain to use visual-auditory experience to resolve hemianopia.
Affiliation(s)
- Huai Jiang, Terrence R Stanford, Benjamin A Rowland, Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
6. Vestibular Stimulation May Drive Multisensory Processing: Principles for Targeted Sensorimotor Therapy (TSMT). Brain Sci 2021; 11:1111. PMID: 34439730; PMCID: PMC8393350; DOI: 10.3390/brainsci11081111.
Abstract
At birth, the vestibular system is fully mature, whilst higher-order sensory processing is yet to develop in the full-term neonate. The current paper lays out a theoretical framework to account for the role vestibular stimulation may have in driving multisensory and sensorimotor integration. Accordingly, vestibular stimulation, by activating the parieto-insular vestibular cortex and/or the posterior parietal cortex, may provide the cortical input for multisensory neurons in the superior colliculus that is needed for multisensory processing. Furthermore, we propose that motor development, by inducing changes of reference frames, may shape the receptive fields of multisensory neurons. This, by leading to a lack of spatial contingency between formerly contingent stimuli, may cause degradation of prior motor responses. Additionally, we offer a testable hypothesis explaining the beneficial effect of sensory integration therapies regarding attentional processes. Key concepts of a sensorimotor integration therapy (e.g., targeted sensorimotor therapy (TSMT)) are also put into a neurological context. TSMT utilizes specific tools and instruments. It is administered in successive 8-week-long treatment regimens, each gradually increasing vestibular and postural stimulation, so that sensorimotor integration is facilitated and muscle strength is increased. Empirically, TSMT is indicated for various diseases. The theoretical foundations of this sensorimotor therapy are discussed.
7. Dakos AS, Jiang H, Stein BE, Rowland BA. Using the Principles of Multisensory Integration to Reverse Hemianopia. Cereb Cortex 2020; 30:2030-2041. PMID: 31799618; DOI: 10.1093/cercor/bhz220.
Abstract
Hemianopia can be rehabilitated by an auditory-visual "training" procedure, which restores visual responsiveness in midbrain neurons indirectly compromised by the cortical lesion and reinstates vision in contralesional space. Presumably, these rehabilitative changes are induced via mechanisms of multisensory integration/plasticity. If so, the paradigm should fail if the stimulus configurations violate the spatiotemporal principles that govern these midbrain processes. To test this possibility, hemianopic cats were provided spatially or temporally noncongruent auditory-visual training. Rehabilitation failed in all cases even after approximately twice the number of training trials normally required for recovery, and even after animals learned to approach the location of the undetected visual stimulus. When training was repeated with these stimuli in spatiotemporal concordance, hemianopia was resolved. The results identify the conditions needed to engage changes in remaining neural circuits required to support vision in the absence of visual cortex, and have implications for rehabilitative strategies in human patients.
Affiliation(s)
- Huai Jiang, Barry E Stein, Benjamin A Rowland
- Department of Neurobiology & Anatomy, Wake Forest University School of Medicine, Winston-Salem, NC 27157-1010, USA
8. Smyre SA, Wang Z, Stein BE, Rowland BA. Multisensory enhancement of overt behavior requires multisensory experience. Eur J Neurosci 2021; 54:4514-4527. PMID: 34013578; DOI: 10.1111/ejn.15315.
Abstract
The superior colliculus (SC) is richly endowed with neurons that integrate cues from different senses to enhance their physiological responses and the overt behaviors they mediate. However, in the absence of experience with cross-modal combinations (e.g., visual-auditory), they fail to develop this characteristic multisensory capability: Their multisensory responses are no greater than their most effective unisensory responses. Presumably, this impairment in neural development would be reflected as corresponding impairments in SC-mediated behavioral capabilities such as detection and localization performance. Here, we tested that assumption directly in cats raised to adulthood in darkness. They, along with a normally reared cohort, were trained to approach brief visual or auditory stimuli. The animals were then tested with these stimuli individually and in combination under ambient light conditions consistent with their rearing conditions and home environment as well as under the opposite lighting condition. As expected, normally reared animals detected and localized the cross-modal combinations significantly better than their individual component stimuli. However, dark-reared animals showed significant defects in multisensory detection and localization performance. The results indicate that a physiological impairment in single multisensory SC neurons is predictive of an impairment in overt multisensory behaviors.
Affiliation(s)
- Scott A Smyre, Zhengyang Wang, Barry E Stein, Benjamin A Rowland
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC, USA
9. Miller LJ, Marco EJ, Chu RC, Camarata S. Editorial: Sensory Processing Across the Lifespan: A 25-Year Initiative to Understand Neurophysiology, Behaviors, and Treatment Effectiveness for Sensory Processing. Front Integr Neurosci 2021; 15:652218. PMID: 33897385; PMCID: PMC8063042; DOI: 10.3389/fnint.2021.652218.
Affiliation(s)
- Lucy Jane Miller
- Department of Pediatrics (Emeritus), University of Colorado, Denver, CO, United States; Sensory Therapies and Research Institute for Sensory Processing Disorder, Centennial, CO, United States
- Elysa J Marco
- Cortica, San Diego, CA, United States
- Robyn C Chu
- Radiology & Biomedical Imaging, University of California San Francisco, San Francisco, CA, United States; Growing Healthy Children Therapy Services, Rescue, CA, United States
- Stephen Camarata
- School of Medicine, Vanderbilt University, Nashville, TN, United States
10. Bean NL, Stein BE, Rowland BA. Stimulus value gates multisensory integration. Eur J Neurosci 2021; 53:3142-3159. PMID: 33667027; DOI: 10.1111/ejn.15167.
Abstract
The brain enhances its perceptual and behavioral decisions by integrating information from its multiple senses in what are believed to be optimal ways. This phenomenon of "multisensory integration" appears to be pre-conscious, effortless, and highly efficient. The present experiments examined whether experience could modify this seemingly automatic process. Cats were trained in a localization task in which congruent pairs of auditory-visual stimuli are normally integrated to enhance detection and orientation/approach performance. Consistent with the results of previous studies, animals more reliably detected and approached cross-modal pairs than their modality-specific component stimuli, regardless of whether the pairings were novel or familiar. However, when provided evidence that one of the modality-specific component stimuli had no value (it was not rewarded), animals ceased integrating it with other cues, and it lost its previous ability to enhance approach behaviors. Cross-modal pairings involving that stimulus failed to elicit enhanced responses even when the paired stimuli were congruent and mutually informative. However, the stimulus regained its ability to enhance responses when it was once again associated with reward. This suggests that experience can selectively block access of stimuli (i.e., filter inputs) to the multisensory computation. Because this filtering process results in the loss of useful information, its operation and behavioral consequences are not optimal. Nevertheless, the process can be of substantial value in natural environments, rich in dynamic stimuli, by using experience to minimize the impact of stimuli unlikely to be of biological significance and reducing the complexity of the problem of matching signals across the senses.
Affiliation(s)
- Naomi L Bean, Barry E Stein
- Wake Forest School of Medicine, Winston-Salem, NC, USA
11. Oess T, Löhr MPR, Schmid D, Ernst MO, Neumann H. From Near-Optimal Bayesian Integration to Neuromorphic Hardware: A Neural Network Model of Multisensory Integration. Front Neurorobot 2020; 14:29. PMID: 32499692; PMCID: PMC7243343; DOI: 10.3389/fnbot.2020.00029.
Abstract
While interacting with the world, our senses and nervous system are constantly challenged to identify the origin and coherence of sensory input signals of various intensities. This problem becomes apparent when stimuli from different modalities need to be combined, e.g., to find out whether an auditory stimulus and a visual stimulus belong to the same object. To cope with this problem, humans and most other animal species are equipped with complex neural circuits that enable fast and reliable combination of signals from various sensory organs. This multisensory integration starts in the brain stem to facilitate unconscious reflexes and continues on ascending pathways to cortical areas for further processing. To investigate the underlying mechanisms in detail, we developed a canonical neural network model for multisensory integration that resembles neurophysiological findings. For example, the model comprises multisensory integration neurons that receive excitatory and inhibitory inputs from unimodal auditory and visual neurons, respectively, as well as feedback from cortex. Such feedback projections facilitate multisensory response enhancement and lead to the commonly observed inverse effectiveness of neural activity in multisensory neurons. Two versions of the model are implemented: a rate-based neural network model for qualitative analysis, and a variant that employs spiking neurons for deployment on neuromorphic hardware. This dual approach makes it possible to create an evaluation environment that can test model performance with real-world inputs. As the deployment platform we chose IBM's neurosynaptic chip TrueNorth. Behavioral studies in humans indicate that temporal and spatial offsets, as well as the reliability of stimuli, are critical parameters for integrating signals from different modalities. The model reproduces such behavior in experiments with different sets of stimuli; in particular, model performance for stimuli with varying spatial offset is tested. In addition, we demonstrate that, due to the emergent properties of the network dynamics, model performance is close to optimal Bayesian inference for the integration of multimodal sensory signals. Furthermore, the implementation of the model on a neuromorphic processing chip enables a complete neuromorphic processing cascade, from sensory perception to multisensory integration, and the evaluation of model performance for real-world inputs.
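The "near-optimal Bayesian inference" benchmark referred to in this abstract is the standard reliability-weighted (maximum-likelihood) cue-combination rule for Gaussian cues. A minimal sketch for orientation, with illustrative numbers not taken from the paper:

```python
import numpy as np

def fuse_cues(mu_v, sigma_v, mu_a, sigma_a):
    """Maximum-likelihood fusion of two independent Gaussian cues.

    Each cue is weighted by its reliability (inverse variance), so the
    fused estimate is pulled toward the more reliable cue and is always
    at least as reliable as the better single cue.
    """
    var_v, var_a = sigma_v ** 2, sigma_a ** 2
    w_v = var_a / (var_v + var_a)   # weight on the visual cue
    w_a = var_v / (var_v + var_a)   # weight on the auditory cue
    mu = w_v * mu_v + w_a * mu_a
    sigma = np.sqrt(var_v * var_a / (var_v + var_a))
    return mu, sigma

# Reliable visual estimate (10 deg, sd 2) + noisier auditory estimate (16 deg, sd 4):
mu, sigma = fuse_cues(10.0, 2.0, 16.0, 4.0)
# The fused location (~11.2 deg) lies closer to the more reliable visual cue,
# and its uncertainty (~1.79 deg) is lower than that of either cue alone.
```

A network is "near-optimal" in this sense when its combined response tracks this rule without computing the weights explicitly; in the model described above, that behavior emerges from the network dynamics.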
Affiliation(s)
- Timo Oess, Marc O Ernst
- Applied Cognitive Psychology, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Maximilian P R Löhr, Daniel Schmid, Heiko Neumann
- Vision and Perception Science Lab, Institute of Neural Information Processing, Ulm University, Ulm, Germany
12. Chien SE, Chen YC, Matsumoto A, Yamashita W, Shih KT, Tsujimura SI, Yeh SL. The modulation of background color on perceiving audiovisual simultaneity. Vision Res 2020; 172:1-10. PMID: 32388209; DOI: 10.1016/j.visres.2020.04.009.
Abstract
Perceiving simultaneity is critical in integrating visual and auditory signals that give rise to a unified perception. We examined whether background color modulates people's perception of audiovisual simultaneity. Two hypotheses were proposed and examined: (1) the red-impairment hypothesis: visual processing speed deteriorates when viewing a red background because the magnocellular system is inhibited by red light; and (2) the blue-enhancement hypothesis: the detection of both visual and auditory signals is enhanced when viewing a blue background because it stimulates the blue-light sensitive intrinsically photosensitive retinal ganglion cells (ipRGCs), which trigger a higher alert state. Participants were exposed to different backgrounds while performing an audiovisual simultaneity judgment (SJ) task: a flash and a beep were presented at pre-designated stimulus onset asynchronies (SOAs) and participants judged whether or not the two stimuli were presented simultaneously. Experiment 1 demonstrated a shift of the point of subjective simultaneity (PSS) toward the visual-leading condition in the red compared to the blue background when the flash was presented in the periphery. In Experiment 2, the stimulation of ipRGCs was specifically manipulated to test the blue-enhancement hypothesis. The results showed no support for this hypothesis, perhaps due to top-down cortical modulations. Taken together, the shift of PSS toward the visual-leading condition in the red background was attributed to impaired visual processing speed with respect to auditory processing speed, caused by the inhibition of the magnocellular system under red light.
Affiliation(s)
- Sung-En Chien
- Department of Psychology, National Taiwan University, Taipei, Taiwan
- Yi-Chuan Chen
- Department of Medicine, Mackay Medical College, New Taipei City, Taiwan
- Akiko Matsumoto, Wakayo Yamashita
- Faculty of Science and Engineering, Kagoshima University, Kagoshima, Japan
- Kuang-Tsu Shih
- Graduate Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan
- Sei-Ichi Tsujimura
- Faculty of Design and Architecture, Nagoya City University, Nagoya, Japan
- Su-Ling Yeh
- Department of Psychology, National Taiwan University, Taipei, Taiwan; Graduate Institute of Brain and Mind Sciences, National Taiwan University; Neurobiology and Cognitive Science Center, National Taiwan University; Center for Artificial Intelligence and Advanced Robotics, National Taiwan University; Center for the Advanced Study in the Behavioral Sciences, Stanford University, USA
13. Wang Z, Yu L, Xu J, Stein BE, Rowland BA. Experience Creates the Multisensory Transform in the Superior Colliculus. Front Integr Neurosci 2020; 14:18. PMID: 32425761; PMCID: PMC7212431; DOI: 10.3389/fnint.2020.00018.
Abstract
Although the ability to integrate information across the senses is compromised in some individuals for unknown reasons, similar defects have been observed when animals are reared without multisensory experience. The experience-dependent development of multisensory integration has been studied most extensively using the visual-auditory neuron of the cat superior colliculus (SC) as a neural model. In the normally-developed adult, SC neurons react to concordant visual-auditory stimuli by integrating their inputs in real-time to produce non-linearly amplified multisensory responses. However, when prevented from gathering visual-auditory experience, their multisensory responses are no more robust than their responses to the individual component stimuli. The mechanisms operating in this defective state are poorly understood. Here we examined the responses of SC neurons in “naïve” (i.e., dark-reared) and “neurotypic” (i.e., normally-reared) animals on a millisecond-by-millisecond basis to determine whether multisensory experience changes the operation by which unisensory signals are converted into multisensory outputs (the “multisensory transform”), or whether it changes the dynamics of the unisensory inputs to that transform (e.g., their synchronization and/or alignment). The results reveal that the major impact of experience was on the multisensory transform itself. Whereas neurotypic multisensory responses exhibited non-linear amplification near their onset followed by linear amplification thereafter, the naive responses showed no integration in the initial phase of the response and a computation consistent with competition in its later phases. The results suggest that multisensory experience creates an entirely new computation by which convergent unisensory inputs are used cooperatively to enhance the physiological salience of cross-modal events and thereby facilitate normal perception and behavior.
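The enhancement contrasted here (neurotypic amplification vs. naive responses "no more robust" than the best unisensory response) is conventionally quantified in this literature as the percent difference between the multisensory response and the largest unisensory response. A brief sketch; the response magnitudes are illustrative, not data from the paper:

```python
def multisensory_enhancement(cm_response, best_unisensory_response):
    """Standard multisensory enhancement index: percent change of the
    cross-modal (CM) response relative to the largest unisensory response.
    Positive values indicate enhancement; negative values indicate depression."""
    return 100.0 * (cm_response - best_unisensory_response) / best_unisensory_response

# Neurotypic-like neuron: CM response exceeds the best unisensory response
print(multisensory_enhancement(15.0, 10.0))  # 50.0  -> enhancement
# Naive-like neuron: CM response is smaller than the best unisensory response
print(multisensory_enhancement(9.0, 10.0))   # -10.0 -> depression
```

An index near zero or below, as in the naive (dark-reared) case, corresponds to the absence of the integrative amplification that experience normally creates.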
Collapse
Affiliation(s)
- Zhengyang Wang
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, China
- Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
14
Stein BE, Rowland BA. Using superior colliculus principles of multisensory integration to reverse hemianopia. Neuropsychologia 2020; 141:107413. [PMID: 32113921 DOI: 10.1016/j.neuropsychologia.2020.107413] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2019] [Revised: 02/04/2020] [Accepted: 02/24/2020] [Indexed: 11/18/2022]
Abstract
The diversity of our senses conveys many advantages; it enables them to compensate for one another when needed, and the information they provide about a common event can be integrated to facilitate its processing and, ultimately, adaptive responses. These cooperative interactions are produced by multisensory neurons. A well-studied model in this context is the multisensory neuron in the output layers of the superior colliculus (SC). These neurons integrate and amplify their cross-modal (e.g., visual-auditory) inputs, thereby enhancing the physiological salience of the initiating event and the probability that it will elicit SC-mediated detection, localization, and orientation behavior. Repeated experience with the same visual-auditory stimulus can also increase the neuron's sensitivity to these individual inputs. This observation raised the possibility that such plasticity could be engaged to restore visual responsiveness when compromised. For example, unilateral lesions of visual cortex compromise the visual responsiveness of neurons in the multisensory output layers of the ipsilesional SC and produces profound contralesional blindness (hemianopia). The possibility that multisensory plasticity could restore the visual responses of these neurons, and reverse blindness, was tested in the cat model of hemianopia. Hemianopic subjects were repeatedly presented with spatiotemporally congruent visual-auditory stimulus pairs in the blinded hemifield on a daily or weekly basis. After several weeks of this multisensory exposure paradigm, visual responsiveness was restored in SC neurons and behavioral responses were elicited by visual stimuli in the previously blind hemifield. 
The constraints on the effectiveness of this procedure proved to be the same as those constraining SC multisensory plasticity: whereas repetitions of a congruent visual-auditory stimulus were highly effective, neither exposure to its individual component stimuli nor to these stimuli in non-congruent configurations was effective. The restored visual responsiveness proved to be robust, highly competitive with that in the intact hemifield, and sufficient to support visual discrimination.
Affiliation(s)
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157, USA
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157, USA
15
Jiang H, Rowland BA, Stein BE. Reversing Hemianopia by Multisensory Training Under Anesthesia. Front Syst Neurosci 2020; 14:4. [PMID: 32076401 PMCID: PMC7006460 DOI: 10.3389/fnsys.2020.00004] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2019] [Accepted: 01/13/2020] [Indexed: 02/04/2023] Open
Abstract
Hemianopia is characterized by blindness in one half of the visual field and is a common consequence of stroke and unilateral injury to the visual cortex. There are few effective rehabilitative strategies that can relieve it. Using the cat as an animal model of hemianopia, we found that blindness induced by lesions targeting all contiguous areas of the visual cortex could be rapidly reversed by a non-invasive, multisensory (auditory-visual) exposure procedure even while animals were anesthetized. Surprisingly few trials were required to reinstate vision in the previously blind hemifield. That rehabilitation was possible under anesthesia indicates that the visuomotor behaviors commonly believed to be essential are not required for this recovery, nor are factors such as attention, motivation, reward, or the various other cognitive features that are generally thought to facilitate neuro-rehabilitative therapies.
Affiliation(s)
- Huai Jiang
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC, United States
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC, United States
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC, United States
16
Sugiyama S, Kinukawa T, Takeuchi N, Nishihara M, Shioiri T, Inui K. Tactile Cross-Modal Acceleration Effects on Auditory Steady-State Response. Front Integr Neurosci 2019; 13:72. [PMID: 31920574 PMCID: PMC6927992 DOI: 10.3389/fnint.2019.00072] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2019] [Accepted: 12/02/2019] [Indexed: 01/09/2023] Open
Abstract
In the sensory cortex, cross-modal interaction occurs during the early cortical stages of processing; however, its effect on the speed of neuronal activity remains unclear. In this study, we used magnetoencephalography (MEG) to investigate whether tactile stimulation influences auditory steady-state responses (ASSRs). To this end, a 0.5-ms electrical pulse was randomly presented to the dorsum of the left or right hand of 12 healthy volunteers at 700 ms while a train of 25-ms pure tones was presented to the left or right side at 75 dB for 1,200 ms. Peak latencies of the 40-Hz ASSR were measured. Our results indicated that tactile stimulation significantly shortened subsequent ASSR latency. This cross-modal effect was observed from approximately 50 ms to 125 ms after the onset of tactile stimulation. The somatosensory information that converged on the auditory system may have arisen during the early processing stages, with the reduced ASSR latency indicating that a new sensory event from the cross-modal inputs served to increase the speed of ongoing sensory processing. Collectively, our findings indicate that ASSR latency changes are a sensitive index of accelerated processing.
Affiliation(s)
- Shunsuke Sugiyama
- Department of Psychiatry and Psychotherapy, Gifu University Graduate School of Medicine, Gifu, Japan
- Tomoaki Kinukawa
- Department of Anesthesiology, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Makoto Nishihara
- Multidisciplinary Pain Center, Aichi Medical University, Nagakute, Japan
- Toshiki Shioiri
- Department of Psychiatry and Psychotherapy, Gifu University Graduate School of Medicine, Gifu, Japan
- Koji Inui
- Department of Functioning and Disability, Institute for Developmental Research, Kasugai, Japan
17
Volpe G, Gori M. Multisensory Interactive Technologies for Primary Education: From Science to Technology. Front Psychol 2019; 10:1076. [PMID: 31316410 PMCID: PMC6611336 DOI: 10.3389/fpsyg.2019.01076] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2018] [Accepted: 04/25/2019] [Indexed: 12/02/2022] Open
Abstract
While technology is increasingly used in the classroom, we observe at the same time that making teachers and students accept it is more difficult than expected. In this work, we focus on multisensory technologies and we argue that the intersection between current challenges in pedagogical practices and recent scientific evidence opens novel opportunities for these technologies to bring a significant benefit to the learning process. In our view, multisensory technologies are ideal for effectively supporting an embodied and enactive pedagogical approach exploiting the best-suited sensory modality to teach a concept at school. This represents a great opportunity for designing technologies, which are both grounded on robust scientific evidence and tailored to the actual needs of teachers and students. Based on our experience in technology-enhanced learning projects, we propose six golden rules we deem important for catching this opportunity and fully exploiting it.
Affiliation(s)
- Gualtiero Volpe
- Casa Paganini-InfoMus, DIBRIS, University of Genoa, Genoa, Italy
- Monica Gori
- U-Vip Unit, Istituto Italiano di Tecnologia, Genoa, Italy
18
Cross-Modal Competition: The Default Computation for Multisensory Processing. J Neurosci 2018; 39:1374-1385. [PMID: 30573648 DOI: 10.1523/jneurosci.1806-18.2018] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2018] [Revised: 12/04/2018] [Accepted: 12/08/2018] [Indexed: 11/21/2022] Open
Abstract
Mature multisensory superior colliculus (SC) neurons integrate information across the senses to enhance their responses to spatiotemporally congruent cross-modal stimuli. The development of this neurotypic feature of SC neurons requires experience with cross-modal cues. In the absence of such experience the response of an SC neuron to congruent cross-modal cues is no more robust than its response to the most effective component cue. This "default" or "naive" state is believed to be one in which cross-modal signals do not interact. The present results challenge this characterization by identifying interactions between visual-auditory signals in male and female cats reared without visual-auditory experience. By manipulating the relative effectiveness of the visual and auditory cross-modal cues that were presented to each of these naive neurons, an active competition between cross-modal signals was revealed. Although contrary to current expectations, this result is explained by a neuro-computational model in which the default interaction is mutual inhibition. These findings suggest that multisensory neurons at all maturational stages are capable of some form of multisensory integration, and use experience with cross-modal stimuli to transition from their initial state of competition to their mature state of cooperation. By doing so, they develop the ability to enhance the physiological salience of cross-modal events, thereby increasing their impact on the sensorimotor circuitry of the SC and the likelihood that biologically significant events will elicit SC-mediated overt behaviors.

SIGNIFICANCE STATEMENT: The present results demonstrate that the default mode of multisensory processing in the superior colliculus is competition, not non-integration as previously characterized. A neuro-computational model explains how these competitive dynamics can be implemented via mutual inhibition, and how this default mode is superseded by the emergence of cooperative interactions during development.
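The competitive default computation this abstract describes can be sketched statically in a few lines. This is our illustrative parameterization, not the published neuro-computational model: the function name and the coupling strength `w_inhib` are assumptions. Each unisensory channel's drive is subtractively suppressed by the other before the two converge, so a congruent cross-modal pair yields a weaker net response than the best component alone.

```python
def default_multisensory_response(u_v, u_a, w_inhib=0.6):
    """Mutual (subtractive) inhibition between converging visual and
    auditory channels; w_inhib is an assumed coupling strength."""
    r_v = max(u_v - w_inhib * u_a, 0.0)  # visual drive suppressed by auditory
    r_a = max(u_a - w_inhib * u_v, 0.0)  # auditory drive suppressed by visual
    return r_v + r_a                     # net drive to the multisensory neuron
```

With equal drives of 10, the combined response (8) falls below the best unisensory response (10); as in the recordings, the competition is unmasked by manipulating the relative effectiveness of the two cues rather than by either cue alone.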
19
Xu J, Bi T, Wu J, Meng F, Wang K, Hu J, Han X, Zhang J, Zhou X, Keniston L, Yu L. Spatial receptive field shift by preceding cross-modal stimulation in the cat superior colliculus. J Physiol 2018; 596:5033-5050. [PMID: 30144059 DOI: 10.1113/jp275427] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2018] [Accepted: 08/21/2018] [Indexed: 12/11/2022] Open
Abstract
KEY POINTS: It has been known for some time that sensory information of one type can bias the spatial perception of another modality. However, there has been little evidence of this occurring in individual neurons. In the present study, we found that the spatial receptive field of superior colliculus multisensory neurons could be dynamically shifted by a preceding stimulus in a different modality. The extent to which the receptive field shifted depended on both the temporal and spatial gaps between the preceding and following stimuli, as well as the salience of the preceding stimulus. This result provides a neural mechanism that could underlie the process of cross-modal spatial calibration.

ABSTRACT: Psychophysical studies have shown that the different senses can be spatially entrained by each other. This can be observed in certain phenomena, such as ventriloquism, in which a visual stimulus can attract the perceived location of a spatially discordant sound. However, the neural mechanism underlying this cross-modal spatial recalibration has remained unclear, as has whether it takes place dynamically. We explored these issues in multisensory neurons of the cat superior colliculus (SC), a midbrain structure involved in both cross-modal and sensorimotor integration. Sequential cross-modal stimulation showed that the preceding stimulus can shift the receptive field (RF) of the lagging response. This cross-modal spatial calibration took place in both auditory and visual RFs, although auditory RFs shifted slightly more. By contrast, a preceding stimulus from the same modality failed to induce a similarly substantial RF shift. The extent of the RF shift depended on both the temporal and spatial gaps between the preceding and following stimuli, as well as the salience of the preceding stimulus: a narrow time gap and high stimulus salience induced larger RF shifts. In addition, when visual and auditory stimuli were presented simultaneously, a substantial RF shift toward the location-fixed stimulus was also induced. Taken together, these results reveal an online cross-modal process and reflect the details of the organization of SC inter-sensory spatial calibration.
Affiliation(s)
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
- Tingting Bi
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
- Jing Wu
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
- Fanzhu Meng
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
- Kun Wang
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
- Jiawei Hu
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
- Xiao Han
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
- Jiping Zhang
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
- Xiaoming Zhou
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
- Les Keniston
- Department of Physical Therapy, University of Maryland Eastern Shore, Princess Anne, MD, USA
- Liping Yu
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
20
Effect of acceleration of auditory inputs on the primary somatosensory cortex in humans. Sci Rep 2018; 8:12883. [PMID: 30150686 PMCID: PMC6110726 DOI: 10.1038/s41598-018-31319-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2018] [Accepted: 08/17/2018] [Indexed: 11/09/2022] Open
Abstract
Cross-modal interaction occurs during the early stages of processing in the sensory cortex; however, its effect on the speed of neuronal activity remains unclear. We used magnetoencephalography to investigate whether auditory stimulation influences the initial cortical activity in the primary somatosensory cortex. A 25-ms pure tone was randomly presented to the left or right side of healthy volunteers at 1000 ms while electrical pulses were applied to the left or right median nerve at 20 Hz for 1500 ms; a pulse train was used because no cross-modal effect was elicited by a single pulse. The latency of N20m, originating from Brodmann's area 3b, was measured for each pulse. The auditory stimulation significantly shortened the N20m latency at 1050 and 1100 ms. This reduction in N20m latency was identical for ipsilateral and contralateral sounds at both latency points. Therefore, somatosensory-auditory interaction occurred during the early stages of synaptic transmission, such as at the input to area 3b from the thalamus. Auditory information that converged on the somatosensory system was considered to have arisen from the early stages of the feedforward pathway. Acceleration of information processing through cross-modal interaction thus appears to be partly due to faster processing in the sensory cortex.
21
Noel JP, Blanke O, Serino A. From multisensory integration in peripersonal space to bodily self-consciousness: from statistical regularities to statistical inference. Ann N Y Acad Sci 2018; 1426:146-165. [PMID: 29876922 DOI: 10.1111/nyas.13867] [Citation(s) in RCA: 41] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2017] [Revised: 04/24/2018] [Accepted: 05/02/2018] [Indexed: 01/09/2023]
Abstract
Integrating information across sensory systems is a critical step toward building a cohesive representation of the environment and one's body and, as illustrated by numerous illusions, scaffolds subjective experience of the world and self. In recent years, classic principles of multisensory integration elucidated in the subcortex have been translated into the language of statistical inference understood by the neocortical mantle. Most importantly, a mechanistic systems-level description of multisensory computations via probabilistic population coding and divisive normalization is actively being put forward. In parallel, by describing and understanding bodily illusions, researchers have suggested multisensory integration of bodily inputs within the peripersonal space as a key mechanism in bodily self-consciousness. Importantly, certain aspects of bodily self-consciousness, though still a minority, have recently been cast under the light of modern computational understandings of multisensory integration. In doing so, we argue, the field of bodily self-consciousness may borrow mechanistic descriptions regarding the neural implementation of inference computations outlined by the multisensory field. This computational approach, leveraging the general understanding of multisensory processes, promises to advance scientific comprehension of one of the most mysterious questions puzzling humankind: how our brain creates the experience of a self in interaction with the environment.
Affiliation(s)
- Jean-Paul Noel
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee
- Olaf Blanke
- Laboratory of Cognitive Neuroscience (LNCO), Center for Neuroprosthetics (CNP), Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
- Department of Neurology, University of Geneva, Geneva, Switzerland
- Andrea Serino
- MySpace Lab, Department of Clinical Neuroscience, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Lausanne, Switzerland
22
Development of the Mechanisms Governing Midbrain Multisensory Integration. J Neurosci 2018; 38:3453-3465. [PMID: 29496891 DOI: 10.1523/jneurosci.2631-17.2018] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2017] [Revised: 12/15/2017] [Accepted: 01/19/2018] [Indexed: 11/21/2022] Open
Abstract
The ability to integrate information across multiple senses enhances the brain's ability to detect, localize, and identify external events. This process has been well documented in single neurons in the superior colliculus (SC), which synthesize concordant combinations of visual, auditory, and/or somatosensory signals to enhance the vigor of their responses. This increases the physiological salience of crossmodal events and, in turn, the speed and accuracy of SC-mediated behavioral responses to them. However, this capability is not an innate feature of the circuit and only develops postnatally after the animal acquires sufficient experience with covariant crossmodal events to form links between their modality-specific components. Of critical importance in this process are tectopetal influences from association cortex. Recent findings suggest that, despite its intuitive appeal, a simple generic associative rule cannot explain how this circuit develops its ability to integrate those crossmodal inputs to produce enhanced multisensory responses. The present neurocomputational model explains how this development can be understood as a transition from a default state in which crossmodal SC inputs interact competitively to one in which they interact cooperatively. Crucial to this transition is the operation of a learning rule requiring coactivation among tectopetal afferents for engagement. The model successfully replicates findings of multisensory development in normal cats and cats of either sex reared with special experience. In doing so, it explains how the cortico-SC projections can use crossmodal experience to craft the multisensory integration capabilities of the SC and adapt them to the environment in which they will be used.

SIGNIFICANCE STATEMENT: The brain's remarkable ability to integrate information across the senses is not present at birth, but typically develops in early life as experience with crossmodal cues is acquired. Recent empirical findings suggest that the mechanisms supporting this development must be more complex than previously believed. The present work integrates these data with what is already known about the underlying circuit in the midbrain to create and test a mechanistic model of multisensory development. This model represents a novel and comprehensive framework that explains how midbrain circuits acquire multisensory experience and reveals how disruptions in this neurotypic developmental trajectory yield divergent outcomes that will affect the multisensory processing capabilities of the mature brain.
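The competition-to-cooperation transition that this abstract attributes to a coactivation-gated learning rule can be sketched as follows. This is a hypothetical parameterization of the idea, not the published model: the weight values, learning rate, and activity threshold are all assumptions. An excitatory cross-modal weight starts below a fixed inhibitory weight (net competition) and is potentiated only on exposures where both afferents are coactive, eventually overtaking it (net cooperation).

```python
import numpy as np

def develop_crossmodal_weight(n_exposures=500, lr=0.02, seed=1):
    """Coactivation-gated, bounded Hebbian potentiation of an excitatory
    cross-modal weight against a fixed inhibitory weight (toy values)."""
    rng = np.random.default_rng(seed)
    w_excite, w_inhib = 0.2, 0.5      # assumed initial excitation, fixed inhibition
    history = []
    for _ in range(n_exposures):
        v, a = rng.uniform(0.5, 1.0, size=2)   # congruent pair: both afferents active
        if v > 0.1 and a > 0.1:                # learning engages only under coactivation
            w_excite += lr * v * a * (1.0 - w_excite)  # bounded Hebbian potentiation
        history.append(w_excite)
    return w_excite, w_inhib, history
```

Early in training `w_excite < w_inhib`, so inhibition dominates (competition); after enough congruent exposures `w_excite > w_inhib`, and the net interaction becomes cooperative. Omitting coactivation (e.g., presenting only one modality) would leave the weight, and hence the competitive default, unchanged.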
23
Xu J, Bi T, Keniston L, Zhang J, Zhou X, Yu L. Deactivation of Association Cortices Disrupted the Congruence of Visual and Auditory Receptive Fields in Superior Colliculus Neurons. Cereb Cortex 2017; 27:5568-5578. [PMID: 27797831 DOI: 10.1093/cercor/bhw324] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2016] [Indexed: 11/13/2022] Open
Abstract
Physiological and behavioral studies in cats show that corticotectal inputs play a critical role in the information-processing capabilities of neurons in the deeper layers of the superior colliculus (SC). Among them, the sensory inputs from functionally related associational cortices are especially critical for SC multisensory integration. However, the underlying mechanism supporting this influence is still unclear. Here, results demonstrate that deactivation of relevant cortices can both dislocate SC visual and auditory spatial receptive fields (RFs) and decrease their overall size, resulting in reduced alignment. Further analysis demonstrated that this RF separation is significantly correlated with the decrement of neurons' multisensory enhancement and is most pronounced in low stimulus intensity conditions. In addition, cortical deactivation could influence the degree of stimulus effectiveness, thereby illustrating the means by which higher order cortices may modify the multisensory activity of SC.
Affiliation(s)
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Science, East China Normal University, Shanghai, 200062, China
- Tingting Bi
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Science, East China Normal University, Shanghai, 200062, China
- Les Keniston
- Department of Physical Therapy, University of Maryland Eastern Shore, Princess Anne, MD 21853, USA
- Jiping Zhang
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Science, East China Normal University, Shanghai, 200062, China
- Xiaoming Zhou
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Science, East China Normal University, Shanghai, 200062, China
- Collaborative Innovation Center for Brain Science, East China Normal University, Shanghai 200062, China
- Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Science, East China Normal University, Shanghai, 200062, China
24
Bremen P, Massoudi R, Van Wanrooij MM, Van Opstal AJ. Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man. Front Syst Neurosci 2017; 11:89. [PMID: 29238295 PMCID: PMC5712580 DOI: 10.3389/fnsys.2017.00089] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2017] [Accepted: 11/16/2017] [Indexed: 11/13/2022] Open
Abstract
The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain.
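Finding (2) above, that reaction-time facilitation peaks when the stimulus onset asynchrony brings the two unimodal finishing times into register, can be illustrated with a toy race simulation. This is our illustration with made-up RT distributions, not the study's analysis: the means, spreads, and the simple first-finisher race rule are all assumptions.

```python
import numpy as np

def rt_facilitation(soa_ms, n_trials=20000, seed=0):
    """Bimodal reaction is triggered by whichever unimodal process finishes
    first (a simple race); facilitation is the speed-up of the mean bimodal
    RT relative to the faster unimodal mean. RT distributions are illustrative.
    """
    rng = np.random.default_rng(seed)
    rt_v = rng.normal(250.0, 30.0, n_trials)           # visual RTs (ms), assumed
    rt_a = rng.normal(200.0, 30.0, n_trials) + soa_ms  # auditory RTs shifted by SOA
    rt_av = np.minimum(rt_v, rt_a)                     # race: first finisher responds
    return min(rt_v.mean(), rt_a.mean()) - rt_av.mean()
```

In this sketch, facilitation is largest near `soa_ms = 50` (where the two unimodal means coincide, i.e., response synchrony rather than physical synchrony) and shrinks toward zero for large asynchronies in either direction, because one process then wins the race almost every time.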
Affiliation(s)
- Peter Bremen
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Department of Neuroscience, Erasmus Medical Center, Rotterdam, Netherlands
- Rooholla Massoudi
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, United Kingdom
- Marc M Van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- A J Van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
25
Cuppini C, Ursino M, Magosso E, Ross LA, Foxe JJ, Molholm S. A Computational Analysis of Neural Mechanisms Underlying the Maturation of Multisensory Speech Integration in Neurotypical Children and Those on the Autism Spectrum. Front Hum Neurosci 2017; 11:518. [PMID: 29163099 PMCID: PMC5670153 DOI: 10.3389/fnhum.2017.00518] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2017] [Accepted: 10/11/2017] [Indexed: 11/13/2022] Open
Abstract
Failure to appropriately develop multisensory integration (MSI) of audiovisual speech may affect a child's ability to attain optimal communication. Studies have shown protracted development of MSI into late-childhood and identified deficits in MSI in children with an autism spectrum disorder (ASD). Currently, the neural basis of acquisition of this ability is not well understood. Here, we developed a computational model informed by neurophysiology to analyze possible mechanisms underlying MSI maturation, and its delayed development in ASD. The model posits that strengthening of feedforward and cross-sensory connections, responsible for the alignment of auditory and visual speech sound representations in posterior superior temporal gyrus/sulcus, can explain behavioral data on the acquisition of MSI. This was simulated by a training phase during which the network was exposed to unisensory and multisensory stimuli, and projections were crafted by Hebbian rules of potentiation and depression. In its mature architecture, the network also reproduced the well-known multisensory McGurk speech effect. Deficits in audiovisual speech perception in ASD were well accounted for by fewer multisensory exposures, compatible with a lack of attention, but not by reduced synaptic connectivity or synaptic plasticity.
Affiliation(s)
- Cristiano Cuppini, Department of Electric, Electronic and Information Engineering, University of Bologna, Bologna, Italy
- Mauro Ursino, Department of Electric, Electronic and Information Engineering, University of Bologna, Bologna, Italy
- Elisa Magosso, Department of Electric, Electronic and Information Engineering, University of Bologna, Bologna, Italy
- Lars A. Ross, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
- John J. Foxe, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States; Department of Neuroscience and The Del Monte Institute for Neuroscience, University of Rochester School of Medicine, Rochester, NY, United States
- Sophie Molholm, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
26
The normal environment delays the development of multisensory integration. Sci Rep 2017; 7:4772. [PMID: 28684852 PMCID: PMC5500544 DOI: 10.1038/s41598-017-05118-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2016] [Accepted: 05/24/2017] [Indexed: 11/08/2022] Open
Abstract
Multisensory neurons in animals whose cross-modal experiences are compromised during early life fail to develop the ability to integrate information across those senses. Consequently, they lack the ability to increase the physiological salience of the events that provide the convergent cross-modal inputs. The present study demonstrates that superior colliculus (SC) neurons in animals whose visual-auditory experience is compromised early in life by noise-rearing can develop visual-auditory multisensory integration capabilities rapidly when periodically exposed to a single set of visual-auditory stimuli in a controlled laboratory paradigm. However, they remain compromised if their experiences are limited to a normal housing environment. These observations seem counterintuitive given that multisensory integrative capabilities ordinarily develop during early life in normal environments, in which a wide variety of sensory stimuli facilitate the functional organization of complex neural circuits at multiple levels of the neuraxis. However, the very richness and inherent variability of sensory stimuli in normal environments will lead to a less regular coupling of any given set of cross-modal cues than does the otherwise "impoverished" laboratory exposure paradigm. That this poses no significant problem for the neonate, but does for the adult, indicates a maturational shift in the requirements for the development of multisensory integration capabilities.
27
Multisensory Integration Uses a Real-Time Unisensory-Multisensory Transform. J Neurosci 2017; 37:5183-5194. [PMID: 28450539 DOI: 10.1523/jneurosci.2767-16.2017] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2016] [Revised: 03/01/2017] [Accepted: 03/06/2017] [Indexed: 11/21/2022] Open
Abstract
The manner in which the brain integrates different sensory inputs to facilitate perception and behavior has been the subject of numerous speculations. By examining multisensory neurons in cat superior colliculus, the present study demonstrated that two operational principles are sufficient to understand how this remarkable result is achieved: (1) unisensory signals are integrated continuously and in real time as soon as they arrive at their common target neuron and (2) the resultant multisensory computation is modified in shape and timing by a delayed, calibrating inhibition. These principles were tested for descriptive sufficiency by embedding them in a neurocomputational model and using it to predict a neuron's moment-by-moment multisensory response given only knowledge of its responses to the individual modality-specific component cues. The predictions proved to be highly accurate, reliable, and unbiased and were, in most cases, not statistically distinguishable from the neuron's actual instantaneous multisensory response at any phase throughout its entire duration. The model was also able to explain why different multisensory products are often observed in different neurons at different time points, as well as the higher-order properties of multisensory integration, such as the dependency of multisensory products on the temporal alignment of crossmodal cues. These observations not only reveal this fundamental integrative operation, but also identify quantitatively the multisensory transform used by each neuron. As a result, they provide a means of comparing the integrative profiles among neurons and evaluating how they are affected by changes in intrinsic or extrinsic factors.
SIGNIFICANCE STATEMENT: Multisensory integration is the process by which the brain combines information from multiple sensory sources (e.g., vision and audition) to maximize an organism's ability to identify and respond to environmental stimuli. The actual transformative process by which the neural products of multisensory integration are achieved is poorly understood. By focusing on the millisecond-by-millisecond differences between a neuron's unisensory component responses and its integrated multisensory response, it was found that this multisensory transform can be described by two basic principles: unisensory information is integrated in real time and the multisensory response is shaped by calibrating inhibition. It is now possible to use these principles to predict a neuron's multisensory response accurately, armed only with knowledge of its unisensory responses.
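The two principles named above, continuous real-time summation of the unisensory inputs and a delayed calibrating inhibition, can be caricatured in a few lines of simulation. The response profiles, time constants, delay, and inhibitory gain below are illustrative assumptions for the sketch, not parameters taken from the paper.

```python
import numpy as np

dt = 1.0      # ms per step (assumed)
tau = 20.0    # membrane time constant, ms (assumed)
delay = 15    # latency of the calibrating inhibition, in steps (assumed)
g_inh = 0.6   # inhibitory gain (assumed)

def alpha_response(t, onset, amp, rise=10.0):
    """Stylized unisensory input profile (alpha function)."""
    s = np.clip(t - onset, 0, None)
    return amp * (s / rise) * np.exp(1 - s / rise)

t = np.arange(0, 300, dt)
visual = alpha_response(t, onset=50, amp=1.0)
auditory = alpha_response(t, onset=60, amp=0.8)

# Principle 1: unisensory signals drive the neuron continuously, as they arrive
drive = visual + auditory
# Principle 2: a delayed copy of the drive feeds back as calibrating inhibition
inhibition = g_inh * np.concatenate([np.zeros(delay), drive[:-delay]])

v = np.zeros_like(t)  # leaky integration of the net input
for i in range(1, len(t)):
    v[i] = v[i - 1] + dt / tau * (-v[i - 1] + drive[i] - inhibition[i])

response = np.clip(v, 0, None)  # rectified firing output
```

Shifting the onsets of the two alpha functions against each other changes how much of the early excitatory drive escapes the delayed inhibition, which is one intuitive way such a scheme can make the multisensory product depend on the temporal alignment of the crossmodal cues.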
28
Yau JM, DeAngelis GC, Angelaki DE. Dissecting neural circuits for multisensory integration and crossmodal processing. Philos Trans R Soc Lond B Biol Sci 2015; 370:20140203. [PMID: 26240418 DOI: 10.1098/rstb.2014.0203] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
We rely on rich and complex sensory information to perceive and understand our environment. Our multisensory experience of the world depends on the brain's remarkable ability to combine signals across sensory systems. Behavioural, neurophysiological and neuroimaging experiments have established principles of multisensory integration and candidate neural mechanisms. Here we review how targeted manipulation of neural activity using invasive and non-invasive neuromodulation techniques has advanced our understanding of multisensory processing. Neuromodulation studies have provided detailed characterizations of brain networks causally involved in multisensory integration. Despite substantial progress, important questions regarding multisensory networks remain unanswered. Critically, experimental approaches will need to be combined with theory in order to understand how distributed activity across multisensory networks collectively supports perception.
Affiliation(s)
- Jeffrey M Yau, Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
- Gregory C DeAngelis, Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Dora E Angelaki, Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
29
The onset of visual experience gates auditory cortex critical periods. Nat Commun 2016; 7:10416. [PMID: 26786281 PMCID: PMC4736048 DOI: 10.1038/ncomms10416] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2015] [Accepted: 12/08/2015] [Indexed: 01/19/2023] Open
Abstract
Sensory systems influence one another during development and deprivation can lead to cross-modal plasticity. As auditory function begins before vision, we investigate the effect of manipulating visual experience during auditory cortex critical periods (CPs) by assessing the influence of early, normal and delayed eyelid opening on hearing loss-induced changes to membrane and inhibitory synaptic properties. Early eyelid opening closes the auditory cortex CPs precociously and dark rearing prevents this effect. In contrast, delayed eyelid opening extends the auditory cortex CPs by several additional days. The CP for recovery from hearing loss is also closed prematurely by early eyelid opening and extended by delayed eyelid opening. Furthermore, when coupled with transient hearing loss that animals normally fully recover from, very early visual experience leads to inhibitory deficits that persist into adulthood. Finally, we demonstrate a functional projection from the visual to auditory cortex that could mediate these effects. Visual and auditory systems influence each other during development. Here, the authors show that the onset of eyelid opening regulates critical periods during which the auditory cortex is sensitive to hearing loss or the restoration of hearing.
30
Yu L, Xu J, Rowland BA, Stein BE. Multisensory Plasticity in Superior Colliculus Neurons is Mediated by Association Cortex. Cereb Cortex 2014; 26:1130-7. [PMID: 25552270 DOI: 10.1093/cercor/bhu295] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
The ability to integrate information from different senses, and thereby facilitate detecting and localizing events, normally develops gradually in cat superior colliculus (SC) neurons as experience with cross-modal events is acquired. Here, we demonstrate that the portal for this experience-based change is association cortex. Unilaterally deactivating this cortex whenever visual-auditory events were present resulted in the failure of ipsilateral SC neurons to develop the ability to integrate those cross-modal inputs, even though they retained the ability to respond to them. In contrast, their counterparts in the opposite SC developed this capacity normally. The deficits were eliminated by providing cross-modal experience when cortex was active. These observations underscore the collaborative developmental processes that take place among different levels of the neuraxis to adapt the brain's multisensory (and sensorimotor) circuits to the environment in which they will be used.
Affiliation(s)
- Liping Yu, Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Sciences, East China Normal University, Shanghai 200062, China
- Jinghong Xu, Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Sciences, East China Normal University, Shanghai 200062, China
- Benjamin A Rowland, Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC 27157, USA
- Barry E Stein, Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC 27157, USA
31
Stein BE, Stanford TR, Rowland BA. Development of multisensory integration from the perspective of the individual neuron. Nat Rev Neurosci 2014; 15:520-35. [PMID: 25158358 DOI: 10.1038/nrn3742] [Citation(s) in RCA: 211] [Impact Index Per Article: 21.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
The ability to use cues from multiple senses in concert is a fundamental aspect of brain function. It maximizes the brain’s use of the information available to it at any given moment and enhances the physiological salience of external events. Because each sense conveys a unique perspective of the external world, synthesizing information across senses affords computational benefits that cannot otherwise be achieved. Multisensory integration not only has substantial survival value but can also create unique experiences that emerge when signals from different sensory channels are bound together. However, neurons in a newborn’s brain are not capable of multisensory integration, and studies in the midbrain have shown that the development of this process is not predetermined. Rather, its emergence and maturation critically depend on cross-modal experiences that alter the underlying neural circuit in such a way that optimizes multisensory integrative capabilities for the environment in which the animal will function.