1
Bao X, Lomber SG. Visual modulation of auditory evoked potentials in the cat. Sci Rep 2024; 14:7177. [PMID: 38531940] [DOI: 10.1038/s41598-024-57075-1]
Abstract
Visual modulation of the auditory system is not only a neural substrate for multisensory processing, but also serves as a backup input underlying cross-modal plasticity in deaf individuals. Event-related potential (ERP) studies in humans have provided evidence of multiple-stage audiovisual interactions, ranging from tens to hundreds of milliseconds after stimulus presentation. However, it is still unknown whether the temporal course of visual modulation of auditory ERPs can be characterized in animal models. EEG signals were recorded in sedated cats from subdermal needle electrodes. The auditory stimuli (clicks) and visual stimuli (flashes) were timed by two independent Poisson processes and were presented either simultaneously or alone. The visual-only ERPs were subtracted from the audiovisual ERPs before being compared to the auditory-only ERPs. N1 amplitude showed a trend of transitioning from suppression to facilitation, with a disruption at a ~100-ms flash-to-click delay. We concluded that visual modulation as a function of SOA over an extended range is more complex than previously characterized with short SOAs, and that its periodic pattern can be interpreted with the "phase resetting" hypothesis.
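The stimulus-timing scheme described in this abstract (clicks and flashes driven by two independent Poisson processes, with trials later binned by flash-to-click delay) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the rates and session duration are made-up values.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_event_times(rate_hz, duration_s, rng):
    """Homogeneous Poisson process: draw exponential inter-event intervals."""
    times = []
    t = rng.exponential(1.0 / rate_hz)
    while t < duration_s:
        times.append(t)
        t += rng.exponential(1.0 / rate_hz)
    return np.array(times)

# Two independent processes time the auditory clicks and visual flashes.
clicks = poisson_event_times(0.5, 600.0, rng)   # ~0.5 events/s over 10 min
flashes = poisson_event_times(0.5, 600.0, rng)

# For each click, the delay since the most recent flash gives the
# flash-to-click SOA used to bin trials (e.g., around the ~100-ms delay).
soas = np.array([c - flashes[flashes <= c][-1]
                 for c in clicks if np.any(flashes <= c)])
```

Because the two processes are independent, the flash-to-click SOAs span a continuous, extended range rather than a few fixed experimenter-chosen values.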
Affiliation(s)
- Xiaohan Bao
- Integrated Program in Neuroscience, McGill University, Montreal, QC, H3G 1Y6, Canada
- Stephen G Lomber
- Department of Physiology, McGill University, McIntyre Medical Sciences Building, Rm 1223, 3655 Promenade Sir William Osler, Montreal, QC, H3G 1Y6, Canada
2
Smyre SA, Bean NL, Stein BE, Rowland BA. Predictability alters multisensory responses by modulating unisensory inputs. Front Neurosci 2023; 17:1150168. [PMID: 37065927] [PMCID: PMC10090419] [DOI: 10.3389/fnins.2023.1150168]
Abstract
The multisensory (deep) layers of the superior colliculus (SC) play an important role in detecting, localizing, and guiding orientation responses to salient events in the environment. Essential to this role is the ability of SC neurons to enhance their responses to events detected by more than one sensory modality and to become desensitized (‘attenuated’ or ‘habituated’) or sensitized (‘potentiated’) to events that are predictable, via modulatory dynamics. To identify the nature of these modulatory dynamics, we examined how the repetition of different sensory stimuli affected the unisensory and multisensory responses of neurons in the cat SC. Neurons were presented with 2 Hz trains of three identical visual, auditory, or combined visual–auditory stimuli, followed by a fourth stimulus that was either the same or different (‘switch’). Modulatory dynamics proved to be sensory-specific: they did not transfer when the stimulus switched to another modality. However, they did transfer when switching from the visual–auditory stimulus train to either of its modality-specific component stimuli and vice versa. These observations suggest that predictions, in the form of modulatory dynamics induced by stimulus repetition, are independently sourced from and applied to the modality-specific inputs to the multisensory neuron. This falsifies several plausible mechanisms for these modulatory dynamics: they neither produce general changes in the neuron’s transform, nor are they dependent on the neuron’s output.
3
Keum D, Pultorak K, Meredith MA, Medina AE. Effects of developmental alcohol exposure on cortical multisensory integration. Eur J Neurosci 2023; 57:784-795. [PMID: 36610022] [PMCID: PMC9991967] [DOI: 10.1111/ejn.15907]
Abstract
Fetal alcohol spectrum disorder (FASD) is one of the most common causes of mental disabilities in the world, with a prevalence of 1%-6% of all births. Sensory processing deficits and cognitive problems are a major feature of this condition. Because developmental alcohol exposure can impair neuronal plasticity, and neuronal plasticity is crucial for the establishment of neuronal circuits in sensory areas, we predicted that exposure to alcohol during the third trimester equivalent of human gestation would disrupt the development of multisensory integration (MSI) in the rostral portion of the posterior parietal cortex (PPr), an integrative visual-tactile area. We conducted in vivo electrophysiology in 17 ferrets from four groups (saline/alcohol × infancy/adolescence). A total of 1157 neurons were recorded after visual, tactile, and combined visual-tactile stimulation. A multisensory (MS) enhancement or suppression is characterized by a significantly increased or decreased number of elicited spikes after combined visual-tactile stimulation compared to the strongest unimodal (visual or tactile) response. At the neuronal level, neurons from infant animals were more prone to show MS suppression, whereas those from adolescents were more prone to show MS enhancement. Although alcohol-treated animals showed similar developmental changes between infancy and adolescence, they always 'lagged behind' controls, showing more MS suppression and less enhancement. Our findings suggest that alcohol exposure during the last months of human gestation would stunt the development of MSI, which could underlie the sensory problems seen in FASD.
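The classification rule this abstract uses (enhancement vs. suppression relative to the strongest unimodal response) can be sketched as below. This is an illustrative reading of the definition, not the study's analysis code; the study compared spike counts statistically, not as raw values.

```python
def classify_ms(combined, visual, tactile):
    """Label a neuron's multisensory (MS) interaction by comparing the
    combined visual-tactile response to the strongest unimodal response."""
    best_unimodal = max(visual, tactile)
    if combined > best_unimodal:
        return "enhancement"
    if combined < best_unimodal:
        return "suppression"
    return "no interaction"

# e.g., spikes per trial (hypothetical numbers): combined 30 vs best unimodal 20
label = classify_ms(combined=30, visual=20, tactile=10)  # "enhancement"
```

Under this rule, the developmental shift the authors report corresponds to neurons moving from the "suppression" to the "enhancement" category between infancy and adolescence.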
Affiliation(s)
- Dongil Keum
- Department of Pediatrics, University of Maryland School of Medicine, Baltimore, MD
- Katie Pultorak
- Department of Pediatrics, University of Maryland School of Medicine, Baltimore, MD
- M. Alex Meredith
- Department of Anatomy and Neurobiology, Virginia Commonwealth University, Richmond, VA
- Alexandre E. Medina
- Department of Pediatrics, University of Maryland School of Medicine, Baltimore, MD
4
Bean NL, Stein BE, Rowland BA. Stimulus value gates multisensory integration. Eur J Neurosci 2021; 53:3142-3159. [PMID: 33667027] [DOI: 10.1111/ejn.15167]
Abstract
The brain enhances its perceptual and behavioral decisions by integrating information from its multiple senses in what are believed to be optimal ways. This phenomenon of "multisensory integration" appears to be pre-conscious, effortless, and highly efficient. The present experiments examined whether experience could modify this seemingly automatic process. Cats were trained in a localization task in which congruent pairs of auditory-visual stimuli are normally integrated to enhance detection and orientation/approach performance. Consistent with the results of previous studies, animals more reliably detected and approached cross-modal pairs than their modality-specific component stimuli, regardless of whether the pairings were novel or familiar. However, when provided with evidence that one of the modality-specific component stimuli had no value (it was not rewarded), animals ceased integrating it with other cues, and it lost its previous ability to enhance approach behaviors. Cross-modal pairings involving that stimulus failed to elicit enhanced responses even when the paired stimuli were congruent and mutually informative. However, the stimulus regained its ability to enhance responses when it was associated with reward. This suggests that experience can selectively block access of stimuli (i.e., filter inputs) to the multisensory computation. Because this filtering process results in the loss of useful information, its operation and behavioral consequences are not optimal. Nevertheless, the process can be of substantial value in natural environments, rich in dynamic stimuli, by using experience to minimize the impact of stimuli unlikely to be of biological significance, and by reducing the complexity of the problem of matching signals across the senses.
Affiliation(s)
- Naomi L Bean
- Wake Forest School of Medicine, Winston-Salem, NC, USA
- Barry E Stein
- Wake Forest School of Medicine, Winston-Salem, NC, USA
5
Oess T, Löhr MPR, Schmid D, Ernst MO, Neumann H. From Near-Optimal Bayesian Integration to Neuromorphic Hardware: A Neural Network Model of Multisensory Integration. Front Neurorobot 2020; 14:29. [PMID: 32499692] [PMCID: PMC7243343] [DOI: 10.3389/fnbot.2020.00029]
Abstract
While interacting with the world, our senses and nervous system are constantly challenged to identify the origin and coherence of sensory input signals of various intensities. This problem becomes apparent when stimuli from different modalities need to be combined, e.g., to find out whether an auditory stimulus and a visual stimulus belong to the same object. To cope with this problem, humans and most other animal species are equipped with complex neural circuits that enable fast and reliable combination of signals from various sensory organs. This multisensory integration starts in the brain stem to facilitate unconscious reflexes and continues on ascending pathways to cortical areas for further processing. To investigate the underlying mechanisms in detail, we developed a canonical neural network model for multisensory integration that resembles neurophysiological findings. For example, the model comprises multisensory integration neurons that receive excitatory and inhibitory inputs from unimodal auditory and visual neurons, respectively, as well as feedback from cortex. Such feedback projections facilitate multisensory response enhancement and lead to the commonly observed inverse effectiveness of neural activity in multisensory neurons. Two versions of the model are implemented: a rate-based neural network model for qualitative analysis and a variant that employs spiking neurons for deployment on a neuromorphic processor. This dual approach allows us to create an evaluation environment with the ability to test model performance with real-world inputs. As a platform for deployment we chose IBM's neurosynaptic chip TrueNorth. Behavioral studies in humans indicate that temporal and spatial offsets, as well as the reliability of stimuli, are critical parameters for integrating signals from different modalities. The model reproduces such behavior in experiments with different sets of stimuli. In particular, model performance for stimuli with varying spatial offset is tested. In addition, we demonstrate that, due to the emergent properties of network dynamics, model performance is close to optimal Bayesian inference for the integration of multimodal sensory signals. Furthermore, the implementation of the model on a neuromorphic processing chip enables a complete neuromorphic processing cascade from sensory perception to multisensory integration and the evaluation of model performance for real-world inputs.
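The "near-optimal Bayesian inference" benchmark referred to above is reliability-weighted cue combination, in which each cue is weighted by its inverse variance. A minimal sketch, with made-up numbers rather than the model's actual inputs:

```python
def fuse(x_v, var_v, x_a, var_a):
    """Maximum-likelihood combination of two independent Gaussian cues:
    each estimate is weighted by its reliability (inverse variance)."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    w_a = 1.0 - w_v
    x_hat = w_v * x_v + w_a * x_a
    var_hat = 1.0 / (1.0 / var_v + 1.0 / var_a)  # fused uncertainty shrinks
    return x_hat, var_hat

# Example: visual azimuth estimate 10 deg (variance 4),
# auditory estimate 20 deg (variance 16).
x_hat, var_hat = fuse(10.0, 4.0, 20.0, 16.0)
# x_hat = 12.0 (pulled toward the more reliable visual cue), var_hat = 3.2
```

The fused variance is always smaller than either unisensory variance, which is the signature behavioral studies test against.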
Affiliation(s)
- Timo Oess
- Applied Cognitive Psychology, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Maximilian P R Löhr
- Vision and Perception Science Lab, Institute of Neural Information Processing, Ulm University, Ulm, Germany
- Daniel Schmid
- Vision and Perception Science Lab, Institute of Neural Information Processing, Ulm University, Ulm, Germany
- Marc O Ernst
- Applied Cognitive Psychology, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Heiko Neumann
- Vision and Perception Science Lab, Institute of Neural Information Processing, Ulm University, Ulm, Germany
6
Barros P, Eppe M, Parisi GI, Liu X, Wermter S. Expectation Learning for Stimulus Prediction Across Modalities Improves Unisensory Classification. Front Robot AI 2019; 6:137. [PMID: 33501152] [PMCID: PMC7806099] [DOI: 10.3389/frobt.2019.00137]
Abstract
Expectation learning is an unsupervised learning process that uses multisensory bindings to enhance unisensory perception. For instance, as humans, we learn to associate a barking sound with the visual appearance of a dog, and we continuously fine-tune this association over time, as we learn, e.g., to associate high-pitched barking with small dogs. In this work, we address the problem of developing a computational model that addresses important properties of expectation learning, in particular focusing on the lack of explicit external supervision other than temporal co-occurrence. To this end, we present a novel hybrid neural model based on audio-visual autoencoders and a recurrent self-organizing network for multisensory bindings that facilitate stimulus reconstructions across different sensory modalities. We refer to this mechanism as stimulus prediction across modalities and demonstrate that the proposed model is capable of learning concept bindings by evaluating it on unisensory classification tasks for audio-visual stimuli, using 43,500 YouTube videos from the animal subset of the AudioSet corpus.
Affiliation(s)
- Pablo Barros
- Knowledge Technology, Department of Informatics, University of Hamburg, Hamburg, Germany
- Manfred Eppe
- Knowledge Technology, Department of Informatics, University of Hamburg, Hamburg, Germany
- German I Parisi
- Knowledge Technology, Department of Informatics, University of Hamburg, Hamburg, Germany
- Xun Liu
- Department of Psychology, University of CAS, Beijing, China
- Stefan Wermter
- Knowledge Technology, Department of Informatics, University of Hamburg, Hamburg, Germany
7
Li H, Liu N, Li Y, Weidner R, Fink GR, Chen Q. The Simon Effect Based on Allocentric and Egocentric Reference Frame: Common and Specific Neural Correlates. Sci Rep 2019; 9:13727. [PMID: 31551429] [PMCID: PMC6760495] [DOI: 10.1038/s41598-019-49990-5]
Abstract
An object's location can be represented either relative to an observer's body effectors (egocentric reference frame) or relative to another external object (allocentric reference frame). In non-spatial tasks, an object's task-irrelevant egocentric position conflicts with the side of a task-relevant manual response, which defines the classical Simon effect. Growing evidence suggests that the Simon effect occurs not only based on conflicting positions within the egocentric but also within the allocentric reference frame. Although neural mechanisms underlying the egocentric Simon effect have been extensively researched, neural mechanisms underlying the allocentric Simon effect and their potential interaction with those underlying its egocentric variant remain to be explored. In this fMRI study, spatial congruency between the task-irrelevant egocentric and allocentric target positions and the task-relevant response hand was orthogonally manipulated. Behaviorally, a significant Simon effect was observed for both reference frames. Neurally, three sub-regions in the frontoparietal network were involved in different aspects of the Simon effect, depending on the source of the task-irrelevant object locations. The right precentral gyrus, extending to the right SMA, was generally activated by Simon conflicts, irrespective of the spatial reference frame involved, and showed no additive activity to Simon conflicts. In contrast, the right postcentral gyrus was specifically involved in Simon conflicts induced by task-irrelevant allocentric, rather than egocentric, representations. Furthermore, a right lateral frontoparietal network showed increased neural activity whenever the egocentric and allocentric target locations were incongruent, indicating its functional role as a mismatch detector that monitors the discrepancy concerning allocentric and egocentric object locations.
Affiliation(s)
- Hui Li
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou, 510631, China
- Nan Liu
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou, 510631, China
- You Li
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou, 510631, China
- Ralph Weidner
- Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Center Jülich, 52425, Jülich, Germany
- Gereon R Fink
- Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Center Jülich, 52425, Jülich, Germany
- Department of Neurology, University Hospital Cologne, 50937, Cologne, Germany
- Qi Chen
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou, 510631, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, 510631, P.R. China
8
Shaikh D, Bodenhagen L, Manoonpong P. Concurrent intramodal learning enhances multisensory responses of symmetric crossmodal learning in robotic audio-visual tracking. Cogn Syst Res 2019. [DOI: 10.1016/j.cogsys.2018.10.026]
9
Cross-Modal Competition: The Default Computation for Multisensory Processing. J Neurosci 2018; 39:1374-1385. [PMID: 30573648] [DOI: 10.1523/jneurosci.1806-18.2018]
Abstract
Mature multisensory superior colliculus (SC) neurons integrate information across the senses to enhance their responses to spatiotemporally congruent cross-modal stimuli. The development of this neurotypic feature of SC neurons requires experience with cross-modal cues. In the absence of such experience the response of an SC neuron to congruent cross-modal cues is no more robust than its response to the most effective component cue. This "default" or "naive" state is believed to be one in which cross-modal signals do not interact. The present results challenge this characterization by identifying interactions between visual-auditory signals in male and female cats reared without visual-auditory experience. By manipulating the relative effectiveness of the visual and auditory cross-modal cues that were presented to each of these naive neurons, an active competition between cross-modal signals was revealed. Although contrary to current expectations, this result is explained by a neuro-computational model in which the default interaction is mutual inhibition. These findings suggest that multisensory neurons at all maturational stages are capable of some form of multisensory integration, and use experience with cross-modal stimuli to transition from their initial state of competition to their mature state of cooperation. By doing so, they develop the ability to enhance the physiological salience of cross-modal events, thereby increasing their impact on the sensorimotor circuitry of the SC, and the likelihood that biologically significant events will elicit SC-mediated overt behaviors.
SIGNIFICANCE STATEMENT The present results demonstrate that the default mode of multisensory processing in the superior colliculus is competition, not non-integration as previously characterized. A neuro-computational model explains how these competitive dynamics can be implemented via mutual inhibition, and how this default mode is superseded by the emergence of cooperative interactions during development.
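A minimal sketch of the mutual-inhibition "default mode" this abstract proposes, using a two-channel rate model. The time constant, inhibitory weight, and inputs are illustrative choices, not the paper's fitted parameters.

```python
def sc_response(i_v, i_a, w_inh, dt=0.001, tau=0.02, steps=500):
    """Two-channel rate model: visual and auditory inputs drive a target
    neuron while mutually inhibiting each other (hypothesized 'default'
    mode). w_inh = 0 gives independent channels; w_inh > 0 gives competition."""
    r_v = r_a = 0.0
    for _ in range(steps):
        r_v += dt / tau * (-r_v + max(i_v - w_inh * r_a, 0.0))
        r_a += dt / tau * (-r_a + max(i_a - w_inh * r_v, 0.0))
    return r_v + r_a  # summed drive to the multisensory neuron

# With mutual inhibition, the cross-modal response falls below the sum of
# the unisensory responses: competition rather than enhancement.
alone = sc_response(1.0, 0.0, w_inh=0.8) + sc_response(0.0, 1.0, w_inh=0.8)
combined = sc_response(1.0, 1.0, w_inh=0.8)
```

In this sketch the developmental transition to cooperation would correspond to experience-dependent weakening of `w_inh` (and growth of excitatory cross-modal convergence), moving `combined` from below `alone` to above it.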
10
Development of the Mechanisms Governing Midbrain Multisensory Integration. J Neurosci 2018; 38:3453-3465. [PMID: 29496891] [DOI: 10.1523/jneurosci.2631-17.2018]
Abstract
The ability to integrate information across multiple senses enhances the brain's ability to detect, localize, and identify external events. This process has been well documented in single neurons in the superior colliculus (SC), which synthesize concordant combinations of visual, auditory, and/or somatosensory signals to enhance the vigor of their responses. This increases the physiological salience of crossmodal events and, in turn, the speed and accuracy of SC-mediated behavioral responses to them. However, this capability is not an innate feature of the circuit and only develops postnatally after the animal acquires sufficient experience with covariant crossmodal events to form links between their modality-specific components. Of critical importance in this process are tectopetal influences from association cortex. Recent findings suggest that, despite its intuitive appeal, a simple generic associative rule cannot explain how this circuit develops its ability to integrate those crossmodal inputs to produce enhanced multisensory responses. The present neurocomputational model explains how this development can be understood as a transition from a default state in which crossmodal SC inputs interact competitively to one in which they interact cooperatively. Crucial to this transition is the operation of a learning rule requiring coactivation among tectopetal afferents for engagement. The model successfully replicates findings of multisensory development in normal cats of either sex and in cats reared with special experience. In doing so, it explains how the cortico-SC projections can use crossmodal experience to craft the multisensory integration capabilities of the SC and adapt them to the environment in which they will be used.
SIGNIFICANCE STATEMENT The brain's remarkable ability to integrate information across the senses is not present at birth, but typically develops in early life as experience with crossmodal cues is acquired. Recent empirical findings suggest that the mechanisms supporting this development must be more complex than previously believed. The present work integrates these data with what is already known about the underlying circuit in the midbrain to create and test a mechanistic model of multisensory development. This model represents a novel and comprehensive framework that explains how midbrain circuits acquire multisensory experience and reveals how disruptions in this neurotypic developmental trajectory yield divergent outcomes that will affect the multisensory processing capabilities of the mature brain.
11
Bach EC, Vaughan JW, Stein BE, Rowland BA. Pulsed Stimuli Elicit More Robust Multisensory Enhancement than Expected. Front Integr Neurosci 2018; 11:40. [PMID: 29354037] [PMCID: PMC5758560] [DOI: 10.3389/fnint.2017.00040]
Abstract
Neurons in the superior colliculus (SC) integrate cross-modal inputs to generate responses that are more robust than those to either input alone, and are frequently greater than their sum (superadditive enhancement). Previously, the principles of a real-time multisensory transform were identified and used to accurately predict a neuron's responses to combinations of brief flashes and noise bursts. However, environmental stimuli frequently have more complex temporal structures that elicit very different response dynamics than previously examined. The present study tested whether such stimuli (i.e., pulsed) would be treated similarly by the multisensory transform. Pulsed visual and auditory stimuli elicited responses composed of higher discharge rates with multiple peaks temporally aligned to the stimulus pulses. Combinations of pulsed cues elicited multiple peaks of superadditive enhancement within the response window. Measured over the entire response, this resulted in larger enhancements than expected given the enhancements elicited by non-pulsed (“sustained”) stimuli. However, as with sustained stimuli, the dynamics of multisensory responses to pulsed stimuli were highly related to the temporal dynamics of the unisensory inputs. This suggests that the specific characteristics of the multisensory transform are not determined by the external features of the cross-modal stimulus configuration; rather, the temporal structure and alignment of the unisensory inputs are the dominant factors driving the magnitude of the multisensory product.
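The peak-wise superadditivity described above can be illustrated with toy per-bin firing rates (hypothetical numbers, not recorded data): the combined response exceeds the additive prediction only at the pulse-aligned peaks, which inflates the enhancement measured over the whole response window.

```python
import numpy as np

# Hypothetical binned firing rates (spikes/s) for one neuron; pulsed
# stimuli produce response peaks aligned to each stimulus pulse.
v  = np.array([2, 8, 2, 8, 2, 8, 2], dtype=float)     # visual-alone
a  = np.array([1, 6, 1, 6, 1, 6, 1], dtype=float)     # auditory-alone
va = np.array([3, 20, 3, 20, 3, 20, 3], dtype=float)  # combined response

additive = v + a                      # additive prediction per bin
superadditive_bins = va > additive    # superadditive only at the peaks

# Enhancement over the whole response window, relative to the best
# unisensory response (a common multisensory enhancement convention).
best_uni = max(v.sum(), a.sum())
enhancement_pct = 100.0 * (va.sum() - best_uni) / best_uni
```

With these toy numbers the window-wide enhancement is large even though the baseline bins between pulses are merely additive, mirroring the paper's point that the pulse-aligned peaks carry the extra enhancement.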
Affiliation(s)
- Eva C Bach
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
- John W Vaughan
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
12
Multisensory Integration Uses a Real-Time Unisensory-Multisensory Transform. J Neurosci 2017; 37:5183-5194. [PMID: 28450539] [DOI: 10.1523/jneurosci.2767-16.2017]
Abstract
The manner in which the brain integrates different sensory inputs to facilitate perception and behavior has been the subject of numerous speculations. By examining multisensory neurons in cat superior colliculus, the present study demonstrated that two operational principles are sufficient to understand how this remarkable result is achieved: (1) unisensory signals are integrated continuously and in real time as soon as they arrive at their common target neuron and (2) the resultant multisensory computation is modified in shape and timing by a delayed, calibrating inhibition. These principles were tested for descriptive sufficiency by embedding them in a neurocomputational model and using it to predict a neuron's moment-by-moment multisensory response given only knowledge of its responses to the individual modality-specific component cues. The predictions proved to be highly accurate, reliable, and unbiased and were, in most cases, not statistically distinguishable from the neuron's actual instantaneous multisensory response at any phase throughout its entire duration. The model was also able to explain why different multisensory products are often observed in different neurons at different time points, as well as the higher-order properties of multisensory integration, such as the dependency of multisensory products on the temporal alignment of crossmodal cues. These observations not only reveal this fundamental integrative operation, but also identify quantitatively the multisensory transform used by each neuron. As a result, they provide a means of comparing the integrative profiles among neurons and evaluating how they are affected by changes in intrinsic or extrinsic factors.
SIGNIFICANCE STATEMENT Multisensory integration is the process by which the brain combines information from multiple sensory sources (e.g., vision and audition) to maximize an organism's ability to identify and respond to environmental stimuli. The actual transformative process by which the neural products of multisensory integration are achieved is poorly understood. By focusing on the millisecond-by-millisecond differences between a neuron's unisensory component responses and its integrated multisensory response, it was found that this multisensory transform can be described by two basic principles: unisensory information is integrated in real time and the multisensory response is shaped by calibrating inhibition. It is now possible to use these principles to predict a neuron's multisensory response accurately, armed only with knowledge of its unisensory responses.
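The two principles named in this abstract can be sketched as a toy discrete-time model: unisensory inputs sum continuously, and the sum is shaped by a delayed inhibition proportional to earlier drive. The response profiles, inhibitory weight, and delay below are invented for illustration and are not the fitted transform of any recorded neuron.

```python
import numpy as np

def predict_multisensory(r_v, r_a, w_i=0.5, delay_bins=5):
    """Sketch of the two principles: (1) unisensory inputs are summed
    continuously in real time; (2) the sum is shaped by a delayed,
    calibrating inhibition proportional to the earlier drive."""
    drive = r_v + r_a                     # principle 1: real-time summation
    inhibition = np.zeros_like(drive)
    inhibition[delay_bins:] = w_i * drive[:-delay_bins]  # principle 2: delayed
    return np.maximum(drive - inhibition, 0.0)

t = np.arange(50)                          # time bins
r_v = np.exp(-0.5 * ((t - 15) / 4.0) ** 2)  # toy unisensory response profiles
r_a = np.exp(-0.5 * ((t - 20) / 4.0) ** 2)
ms = predict_multisensory(r_v, r_a)
```

Shifting one unisensory profile relative to the other changes the overlap of `drive` before the inhibition arrives, which is how a model of this shape reproduces the dependence of multisensory products on crossmodal temporal alignment.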
13
Kardamakis AA, Pérez-Fernández J, Grillner S. Spatiotemporal interplay between multisensory excitation and recruited inhibition in the lamprey optic tectum. eLife 2016; 5. [PMID: 27635636] [PMCID: PMC5026466] [DOI: 10.7554/elife.16472]
Abstract
Animals integrate the different senses to facilitate event-detection for navigation in their environment. In vertebrates, the optic tectum (superior colliculus) commands gaze shifts by synaptic integration of different sensory modalities. Recent work suggests that the tectum can elaborate gaze reorientation commands on its own, rather than merely acting as a relay from upstream/forebrain circuits to downstream premotor centers. We show that tectal circuits can perform multisensory computations independently and, hence, configure final motor commands. Single tectal neurons receive converging visual and electrosensory inputs, as investigated in the lamprey - a phylogenetically conserved vertebrate. When these two sensory inputs overlap in space and time, response enhancement of output neurons occurs locally in the tectum, whereas surrounding areas and temporally misaligned inputs are inhibited. Retinal and electrosensory afferents elicit local monosynaptic excitation, quickly followed by inhibition via recruitment of GABAergic interneurons. Multisensory inputs can thus regulate event-detection within the tectum through local inhibition without forebrain control. DOI: http://dx.doi.org/10.7554/eLife.16472.001
Many events occur around us simultaneously, which we detect through our senses. A critical task is to decide which of these events is the most important to look at in a given moment of time. This problem is solved by an ancient area of the brain called the optic tectum (known as the superior colliculus in mammals). The different senses are represented as superimposed maps in the optic tectum. Events that occur in different locations activate different areas of the map. Neurons in the optic tectum combine the responses from different senses to direct the animal’s attention and increase how reliably important events are detected. If an event is simultaneously registered by two senses, then certain neurons in the optic tectum will enhance their activity. By contrast, if two senses provide conflicting information about how different events progress, then these same neurons will be silenced. While this phenomenon of ‘multisensory integration’ is well described, little is known about how the optic tectum performs this integration. Kardamakis, Pérez-Fernández and Grillner have now studied multisensory integration in fish called lampreys, which belong to the oldest group of backboned animals. These fish can navigate using electroreception – the ability to detect electrical signals from the environment. Experiments that examined the connections between neurons in the optic tectum and monitored their activity revealed a neural circuit that consists of two types of neurons: inhibitory interneurons, and projecting neurons that connect the optic tectum to different motor centers in the brainstem. The circuit contains neurons that can receive inputs from both vision and electroreception when these senses are both activated from the same point in space. Incoming signals from the two senses activate the areas on the sensory maps that correspond to the location where the event occurred. This triggers the activity of the interneurons, which immediately send ‘stop’ signals. Thus, while an area of the sensory map and its output neurons are activated, the surrounding areas of the tectum are inhibited. Overall, the findings presented by Kardamakis, Pérez-Fernández and Grillner suggest that the optic tectum can direct attention to a particular event without requiring input from other brain areas. This ability has most likely been preserved throughout evolution. Future studies will aim to determine how the commands generated by the optic tectum circuit are translated into movements. DOI: http://dx.doi.org/10.7554/eLife.16472.002
Collapse
Affiliation(s)
| | | | - Sten Grillner
- Department of Neuroscience, Karolinska Institute, Stockholm, Sweden
| |
Collapse
|
14
|
A dynamical framework to relate perceptual variability with multisensory information processing. Sci Rep 2016; 6:31280. [PMID: 27502974 PMCID: PMC4977493 DOI: 10.1038/srep31280] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2016] [Accepted: 07/15/2016] [Indexed: 11/29/2022] Open
Abstract
Multisensory processing involves the participation of individual sensory streams, e.g., vision and audition, to facilitate perception of environmental stimuli. An experimental realization of the underlying complexity is captured by the "McGurk effect": incongruent auditory and visual vocalization stimuli elicit perception of illusory speech sounds. Further studies have established that the time delay between the onsets of the auditory and visual signals (AV lag) and perturbations in the unisensory streams are key variables that modulate perception. However, so far only a few quantitative theoretical frameworks have been proposed to understand the interplay among these psychophysical variables or the neural systems-level interactions that govern perceptual variability. Here, we propose a dynamic systems model consisting of the basic ingredients of any multisensory processing reported by several researchers: two unisensory sub-systems and one multisensory sub-system (nodes). The nodes are connected such that biophysically inspired coupling parameters and time delays become key parameters of this network. We observed that zero AV lag results in maximum synchronization of the constituent nodes and that the degree of synchronization decreases for non-zero lags. The attractor states of this network can thus be interpreted as facilitating the stabilization of specific perceptual experiences. Thereby, the dynamic model presents a quantitative framework for understanding multisensory information processing.
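The abstract does not give the model equations, but its central observation, that synchronization between the sensory streams is maximal at zero AV lag and degrades as the lag grows, can be illustrated with a drastically reduced sketch in which the visual drive is simply a lagged copy of the auditory drive (the full model couples two unisensory nodes and a multisensory node through biophysically inspired parameters). All names and parameter values below are illustrative, not the authors':

```python
import numpy as np

def sync_index(lag_ms, freq_hz=4.0, dur_s=2.0, dt=1e-3):
    """Correlation between an auditory drive and a visual drive that is a
    lagged copy of it: a toy stand-in for the network synchronization
    measure described in the abstract."""
    t = np.arange(0.0, dur_s, dt)
    audio = np.sin(2 * np.pi * freq_hz * t)
    visual = np.sin(2 * np.pi * freq_hz * (t - lag_ms / 1000.0))
    return float(np.corrcoef(audio, visual)[0, 1])
```

As in the model, the index peaks at zero lag and falls off as the AV lag grows.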
Collapse
|
15
|
Daemi M, Harris LR, Crawford JD. Causal Inference for Cross-Modal Action Selection: A Computational Study in a Decision Making Framework. Front Comput Neurosci 2016; 10:62. [PMID: 27445780 PMCID: PMC4917558 DOI: 10.3389/fncom.2016.00062] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2016] [Accepted: 06/09/2016] [Indexed: 11/25/2022] Open
Abstract
Animals try to make sense of sensory information from multiple modalities by categorizing it into perceptions of individual or multiple external objects or internal concepts. For example, the brain constructs sensory, spatial representations of the locations of visual and auditory stimuli in the visual and auditory cortices based on retinal and cochlear stimulation. Currently, it is not known how the brain compares the temporal and spatial features of these sensory representations to decide whether they originate from the same or separate sources in space. Here, we propose a computational model of how the brain might solve such a task. We reduce the visual and auditory information to time-varying, finite-dimensional signals. We introduce controlled, leaky integrators as working memory that retains the sensory information for the limited time-course of task implementation. We propose our model within an evidence-based, decision-making framework, where the alternative plan units are saliency maps of space. A spatiotemporal similarity measure, computed directly from the unimodal signals, is suggested as the criterion to infer common or separate causes. We provide simulations that (1) validate our model against behavioral experimental results in tasks where the participants were asked to report common or separate causes for cross-modal stimuli presented with arbitrary spatial and temporal disparities, (2) predict the behavior in novel experiments where stimuli have different combinations of spatial, temporal, and reliability features, and (3) illustrate the dynamics of the proposed internal system. These results confirm our spatiotemporal similarity measure as a viable criterion for causal inference, and our decision-making framework as a viable mechanism for target selection, which may be used by the brain in cross-modal situations. Further, we suggest that a similar approach can be extended to other cognitive problems where working memory is a limiting factor, such as target selection among higher numbers of stimuli and selections among other modality combinations.
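The leaky-integrator working memory and the similarity criterion described above can be sketched concretely. The time constant, threshold, and trace arithmetic below are invented for illustration and are not the authors' implementation:

```python
import numpy as np

def memory_trace(signal, dt=1e-3, tau=0.2):
    """Leaky integrator serving as a working-memory trace of one unimodal
    signal (tau and dt are made-up values; the abstract gives none)."""
    trace = np.zeros_like(signal, dtype=float)
    for i in range(1, len(signal)):
        trace[i] = trace[i - 1] + dt * (-trace[i - 1] / tau + signal[i])
    return trace

def similarity(trace_a, trace_b):
    """Normalized inner product of two traces: one plausible reading of a
    'spatiotemporal similarity measure' computed from the unimodal signals."""
    denom = np.linalg.norm(trace_a) * np.linalg.norm(trace_b)
    return float(trace_a @ trace_b) / denom if denom else 0.0

def common_cause(sig_a, sig_b, threshold=0.5):
    """Infer a common cause when the memory traces are similar enough
    (the threshold is arbitrary)."""
    return similarity(memory_trace(sig_a), memory_trace(sig_b)) > threshold
```

Coincident pulses in the two channels leave near-identical traces and are judged to share a cause; widely separated pulses are not.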
Collapse
Affiliation(s)
- Mehdi Daemi
- Department of Biology and Neuroscience Graduate Diploma, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada
| | - Laurence R Harris
- Department of Biology and Neuroscience Graduate Diploma, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada; School of Kinesiology and Health Sciences, York University, Toronto, ON, Canada
| | - J Douglas Crawford
- Department of Biology and Neuroscience Graduate Diploma, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada; School of Kinesiology and Health Sciences, York University, Toronto, ON, Canada; NSERC Brain and Action Program, York University, Toronto, Canada
| |
Collapse
|
16
|
Yau JM, DeAngelis GC, Angelaki DE. Dissecting neural circuits for multisensory integration and crossmodal processing. Philos Trans R Soc Lond B Biol Sci 2016; 370:20140203. [PMID: 26240418 DOI: 10.1098/rstb.2014.0203] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
We rely on rich and complex sensory information to perceive and understand our environment. Our multisensory experience of the world depends on the brain's remarkable ability to combine signals across sensory systems. Behavioural, neurophysiological and neuroimaging experiments have established principles of multisensory integration and candidate neural mechanisms. Here we review how targeted manipulation of neural activity using invasive and non-invasive neuromodulation techniques has advanced our understanding of multisensory processing. Neuromodulation studies have provided detailed characterizations of brain networks causally involved in multisensory integration. Despite substantial progress, important questions regarding multisensory networks remain unanswered. Critically, experimental approaches will need to be combined with theory in order to understand how distributed activity across multisensory networks collectively supports perception.
Collapse
Affiliation(s)
- Jeffrey M Yau
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
| | - Gregory C DeAngelis
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
| | - Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
| |
Collapse
|
17
|
Felch DL, Khakhalin AS, Aizenman CD. Multisensory integration in the developing tectum is constrained by the balance of excitation and inhibition. eLife 2016; 5. [PMID: 27218449 PMCID: PMC4912350 DOI: 10.7554/elife.15600] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2016] [Accepted: 05/23/2016] [Indexed: 11/13/2022] Open
Abstract
Multisensory integration (MSI) is the process that allows the brain to bind together spatiotemporally congruent inputs from different sensory modalities to produce single salient representations. While the phenomenology of MSI in vertebrate brains is well described, relatively little is known about cellular and synaptic mechanisms underlying this phenomenon. Here we use an isolated brain preparation to describe cellular mechanisms underlying development of MSI between visual and mechanosensory inputs in the optic tectum of Xenopus tadpoles. We find MSI is highly dependent on the temporal interval between crossmodal stimulus pairs. Over a key developmental period, the temporal window for MSI significantly narrows and is selectively tuned to specific interstimulus intervals. These changes in MSI correlate with developmental increases in evoked synaptic inhibition, and inhibitory blockade reverses observed developmental changes in MSI. We propose a model in which development of recurrent inhibition mediates development of temporal aspects of MSI in the tectum.
Collapse
Affiliation(s)
- Daniel L Felch
- Department of Neuroscience, Brown University, Providence, United States; Department of Cell and Molecular Biology, Tulane University, New Orleans, United States
| | - Arseny S Khakhalin
- Department of Neuroscience, Brown University, Providence, United States; Department of Biology, Bard College, New York, United States
| | - Carlos D Aizenman
- Department of Neuroscience, Brown University, Providence, United States
| |
Collapse
|
18
|
Jiang H, Stein BE, McHaffie JG. Multisensory training reverses midbrain lesion-induced changes and ameliorates haemianopia. Nat Commun 2015; 6:7263. [PMID: 26021613 PMCID: PMC6193257 DOI: 10.1038/ncomms8263] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2014] [Accepted: 04/23/2015] [Indexed: 11/09/2022] Open
Abstract
Failure to attend to visual cues is a common consequence of visual cortex injury. Here, we report on a behavioural strategy whereby cross-modal (auditory-visual) training reinstates visuomotor competencies in animals rendered haemianopic by complete unilateral visual cortex ablation. The re-emergence of visual behaviours is correlated with the reinstatement of visual responsiveness in deep layer neurons of the ipsilesional superior colliculus (SC). This functional recovery is produced by training-induced alterations in descending influences from association cortex that allowed these midbrain neurons to once again transform visual cues into appropriate orientation behaviours. The findings underscore the inherent plasticity and functional breadth of phylogenetically older visuomotor circuits that can express visual capabilities thought to have been subsumed by more recently evolved brain regions. These observations suggest the need for reevaluating current concepts of functional segregation in the visual system and have important implications for strategies aimed at ameliorating trauma-induced visual deficits in humans.
Collapse
Affiliation(s)
- Huai Jiang
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, North Carolina 27157-1010 USA
| | - Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, North Carolina 27157-1010 USA
| | - John G McHaffie
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, North Carolina 27157-1010 USA
| |
Collapse
|
19
|
Kilteni K, Maselli A, Kording KP, Slater M. Over my fake body: body ownership illusions for studying the multisensory basis of own-body perception. Front Hum Neurosci 2015; 9:141. [PMID: 25852524 PMCID: PMC4371812 DOI: 10.3389/fnhum.2015.00141] [Citation(s) in RCA: 205] [Impact Index Per Article: 22.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2014] [Accepted: 02/28/2015] [Indexed: 11/13/2022] Open
Abstract
Which is my body and how do I distinguish it from the bodies of others, or from objects in the surrounding environment? The perception of our own body and more particularly our sense of body ownership is taken for granted. Nevertheless, experimental findings from body ownership illusions (BOIs) show that under specific multisensory conditions, we can experience artificial body parts or fake bodies as our own body parts or body, respectively. The aim of the present paper is to discuss how and why BOIs are induced. We review several experimental findings concerning the spatial, temporal, and semantic principles of crossmodal stimuli that have been applied to induce BOIs. On the basis of these principles, we discuss theoretical approaches concerning the underlying mechanism of BOIs. We propose a conceptualization based on Bayesian causal inference for addressing how our nervous system could infer whether an object belongs to our own body, using multisensory, sensorimotor, and semantic information, and we discuss how this can account for several experimental findings. Finally, we point to neural network models as an implementational framework within which the computational problem behind BOIs could be addressed in the future.
Collapse
Affiliation(s)
- Konstantina Kilteni
- Event Lab, Department of Personality, Evaluation and Psychological Treatment, University of Barcelona, Barcelona, Spain; IR3C Institute for Brain, Cognition, and Behaviour, University of Barcelona, Barcelona, Spain
| | - Antonella Maselli
- Event Lab, Department of Personality, Evaluation and Psychological Treatment, University of Barcelona, Barcelona, Spain
| | - Konrad P Kording
- Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, IL, USA; Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA; Department of Physiology, Northwestern University, Chicago, IL, USA
| | - Mel Slater
- Event Lab, Department of Personality, Evaluation and Psychological Treatment, University of Barcelona, Barcelona, Spain; IR3C Institute for Brain, Cognition, and Behaviour, University of Barcelona, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats, Passeig Lluís Companys 23, Barcelona, Spain
| |
Collapse
|
20
|
Bauer J, Magg S, Wermter S. Attention modeled as information in learning multisensory integration. Neural Netw 2015; 65:44-52. [PMID: 25688997 DOI: 10.1016/j.neunet.2015.01.004] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2014] [Revised: 01/14/2015] [Accepted: 01/18/2015] [Indexed: 11/30/2022]
Abstract
Top-down cognitive processes affect the way bottom-up cross-sensory stimuli are integrated. In this paper, we therefore extend a successful previous neural network model of learning multisensory integration in the superior colliculus (SC) by top-down, attentional input and train it on different classes of cross-modal stimuli. The network not only learns to integrate cross-modal stimuli, but the model also reproduces neurons specializing in different combinations of modalities as well as behavioral and neurophysiological phenomena associated with spatial and feature-based attention. Importantly, we do not provide the model with any information about which input neurons are sensory and which are attentional. If the basic mechanisms of our model (self-organized learning of input statistics and divisive normalization) play a major role in the ontogenesis of the SC, then this work shows that these mechanisms suffice to explain a wide range of aspects both of bottom-up multisensory integration and the top-down influence on multisensory integration.
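Of the two mechanisms named above, divisive normalization is easy to sketch in a few lines; the semi-saturation constant below is an arbitrary choice, not a value from the paper:

```python
import numpy as np

def divisive_normalization(drive, sigma=0.5):
    """Each unit's response is its own drive divided by the pooled activity
    of the whole population plus a semi-saturation constant sigma."""
    drive = np.asarray(drive, dtype=float)
    return drive / (sigma + drive.sum())
```

Because each response is divided by the pooled activity, the relative pattern across units is preserved while the total output stays bounded no matter how many inputs are active.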
Collapse
Affiliation(s)
- Johannes Bauer
- University of Hamburg, Department of Informatics, Knowledge Technology, WTM, Vogt-Kölln-Straße 30, 22527 Hamburg, Germany.
| | - Sven Magg
- University of Hamburg, Department of Informatics, Knowledge Technology, WTM, Vogt-Kölln-Straße 30, 22527 Hamburg, Germany.
| | - Stefan Wermter
- University of Hamburg, Department of Informatics, Knowledge Technology, WTM, Vogt-Kölln-Straße 30, 22527 Hamburg, Germany.
| |
Collapse
|
21
|
Ursino M, Cuppini C, Magosso E. Neurocomputational approaches to modelling multisensory integration in the brain: A review. Neural Netw 2014; 60:141-65. [DOI: 10.1016/j.neunet.2014.08.003] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2014] [Revised: 08/05/2014] [Accepted: 08/07/2014] [Indexed: 10/24/2022]
|
22
|
van Atteveldt N, Murray MM, Thut G, Schroeder CE. Multisensory integration: flexible use of general operations. Neuron 2014; 81:1240-1253. [PMID: 24656248 DOI: 10.1016/j.neuron.2014.02.044] [Citation(s) in RCA: 176] [Impact Index Per Article: 17.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/27/2014] [Indexed: 11/25/2022]
Abstract
Research into the anatomical substrates and "principles" for integrating inputs from separate sensory surfaces has yielded divergent findings. This suggests that multisensory integration is flexible and context dependent and underlines the need for dynamically adaptive neuronal integration mechanisms. We propose that flexible multisensory integration can be explained by a combination of canonical, population-level integrative operations, such as oscillatory phase resetting and divisive normalization. These canonical operations subsume multisensory integration into a fundamental set of principles as to how the brain integrates all sorts of information, and they are being used proactively and adaptively. We illustrate this proposition by unifying recent findings from different research themes such as timing, behavioral goal, and experience-related differences in integration.
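Oscillatory phase resetting, one of the canonical operations invoked above, can be caricatured in a few lines: a stimulus snaps the phase of an ongoing oscillation to zero, so that subsequent excitability peaks line up with the stimulus. The frequency, duration, and cosine excitability proxy are all illustrative assumptions, not details from the review:

```python
import numpy as np

def reset_oscillator(reset_times_s, freq_hz=8.0, dur_s=1.0, dt=1e-3):
    """Free-running oscillation whose phase jumps to zero at each stimulus
    time; cos(phase) serves as a crude excitability proxy."""
    n = int(dur_s / dt)
    reset_idx = {int(t / dt) for t in reset_times_s}
    phase = np.zeros(n)
    for i in range(1, n):
        if i in reset_idx:
            phase[i] = 0.0  # stimulus resets the phase
        else:
            phase[i] = (phase[i - 1] + 2 * np.pi * freq_hz * dt) % (2 * np.pi)
    return np.cos(phase)
```

After a reset, excitability is maximal at the stimulus time and at fixed multiples of the oscillation period thereafter, which is the sense in which resetting can align processing across modalities.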
Collapse
Affiliation(s)
- Nienke van Atteveldt
- Neuroimaging & Neuromodeling group, Netherlands Institute for Neuroscience, Royal Netherlands Academy of Arts and Sciences, Meibergdreef 47, 1105 BA Amsterdam, The Netherlands; Department of Educational Neuroscience, Faculty of Psychology & Education and Institute LEARN!, VU University Amsterdam, van der Boechorststraat 1, 1081 BT Amsterdam, The Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology & Neuroscience, Maastricht University, P.O. Box 616, 6200 MD Maastricht, The Netherlands.
| | - Micah M Murray
- The Laboratory for Investigative Neurophysiology (the LINE), Neuropsychology and Neurorehabilitation Service and Radiodiagnostic Service, University Hospital Center and University of Lausanne, Avenue Pierre Decker 5, 1011 Lausanne, Switzerland; EEG Brain Mapping Core, Centre for Biomedical Imaging (CIBM), Rue du Bugnon 46, 1011 Lausanne, Switzerland
| | - Gregor Thut
- Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow, G12 8QB, UK
| | - Charles E Schroeder
- Columbia University, Department Psychiatry, and the New York State Psychiatric Institute, 1051 Riverside Drive, New York, NY 10032, USA; Nathan S. Kline Institute, Cognitive Neuroscience & Schizophrenia Program, 140 Old Orangeburg Road, Orangeburg, NY 10962, USA.
| |
Collapse
|
23
|
Rowland BA, Stein BE. A model of the temporal dynamics of multisensory enhancement. Neurosci Biobehav Rev 2013; 41:78-84. [PMID: 24374382 DOI: 10.1016/j.neubiorev.2013.12.003] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2013] [Revised: 11/04/2013] [Accepted: 12/10/2013] [Indexed: 11/29/2022]
Abstract
The senses transduce different forms of environmental energy, and the brain synthesizes information across them to enhance responses to salient biological events. We hypothesize that the potency of multisensory integration is attributable to the convergence of independent and temporally aligned signals derived from cross-modal stimulus configurations onto multisensory neurons. The temporal profile of multisensory integration in neurons of the deep superior colliculus (SC) is consistent with this hypothesis. The responses of these neurons to visual, auditory, and combinations of visual-auditory stimuli reveal that multisensory integration takes place in real-time; that is, the input signals are integrated as soon as they arrive at the target neuron. Interactions between cross-modal signals may appear to reflect linear or nonlinear computations on a moment-by-moment basis, the aggregate of which determines the net product of multisensory integration. Modeling observations presented here suggest that the early nonlinear components of the temporal profile of multisensory integration can be explained with a simple spiking neuron model, and do not require more sophisticated assumptions about the underlying biology. A transition from nonlinear "super-additive" computation to linear, additive computation can be accomplished via scaled inhibition. The findings provide a set of design constraints for artificial implementations seeking to exploit the basic principles and potency of biological multisensory integration in contexts of sensory substitution or augmentation.
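A threshold-linear rate unit is enough to caricature the spiking-neuron account sketched above: two unisensory drives that are each subthreshold can sum to a suprathreshold multisensory drive (superadditivity), while strong drives combine nearly additively. The threshold and the form of the "scaled inhibition" term are guesses for illustration, not the authors' model:

```python
def response(drive, threshold=1.0, k_inh=0.0):
    """Threshold-linear unit; k_inh scales an inhibitory term that grows
    with the drive itself (one guess at 'scaled inhibition')."""
    return max(0.0, drive * (1.0 - k_inh) - threshold)

def multisensory_vs_sum(a, b, **kw):
    """Return (response to the combined drive, sum of unisensory responses)."""
    return response(a + b, **kw), response(a, **kw) + response(b, **kw)
```

For a weak pair such as (0.7, 0.7) the combined response is positive while both unisensory responses are zero, i.e. superadditive; for a strong pair such as (3, 3) the combined response (5) is close to the unisensory sum (4), i.e. near-additive.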
Collapse
Affiliation(s)
| | - Barry E Stein
- Wake Forest School of Medicine, Winston-Salem, NC 27157, United States.
| |
Collapse
|
24
|
Ghose D, Wallace MT. Heterogeneity in the spatial receptive field architecture of multisensory neurons of the superior colliculus and its effects on multisensory integration. Neuroscience 2013; 256:147-62. [PMID: 24183964 DOI: 10.1016/j.neuroscience.2013.10.044] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2013] [Revised: 10/08/2013] [Accepted: 10/22/2013] [Indexed: 11/15/2022]
Abstract
Multisensory integration has been widely studied in neurons of the mammalian superior colliculus (SC). This has led to the description of various determinants of multisensory integration, including those based on stimulus- and neuron-specific factors. The most widely characterized of these illustrate the importance of the spatial and temporal relationships of the paired stimuli as well as their relative effectiveness in eliciting a response in determining the final integrated output. Although these stimulus-specific factors have generally been considered in isolation (i.e., manipulating stimulus location while holding all other factors constant), they have an intrinsic interdependency that has yet to be fully elucidated. For example, changes in stimulus location will likely also impact both the temporal profile of response and the effectiveness of the stimulus. The importance of better describing this interdependency is further reinforced by the fact that SC neurons have large receptive fields, and that responses at different locations within these receptive fields are far from equivalent. To address these issues, the current study was designed to examine the interdependency between the stimulus factors of space and effectiveness in dictating the multisensory responses of SC neurons. The results show that neuronal responsiveness changes dramatically with changes in stimulus location, highlighting a marked heterogeneity in the spatial receptive fields of SC neurons. More importantly, this receptive field heterogeneity played a major role in the integrative product exhibited by stimulus pairings, such that pairings at weakly responsive locations of the receptive fields resulted in the largest multisensory interactions. Together these results provide greater insight into the interrelationship of the factors underlying multisensory integration in SC neurons, and may have important mechanistic implications for multisensory integration and the role it plays in shaping SC-mediated behaviors.
Collapse
Affiliation(s)
- D Ghose
- Department of Psychology, Vanderbilt University, Nashville, TN, United States; Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN, United States.
| | - M T Wallace
- Department of Psychology, Vanderbilt University, Nashville, TN, United States; Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN, United States; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States; Department of Psychiatry, Vanderbilt University, Nashville, TN, United States; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, United States
| |
Collapse
|
25
|
Yu L, Xu J, Rowland BA, Stein BE. Development of cortical influences on superior colliculus multisensory neurons: effects of dark-rearing. Eur J Neurosci 2013; 37:1594-601. [PMID: 23534923 DOI: 10.1111/ejn.12182] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2012] [Revised: 02/08/2013] [Accepted: 02/11/2013] [Indexed: 11/27/2022]
Abstract
Rearing cats from birth to adulthood in darkness prevents neurons in the superior colliculus (SC) from developing the capability to integrate visual and non-visual (e.g. visual-auditory) inputs. Presumably, this developmental anomaly is due to a lack of experience with the combination of those cues, which is essential to form associative links between them. The visual-auditory multisensory integration capacity of SC neurons has also been shown to depend on the functional integrity of converging visual and auditory inputs from the ipsilateral association cortex. Disrupting these cortico-collicular projections at any stage of life results in a pattern of outcomes similar to those found after dark-rearing; SC neurons respond to stimuli in both sensory modalities, but cannot integrate the information they provide. Thus, it is possible that dark-rearing compromises the development of these descending tecto-petal connections and the essential influences they convey. However, the results of the present experiments, using cortical deactivation to assess the presence of cortico-collicular influences, demonstrate that dark-rearing does not prevent the association cortex from developing robust influences over SC multisensory responses. In fact, dark-rearing may increase their potency over that observed in normally-reared animals. Nevertheless, their influences are still insufficient to support SC multisensory integration. It appears that cross-modal experience shapes the cortical influence to selectively enhance responses to cross-modal stimulus combinations that are likely to be derived from the same event. In the absence of this experience, the cortex develops an indiscriminate excitatory influence over its multisensory SC target neurons.
Collapse
Affiliation(s)
- Liping Yu
- School of Life Science, East China Normal University, Shanghai, China, 2000062
| | | | | | | |
Collapse
|
26
|
Cuppini C, Magosso E, Rowland B, Stein B, Ursino M. Hebbian mechanisms help explain development of multisensory integration in the superior colliculus: a neural network model. BIOLOGICAL CYBERNETICS 2012; 106:691-713. [PMID: 23011260 PMCID: PMC3552306 DOI: 10.1007/s00422-012-0511-9] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/29/2011] [Accepted: 07/11/2012] [Indexed: 06/01/2023]
Abstract
The superior colliculus (SC) integrates relevant sensory information (visual, auditory, somatosensory) from several cortical and subcortical structures to program orientation responses to external events. However, this capacity is not present at birth, and it is acquired only through interactions with cross-modal events during maturation. Mathematical models provide a quantitative framework, valuable in helping to clarify the specific neural mechanisms underlying the maturation of multisensory integration in the SC. We extended a neural network model of the adult SC (Cuppini et al., Front Integr Neurosci 4:1-15, 2010) to describe the development of this phenomenon starting from an immature state, based on known or suspected anatomy and physiology, in which: (1) AES afferents are present but weak, (2) responses are driven by non-AES afferents, and (3) the visual inputs have only marginal spatial tuning. Sensory experience was modeled by repeatedly presenting modality-specific and cross-modal stimuli. Synapses in the network were modified by simple Hebbian learning rules. As a consequence of this exposure, (1) receptive fields shrink and come into spatial register, and (2) SC neurons gain the adult characteristic integrative properties: enhancement, depression, and inverse effectiveness. Importantly, the unique architecture of the model guided the development so that integration became dependent on the relationship between the cortical input and the SC. Manipulating the statistics of the experience during development changed the integrative profiles of the neurons, and the results matched well with those of physiological studies.
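The abstract names "simple Hebbian learning rules" without giving them; a minimal stand-in is Oja's rule (a Hebbian update with built-in weight decay), shown here driving a weight vector onto a repeatedly presented coincident cross-modal pattern. The pattern shape, learning rate, and epoch count are all invented for illustration and are not the paper's rule or parameters:

```python
import numpy as np

def oja_learn(pattern, n_epochs=200, eta=0.1, seed=0):
    """Oja-rule Hebbian learning on one repeatedly presented input pattern;
    the weights converge onto (a scaled copy of) that pattern."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=pattern.size)
    for _ in range(n_epochs):
        y = w @ pattern                   # postsynaptic activity
        w += eta * y * (pattern - y * w)  # Hebb term minus Oja decay
    return w
```

Building the pattern as concatenated visual and auditory spatial bumps at the same location, the learned weights end up aligned with the coincident cross-modal input, a one-neuron cartoon of receptive fields coming into spatial register through exposure.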
Collapse
Affiliation(s)
- C Cuppini
- Department of Electronics, Computer Science and Systems, University of Bologna, Bologna, Italy.
| | | | | | | | | |
Collapse
|
27
|
Audio-visual localization with hierarchical topographic maps: Modeling the superior colliculus. Neurocomputing 2012. [DOI: 10.1016/j.neucom.2012.05.015] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
28
|
Yu L, Rowland BA, Xu J, Stein BE. Multisensory plasticity in adulthood: cross-modal experience enhances neuronal excitability and exposes silent inputs. J Neurophysiol 2012; 109:464-74. [PMID: 23114212 DOI: 10.1152/jn.00739.2012] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023] Open
Abstract
Multisensory superior colliculus neurons in cats were found to retain substantial plasticity to short-term, site-specific experience with cross-modal stimuli well into adulthood. Following cross-modal exposure trials, these neurons substantially increased their sensitivity to the cross-modal stimulus configuration as well as to its individual component stimuli. In many cases, the exposure experience also revealed a previously ineffective or "silent" input channel, rendering it overtly responsive. These experience-induced changes required relatively few exposure trials and could be retained for more than 1 h. However, their induction was generally restricted to experience with cross-modal stimuli. Only rarely were they induced by exposure to a modality-specific stimulus, and they were never induced by stimulating a previously ineffective input channel. This short-term plasticity likely provides substantial benefits to the organism in dealing with ongoing and sequential events that take place at a given location in space and may reflect the ability of multisensory superior colliculus neurons to rapidly alter their response properties to accommodate changes in environmental challenges and event probabilities.
Affiliation(s)
- Liping Yu
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina 27157-1010, USA
|
29
|
Ghose D, Barnett ZP, Wallace MT. Impact of response duration on multisensory integration. J Neurophysiol 2012; 108:2534-44. [PMID: 22896723 DOI: 10.1152/jn.00286.2012] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Multisensory neurons in the superior colliculus (SC) have been shown to have large receptive fields that are heterogeneous in nature. These neurons have the capacity to integrate their different sensory inputs, a process that has been shown to depend on the physical characteristics of the stimuli that are combined (i.e., spatial and temporal relationship and relative effectiveness). Recent work has highlighted the interdependence of these factors in driving multisensory integration, adding a layer of complexity to our understanding of multisensory processes. In the present study our goal was to add to this understanding by characterizing how stimulus location impacts the temporal dynamics of multisensory responses in cat SC neurons. The results illustrate that locations within the spatial receptive fields (SRFs) of these neurons can be divided into those showing short-duration responses and long-duration response profiles. Most importantly, discharge duration appears to be a good determinant of multisensory integration, such that short-duration responses are typically associated with a high magnitude of multisensory integration (i.e., superadditive responses) while long-duration responses are typically associated with low integrative capacity. These results further reinforce the complexity of the integrative features of SC neurons and show that the large SRFs of these neurons are characterized by vastly differing temporal dynamics, dynamics that strongly shape the integrative capacity of these neurons.
Affiliation(s)
- Dipanwita Ghose
- Department of Psychology, Vanderbilt University, Nashville, Tennessee 37240, USA.
|
30
|
Perrault T, Rowland B, Stein B. The Organization and Plasticity of Multisensory Integration in the Midbrain. Front Neurosci 2011. [DOI: 10.1201/b11092-20] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
|
31
|
Perrault T, Rowland B, Stein B. The Organization and Plasticity of Multisensory Integration in the Midbrain. Front Neurosci 2011. [DOI: 10.1201/9781439812174-20] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
|
32
|
|
33
|
Lim HK, Keniston LP, Cios KJ. Modeling of Multisensory Convergence with a Network of Spiking Neurons: A Reverse Engineering Approach. IEEE Trans Biomed Eng 2011; 58:1940-9. [DOI: 10.1109/tbme.2011.2125962] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
34
|
A normalization model of multisensory integration. Nat Neurosci 2011; 14:775-82. [PMID: 21552274 PMCID: PMC3102778 DOI: 10.1038/nn.2815] [Citation(s) in RCA: 181] [Impact Index Per Article: 13.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2010] [Accepted: 03/21/2011] [Indexed: 11/08/2022]
Abstract
Responses of neurons that integrate multiple sensory inputs are traditionally characterized in terms of a set of empirical principles. However, a simple computational framework that accounts for these empirical features of multisensory integration has not been established. We propose that divisive normalization, acting at the stage of multisensory integration, can account for many of the empirical principles of multisensory integration shown by single neurons, such as the principle of inverse effectiveness and the spatial principle. This model, which uses a simple functional operation (normalization) for which there is considerable experimental support, also accounts for the recent observation that the mathematical rule by which multisensory neurons combine their inputs changes with cue reliability. The normalization model, which makes a strong testable prediction regarding cross-modal suppression, may therefore provide a simple unifying computational account of the important features of multisensory integration by neurons.
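The central operation of the normalization model can be sketched compactly. The code below is a single-neuron reduction assumed for illustration (the published model normalizes a neuron's drive by the activity of a pool of neurons); the weights, exponent, and semisaturation constant are arbitrary choices. Even this reduction reproduces inverse effectiveness: proportionally larger cross-modal enhancement for weaker inputs.

```python
def drive(c_vis, c_aud, w_vis=1.0, w_aud=1.0):
    # Linear multisensory drive of a model neuron (weights assumed)
    return w_vis * c_vis + w_aud * c_aud

def response(E, n=2.0, sigma=1.0, r_max=1.0):
    # Divisive normalization, single-neuron reduction: R = E^n / (sigma^n + E^n)
    return r_max * E**n / (sigma**n + E**n)

def enhancement(c_vis, c_aud):
    """Percent multisensory enhancement over the best unisensory response."""
    r_vis = response(drive(c_vis, 0.0))
    r_aud = response(drive(0.0, c_aud))
    r_both = response(drive(c_vis, c_aud))
    return 100.0 * (r_both - max(r_vis, r_aud)) / max(r_vis, r_aud)

print(enhancement(0.2, 0.2))  # weak cues: large enhancement
print(enhancement(2.0, 2.0))  # strong cues: small enhancement (inverse effectiveness)
```

With strong inputs the normalized response saturates, so combining cues adds little; with weak inputs the expansive exponent makes the combined drive disproportionately effective.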
|
35
|
Cuppini C, Magosso E, Ursino M. Organization, maturation, and plasticity of multisensory integration: insights from computational modeling studies. Front Psychol 2011; 2:77. [PMID: 21687448 PMCID: PMC3110383 DOI: 10.3389/fpsyg.2011.00077] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2010] [Accepted: 04/12/2011] [Indexed: 11/15/2022] Open
Abstract
In this paper, we present two neural network models – devoted to two specific and widely investigated aspects of multisensory integration – in order to demonstrate the potential of computational models to provide insight into the neural mechanisms underlying the organization, development, and plasticity of multisensory integration in the brain. The first model considers visual–auditory interaction in a midbrain structure named the superior colliculus (SC). The model is able to reproduce and explain the main physiological features of multisensory integration in SC neurons and to describe how SC integrative capability – not present at birth – develops gradually during postnatal life depending on sensory experience with cross-modal stimuli. The second model tackles the problem of how tactile stimuli on a body part and visual (or auditory) stimuli close to the same body part are integrated in multimodal parietal neurons to form the perception of peripersonal (i.e., near) space. The model investigates how the extension of peripersonal space – where multimodal integration occurs – may be modified by experience, such as using a tool to interact with far space. The utility of the modeling approach rests on several aspects: (i) The two models, although devoted to different problems and simulating different brain regions, share some common mechanisms (lateral inhibition and excitation, non-linear neuron characteristics, recurrent connections, competition, Hebbian rules of potentiation and depression) that may govern more generally the fusion of senses in the brain, and the learning and plasticity of multisensory integration. (ii) The models may help interpret behavioral and psychophysical responses in terms of neural activity and synaptic connections. (iii) The models can make testable predictions that help guide future experiments aimed at validating, rejecting, or modifying the main assumptions.
Affiliation(s)
- Cristiano Cuppini
- Department of Electronics, Computer Science and Systems, University of Bologna Bologna, Italy
|
36
|
Lim HK, Keniston LP, Shin JH, Allman BL, Meredith MA, Cios KJ. Connectional parameters determine multisensory processing in a spiking network model of multisensory convergence. Exp Brain Res 2011; 213:329-39. [PMID: 21484394 DOI: 10.1007/s00221-011-2671-6] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2010] [Accepted: 03/30/2011] [Indexed: 02/02/2023]
Abstract
For the brain to synthesize information from different sensory modalities, connections from different sensory systems must converge onto individual neurons. However, despite being the definitive, first step in the multisensory process, little is known about multisensory convergence at the neuronal level. This lack of knowledge may be due to the difficulty for biological experiments to manipulate and test the connectional parameters that define convergence. Therefore, the present study used a computational network of spiking neurons to measure the influence of convergence from two separate projection areas on the responses of neurons in a convergent area. Systematic changes in the proportion of extrinsic projections, the proportion of intrinsic connections, or the amount of local inhibitory contacts affected the multisensory properties of neurons in the convergent area by influencing (1) the proportion of multisensory neurons generated, (2) the proportion of neurons that generate integrated multisensory responses, and (3) the magnitude of multisensory integration. These simulations provide insight into the connectional parameters of convergence that contribute to the generation of populations of multisensory neurons in different neural regions as well as indicate that the simple effect of multisensory convergence is sufficient to generate multisensory properties like those of biological multisensory neurons.
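The basic effect this study simulates, convergence alone producing multisensory responses, can be caricatured with a single leaky integrate-and-fire neuron. This is not the paper's spiking network (whose connectional parameters are the point of the study); the two input weights and neuron constants below are assumptions chosen so that each projection alone is subthreshold while the convergent drive is not, i.e., a superadditive multisensory response emerges from convergence alone.

```python
def lif_spike_count(i_ext, t_steps=1000, dt=1.0, tau=20.0, v_th=1.0, v_reset=0.0):
    """Spike count of a leaky integrate-and-fire neuron under constant drive."""
    v, spikes = 0.0, 0
    for _ in range(t_steps):
        v += dt / tau * (-v + i_ext)   # Euler step of the membrane equation
        if v >= v_th:
            spikes += 1
            v = v_reset
    return spikes

w_vis, w_aud = 0.6, 0.6                  # weights of two converging projections (assumed)
r_vis = lif_spike_count(w_vis)           # visual projection alone: subthreshold
r_aud = lif_spike_count(w_aud)           # auditory projection alone: subthreshold
r_both = lif_spike_count(w_vis + w_aud)  # convergent drive crosses threshold
print(r_vis, r_aud, r_both)
```

Each unisensory drive settles below threshold and yields zero spikes, while the combined drive spikes repeatedly, so the "multisensory" response exceeds the sum of the unisensory ones.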
Affiliation(s)
- H K Lim
- Department of Computer Science, School of Engineering, Virginia Commonwealth University School of Medicine, Richmond, VA, USA
|
37
|
Stein BE, Rowland BA. Organization and plasticity in multisensory integration: early and late experience affects its governing principles. PROGRESS IN BRAIN RESEARCH 2011; 191:145-63. [PMID: 21741550 DOI: 10.1016/b978-0-444-53752-2.00007-2] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Neurons in the midbrain superior colliculus (SC) have the ability to integrate information from different senses to profoundly increase their sensitivity to external events. This not only enhances an organism's ability to detect and localize these events, but also to program appropriate motor responses to them. The survival value of this process of multisensory integration is self-evident, and its physiological and behavioral manifestations have been studied extensively in adult and developing cats and monkeys. These studies have revealed that, contrary to expectations based on some developmental theories, this process is not present in the newborn's brain. The data show that it is acquired only gradually during postnatal life as a consequence of at least two factors: the maturation of cooperative interactions between association cortex and the SC, and extensive experience with cross-modal cues. Using these factors, the brain is able to craft the underlying neural circuits and the fundamental principles that govern multisensory integration so that they are adapted to the ecological circumstances in which they will be used.
Affiliation(s)
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, North Carolina, USA.
|
38
|
Bürck M, Friedel P, Sichert AB, Vossen C, van Hemmen JL. Optimality in mono- and multisensory map formation. BIOLOGICAL CYBERNETICS 2010; 103:1-20. [PMID: 20502911 DOI: 10.1007/s00422-010-0393-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/05/2010] [Accepted: 04/10/2010] [Indexed: 05/29/2023]
Abstract
In the struggle for survival in a complex and dynamic environment, nature has developed a multitude of sophisticated sensory systems. In order to exploit the information provided by these sensory systems, higher vertebrates reconstruct the spatio-temporal environment from each of the sensory systems they have at their disposal. That is, for each modality the animal computes a neuronal representation of the outside world, a monosensory neuronal map. Here we present a universal framework that makes it possible to calculate the specific layout of the involved neuronal network by means of a general mathematical principle, viz., stochastic optimality. To illustrate the use of this theoretical framework, we provide a step-by-step tutorial on how to apply our model. In so doing, we present a spatial and a temporal example of optimal stimulus reconstruction that underline the advantages of our approach. That is, given a known physical signal transmission and rudimentary knowledge of the detection process, our approach allows us to estimate the possible performance and to predict neuronal properties of biological sensory systems. Finally, information from different sensory modalities has to be integrated so as to gain a unified perception of reality for further processing, e.g., for distinct motor commands. We briefly discuss concepts of multimodal interaction and how a multimodal space can evolve by alignment of monosensory maps.
Affiliation(s)
- Moritz Bürck
- Technical University of Munich, Munich, Germany.
|
39
|
Magosso E. Integrating Information From Vision and Touch: A Neural Network Modeling Study. IEEE Trans Inf Technol Biomed 2010; 14:598-612. [DOI: 10.1109/titb.2010.2040750] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
40
|
Cuppini C, Ursino M, Magosso E, Rowland BA, Stein BE. An emergent model of multisensory integration in superior colliculus neurons. Front Integr Neurosci 2010; 4:6. [PMID: 20431725 PMCID: PMC2861478 DOI: 10.3389/fnint.2010.00006] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2009] [Accepted: 03/03/2010] [Indexed: 11/21/2022] Open
Abstract
Neurons in the cat superior colliculus (SC) integrate information from different senses to enhance their responses to cross-modal stimuli. These multisensory SC neurons receive multiple converging unisensory inputs from many sources; those received from association cortex are critical for the manifestation of multisensory integration. The mechanisms underlying this characteristic property of SC neurons are not completely understood, but can be clarified with the use of mathematical models and computer simulations. Thus the objective of the current effort was to present a plausible model that can explain the main physiological features of multisensory integration based on the current neurological literature regarding the influences received by SC from cortical and subcortical sources. The model assumes the presence of competitive mechanisms between inputs, nonlinearities in NMDA receptor responses, and provides a priori synaptic weights to mimic the normal responses of SC neurons. As a result, it provides a basis for understanding the dependence of multisensory enhancement on an intact association cortex, and simulates the changes in the SC response that occur during NMDA receptor blockade. Finally, it makes testable predictions about why significant response differences are obtained in multisensory SC neurons when they are confronted with pairs of cross-modal and within-modal stimuli. By postulating plausible biological mechanisms to complement those that are already known, the model provides a basis for understanding how SC neurons are capable of engaging in this remarkable process.
Affiliation(s)
- Cristiano Cuppini
- Department of Electronics, Computer Science and Systems, University of Bologna Bologna, Italy
|
41
|
Winges SA, Eonta SE, Soechting JF. Does temporal asynchrony affect multimodal curvature detection? Exp Brain Res 2010; 203:1-9. [PMID: 20213147 DOI: 10.1007/s00221-010-2200-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2010] [Accepted: 02/15/2010] [Indexed: 11/30/2022]
Abstract
Multiple sensory modalities gather information about our surroundings to plan appropriate movements based on the properties of the environment and the objects within it. This study was designed to examine the sensitivity of visual and haptic information alone and together for detecting curvature. When both visual and haptic information were present, temporal delays in signal onset were used to determine the effect of asynchronous sensory information on the interference of vision on the haptic estimate of curvature. Even under the largest temporal delays where visual and haptic information were clearly disparate, the presentation of visual information influenced the haptic perception of curvature. The uncertainty associated with the unimodal vision condition was smaller than that in the unimodal haptic condition, regardless of whether the haptic information was procured actively or under robot assistance for curvature detection. When both visual and haptic information were available, the uncertainty was not reduced; it was equal to that of the unimodal haptic condition. The weighting of the visual and haptic information was highly variable across subjects with some subjects making judgments based largely on haptic information, while others tended to rely on visual information equally or to a larger extent than the haptic information.
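Results like these are usually judged against the reliability-weighted (maximum-likelihood) cue-combination benchmark, which predicts that the combined uncertainty should fall below that of either single cue, precisely the prediction this study's visual-haptic data deviated from. A sketch of that benchmark, with assumed (illustrative) single-cue uncertainties:

```python
def mle_combination(sigma_vis, sigma_hap):
    """Reliability-weighted cue combination: weights and predicted uncertainty."""
    rel_v, rel_h = 1 / sigma_vis**2, 1 / sigma_hap**2   # reliabilities
    w_vis = rel_v / (rel_v + rel_h)                     # weight on the visual cue
    sigma_comb = (1 / (rel_v + rel_h)) ** 0.5           # predicted combined sigma
    return w_vis, sigma_comb

# Illustrative values: vision more reliable than haptics, as in the study
w_vis, sigma_comb = mle_combination(sigma_vis=1.0, sigma_hap=2.0)
print(w_vis)       # 0.8: the more reliable visual cue dominates
print(sigma_comb)  # below either single-cue uncertainty
```

The study's finding that combined uncertainty matched the unimodal haptic condition, and that weights varied widely across subjects, marks a departure from this optimal-integration prediction.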
Affiliation(s)
- Sara A Winges
- Department of Neuroscience, University of Minnesota, Minneapolis, MN 55455, USA.
|
42
|
Hiramoto M, Cline HT. Convergence of multisensory inputs in Xenopus tadpole tectum. Dev Neurobiol 2010; 69:959-71. [PMID: 19813244 DOI: 10.1002/dneu.20754] [Citation(s) in RCA: 36] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
The integration of multisensory information takes place in the optic tectum where visual and auditory/mechanosensory inputs converge and regulate motor outputs. The circuits that integrate multisensory information are poorly understood. In an effort to identify the basic components of a multisensory integrative circuit, we determined the projections of the mechanosensory input from the periphery to the optic tectum and compared their distribution to the retinotectal inputs in Xenopus laevis tadpoles using dye-labeling methods. The peripheral ganglia of the lateral line system project to the ipsilateral hindbrain and the axons representing mechanosensory inputs along the anterior/posterior body axis are mapped along the ventrodorsal axis in the axon tract in the dorsal column of the hindbrain. Hindbrain neurons project axons to the contralateral optic tectum. The neurons from anterior and posterior hindbrain regions project axons to the dorsal and ventral tectum, respectively. While the retinotectal axons project to a superficial lamina in the tectal neuropil, the hindbrain axons project to a deep neuropil layer. Calcium imaging showed that multimodal inputs converge on tectal neurons. The layer-specific projections of the hindbrain and retinal axons suggest a functional segregation of sensory inputs to proximal and distal tectal cell dendrites, respectively.
Affiliation(s)
- Masaki Hiramoto
- Department of Cell Biology, The Scripps Research Institute, La Jolla, California, USA
|
43
|
Multisensory integration in the superior colliculus requires synergy among corticocollicular inputs. J Neurosci 2009; 29:6580-92. [PMID: 19458228 DOI: 10.1523/jneurosci.0525-09.2009] [Citation(s) in RCA: 47] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Influences from the visual (AEV), auditory (FAES), and somatosensory (SIV) divisions of the cat anterior ectosylvian sulcus (AES) play a critical role in rendering superior colliculus (SC) neurons capable of multisensory integration. However, it is not known whether this is accomplished via their independent sensory-specific action or via some cross-modal cooperative action that emerges as a consequence of their convergence on SC neurons. Using visual-auditory SC neurons as a model, we examined how selective and combined deactivation of FAES and AEV affected SC multisensory (visual-auditory) and unisensory (visual-visual) integration capabilities. As noted earlier, multisensory integration yielded SC responses that were significantly greater than those evoked by the most effective individual component stimulus. This multisensory "response enhancement" was more evident when the component stimuli were weakly effective. Conversely, unisensory integration was dominated by the lack of response enhancement. During cryogenic deactivation of FAES and/or AEV, the unisensory responses of SC neurons were only modestly affected; however, their multisensory response enhancement showed a significant downward shift and was eliminated. The shift was similar in magnitude for deactivation of either AES subregion and, in general, only marginally greater when both were deactivated simultaneously. These data reveal that SC multisensory integration is dependent on the cooperative action of distinct subsets of unisensory corticofugal afferents, afferents whose sensory combination matches the multisensory profile of their midbrain target neurons, and whose functional synergy is specific to rendering SC neurons capable of synthesizing information from those particular senses.
|
44
|
Abstract
Pooling and synthesizing signals across different senses often enhances responses to the event from which they are derived. Here, we examine whether multisensory response enhancements are attributable to a redundant target effect (two stimuli rather than one) or if there is some special quality inherent in the combination of cues from different senses. To test these possibilities, the performance of animals in localizing and detecting spatiotemporally concordant visual and auditory stimuli was examined when these stimuli were presented individually (visual or auditory) or in cross-modal (visual-auditory) and within-modal (visual-visual, auditory-auditory) combinations. Performance enhancements proved to be far greater for combinations of cross-modal than within-modal stimuli and support the idea that the behavioral products derived from multisensory integration are not attributable to simple target redundancy. One likely explanation is that whereas cross-modal signals offer statistically independent samples of the environment, within-modal signals can exhibit substantial covariance, and consequently multisensory integration can yield more substantial error reduction than unisensory integration.
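The statistical argument in the final sentence is easy to verify numerically: averaging two statistically independent estimates cuts the standard error by a factor of sqrt(2), while covariance between within-modal estimates limits the benefit. The noise levels and correlation below are assumptions for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_loc = 100_000, 0.0
est_vis = rng.normal(true_loc, 1.0, n)   # visual localization estimates
est_aud = rng.normal(true_loc, 1.0, n)   # auditory estimates, independent of vision
# A second visual estimate sharing noise with the first (correlation 0.8, assumed)
est_vis2 = 0.8 * est_vis + 0.6 * rng.normal(true_loc, 1.0, n)

cross_modal = (est_vis + est_aud) / 2    # independent samples of the event
within_modal = (est_vis + est_vis2) / 2  # covarying samples of the event

print(cross_modal.std())   # ~0.71: full sqrt(2) error reduction
print(within_modal.std())  # ~0.95: covariance erodes the benefit
```

This mirrors the behavioral result: cross-modal combinations yielded far larger performance enhancements than within-modal ones.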
|
45
|
Fuentes-Santamaria V, Alvarado JC, McHaffie JG, Stein BE. Axon morphologies and convergence patterns of projections from different sensory-specific cortices of the anterior ectosylvian sulcus onto multisensory neurons in the cat superior colliculus. Cereb Cortex 2009; 19:2902-15. [PMID: 19359347 DOI: 10.1093/cercor/bhp060] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Corticofugal projections to the thalamus reveal 2 axonal morphologies, each associated with specific physiological attributes. These determine the functional characteristics of thalamic neurons. It is not clear, however, whether such features characterize the corticofugal projections that mediate multisensory integration in superior colliculus (SC) neurons. The cortico-collicular projections from cat anterior ectosylvian sulcus (AES) are derived from its visual, auditory, and somatosensory representations and are critical for multisensory integration. Following tracer injections into each subdivision, 2 types of cortico-collicular axons were observed. Most were categorized as type I and consisted of small-caliber axons traversing long distances without branching, bearing mainly small boutons. The less frequent type II had thicker axons, more complex branching patterns, larger boutons, and more complex terminal boutons. Following combinatorial injections of 2 different fluorescent tracers into defined AES subdivisions, fibers from each were seen converging onto individual SC neurons and indicate that such convergence, like that in the corticothalamic system, is mediated by 2 distinct morphological types of axon terminals. Nevertheless, and despite the conservation of axonal morphologies across different subcortical systems, it is not yet clear if the concomitant physiological attributes described in the thalamus are directly applicable to multisensory integration.
Affiliation(s)
- Veronica Fuentes-Santamaria
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, NC 27157, USA.
|
46
|
Rowland BA, Stein BE. Temporal profiles of response enhancement in multisensory integration. Front Neurosci 2008; 2:218-24. [PMID: 19225595 PMCID: PMC2622754 DOI: 10.3389/neuro.01.033.2008] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2008] [Accepted: 11/08/2008] [Indexed: 11/13/2022] Open
Abstract
Animals have evolved multiple senses that transduce different forms of energy as a way of increasing their sensitivity to environmental events. Each sense provides a unique and independent perspective on the world, and very often a single event stimulates several of them. In order to make best use of the available information, the brain has also evolved the capacity to integrate information across the senses ("multisensory integration"). This facilitates the detection, localization, and identification of a given event, and has obvious survival value for the individual and the species. Multisensory responses in the superior colliculus (SC) exhibit shorter latencies and are more robust at their onset. This is the phenomenon of initial response enhancement in multisensory integration, which is believed to represent a real-time fusion of information across the senses. The present paper reviews two recent reports describing how the timing and robustness of sensory responses change as a consequence of multisensory integration in the model system of the SC.
Affiliation(s)
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine Winston-Salem, NC, USA
|
47
|
Alvarado JC, Rowland BA, Stanford TR, Stein BE. A neural network model of multisensory integration also accounts for unisensory integration in superior colliculus. Brain Res 2008; 1242:13-23. [PMID: 18486113 DOI: 10.1016/j.brainres.2008.03.074] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2008] [Revised: 03/17/2008] [Accepted: 03/19/2008] [Indexed: 10/22/2022]
Abstract
Sensory integration is a characteristic feature of superior colliculus (SC) neurons. A recent neural network model of single-neuron integration derived a set of basic biological constraints sufficient to replicate a number of physiological findings pertaining to multisensory responses. The present study examined the accuracy of this model in predicting the responses of SC neurons to pairs of visual stimuli placed within their receptive fields. The accuracy of this model was compared to that of three other computational models (additive, averaging and maximum operator) previously used to fit these data. Each neuron's behavior was assessed by examining its mean responses to the component stimuli individually and together, and each model's performance was assessed to determine how close its prediction came to the actual mean response of each neuron and the magnitude of its predicted residual error. Predictions from the additive model significantly overshot the actual responses of SC neurons and predictions from the averaging model significantly undershot them. Only the predictions of the maximum operator and neural network model were not significantly different from the actual responses. However, the neural network model outperformed even the maximum operator model in predicting the responses of these neurons. The neural network model is derived from a larger model that also has substantial predictive power in multisensory integration, and provides a single computational vehicle for assessing the responses of SC neurons to different combinations of cross-modal and within-modal stimuli of different efficacies.
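The three comparison models used in this study are simple predictors of a neuron's response to paired stimuli from its responses to the component stimuli. The sketch below uses hypothetical response values, chosen only to illustrate the reported pattern (additive overshoots, averaging undershoots, maximum comes closest); the neural network model itself is not reproduced here.

```python
def predict(r1, r2, model):
    """Predicted paired-stimulus response from the two component responses."""
    if model == "additive":
        return r1 + r2          # sum of component responses
    if model == "averaging":
        return (r1 + r2) / 2    # mean of component responses
    if model == "maximum":
        return max(r1, r2)      # larger of the two component responses
    raise ValueError(model)

# Hypothetical mean responses (spikes/trial) to two within-field visual stimuli
r1, r2, observed = 8.0, 6.0, 9.0
for model in ("additive", "averaging", "maximum"):
    pred = predict(r1, r2, model)
    print(f"{model:>9}: predicted {pred:.1f}, residual {observed - pred:+.1f}")
```

Comparing each prediction's residual against the observed mean response, as done per neuron in the study, is what ranks the models' accuracy.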
Affiliation(s)
- Juan Carlos Alvarado
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, NC 27157, USA.
|
48
|
Stein BE, Stanford TR. Multisensory integration: current issues from the perspective of the single neuron. Nat Rev Neurosci 2008; 9:255-66. [PMID: 18354398 DOI: 10.1038/nrn2331] [Citation(s) in RCA: 925] [Impact Index Per Article: 57.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
For thousands of years science philosophers have been impressed by how effectively the senses work together to enhance the salience of biologically meaningful events. However, they really had no idea how this was accomplished. Recent insights into the underlying physiological mechanisms reveal that, in at least one circuit, this ability depends on an intimate dialogue among neurons at multiple levels of the neuraxis; this dialogue cannot take place until long after birth and might require a specific kind of experience. Understanding the acquisition and usage of multisensory integration in the midbrain and cerebral cortex of mammals has been aided by a multiplicity of approaches. Here we examine some of the fundamental advances that have been made and some of the challenging questions that remain.
Affiliation(s)
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina 27157, USA.
49
Fuentes-Santamaria V, Alvarado JC, Stein BE, McHaffie JG. Cortex contacts both output neurons and nitrergic interneurons in the superior colliculus: direct and indirect routes for multisensory integration. Cereb Cortex 2007; 18:1640-52. [PMID: 18003596 DOI: 10.1093/cercor/bhm192] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
The ability of cat superior colliculus (SC) neurons to integrate information from different senses is thought to depend on direct projections from regions along the anterior ectosylvian sulcus (AES). However, electrical stimulation of AES also activates SC output neurons polysynaptically. In the present study, we found that nitric oxide (NO)-containing (nitrergic) interneurons are a target of AES projections, forming a component of this cortico-SC circuitry. The dendritic and axonal processes of these corticorecipient nitrergic interneurons apposed the soma and dendrites of presumptive SC output neurons. Often, an individual cortical fiber targeted both an output neuron and a neighboring nitrergic interneuron that, in turn, contacted the output neuron. Many (46%) nitrergic neurons also colocalized with gamma-aminobutyric acid (GABA), suggesting that a substantial subset have the potential for inhibiting output neurons. These observations suggest that nitrergic interneurons are positioned to convey cortical influences onto SC output neurons disynaptically via nitrergic mechanisms as well as conventional neurotransmitter systems utilizing GABA and other, possibly excitatory, neurotransmitters. In addition, because NO also acts as a retrograde messenger, cortically mediated NO release from the postsynaptic elements of nitrergic interneurons could influence presynaptic cortico-SC terminals that directly contact output neurons.
Affiliation(s)
- Veronica Fuentes-Santamaria
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, NC 27157, USA.