1. Cuppini C, Shams L, Magosso E, Ursino M. A biologically inspired neurocomputational model for audiovisual integration and causal inference. Eur J Neurosci 2018; 46:2481-2498. [PMID: 28949035] [DOI: 10.1111/ejn.13725]
Abstract
Recently, experimental and theoretical research has focused on the brain's ability to extract information from a noisy sensory environment and on how cross-modal inputs are processed to solve the causal inference problem and provide the best estimate of external events. Despite empirical evidence suggesting that the nervous system uses a statistically optimal, probabilistic approach to these problems, little is known about the brain architecture needed to implement these computations. The aim of this work was to realize a mathematical model, based on physiologically plausible hypotheses, to analyze the neural mechanisms underlying multisensory perception and causal inference. The model consists of three topologically organized layers: two encode auditory and visual stimuli separately, are reciprocally connected via excitatory synapses, and send excitatory connections to a third, downstream layer. This synaptic organization realizes two mechanisms of cross-modal interaction: the first is responsible for the sensory representation of external stimuli, while the second solves the causal inference problem. We tested the network by comparing its results to behavioral data reported in the literature. Among other phenomena, the network accounts for the ventriloquism illusion, the pattern of sensory bias and perceived unity as a function of auditory-visual spatial distance, and the dependence of the auditory error on the causal inference. Finally, simulation results are consistent with probability matching as the perceptual strategy used in auditory-visual spatial localization tasks, in agreement with the behavioral data. The model makes untested predictions that can be investigated in future behavioral experiments.
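The behavioral comparisons here rest on the normative Bayesian causal-inference account of audiovisual localization, combined with probability matching as the decision rule. The following Python sketch of that computation is illustrative only: the parameter values, the zero-mean Gaussian spatial prior, and the function name are assumptions in the spirit of the standard causal-inference formulation, not the network described in the paper.

```python
import numpy as np

def causal_inference_estimate(x_a, x_v, sigma_a=8.0, sigma_v=2.0,
                              sigma_p=20.0, p_common=0.5, rng=None):
    """Auditory location estimate under Bayesian causal inference.

    x_a, x_v : noisy auditory/visual measurements (degrees).
    Parameter values are illustrative, not fitted to any data set.
    """
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
    # Likelihoods of a common cause (C=1) vs. independent causes (C=2),
    # with a zero-mean Gaussian spatial prior of variance vp.
    denom1 = va * vv + va * vp + vv * vp
    l1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va)
                / denom1) / (2 * np.pi * np.sqrt(denom1))
    l2 = np.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp))) \
         / (2 * np.pi * np.sqrt((va + vp) * (vv + vp)))
    post_c1 = l1 * p_common / (l1 * p_common + l2 * (1 - p_common))
    # Reliability-weighted (fused) and segregated auditory estimates.
    s_fused = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)
    s_segr = (x_a / va) / (1 / va + 1 / vp)
    # Probability matching: commit to one causal structure per trial,
    # sampled from its posterior, instead of model averaging.
    rng = rng if rng is not None else np.random.default_rng(0)
    return s_fused if rng.random() < post_c1 else s_segr
```

With these illustrative reliabilities, the auditory estimate is pulled strongly toward the more reliable visual cue whenever the sampled causal structure is a common cause, which qualitatively reproduces the ventriloquism effect.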
Affiliation(s)
- Cristiano Cuppini: Department of Electrical, Electronic and Information Engineering, University of Bologna, Viale Risorgimento 2, I-40136 Bologna, Italy
- Ladan Shams: Department of Psychology, Department of BioEngineering, Interdepartmental Neuroscience Program, University of California, Los Angeles, CA, USA
- Elisa Magosso: Department of Electrical, Electronic and Information Engineering, University of Bologna, Viale Risorgimento 2, I-40136 Bologna, Italy
- Mauro Ursino: Department of Electrical, Electronic and Information Engineering, University of Bologna, Viale Risorgimento 2, I-40136 Bologna, Italy
2. Magosso E, Cuppini C, Bertini C. Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation. Front Comput Neurosci 2018; 11:113. [PMID: 29326578] [PMCID: PMC5736575] [DOI: 10.3389/fncom.2017.00113]
Abstract
Hemianopic patients exhibit improved visual detection in the blind field when audiovisual stimuli are presented in spatiotemporal coincidence. Beyond this “online” multisensory improvement, there is evidence of long-lasting, “offline” effects induced by audiovisual training: patients show improved visual detection and orientation after being trained to detect and saccade toward visual targets presented in spatiotemporal proximity with auditory stimuli. These effects are ascribed to the Superior Colliculus (SC), which is spared in these patients and plays a pivotal role in audiovisual integration and oculomotor behavior. Recently, we developed a neural network model of audiovisual cortico-collicular loops, including interconnected areas representing the retina, striate and extrastriate visual cortices, auditory cortex, and SC. The network simulated a unilateral V1 lesion with possible spared tissue and reproduced the “online” effects. Here, we extend the previous network to shed light on the circuits, plastic mechanisms, and synaptic reorganization that can mediate the training effects and functionally implement visual rehabilitation. The network is enriched by the oculomotor SC-brainstem route and by Hebbian mechanisms of synaptic plasticity, and is used to test different training paradigms (audiovisual/visual stimulation in eye-movement/fixed-eye conditions) on simulated patients. The results predict different training effects and associate them with synaptic changes in specific circuits. Thanks to SC multisensory enhancement, audiovisual training effectively strengthens the retina-SC route, which in turn can foster reinforcement of the SC-brainstem route (this occurs only in the eye-movement condition) and of the SC-extrastriate route (this occurs in the presence of spared V1 tissue, regardless of eye condition).
The retina-SC-brainstem circuit may mediate compensatory effects: the model assumes that reinforcement of this circuit can translate visual stimuli into short-latency saccades, possibly bringing the stimuli into regions where visual detection is preserved. The retina-SC-extrastriate circuit is related to restitutive effects: visual stimuli can directly elicit visual detection without eye movements. Model predictions and assumptions are critically discussed in light of existing behavioral and neurophysiological data, suggesting that other oculomotor compensatory mechanisms beyond short-latency saccades are likely involved, and stimulating future experimental and theoretical investigations.
Affiliation(s)
- Elisa Magosso: Department of Electrical, Electronic, and Information Engineering "Guglielmo Marconi", University of Bologna, Cesena, Italy
- Cristiano Cuppini: Department of Electrical, Electronic, and Information Engineering "Guglielmo Marconi", University of Bologna, Cesena, Italy
- Caterina Bertini: Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Cesena, Italy; Department of Psychology, University of Bologna, Italy
3. Bruns P, Röder B. Spatial and frequency specificity of the ventriloquism aftereffect revisited. Psychol Res 2017; 83:1400-1415. [PMID: 29285647] [DOI: 10.1007/s00426-017-0965-4]
Abstract
Exposure to audiovisual stimuli with a consistent spatial misalignment seems to result in a recalibration of unisensory auditory spatial representations. Previous studies have suggested that this so-called ventriloquism aftereffect is confined to the trained region of space, but have yielded inconsistent results as to whether recalibration generalizes to untrained sound frequencies. Here, we reassessed the spatial and frequency specificity of the ventriloquism aftereffect by testing whether auditory spatial perception can be independently recalibrated for two different sound frequencies and/or at two different spatial locations. Recalibration was confined to locations within the trained hemifield, suggesting that spatial representations were adjusted independently for the two hemifields. The frequency specificity of the ventriloquism aftereffect depended on the presence or absence of conflicting audiovisual adaptation stimuli within the same hemifield. Moreover, adaptation of two different sound frequencies in opposite directions (leftward vs. rightward) resulted in a selective suppression of leftward recalibration, even when the adapting stimuli were presented in different hemifields. Thus, rather than being a fixed, stimulus-driven process, cross-modal recalibration seems to depend critically on the sensory context and takes into account inconsistencies in the cross-modal input.
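A hemifield-specific aftereffect of the kind reported here can be illustrated with a toy update rule in which each hemifield keeps its own recalibration term that drifts toward the audiovisual discrepancy of adapters presented on that side. The rule, function name, and parameter values below are hypothetical illustrations, not the authors' model or analysis.

```python
def recalibrate(adapters, rate=0.1, n_passes=50):
    """Toy hemifield-specific ventriloquism aftereffect.

    adapters : list of (auditory_pos, visual_pos) pairs in degrees,
               negative = left hemifield. Hypothetical rule: each
               hemifield's auditory shift moves toward the audiovisual
               discrepancy of adapters presented in that hemifield only.
    Returns per-hemifield shifts (deg) applied to later auditory percepts.
    """
    shift = {"left": 0.0, "right": 0.0}
    for _ in range(n_passes):
        for a, v in adapters:
            side = "left" if a < 0 else "right"
            # leaky update toward the discrepancy (v - a) on this side
            shift[side] += rate * ((v - a) - shift[side])
    return shift
```

Training only in the left hemifield leaves the right-hemifield shift at zero, matching the observed confinement of recalibration to the trained hemifield.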
Affiliation(s)
- Patrick Bruns: Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany; Department of Cognitive, Linguistic and Psychological Sciences, Brown University, 190 Thayer Street, Providence, RI 02912, USA
- Brigitte Röder: Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
4. Ursino M, Cuppini C, Magosso E. Multisensory Bayesian Inference Depends on Synapse Maturation during Training: Theoretical Analysis and Neural Modeling Implementation. Neural Comput 2017; 29:735-782. [DOI: 10.1162/neco_a_00935]
Abstract
Recent theoretical and experimental studies suggest that in multisensory conditions, the brain performs a near-optimal Bayesian estimate of external events, giving more weight to the more reliable stimuli. However, the neural mechanisms responsible for this behavior, and its progressive maturation in a multisensory environment, are still insufficiently understood. The aim of this letter is to analyze this problem with a neural network model of audiovisual integration, based on probabilistic population coding—the idea that a population of neurons can encode probability functions to perform Bayesian inference. The model consists of two chains of unisensory neurons (auditory and visual) topologically organized. They receive the corresponding input through a plastic receptive field and reciprocally exchange plastic cross-modal synapses, which encode the spatial co-occurrence of visual-auditory inputs. A third chain of multisensory neurons performs a simple sum of auditory and visual excitations. The work includes a theoretical part and a computer simulation study. We show how a simple rule for synapse learning (consisting of Hebbian reinforcement and a decay term) can be used during training to shrink the receptive fields and encode the unisensory likelihood functions. Hence, after training, each unisensory area realizes a maximum likelihood estimate of stimulus position (auditory or visual). In cross-modal conditions, the same learning rule can encode information on prior probability into the cross-modal synapses. Computer simulations confirm the theoretical results and show that the proposed network can realize a maximum likelihood estimate of auditory (or visual) positions in unimodal conditions and a Bayesian estimate, with moderate deviations from optimality, in cross-modal conditions. 
Furthermore, the model explains the ventriloquism illusion and, looking at the activity in the multimodal neurons, explains the automatic reweighting of auditory and visual inputs on a trial-by-trial basis, according to the reliability of the individual cues.
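The learning rule described above (Hebbian reinforcement plus a decay term that shrinks an initially broad receptive field around the trained positions) can be sketched as a toy simulation with synapses from one topological input layer onto a single unisensory neuron. The function name, parameter values, and one-neuron reduction below are illustrative assumptions, not the paper's actual equations.

```python
import numpy as np

def train_receptive_field(n_units=40, n_epochs=2000, eta=0.05, decay=0.01,
                          stim_width=3.0, seed=0):
    """Hebbian reinforcement + decay shrinking a broad receptive field.

    Synapses are strengthened when pre- and post-synaptic activity
    co-occur and otherwise decay, so weights far from the trained
    position fade while those near it saturate. Values are illustrative.
    """
    rng = np.random.default_rng(seed)
    pos = np.arange(n_units, dtype=float)
    w = np.full(n_units, 0.5)                 # broad initial receptive field
    centre = n_units // 2
    for _ in range(n_epochs):
        s = centre + rng.normal(0.0, 1.0)     # noisy stimulus position
        pre = np.exp(-0.5 * ((pos - s) / stim_width) ** 2)  # input activity
        post = float(w @ pre)                 # postsynaptic response
        w += eta * post * pre - decay * w     # Hebbian term + decay term
        w = np.clip(w, 0.0, 1.0)
    return w
```

After training, weights near the trained position saturate while distant weights decay toward zero, i.e. the receptive field narrows around the positions that were actually stimulated, which is the mechanism the letter uses to encode the unisensory likelihood functions.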
Affiliation(s)
- Mauro Ursino: Department of Electrical, Electronic and Information Engineering, University of Bologna, I-40136 Bologna, Italy
- Cristiano Cuppini: Department of Electrical, Electronic and Information Engineering, University of Bologna, I-40136 Bologna, Italy
- Elisa Magosso: Department of Electrical, Electronic and Information Engineering, University of Bologna, I-40136 Bologna, Italy
5. Magosso E, Bertini C, Cuppini C, Ursino M. Audiovisual integration in hemianopia: A neurocomputational account based on cortico-collicular interaction. Neuropsychologia 2016; 91:120-140. [DOI: 10.1016/j.neuropsychologia.2016.07.015]
6. Ursino M, Cuppini C, Magosso E. Neurocomputational approaches to modelling multisensory integration in the brain: A review. Neural Netw 2014; 60:141-165. [DOI: 10.1016/j.neunet.2014.08.003]
7. Shrem T, Deouell LY. Frequency-dependent auditory space representation in the human planum temporale. Front Hum Neurosci 2014; 8:524. [PMID: 25100973] [PMCID: PMC4106454] [DOI: 10.3389/fnhum.2014.00524]
Abstract
Functional magnetic resonance imaging (fMRI) findings suggest that a part of the planum temporale (PT) is involved in representing the spatial properties of acoustic information. Here, we tested whether this representation of space is frequency-dependent or generalizes across spectral content, as required of high-order sensory representations. Using sounds with two different spectral contents and two spatial locations in an individually tailored virtual acoustic environment, we compared three conditions in a sparse-fMRI experiment: Single Location, in which both sounds were presented from one location; Fixed Mapping, in which there was a one-to-one mapping between the two sounds and the two locations; and Mixed Mapping, in which the two sounds were equally likely to appear at either of the two locations. We surmised that only neurons tuned to both location and frequency should be differentially adapted by the Mixed and Fixed mappings. Replicating our previous findings, we found adaptation to spatial location in the PT. Importantly, activation was higher for Mixed Mapping than for Fixed Mapping blocks, even though the two sounds and the two locations appeared equally often in both conditions. These results show that spatially tuned neurons in the human PT are not invariant to the spectral content of sounds.
Affiliation(s)
- Talia Shrem: Human Cognitive Neuroscience Lab, Department of Psychology, Social Sciences Faculty, The Hebrew University of Jerusalem, Jerusalem, Israel
- Leon Y Deouell: Human Cognitive Neuroscience Lab, Department of Psychology, Social Sciences Faculty, The Hebrew University of Jerusalem, Jerusalem, Israel; Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel