51. Badde S, Navarro KT, Landy MS. Modality-specific attention attenuates visual-tactile integration and recalibration effects by reducing prior expectations of a common source for vision and touch. Cognition 2020;197:104170. [PMID: 32036027] [DOI: 10.1016/j.cognition.2019.104170]
Abstract
At any moment in time, streams of information reach the brain through the different senses. Given this wealth of noisy information, it is essential that we select information of relevance - a function fulfilled by attention - and infer its causal structure to eventually take advantage of redundancies across the senses. Yet, the role of selective attention during causal inference in cross-modal perception is unknown. We tested experimentally whether the distribution of attention across vision and touch enhances cross-modal spatial integration (visual-tactile ventriloquism effect, Expt. 1) and recalibration (visual-tactile ventriloquism aftereffect, Expt. 2) compared to modality-specific attention, and then used causal-inference modeling to isolate the mechanisms behind the attentional modulation. In both experiments, we found stronger effects of vision on touch under distributed than under modality-specific attention. Model comparison confirmed that participants used Bayes-optimal causal inference to localize visual and tactile stimuli presented as part of a visual-tactile stimulus pair, whereas simultaneously collected unity judgments - indicating whether the visual-tactile pair was perceived as spatially-aligned - relied on a sub-optimal heuristic. The best-fitting model revealed that attention modulated sensory and cognitive components of causal inference. First, distributed attention led to an increase of sensory noise compared to selective attention toward one modality. Second, attending to both modalities strengthened the stimulus-independent expectation that the two signals belong together, the prior probability of a common source for vision and touch. Yet, only the increase in the expectation of vision and touch sharing a common source was able to explain the observed enhancement of visual-tactile integration and recalibration effects with distributed attention. In contrast, the change in sensory noise explained only a fraction of the observed enhancements, as its consequences vary with the overall level of noise and stimulus congruency. Increased sensory noise leads to enhanced integration effects for visual-tactile pairs with a large spatial discrepancy, but reduced integration effects for stimuli with a small or no cross-modal discrepancy. In sum, our study indicates a weak a priori association between visual and tactile spatial signals that can be strengthened by distributing attention across both modalities.
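The integration and recalibration effects above were modeled with Bayesian causal inference. For readers who want the computation made explicit, here is a minimal Python sketch of the standard causal-inference observer (Körding et al., 2007) on which such models build; the function name and all parameter values are illustrative, and the paper's full model additionally lets attention modulate the sensory noise and the common-source prior:

```python
import numpy as np

def causal_inference_estimate(x_v, x_t, sigma_v, sigma_t, sigma_p, p_common):
    """Model-averaged tactile location estimate for a visual-tactile pair,
    following the standard Bayesian causal-inference model (Kording et al., 2007).
    All parameter values are up to the caller; nothing here is fit to data."""
    var_v, var_t, var_p = sigma_v**2, sigma_t**2, sigma_p**2
    # Likelihood of both measurements given one common source (C = 1),
    # with a zero-mean Gaussian spatial prior of variance var_p
    denom1 = var_v * var_t + var_v * var_p + var_t * var_p
    like_c1 = (np.exp(-0.5 * ((x_v - x_t)**2 * var_p + x_v**2 * var_t + x_t**2 * var_v)
                      / denom1)
               / (2 * np.pi * np.sqrt(denom1)))
    # Likelihood given two independent sources (C = 2)
    like_c2 = (np.exp(-0.5 * (x_v**2 / (var_v + var_p) + x_t**2 / (var_t + var_p)))
               / (2 * np.pi * np.sqrt((var_v + var_p) * (var_t + var_p))))
    # Posterior probability of a common source; p_common is the prior that
    # distributed attention is reported to strengthen in the study above
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Optimal estimates under each causal structure
    s_fused = (x_v / var_v + x_t / var_t) / (1 / var_v + 1 / var_t + 1 / var_p)
    s_segregated = (x_t / var_t) / (1 / var_t + 1 / var_p)
    # Model averaging across causal structures
    return post_c1 * s_fused + (1 - post_c1) * s_segregated
```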
Affiliation(s)
- Stephanie Badde
- Department of Psychology and Center of Neural Science, New York University, 6 Washington Place, New York, NY, 10003, USA.
- Karen T Navarro
- Department of Psychology, University of Minnesota, 75 E River Rd., Minneapolis, MN, 55455, USA
- Michael S Landy
- Department of Psychology and Center of Neural Science, New York University, 6 Washington Place, New York, NY, 10003, USA
52. Park J, Son B, Han I, Lee W. Effect of Cutaneous Feedback on the Perception of Virtual Object Weight during Manipulation. Sci Rep 2020;10:1357. [PMID: 31992799] [PMCID: PMC6987230] [DOI: 10.1038/s41598-020-58247-5]
Abstract
Haptic interface technologies for virtual reality applications have been developed to increase the realism and manipulability of virtual objects by creating diverse tactile sensations. Most evaluations of these haptic technologies, however, have been limited to the perception of tactile stimuli via static virtual objects. Noting this, we investigated the effect of lateral cutaneous feedback, along with kinesthetic feedback, on the perception of virtual object weight during manipulation. We modeled the physical interaction between a participant's finger avatars and virtual objects. The haptic stimuli were rendered with custom-built haptic feedback systems that can provide kinesthetic and lateral cutaneous feedback to the participant. We conducted two virtual object manipulation experiments: (1) manipulation of a virtual object with one finger, and (2) the pull-out and lift-up of a virtual object grasped with a precision grip. The results of Experiment 1 indicate that participants felt a virtual object rendered with lateral cutaneous feedback to be significantly heavier than one rendered with kinesthetic feedback alone (p < 0.05 for mref = 100 and 200 g). Similarly, the participants of Experiment 2 felt the virtual objects to be significantly heavier when lateral cutaneous feedback was available (p < 0.05 for mref = 100, 200, and 300 g). Therefore, adding lateral cutaneous feedback to the force feedback led participants to perceive the virtual object as heavier than without the cutaneous feedback. The results also indicate that the contact force applied to a virtual object during manipulation can be a function of the perceived object weight (p = 0.005 for Experiment 1 and p = 0.2 for Experiment 2).
Affiliation(s)
- Jaeyoung Park
- Korea Institute of Science and Technology, Robotics and Media Institute, Seoul, 02792, South Korea.
- Bukun Son
- Seoul National University, Department of Mechanical Engineering, Seoul, 08826, South Korea
- Ilhwan Han
- Korea Institute of Science and Technology, Robotics and Media Institute, Seoul, 02792, South Korea
- Woochan Lee
- Incheon National University, Department of Electrical Engineering, Incheon, 22012, South Korea
53. Cappelloni MS, Shivkumar S, Haefner RM, Maddox RK. Task-uninformative visual stimuli improve auditory spatial discrimination in humans but not the ideal observer. PLoS One 2019;14:e0215417. [PMID: 31498804] [PMCID: PMC6733465] [DOI: 10.1371/journal.pone.0215417]
Abstract
In order to survive and function in the world, we must understand the content of our environment. This requires us to gather and parse complex, sometimes conflicting, information. Yet, the brain is capable of translating sensory stimuli from disparate modalities into a cohesive and accurate percept with little conscious effort. Previous studies of multisensory integration have suggested that the brain's integration of cues is well-approximated by an ideal observer implementing Bayesian causal inference. However, behavioral data from tasks that include only one stimulus in each modality fail to capture what is in nature a complex process. Here we employed an auditory spatial discrimination task in which listeners were asked to determine on which side they heard one of two concurrently presented sounds. We compared two visual conditions in which task-uninformative shapes were presented either in the center of the screen or spatially aligned with the auditory stimuli. We found that performance on the auditory task improved when the visual stimuli were spatially aligned with the auditory stimuli, even though the shapes provided no information about which side the auditory target was on. We also show that a model of a Bayesian ideal observer performing causal inference cannot explain this improvement, demonstrating that humans deviate systematically from the ideal observer model.
Affiliation(s)
- Madeline S. Cappelloni
- Biomedical Engineering, University of Rochester, Rochester, New York, United States of America
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, New York, United States of America
- Sabyasachi Shivkumar
- Brain and Cognitive Sciences, University of Rochester, Rochester, New York, United States of America
- Ralf M. Haefner
- Brain and Cognitive Sciences, University of Rochester, Rochester, New York, United States of America
- Center for Visual Science, University of Rochester, Rochester, New York, United States of America
- Ross K. Maddox
- Biomedical Engineering, University of Rochester, Rochester, New York, United States of America
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, New York, United States of America
- Center for Visual Science, University of Rochester, Rochester, New York, United States of America
- Neuroscience, University of Rochester, Rochester, New York, United States of America
54. Probabilistic Representation in Human Visual Cortex Reflects Uncertainty in Serial Decisions. J Neurosci 2019;39:8164-8176. [PMID: 31481435] [DOI: 10.1523/jneurosci.3212-18.2019]
Abstract
How does the brain represent the reliability of its sensory evidence? Here, we test whether sensory uncertainty is encoded in cortical population activity as the width of a probability distribution, a hypothesis that lies at the heart of Bayesian models of neural coding. We probe the neural representation of uncertainty by capitalizing on a well-known behavioral bias called serial dependence. Human observers of either sex reported the orientation of stimuli presented in sequence, while activity in visual cortex was measured with fMRI. We decoded probability distributions from population-level activity and found that serial dependence effects in behavior are consistent with a statistically advantageous sensory integration strategy, in which uncertain sensory information is given less weight. More fundamentally, our results suggest that probability distributions decoded from human visual cortex reflect the sensory uncertainty that observers rely on in their decisions, providing critical evidence for Bayesian theories of perception.

SIGNIFICANCE STATEMENT: Virtually any decision that people make is based on uncertain and incomplete information. Although uncertainty plays a major role in decision-making, we have but a nascent understanding of its neural basis. Here, we probe the neural code of uncertainty by capitalizing on a well-known perceptual illusion. We developed a computational model to explain the illusion, and tested it in behavioral and neuroimaging experiments. This revealed that the illusion is not a mistake of perception, but rather reflects a rational decision under uncertainty. No less important, we discovered that the uncertainty that people use in this decision is represented in brain activity as the width of a probability distribution, providing critical evidence for current Bayesian theories of decision-making.
55. Fang Y, Yu Z, Liu JK, Chen F. A unified neural circuit of causal inference and multisensory integration. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.05.067]
56. Richards MD, Goltz HC, Wong AM. Audiovisual perception in amblyopia: A review and synthesis. Exp Eye Res 2019;183:68-75. [DOI: 10.1016/j.exer.2018.04.017]
57. Gustafsson L. A Case of Near-Optimal Sensory Integration Based on Kohonen Self-Organizing Maps. Neural Comput 2019;31:1419-1429. [PMID: 31113302] [DOI: 10.1162/neco_a_01200]
Abstract
This letter shows by digital simulation that a simple rule applied to one-dimensional self-organized maps for integrating sensory perceptions from two identical sources yielding position information as integers, corrupted by independent noise sources, yields almost statistically optimal results for position estimation as determined by maximum likelihood estimation. There is no learning of the corrupting noise sources nor is any information about the statistics of the noise sources available to the integrating process. The simple rule employed yields a measure of the quality of the estimated position of the source. The letter also shows that if the Bayesian estimates, which are rational numbers, are rounded in order to comply with the stipulation that integers be identified, the Bayesian estimation will have a larger variance than the proposed integration.
Affiliation(s)
- Lennart Gustafsson
- Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, 971 87 Luleå, Sweden
58. Meijer D, Veselič S, Calafiore C, Noppeney U. Integration of audiovisual spatial signals is not consistent with maximum likelihood estimation. Cortex 2019;119:74-88. [PMID: 31082680] [PMCID: PMC6864592] [DOI: 10.1016/j.cortex.2019.03.026]
Abstract
Multisensory perception is regarded as one of the most prominent examples where human behaviour conforms to the computational principles of maximum likelihood estimation (MLE). In particular, observers are thought to integrate auditory and visual spatial cues, weighted in proportion to their relative sensory reliabilities, into the most reliable and unbiased percept consistent with MLE. Yet, evidence to date has been inconsistent. The current pre-registered, large-scale (N = 36) replication study investigated the extent to which human behaviour for audiovisual localization is in line with maximum likelihood estimation. The acquired psychophysics data show that while observers were able to reduce their multisensory variance relative to the unisensory variances in accordance with MLE, they weighted the visual signals significantly more strongly than predicted by MLE. Simulations show that this dissociation can be explained by a greater sensitivity of standard estimation procedures to detect deviations from MLE predictions for sensory weights than for audiovisual variances. Our results therefore suggest that observers did not integrate audiovisual spatial signals weighted exactly in proportion to their relative reliabilities for localization. These small deviations from the predictions of maximum likelihood estimation may be explained by observers' uncertainty about the world's causal structure, as accounted for by Bayesian causal inference.
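For reference, the MLE predictions tested in this replication can be written in a few lines; this is a generic sketch with illustrative noise values, not the paper's fitted parameters:

```python
import numpy as np

# MLE predictions for audiovisual localization, as tested above. The sigmas
# are illustrative stand-ins, not values fitted by Meijer et al.
sigma_a, sigma_v = 8.0, 2.0   # unisensory localization noise (deg), assumed

w_v = sigma_a**2 / (sigma_a**2 + sigma_v**2)   # predicted visual weight
w_a = 1.0 - w_v                                # predicted auditory weight
sigma_av = np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))  # fused noise

print(f"w_v = {w_v:.3f}, w_a = {w_a:.3f}, sigma_av = {sigma_av:.2f} deg")
# The study found the variance reduction (sigma_av) close to this prediction,
# but empirical visual weights exceeding w_v.
```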
Affiliation(s)
- David Meijer
- Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK.
- Sebastijan Veselič
- Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- Carmelo Calafiore
- Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- Uta Noppeney
- Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
59. Stengård E, van den Berg R. Imperfect Bayesian inference in visual perception. PLoS Comput Biol 2019;15:e1006465. [PMID: 30998675] [PMCID: PMC6472731] [DOI: 10.1371/journal.pcbi.1006465]
Abstract
Optimal Bayesian models have been highly successful in describing human performance on perceptual decision-making tasks, such as cue combination and visual search. However, recent studies have argued that these models are often overly flexible and therefore lack explanatory power. Moreover, there are indications that neural computation is inherently imprecise, which makes it implausible that humans would perform optimally on any non-trivial task. Here, we reconsider human performance on a visual-search task by using an approach that constrains model flexibility and tests for computational imperfections. Subjects performed a target detection task in which targets and distractors were tilted ellipses with orientations drawn from Gaussian distributions with different means. We varied the amount of overlap between these distributions to create multiple levels of external uncertainty. We also varied the level of sensory noise by testing subjects under both short and unlimited display times. On average, empirical performance, measured as d', fell 18.1% short of optimal performance. We found no evidence that the magnitude of this suboptimality was affected by the level of internal or external uncertainty. The data were well accounted for by a Bayesian model with imperfections in its computations. This "imperfect Bayesian" model convincingly outperformed the "flawless Bayesian" model as well as all ten heuristic models that we tested. These results suggest that perception is founded on Bayesian principles, but with suboptimalities in the implementation of these principles. The view of perception as imperfect Bayesian inference can provide a middle ground between traditional Bayesian and anti-Bayesian views.
Affiliation(s)
- Elina Stengård
- Department of Psychology, University of Uppsala, Uppsala, Sweden
60. Arnold DH, Petrie K, Murray C, Johnston A. Suboptimal human multisensory cue combination. Sci Rep 2019;9:5155. [PMID: 30914673] [PMCID: PMC6435731] [DOI: 10.1038/s41598-018-37888-7]
Abstract
Information from different sensory modalities can interact, shaping what we think we have seen, heard, or otherwise perceived. Such interactions can enhance the precision of perceptual decisions, relative to those based on information from a single sensory modality. Several computational processes could account for such improvements. Slight improvements could arise if decisions are based on multiple independent sensory estimates, as opposed to just one. Still greater improvements could arise if initially independent estimates are summed to form a single integrated code. This hypothetical process has often been described as optimal when it results in bimodal performance consistent with a summation of unimodal estimates weighted in proportion to the precision of each initially independent sensory code. Here we examine cross-modal cue combination for audio-visual temporal rate and spatial location cues. While suggestive of a cross-modal encoding advantage, the degree of facilitation falls short of that predicted by a precision weighted summation process. These data accord with other published observations, and suggest that precision weighted combination is not a general property of human cross-modal perception.
Affiliation(s)
- Derek H Arnold
- School of Psychology, The University of Queensland, St Lucia, Queensland, 4102, Australia.
- Kirstie Petrie
- School of Psychology, The University of Queensland, St Lucia, Queensland, 4102, Australia
- Cailem Murray
- School of Psychology, The University of Queensland, St Lucia, Queensland, 4102, Australia
- Alan Johnston
- Experimental Psychology, University of Nottingham, Nottingham, UK
61. Legge ELG. Comparative spatial memory and cue use: The contributions of Marcia L. Spetch to the study of small-scale spatial cognition. Behav Processes 2019;159:65-79. [PMID: 30611849] [DOI: 10.1016/j.beproc.2018.12.018]
Abstract
Dr. Marcia Spetch is a Canadian experimental psychologist who specializes in the study of comparative cognition. Her research over the past four decades has covered many diverse topics, but focused primarily on the comparative study of small-scale spatial cognition, navigation, decision making, and risky choice. Over the course of her career Dr. Spetch has had a profound influence on the study of these topics, and for her work she was named a Fellow of the Association for Psychological Science in 2012, and a Fellow of the Royal Society of Canada in 2017. In this review, I provide a biographical sketch of Dr. Spetch's academic career, and revisit her contributions to the study of small-scale spatial cognition in two broad areas: the use of environmental geometric cues, and how animals cope with cue conflict. The goal of this review is to highlight the contributions of Dr. Spetch, her students, and her collaborators to the field of comparative cognition and the study of small-scale spatial cognition. As such, this review stands to serve as a tribute and testament to Dr. Spetch's scientific legacy.
Affiliation(s)
- Eric L G Legge
- Department of Psychology, MacEwan University, 10700 - 104 Avenue, City Centre Campus, Edmonton, AB, T5J 4S2, Canada.
62. Ursino M, Cuppini C, Magosso E, Beierholm U, Shams L. Explaining the Effect of Likelihood Manipulation and Prior Through a Neural Network of the Audiovisual Perception of Space. Multisens Res 2019;32:111-144. [PMID: 31059469] [DOI: 10.1163/22134808-20191324]
Abstract
Results in the recent literature suggest that multisensory integration in the brain follows the rules of Bayesian inference. However, how neural circuits can realize such inference, and how it can be learned from experience, is still the subject of active research. The aim of this work is to use a recent neurocomputational model to investigate how the likelihood and prior can be encoded in synapses, and how they affect audio-visual perception, in a variety of conditions characterized by different experience, different cue reliabilities and temporal asynchrony. The model considers two unisensory networks (auditory and visual) with plastic receptive fields and plastic crossmodal synapses, trained during a learning period in which visual and auditory stimuli are more frequent and more precisely tuned close to the fovea. Model simulations after training were performed in crossmodal conditions to assess the auditory and visual perception bias: visual stimuli were positioned at different azimuths (±10° from the fovea), coupled with an auditory stimulus at various audio-visual distances (±20°). Cue reliability was altered by using visual stimuli with two different contrast levels. Model predictions are compared with behavioral data. Results show that model predictions agree with behavioral data in a variety of conditions characterized by a different role of prior and likelihood. Finally, the effects of a different unimodal or crossmodal prior, re-learning, temporal correlation among input stimuli, and visual damage (hemianopia) are tested, to reveal the possible use of the model in the clarification of important multisensory problems.
Affiliation(s)
- Mauro Ursino
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Bologna, Italy
- Cristiano Cuppini
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Bologna, Italy
- Elisa Magosso
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Bologna, Italy
- Ulrik Beierholm
- Department of Psychology, Durham University, United Kingdom
- Ladan Shams
- Department of Psychology, Department of BioEngineering, Interdepartmental Neuroscience Program, University of California, Los Angeles, CA, USA
63. The dynamic effect of context on interval timing in children and adults. Acta Psychol (Amst) 2019;192:87-93. [PMID: 30458315] [DOI: 10.1016/j.actpsy.2018.10.004]
Abstract
Human reproductions of time intervals are often biased toward previously perceived durations, resulting in a central tendency effect. The aim of the current study was to compare this effect of temporal context on time reproductions between children and adults. Children aged from 5 to 7 years, as well as adults, performed a ready-set-go reproduction task with a short and a long duration distribution. A central tendency effect was observed in both children and adults, with no age difference in the effect of global context on temporal performance. However, analysis of the effect of local context (trial-by-trial) indicated that younger children relied more than adults on the duration presented in the most recent trial (objective duration). In addition, statistical analyses of the influence of subjects' recently reproduced durations (subjective duration) on temporal performance revealed that temporal reproductions in adults were influenced by performance drifts, i.e., their evaluation of their temporal error, while children simply relied on the value of durations reproduced on recent trials. We argue that the central tendency effect was larger in young children due to their noisier internal representation of durations: a noisy system led participants to base their estimation on experienced duration rather than on the evaluation of their judgment.
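The central tendency effect discussed above is commonly modeled as reliability-weighted shrinkage of each duration estimate toward the mean of the context distribution; here is a minimal sketch of that textbook account (parameter values are illustrative, not the study's fits):

```python
# Bayesian account of the central tendency effect: reproduced durations shrink
# toward the mean of the context distribution, more so when measurement noise
# is high (as hypothesized for children). All values below are illustrative.
def reproduced_duration(d, prior_mean, sigma_measure, sigma_prior):
    w_prior = sigma_measure**2 / (sigma_measure**2 + sigma_prior**2)
    return w_prior * prior_mean + (1 - w_prior) * d

# Adult-like (low noise) vs. child-like (high noise) observer, 900 ms prior mean
print(reproduced_duration(600, 900, sigma_measure=50, sigma_prior=150))   # 630 ms
print(reproduced_duration(600, 900, sigma_measure=150, sigma_prior=150))  # 750 ms
```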
64.
Abstract
Many skills rely on performing noisy mental computations on noisy sensory measurements. Bayesian models suggest that humans compensate for measurement noise and reduce behavioral variability by biasing perception toward prior expectations. Whether a similar strategy is employed to compensate for noise in downstream mental and sensorimotor computations is not known. We tested humans in a battery of tasks and found that tasks which involved more complex mental transformations resulted in increased bias, suggesting that humans are able to mitigate the effect of noise in both sensorimotor and mental transformations. These results indicate that humans delay inference in order to account for both measurement noise and noise in downstream computations. Humans compensate for sensory noise by biasing sensory estimates toward prior expectations, as predicted by models of Bayesian inference. Here, the authors show that humans perform ‘late inference’ downstream of sensory processing to mitigate the effects of noisy internal mental computations.
65. Alamia A, Solopchuk O, Zénon A. Strong Conscious Cues Suppress Preferential Gaze Allocation to Unconscious Cues. Front Hum Neurosci 2018;12:427. [PMID: 30459582] [PMCID: PMC6232777] [DOI: 10.3389/fnhum.2018.00427]
Abstract
Visual attention allows relevant information to be selected for further processing. Both conscious and unconscious visual stimuli can bias attentional allocation, but how these two types of visual information interact to guide attention remains unclear. In this study, we explored attentional allocation during a motion discrimination task with varied motion strength and unconscious associations between stimuli and cues. Participants were instructed to report the motion direction of two colored patches of dots. Unbeknownst to participants, dot colors were sometimes informative of the correct response. We found that subjects learnt the associations between colors and motion direction but failed to report this association in a questionnaire completed at the end of the experiment, confirming that learning remained unconscious. The eye movement analyses revealed that allocation of attention to unconscious sources of information occurred mostly when motion coherence was low, indicating that unconscious cues influence attentional allocation only in the absence of strong conscious cues. All in all, our results reveal that conscious and unconscious sources of information interact to influence attentional allocation and suggest a selection process that weights cues in proportion to their reliability.
Affiliation(s)
- Andrea Alamia
- Institute of Neuroscience, Université catholique de Louvain, Brussels, Belgium
- Oleg Solopchuk
- Institute of Neuroscience, Université catholique de Louvain, Brussels, Belgium
- Alexandre Zénon
- Institute of Neuroscience, Université catholique de Louvain, Brussels, Belgium
- UMR5287 Institut de Neurosciences Cognitives et Intégratives d'Aquitaine (INCIA), CNRS, Bordeaux, France
66. Majka P, Rosa MGP, Bai S, Chan JM, Huo BX, Jermakow N, Lin MK, Takahashi YS, Wolkowicz IH, Worthy KH, Rajan R, Reser DH, Wójcik DK, Okano H, Mitra PP. Unidirectional monosynaptic connections from auditory areas to the primary visual cortex in the marmoset monkey. Brain Struct Funct 2018;224:111-131. [PMID: 30288557] [PMCID: PMC6373361] [DOI: 10.1007/s00429-018-1764-4]
Abstract
Until the late twentieth century, it was believed that different sensory modalities were processed by largely independent pathways in the primate cortex, with cross-modal integration occurring only in specialized polysensory areas. This model was challenged by the finding that the peripheral representation of the primary visual cortex (V1) receives monosynaptic connections from areas of the auditory cortex in the macaque. However, auditory projections to V1 have not been reported in other primates. We investigated the existence of direct interconnections between V1 and auditory areas in the marmoset, a New World monkey. Labelled neurons in auditory cortex were observed following 4 out of 10 retrograde tracer injections involving V1. These projections to V1 originated in the caudal subdivisions of auditory cortex (primary auditory cortex, caudal belt and parabelt areas), and targeted parts of V1 that represent parafoveal and peripheral vision. Injections near the representation of the vertical meridian of the visual field labelled few or no cells in auditory cortex. We also placed 8 retrograde tracer injections involving core, belt and parabelt auditory areas, none of which revealed direct projections from V1. These results confirm the existence of a direct, nonreciprocal projection from auditory areas to V1 in a different primate species, which has evolved separately from the macaque for over 30 million years. The essential similarity of these observations between marmoset and macaque indicates that early-stage audiovisual integration is a shared characteristic of primate sensory processing.
Affiliation(s)
- Piotr Majka
- Laboratory of Neuroinformatics, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 02-093, Warsaw, Poland
- Monash University Node, Australian Research Council, Centre of Excellence for Integrative Brain Function, Clayton, VIC, 3800, Australia
- Marcello G P Rosa
- Monash University Node, Australian Research Council, Centre of Excellence for Integrative Brain Function, Clayton, VIC, 3800, Australia
- Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, 3800, Australia
- Shi Bai
- Monash University Node, Australian Research Council, Centre of Excellence for Integrative Brain Function, Clayton, VIC, 3800, Australia
- Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, 3800, Australia
- Jonathan M Chan
- Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, 3800, Australia
- Bing-Xing Huo
- Laboratory for Marmoset Neural Architecture, RIKEN Center for Brain Science, Saitama, 351-0106, Japan
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, 11724, USA
- Natalia Jermakow
- Laboratory of Neuroinformatics, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 02-093, Warsaw, Poland
- Meng K Lin
- Laboratory for Marmoset Neural Architecture, RIKEN Center for Brain Science, Saitama, 351-0106, Japan
- Yeonsook S Takahashi
- Laboratory for Marmoset Neural Architecture, RIKEN Center for Brain Science, Saitama, 351-0106, Japan
- Ianina H Wolkowicz
- Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, 3800, Australia
- Katrina H Worthy
- Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, 3800, Australia
- Ramesh Rajan
- Monash University Node, Australian Research Council, Centre of Excellence for Integrative Brain Function, Clayton, VIC, 3800, Australia
- Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, 3800, Australia
- David H Reser
- School of Rural Health, Monash University, Churchill, VIC, 3842, Australia
- Daniel K Wójcik
- Laboratory of Neuroinformatics, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 02-093, Warsaw, Poland
- Hideyuki Okano
- Laboratory for Marmoset Neural Architecture, RIKEN Center for Brain Science, Saitama, 351-0106, Japan
- Department of Physiology, Keio University School of Medicine, Tokyo, 160-8582, Japan
- Partha P Mitra
- Monash University Node, Australian Research Council, Centre of Excellence for Integrative Brain Function, Clayton, VIC, 3800, Australia
- Laboratory for Marmoset Neural Architecture, RIKEN Center for Brain Science, Saitama, 351-0106, Japan
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, 11724, USA
67. Schut MJ, Van der Stoep N, Fabius JH, Van der Stigchel S. Feature integration is unaffected by saccade landing point, even when saccades land outside of the range of regular oculomotor variance. J Vis 2018;18:6. [PMID: 30029270] [DOI: 10.1167/18.7.6]
Abstract
The experience of our visual surroundings appears continuous, contradicting the erratic nature of visual processing due to saccades. A possible way the visual system can construct a continuous experience is by integrating presaccadic and postsaccadic visual input. However, saccades rarely land exactly at the intended location. Feature integration would therefore need to be robust against variations in saccade execution to facilitate visual continuity. In the current study, observers reported a feature (color) of the saccade target, which occasionally changed slightly during the saccade. In transsaccadic change trials, observers reported a mixture of the pre- and postsaccadic color, indicating transsaccadic feature integration. Saccade landing distance was not a significant predictor of the reported color. Next, to investigate the influence of more extreme deviations of the saccade landing point on color reports, we used a global effect paradigm in a second experiment. In global effect trials, a distractor appeared together with the saccade target, causing most saccades to land in between the saccade target and the distractor. Strikingly, even when saccades landed farther away (up to 4°) from the saccade target than one would expect under single-target conditions, there was no effect of saccade landing point on the reported color. We reason that saccade landing point does not affect feature integration, due to a dissociation between the intended saccade target and the actual saccade landing point. Transsaccadic feature integration seems to be a mechanism that depends on visual spatial attention and, as a result, is robust against variance in saccade landing point.
Affiliation(s)
- Martijn J Schut
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands
- Nathan Van der Stoep
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands
- Jasper H Fabius
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands
68. Neural implementation of Bayesian inference in a sensorimotor behavior. Nat Neurosci 2018;21:1442-1451. [PMID: 30224803] [PMCID: PMC6312195] [DOI: 10.1038/s41593-018-0233-y]
Abstract
Actions are guided by a Bayesian-like interaction between priors based on experience and current sensory evidence. Here, we unveil a complete neural implementation of Bayesian-like behavior, including adaptation of a prior. We recorded the spiking of single neurons in the smooth eye movement region of the frontal eye fields (FEFSEM), a region that is causally involved in smooth pursuit eye movements. Monkeys tracked moving targets in contexts that set different priors for target speed. Before the onset of target motion, preparatory activity encodes and adapts in parallel with the behavioral adaptation of the prior. During the initiation of pursuit, FEFSEM output encodes a maximum a posteriori estimate of target speed based on a reliability-weighted combination of the prior and sensory evidence. FEFSEM responses during pursuit are sufficient both to adapt a prior that may be stored in FEFSEM and, through known downstream pathways, to cause Bayesian-like behavior in pursuit.
69. Intra-auditory integration between pitch and loudness in humans: Evidence of super-optimal integration at moderate uncertainty in auditory signals. Sci Rep 2018;8:13708. [PMID: 30209342] [PMCID: PMC6135783] [DOI: 10.1038/s41598-018-31792-w]
Abstract
When a person plays a musical instrument, sound is produced, and the integrated frequency and intensity are perceived aurally. The central nervous system (CNS) receives imperfect afferent signals from the auditory system and delivers imperfect efferent signals to the motor system due to noise in both systems. However, little is known about the auditory-motor interactions required for successful performance. Here, we investigated auditory-motor interactions as a multi-sensory input and multi-motor output system. Subjects performed a constant force production task using four fingers under three different auditory feedback conditions, in which either the frequency (F), intensity (I), or both frequency and intensity (FI) of an auditory tone changed with the sum of the finger forces. Four levels of uncertainty (high, moderate-high, moderate-low, and low) were created by manipulating the feedback gain of the produced force. We observed performance enhancement under the FI condition compared to either F or I alone at moderate-high uncertainty. Interestingly, the performance enhancement was greater than the prediction of the Bayesian model, suggesting super-optimality. We also observed deteriorated synergistic multi-finger interactions as the level of uncertainty increased, suggesting that the CNS responded to increased uncertainty by changing the control strategy of multi-finger actions.
70. Egger SW, Jazayeri M. A nonlinear updating algorithm captures suboptimal inference in the presence of signal-dependent noise. Sci Rep 2018;8:12597. [PMID: 30135441] [PMCID: PMC6105733] [DOI: 10.1038/s41598-018-30722-0]
Abstract
Bayesian models have advanced the idea that humans combine prior beliefs and sensory observations to optimize behavior. How the brain implements Bayes-optimal inference, however, remains poorly understood. Simple behavioral tasks suggest that the brain can flexibly represent probability distributions. An alternative view is that the brain relies on simple algorithms that can implement Bayes-optimal behavior only when the computational demands are low. To distinguish between these alternatives, we devised a task in which Bayes-optimal performance could not be matched by simple algorithms. We asked subjects to estimate and reproduce a time interval by combining prior information with one or two sequential measurements. In the domain of time, measurement noise increases with duration. This property takes the integration of multiple measurements beyond the reach of simple algorithms. We found that subjects were able to update their estimates using the second measurement but their performance was suboptimal, suggesting that they were unable to update full probability distributions. Instead, subjects’ behavior was consistent with an algorithm that predicts upcoming sensory signals, and applies a nonlinear function to errors in prediction to update estimates. These results indicate that the inference strategies employed by humans may deviate from Bayes-optimal integration when the computational demands are high.
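To see why signal-dependent (scalar) noise puts optimal integration beyond fixed-weight averaging, consider a grid-based Bayes least-squares estimate in which measurement noise grows with the hypothesized interval; this is a generic illustration with assumed values, not the authors' model:

```python
import numpy as np

w = 0.15                              # Weber fraction (assumed)
t_grid = np.linspace(0.6, 1.0, 401)   # hypothesized intervals (s), uniform prior

def posterior_mean(m1, m2):
    """Bayes least-squares estimate of an interval from two measurements
    whose noise standard deviation scales with the interval (sd = w * t)."""
    sd = w * t_grid
    # Joint likelihood of both measurements at every hypothesized interval;
    # the 1/sd factors make shorter intervals relatively more likely
    like = (np.exp(-0.5 * ((m1 - t_grid) / sd) ** 2) / sd *
            np.exp(-0.5 * ((m2 - t_grid) / sd) ** 2) / sd)
    post = like / like.sum()          # posterior under the uniform prior
    return float((t_grid * post).sum())

# With constant noise the optimal estimate would be the mean (0.8 s); scalar
# noise pulls the Bayes-optimal estimate below it, and no fixed-weight average
# of m1 and m2 reproduces this across conditions.
print(posterior_mean(0.75, 0.85))
```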
Affiliation(s)
- Seth W Egger
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
71. Krauss P, Tziridis K, Schilling A, Schulze H. Cross-Modal Stochastic Resonance as a Universal Principle to Enhance Sensory Processing. Front Neurosci 2018;12:578. [PMID: 30186104] [PMCID: PMC6110899] [DOI: 10.3389/fnins.2018.00578]
Affiliation(s)
- Patrick Krauss
- Department of Otorhinolaryngology, Head and Neck Surgery, Experimental Otolaryngology, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, Germany
- Konstantin Tziridis
- Department of Otorhinolaryngology, Head and Neck Surgery, Experimental Otolaryngology, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, Germany
- Achim Schilling
- Department of Otorhinolaryngology, Head and Neck Surgery, Experimental Otolaryngology, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, Germany
- Holger Schulze
- Department of Otorhinolaryngology, Head and Neck Surgery, Experimental Otolaryngology, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, Germany
72. Michail G, Keil J. High cognitive load enhances the susceptibility to non-speech audiovisual illusions. Sci Rep 2018;8:11530. [PMID: 30069059] [PMCID: PMC6070496] [DOI: 10.1038/s41598-018-30007-6]
Abstract
The role of attentional processes in the integration of input from different sensory modalities is complex and multifaceted. Importantly, little is known about how simple, non-linguistic stimuli are integrated when the resources available for sensory processing are exhausted. We studied this question by examining multisensory integration under conditions of limited endogenous attentional resources. Multisensory integration was assessed through the sound-induced flash illusion (SIFI), in which a flash presented simultaneously with two short auditory beeps is often perceived as two flashes, while cognitive load was manipulated using an n-back task. A one-way repeated measures ANOVA revealed that increased cognitive demands had a significant effect on the perception of the illusion while post-hoc tests showed that participants' illusion perception was increased when attentional resources were limited. Additional analysis demonstrated that this effect was not related to a response bias. These findings provide evidence that the integration of non-speech, audiovisual stimuli is enhanced under reduced attentional resources and it therefore supports the notion that top-down attentional control plays an essential role in multisensory integration.
Affiliation(s)
- Georgios Michail
- Department of Psychiatry and Psychotherapy, Multisensory Integration Lab, Charité Universitätsmedizin Berlin, Berlin, Germany.
- Julian Keil
- Department of Psychiatry and Psychotherapy, Multisensory Integration Lab, Charité Universitätsmedizin Berlin, Berlin, Germany
- Biological Psychology, Christian-Albrechts-University Kiel, Kiel, Germany
73. Cuppini C, Shams L, Magosso E, Ursino M. A biologically inspired neurocomputational model for audiovisual integration and causal inference. Eur J Neurosci 2018;46:2481-2498. [PMID: 28949035] [DOI: 10.1111/ejn.13725]
Abstract
Recently, experimental and theoretical research has focused on the brain's ability to extract information from a noisy sensory environment, and on how cross-modal inputs are processed to solve the causal inference problem and provide the best estimate of external events. Despite empirical evidence suggesting that the nervous system uses a statistically optimal and probabilistic approach in addressing these problems, little is known about the brain architecture needed to implement these computations. The aim of this work was to develop a mathematical model, based on physiologically plausible hypotheses, to analyze the neural mechanisms underlying multisensory perception and causal inference. The model consists of three topologically organized layers: two encode auditory and visual stimuli separately, are reciprocally connected via excitatory synapses, and send excitatory connections to the third, downstream layer. This synaptic organization realizes two mechanisms of cross-modal interaction: the first is responsible for the sensory representation of the external stimuli, while the second solves the causal inference problem. We tested the network by comparing its results to behavioral data reported in the literature. Among others, the network can account for the ventriloquism illusion, the pattern of sensory bias and the percept of unity as a function of the spatial auditory-visual distance, and the dependence of the auditory error on the causal inference. Finally, simulation results are consistent with probability matching as the perceptual strategy used in auditory-visual spatial localization tasks, agreeing with the behavioral data. The model makes untested predictions that can be investigated in future behavioral experiments.
Affiliation(s)
- Cristiano Cuppini
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Viale Risorgimento 2, I40136, Bologna, Italy
- Ladan Shams
- Department of Psychology, Department of BioEngineering, Interdepartmental Neuroscience Program, University of California, Los Angeles, CA, USA
- Elisa Magosso
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Viale Risorgimento 2, I40136, Bologna, Italy
- Mauro Ursino
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Viale Risorgimento 2, I40136, Bologna, Italy
74. Effect of vibration during visual-inertial integration on human heading perception during eccentric gaze. PLoS One 2018;13:e0199097. [PMID: 29902253] [PMCID: PMC6002115] [DOI: 10.1371/journal.pone.0199097]
Abstract
Heading direction is determined from visual and inertial cues. Visual headings use retinal coordinates while inertial headings use body coordinates. Thus, during eccentric gaze the same heading may be perceived differently by the visual and inertial modalities. Stimulus weights depend on the relative reliability of these stimuli, but previous work suggests that the inertial heading may be given more weight than predicted. These experiments only varied the visual stimulus reliability, and it is unclear what occurs with variation in inertial reliability. Five human subjects completed a heading discrimination task using 2 s of translation with a peak velocity of 16 cm/s. Eye position was ±25° left/right with visual, inertial, or combined motion. The visual motion coherence was 50%. Inertial stimuli included 6 Hz vertical vibration with 0, 0.10, 0.15, or 0.20 cm amplitude. Subjects reported perceived heading relative to the midline. With an inertial heading, perception was biased 3.6° towards the gaze direction. Visual headings biased perception 9.6° opposite gaze. The inertial threshold without vibration was 4.8°, which increased significantly to 8.8° with vibration, but the amplitude of vibration did not influence reliability. With visual-inertial headings, empirical stimulus weights were calculated from the bias and compared with the optimal weights calculated from the thresholds. In 2 subjects, empirical weights were near optimal, while in the remaining 3 subjects the inertial stimuli were weighted more heavily than optimal predictions. On average, the inertial stimulus was weighted more heavily than predicted. These results indicate that multisensory integration may not be a function of stimulus reliability when inertial stimulus reliability is varied.
75. Montagne C, Zhou Y. Audiovisual Interactions in Front and Rear Space. Front Psychol 2018;9:713. [PMID: 29867678] [PMCID: PMC5962672] [DOI: 10.3389/fpsyg.2018.00713]
Abstract
The human visual and auditory systems do not encode an entirely overlapping space when static head and body position are maintained. While visual capture of sound source location in the frontal field is known to be immediate and direct, visual influence in the rear auditory space behind the subject remains under-studied. In this study we investigated the influence of presenting frontal LED flashes on the perceived location of a phantom sound source generated using time-delay-based stereophony. Our results show that frontal visual stimuli affected auditory localization in two different ways: (1) auditory responses were laterally shifted (left or right) toward the location of the light stimulus, and (2) auditory responses were more often in the frontal field. The observed visual effects do not adhere to the spatial rule of multisensory interaction with regard to the physical proximity of cues. Instead, the influence of visual cues interacted closely with front-back confusions in auditory localization. In particular, visually induced shifts along the left-right direction occurred most often when an auditory stimulus was localized in the same (frontal) field as the light stimulus, even when the actual sound sources were presented from behind the subject. Increasing stimulus duration (from 15 ms to 50 ms) significantly mitigated the rates of front-back confusion and the associated effects of visual stimuli. These findings suggest that concurrent visual stimulation elicits a strong frontal bias in auditory localization and confirm that temporal integration plays an important role in decreasing front-back errors under conditions requiring multisensory spatial processing.
Affiliation(s)
- Christopher Montagne
- Laboratory of Auditory Computation & Neurophysiology, Department of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, AZ, United States
- Yi Zhou
- Laboratory of Auditory Computation & Neurophysiology, Department of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, AZ, United States
76. Normal temporal binding window but no sound-induced flash illusion in people with one eye. Exp Brain Res 2018;236:1825-1834. [DOI: 10.1007/s00221-018-5263-x]
77.
Abstract
Behaviorally, it is well established that human observers integrate signals near-optimally weighted in proportion to their reliabilities as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers' behavioral weights by fitting psychometric functions to participants' localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region's preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants' modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting).
Collapse
|
78
|
Abstract
Human perceptual decisions are often described as optimal. Critics of this view have argued that claims of optimality are overly flexible and lack explanatory power. Meanwhile, advocates for optimality have countered that such criticisms single out a few selected papers. To elucidate the issue of optimality in perceptual decision making, we review the extensive literature on suboptimal performance in perceptual tasks. We discuss eight different classes of suboptimal perceptual decisions, including improper placement, maintenance, and adjustment of perceptual criteria; inadequate tradeoff between speed and accuracy; inappropriate confidence ratings; misweightings in cue combination; and findings related to various perceptual illusions and biases. In addition, we discuss conceptual shortcomings of a focus on optimality, such as definitional difficulties and the limited value of optimality claims in and of themselves. We therefore advocate that the field drop its emphasis on whether observed behavior is optimal and instead concentrate on building and testing detailed observer models that explain behavior across a wide range of tasks. To facilitate this transition, we compile the proposed hypotheses regarding the origins of suboptimal perceptual decisions reviewed here. We argue that verifying, rejecting, and expanding these explanations for suboptimal behavior - rather than assessing optimality per se - should be among the major goals of the science of perceptual decision making.
Collapse
Affiliation(s)
- Dobromir Rahnev
- School of Psychology, Georgia Institute of Technology, Atlanta, GA 30332.
| | - Rachel N Denison
- Department of Psychology and Center for Neural Science, New York University, New York, NY 10003.
| |
Collapse
|
79
|
Mikula L, Gaveau V, Pisella L, Khan AZ, Blohm G. Learned rather than online relative weighting of visual-proprioceptive sensory cues. J Neurophysiol 2018; 119:1981-1992. [PMID: 29465322 DOI: 10.1152/jn.00338.2017] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023] Open
Abstract
When reaching to an object, information about the target location as well as the initial hand position is required to program the motor plan for the arm. The initial hand position can be determined by proprioceptive information as well as visual information, if available. Bayes-optimal integration posits that we utilize all information available, with greater weighting on the sense that is more reliable, thus generally weighting visual information more than the usually less reliable proprioceptive information. The criterion by which information is weighted has not been explicitly investigated; it has been assumed that the weights are based on task- and effector-dependent sensory reliability requiring an explicit neuronal representation of variability. However, the weights could also be determined implicitly through learned modality-specific integration weights and not on effector-dependent reliability. While the former hypothesis predicts different proprioceptive weights for left and right hands, e.g., due to different reliabilities of dominant vs. nondominant hand proprioception, we would expect the same integration weights if the latter hypothesis was true. We found that the proprioceptive weights for the left and right hands were extremely consistent regardless of differences in sensory variability for the two hands as measured in two separate complementary tasks. Thus we propose that proprioceptive weights during reaching are learned across both hands, with high interindividual range but independent of each hand's specific proprioceptive variability. NEW & NOTEWORTHY How visual and proprioceptive information about the hand are integrated to plan a reaching movement is still debated. The goal of this study was to clarify how the weights assigned to vision and proprioception during multisensory integration are determined. We found evidence that the integration weights are modality specific rather than based on the sensory reliabilities of the effectors.
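The logic of measuring such integration weights can be sketched in a few lines; the cue-conflict numbers below are invented for illustration, and the study's own analysis is more involved.

```python
import numpy as np

# Empirical weight estimation from cue-conflict trials: displace the visual
# hand cue by `delta` relative to proprioception and measure how far reach
# endpoints follow it. If endpoints shift by w_v * delta, the proprioceptive
# weight is w_p = 1 - w_v. All numbers here are hypothetical.
delta = np.array([10.0, 10.0, -10.0, -10.0])   # imposed visual shift (mm)
shift = np.array([7.8, 8.4, -8.1, -7.5])       # observed endpoint shifts (mm)
w_v = np.mean(shift / delta)                   # visual weight
w_p = 1.0 - w_v                                # proprioceptive weight
print(f"visual weight ~{w_v:.2f}, proprioceptive weight ~{w_p:.2f}")
# The learned-weights hypothesis predicts the same w_p for left and right
# hands even when their measured proprioceptive variabilities differ.
```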
Collapse
Affiliation(s)
- Laura Mikula
- Centre de Recherche en Neurosciences de Lyon, ImpAct Team, INSERM U1028, CNRS UMR 5292, Lyon 1 University, Bron Cedex, France; School of Optometry, University of Montreal, Montreal, Quebec, Canada
| | - Valérie Gaveau
- Centre de Recherche en Neurosciences de Lyon, ImpAct Team, INSERM U1028, CNRS UMR 5292, Lyon 1 University, Bron Cedex, France
| | - Laure Pisella
- Centre de Recherche en Neurosciences de Lyon, ImpAct Team, INSERM U1028, CNRS UMR 5292, Lyon 1 University, Bron Cedex, France
| | - Aarlenne Z Khan
- School of Optometry, University of Montreal, Montreal, Quebec, Canada
| | - Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
| |
Collapse
|
80
|
Nikbakht N, Tafreshiha A, Zoccolan D, Diamond ME. Supralinear and Supramodal Integration of Visual and Tactile Signals in Rats: Psychophysics and Neuronal Mechanisms. Neuron 2018; 97:626-639.e8. [PMID: 29395913 PMCID: PMC5814688 DOI: 10.1016/j.neuron.2018.01.003] [Citation(s) in RCA: 48] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2017] [Revised: 11/24/2017] [Accepted: 12/31/2017] [Indexed: 11/30/2022]
Abstract
To better understand how object recognition can be triggered independently of the sensory channel through which information is acquired, we devised a task in which rats judged the orientation of a raised, black and white grating. They learned to recognize two categories of orientation: 0° ± 45° (“horizontal”) and 90° ± 45° (“vertical”). Each trial required a visual (V), a tactile (T), or a visual-tactile (VT) discrimination; VT performance was better than that predicted by optimal linear combination of V and T signals, indicating synergy between sensory channels. We examined posterior parietal cortex (PPC) and uncovered key neuronal correlates of the behavioral findings: PPC carried both graded information about object orientation and categorical information about the rat’s upcoming choice; single neurons exhibited identical responses under the three modality conditions. Finally, a linear classifier of neuronal population firing replicated the behavioral findings. Taken together, these findings suggest that PPC is involved in the supramodal processing of shape. Highlights: rats combine vision and touch to distinguish two grating orientation categories; performance with vision and touch together reveals synergy between the two channels; posterior parietal cortex (PPC) neuronal responses are invariant to modality; PPC neurons carry information about object orientation and the rat’s categorization.
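The supralinearity claim can be made concrete with the standard benchmark for optimal linear combination of independent channels; the sensitivity values below are hypothetical, not the paper's measurements.

```python
import numpy as np

# Under optimal linear combination of independent visual and tactile
# channels, multisensory sensitivity is bounded by the quadratic sum
# d'_VT = sqrt(d'_V**2 + d'_T**2); exceeding it indicates synergy.
d_v, d_t = 1.2, 1.0                    # hypothetical unisensory d-primes
d_vt_pred = np.hypot(d_v, d_t)         # optimal linear prediction (~1.56)
d_vt_obs = 1.9                         # hypothetical observed multisensory d'
verdict = "supralinear" if d_vt_obs > d_vt_pred else "linear or sublinear"
print(f"predicted {d_vt_pred:.2f}, observed {d_vt_obs:.2f} -> {verdict}")
```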
Collapse
Affiliation(s)
- Nader Nikbakht
- Tactile Perception and Learning Lab, International School for Advanced Studies (SISSA), Via Bonomea 265, Trieste, TS 34136, Italy
| | - Azadeh Tafreshiha
- Tactile Perception and Learning Lab, International School for Advanced Studies (SISSA), Via Bonomea 265, Trieste, TS 34136, Italy
| | - Davide Zoccolan
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Via Bonomea 265, Trieste, TS 34136, Italy
| | - Mathew E Diamond
- Tactile Perception and Learning Lab, International School for Advanced Studies (SISSA), Via Bonomea 265, Trieste, TS 34136, Italy.
| |
Collapse
|
81
|
Chotsrisuparat C, Koning A, Jacobs R, van Lier R. Effects of Auditory Patterns on Judged Displacements of an Occluded Moving Object. Multisens Res 2018; 31:623-643. [PMID: 31264610 DOI: 10.1163/22134808-18001294] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2017] [Accepted: 12/22/2017] [Indexed: 11/19/2022]
Abstract
Using displays in which a moving disk disappeared behind an occluder, we examined whether an accompanying auditory rhythm influenced the perceived displacement of the disk during occlusion. We manipulated a baseline rhythm, comprising a relatively fast alternation of equal sound and pause durations, in two ways to create auditory sequences with a slower rhythm: either the pause durations or the sound durations were increased. In each trial, a disk moved at a constant speed and at a certain point moved behind an occluder, during which an auditory rhythm was played. Participants were instructed to track the occluded disk and judge the expected position of the disk at the moment the auditory rhythm ended by touching the judged position on a touch screen. We investigated the influence of the auditory rhythm, i.e., the ratio of sound to pause duration, and the influence of auditory density, i.e., the number of sound onsets per time unit, on the judged distance. The results showed that the temporal characteristics affected the spatial judgments. Overall, we found that in the current paradigm relatively slow rhythms led to shorter judged distances than relatively fast rhythms, for both pause and sound variations. There was no main effect of auditory density on the judged distance of an expected visual event. That is, whereas the speed of the auditory rhythm appears crucial, the number of sound onsets per time unit as such, i.e., the auditory density, appears to be a much weaker factor.
Collapse
Affiliation(s)
- Chayada Chotsrisuparat
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
| | - Arno Koning
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
| | - Richard Jacobs
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
| | - Rob van Lier
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
| |
Collapse
|
82
|
Bettinger JS, Eastman TE. Foundations of anticipatory logic in biology and physics. Prog Biophys Mol Biol 2017; 131:108-120. [DOI: 10.1016/j.pbiomolbio.2017.09.009] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/31/2017] [Revised: 09/01/2017] [Accepted: 09/04/2017] [Indexed: 12/30/2022]
|
83
|
On the interplay of visuospatial and audiotemporal dominance: Evidence from a multimodal kappa effect. Atten Percept Psychophys 2017; 80:535-552. [PMID: 29147960 DOI: 10.3758/s13414-017-1437-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
When participants judge multimodal audiovisual stimuli, the auditory information strongly dominates temporal judgments, whereas the visual information dominates spatial judgments. However, temporal judgments are not independent of spatial features. For example, in the kappa effect, the time interval between two marker stimuli appears longer when they originate from spatially distant sources rather than from the same source. We investigated the kappa effect for auditory markers presented with accompanying irrelevant visual stimuli. The spatial sources of the markers were varied such that they were either congruent or incongruent across modalities. In two experiments, we demonstrated that the spatial layout of the visual stimuli affected perceived auditory interval duration. This effect occurred although the visual stimuli were designated to be task-irrelevant for the duration reproduction task in Experiment 1, and even when the visual stimuli did not contain sufficient temporal information to perform a two-interval comparison task in Experiment 2. We conclude that the visual and auditory marker stimuli were integrated into a combined multisensory percept containing temporal as well as task-irrelevant spatial aspects of the stimulation. Through this multisensory integration process, visuospatial information affected even temporal judgments, which are typically dominated by the auditory modality.
Collapse
|
84
|
Cuppini C, Ursino M, Magosso E, Ross LA, Foxe JJ, Molholm S. A Computational Analysis of Neural Mechanisms Underlying the Maturation of Multisensory Speech Integration in Neurotypical Children and Those on the Autism Spectrum. Front Hum Neurosci 2017; 11:518. [PMID: 29163099 PMCID: PMC5670153 DOI: 10.3389/fnhum.2017.00518] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2017] [Accepted: 10/11/2017] [Indexed: 11/13/2022] Open
Abstract
Failure to appropriately develop multisensory integration (MSI) of audiovisual speech may affect a child's ability to attain optimal communication. Studies have shown protracted development of MSI into late-childhood and identified deficits in MSI in children with an autism spectrum disorder (ASD). Currently, the neural basis of acquisition of this ability is not well understood. Here, we developed a computational model informed by neurophysiology to analyze possible mechanisms underlying MSI maturation, and its delayed development in ASD. The model posits that strengthening of feedforward and cross-sensory connections, responsible for the alignment of auditory and visual speech sound representations in posterior superior temporal gyrus/sulcus, can explain behavioral data on the acquisition of MSI. This was simulated by a training phase during which the network was exposed to unisensory and multisensory stimuli, and projections were crafted by Hebbian rules of potentiation and depression. In its mature architecture, the network also reproduced the well-known multisensory McGurk speech effect. Deficits in audiovisual speech perception in ASD were well accounted for by fewer multisensory exposures, compatible with a lack of attention, but not by reduced synaptic connectivity or synaptic plasticity.
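As a rough illustration of the training mechanism described here, the toy loop below grows a single cross-sensory weight by Hebbian potentiation on co-activated exposures plus a decay term; all rates and exposure statistics are assumptions, not the model's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

w = 0.01          # initial cross-modal connection strength
eta = 0.02        # shared potentiation/decay rate (assumed)
p_multi = 0.7     # fraction of exposures that are audiovisual (assumed)
for _ in range(2000):
    multisensory = rng.random() < p_multi
    pre = 1.0                               # presynaptic unisensory activity
    post = 1.0 if multisensory else 0.0     # postsynaptic co-activation
    w += eta * pre * post - eta * w         # Hebbian growth plus decay
print(f"trained cross-modal weight: {w:.2f}")   # settles near p_multi
# Reducing p_multi (fewer multisensory exposures, e.g., from inattention)
# leaves the weight proportionally weaker, mirroring the ASD account above.
```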
Collapse
Affiliation(s)
- Cristiano Cuppini
- Department of Electric, Electronic and Information Engineering, University of Bologna, Bologna, Italy
| | - Mauro Ursino
- Department of Electric, Electronic and Information Engineering, University of Bologna, Bologna, Italy
| | - Elisa Magosso
- Department of Electric, Electronic and Information Engineering, University of Bologna, Bologna, Italy
| | - Lars A. Ross
- Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
| | - John J. Foxe
- Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
- Department of Neuroscience and The Del Monte Institute for Neuroscience, University of Rochester School of Medicine, Rochester, NY, United States
| | - Sophie Molholm
- Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
| |
Collapse
|
85
|
Boyle SC, Kayser SJ, Kayser C. Neural correlates of multisensory reliability and perceptual weights emerge at early latencies during audio-visual integration. Eur J Neurosci 2017; 46:2565-2577. [PMID: 28940728 PMCID: PMC5725738 DOI: 10.1111/ejn.13724] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2017] [Revised: 09/11/2017] [Accepted: 09/18/2017] [Indexed: 12/24/2022]
Abstract
To make accurate perceptual estimates, observers must take the reliability of sensory information into account. Despite many behavioural studies showing that subjects weight individual sensory cues in proportion to their reliabilities, it is still unclear when during a trial neuronal responses are modulated by the reliability of sensory information or when they reflect the perceptual weights attributed to each sensory input. We investigated these questions using a combination of psychophysics, EEG‐based neuroimaging and single‐trial decoding. Our results show that the weighted integration of sensory information in the brain is a dynamic process; effects of sensory reliability on task‐relevant EEG components were evident 84 ms after stimulus onset, while neural correlates of perceptual weights emerged 120 ms after stimulus onset. These neural processes had different underlying sources, arising from sensory and parietal regions, respectively. Together these results reveal the temporal dynamics of perceptual and neural audio‐visual integration and support the notion of temporally early and functionally specific multisensory processes in the brain.
Collapse
Affiliation(s)
- Stephanie C Boyle
- Institute of Neuroscience and Psychology, University of Glasgow, Hillhead Street 58, Glasgow, G12 8QB, UK
| | - Stephanie J Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Hillhead Street 58, Glasgow, G12 8QB, UK
| | - Christoph Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Hillhead Street 58, Glasgow, G12 8QB, UK
| |
Collapse
|
86
|
Sutton EE, Demir A, Stamper SA, Fortune ES, Cowan NJ. Dynamic modulation of visual and electrosensory gains for locomotor control. J R Soc Interface 2017; 13:rsif.2016.0057. [PMID: 27170650 DOI: 10.1098/rsif.2016.0057] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2016] [Accepted: 04/13/2016] [Indexed: 11/12/2022] Open
Abstract
Animal nervous systems resolve sensory conflict for the control of movement. For example, the glass knifefish, Eigenmannia virescens, relies on visual and electrosensory feedback as it swims to maintain position within a moving refuge. To study how signals from these two parallel sensory streams are used in refuge tracking, we constructed a novel augmented reality apparatus that enables the independent manipulation of visual and electrosensory cues to freely swimming fish (n = 5). We evaluated the linearity of multisensory integration, the change to the relative perceptual weights given to vision and electrosense in relation to sensory salience, and the effect of the magnitude of sensory conflict on sensorimotor gain. First, we found that tracking behaviour obeys superposition of the sensory inputs, suggesting linear sensorimotor integration. In addition, fish rely more on vision when electrosensory salience is reduced, suggesting that fish dynamically alter sensorimotor gains in a manner consistent with Bayesian integration. However, the magnitude of sensory conflict did not significantly affect sensorimotor gain. These studies lay the theoretical and experimental groundwork for future work investigating multisensory control of locomotion.
Collapse
Affiliation(s)
- Erin E Sutton
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
| | - Alican Demir
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
| | - Sarah A Stamper
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
| | - Eric S Fortune
- Department of Biological Sciences, New Jersey Institute of Technology, Newark, NJ, USA
| | - Noah J Cowan
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
| |
Collapse
|
87
|
Abstract
Where textures are defined by repetitive small spatial structures, exploration covering a greater extent will lead to signal repetition. We investigated how sensory estimates derived from these signals are integrated. In Experiment 1, participants stroked with the index finger one to eight times across two virtual gratings. Half of the participants discriminated according to ridge amplitude, the other half according to ridge spatial period. In both tasks, just noticeable differences (JNDs) decreased with an increasing number of strokes. Those gains from additional exploration were more than three times smaller than predicted for optimal observers who have access to equally reliable, and therefore equally weighted, estimates for the entire exploration. We assume that the sequential nature of the exploration leads to memory decay of sensory estimates. Thus, participants compare an overall estimate of the first stimulus, which is affected by memory decay, to stroke-specific estimates during the exploration of the second stimulus. This was tested in Experiments 2 and 3. The spatial period of one stroke across either the first or second of two sequentially presented gratings was slightly discrepant from periods in all other strokes. This allowed calculating weights of stroke-specific estimates in the overall percept. As predicted, weights were approximately equal for all strokes in the first stimulus, while weights decreased during the exploration of the second stimulus. A quantitative Kalman filter model of our assumptions was consistent with the data. Hence, our results support an optimal integration model for sequential information given that memory decay affects comparison processes.
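A minimal Kalman-filter sketch captures the proposed memory-decay account; the noise parameters below are invented, not the paper's fitted values.

```python
import numpy as np

# Each stroke yields a noisy measurement of a constant texture property;
# between strokes the stored estimate's uncertainty grows (memory decay),
# so new strokes keep a high Kalman gain instead of the 1/n weight that
# optimal equal weighting would assign.
def kalman_texture(measurements, sigma_meas=1.0, sigma_decay=1.0):
    mu, var = measurements[0], sigma_meas**2     # initialize from stroke 1
    gains = []
    for z in measurements[1:]:
        var += sigma_decay**2                    # memory decay inflates variance
        k = var / (var + sigma_meas**2)          # gain = weight on the new stroke
        mu += k * (z - mu)
        var *= 1 - k
        gains.append(round(k, 2))
    return mu, gains

est, gains = kalman_texture([1.2, 0.8, 1.1, 0.9, 1.0])
print(est, gains)   # gains plateau near 0.6 rather than falling toward 1/n
```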
Collapse
|
88
|
Grouping by feature of cross-modal flankers in temporal ventriloquism. Sci Rep 2017; 7:7615. [PMID: 28790403 PMCID: PMC5548807 DOI: 10.1038/s41598-017-06550-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2017] [Accepted: 06/14/2017] [Indexed: 11/08/2022] Open
Abstract
Signals in one sensory modality can influence perception of another, for example the bias of visual timing by audition: temporal ventriloquism. Strong accounts of temporal ventriloquism hold that the sensory representation of visual signal timing changes to that of the nearby sound. Alternatively, underlying sensory representations do not change; rather, perceptual grouping processes based on spatial, temporal, and featural information produce best estimates of global event properties. In support of this interpretation, when feature-based perceptual grouping conflicts with grouping based on temporal information in scenarios that reveal temporal ventriloquism, the effect is abolished. However, previous demonstrations of this disruption used long-range visual apparent-motion stimuli. We investigated whether similar manipulations of feature grouping could also disrupt the classical temporal ventriloquism demonstration, which occurs over a short temporal range. We estimated the precision of participants' reports of which of two visual bars occurred first. The bars were accompanied by different cross-modal signals that onset synchronously or asynchronously with each bar. Participants' performance improved with asynchronous relative to synchronous presentation (temporal ventriloquism); however, unlike in the long-range apparent-motion paradigm, this improvement was unaffected by different combinations of cross-modal features, suggesting that featural similarity of cross-modal signals may not modulate cross-modal temporal influences on short time scales.
Collapse
|
89
|
Darlington TR, Tokiyama S, Lisberger SG. Control of the strength of visual-motor transmission as the mechanism of rapid adaptation of priors for Bayesian inference in smooth pursuit eye movements. J Neurophysiol 2017; 118:1173-1189. [PMID: 28592689 PMCID: PMC5547260 DOI: 10.1152/jn.00282.2017] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2017] [Revised: 06/04/2017] [Accepted: 06/04/2017] [Indexed: 12/25/2022] Open
Abstract
Bayesian inference provides a cogent account of how the brain combines sensory information with "priors" based on past experience to guide many behaviors, including smooth pursuit eye movements. We now demonstrate very rapid adaptation of the pursuit system's priors for target direction and speed. We go on to leverage that adaptation to outline possible neural mechanisms that could cause pursuit to show features consistent with Bayesian inference. Adaptation of the prior causes changes in the eye speed and direction at the initiation of pursuit. The adaptation appears after a single trial and accumulates over repeated exposure to a given history of target speeds and directions. The influence of the priors depends on the reliability of visual motion signals: priors are more effective against the visual motion signals provided by low-contrast vs. high-contrast targets. Adaptation of the direction prior generalizes to eye speed and vice versa, suggesting that both priors could be controlled by a single neural mechanism. We conclude that the pursuit system can learn the statistics of visual motion rapidly and use those statistics to guide future behavior. Furthermore, a model that adjusts the gain of visual-motor transmission predicts the effects of recent experience on pursuit direction and speed, as well as the specifics of the generalization between the priors for speed and direction. We suggest that Bayesian inference in pursuit behavior is implemented by distinctly non-Bayesian internal mechanisms that use the smooth eye movement region of the frontal eye fields to control the gain of visual-motor transmission. NEW & NOTEWORTHY Bayesian inference can account for the interaction between sensory data and past experience in many behaviors. Here, we show, using smooth pursuit eye movements, that the priors based on past experience can be adapted over a very short time frame. We also show that a single model based on direction-specific adaptation of the strength of visual-motor transmission can explain the implementation and adaptation of priors for both target direction and target speed.
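The reliability-dependent interaction of prior and visual evidence reads naturally as precision weighting; the sketch below uses assumed numbers to show why low-contrast (noisier) targets pull pursuit initiation toward the adapted prior.

```python
# Precision-weighted mix of an adapted prior speed and the current visual
# motion estimate; all values are illustrative assumptions.
def pursuit_speed(target_speed, sigma_vis, prior_mu, sigma_prior):
    w_prior = (1 / sigma_prior**2) / (1 / sigma_prior**2 + 1 / sigma_vis**2)
    return w_prior * prior_mu + (1 - w_prior) * target_speed

prior = 10.0  # deg/s, adapted from recent trial history
print(pursuit_speed(20.0, sigma_vis=2.0, prior_mu=prior, sigma_prior=4.0))  # high contrast: ~18
print(pursuit_speed(20.0, sigma_vis=8.0, prior_mu=prior, sigma_prior=4.0))  # low contrast: ~12

# Rapid adaptation: a single exposure nudges the prior toward the seen speed.
alpha = 0.3                        # assumed single-trial learning rate
prior += alpha * (20.0 - prior)    # prior moves from 10 to 13 deg/s
```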
Collapse
Affiliation(s)
- Timothy R Darlington
- Department of Neurobiology, Duke University School of Medicine, Durham, North Carolina
| | - Stefanie Tokiyama
- Department of Neurobiology, Duke University School of Medicine, Durham, North Carolina
| | - Stephen G Lisberger
- Department of Neurobiology, Duke University School of Medicine, Durham, North Carolina
| |
Collapse
|
90
|
Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback. Nat Commun 2017; 8:138. [PMID: 28743932 PMCID: PMC5527101 DOI: 10.1038/s41467-017-00181-8] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2016] [Accepted: 06/08/2017] [Indexed: 02/01/2023] Open
Abstract
Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey’s learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task specific operations.
Collapse
|
91
|
Churan J, Paul J, Klingenhoefer S, Bremmer F. Integration of visual and tactile information in reproduction of traveled distance. J Neurophysiol 2017; 118:1650-1663. [PMID: 28659463 DOI: 10.1152/jn.00342.2017] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2017] [Revised: 06/27/2017] [Accepted: 06/27/2017] [Indexed: 11/22/2022] Open
Abstract
In the natural world, self-motion always stimulates several different sensory modalities. Here we investigated the interplay between a visual optic flow stimulus simulating self-motion and a tactile stimulus (air flow resulting from self-motion) while human observers were engaged in a distance reproduction task. We found that adding congruent tactile information (i.e., speed of the air flow and speed of visual motion are directly proportional) to the visual information significantly improves the precision of the actively reproduced distances. This improvement, however, was smaller than predicted for an optimal integration of visual and tactile information. In contrast, incongruent tactile information (i.e., speed of the air flow and speed of visual motion are inversely proportional) did not improve subjects' precision, indicating that incongruent tactile information and visual information were not integrated. One possible interpretation of the results is a link to properties of neurons in the ventral intraparietal area that have been shown to have spatially and action-congruent receptive fields for visual and tactile stimuli. NEW & NOTEWORTHY This study shows that tactile and visual information can be integrated to improve the estimates of the parameters of self-motion. This, however, happens only if the two sources of information are congruent, as they are in a natural environment. In contrast, an incongruent tactile stimulus is still used as a source of information about self-motion, but it is not integrated with visual information.
Collapse
Affiliation(s)
- Jan Churan
- Department of Neurophysics, Marburg University, Marburg, Germany
| | - Johannes Paul
- Department of Neurophysics, Marburg University, Marburg, Germany
| | - Steffen Klingenhoefer
- Department of Neurophysics, Marburg University, Marburg, Germany; Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, New Jersey
| | - Frank Bremmer
- Department of Neurophysics, Marburg University, Marburg, Germany
| |
Collapse
|
92
|
Crane BT. Effect of eye position during human visual-vestibular integration of heading perception. J Neurophysiol 2017; 118:1609-1621. [PMID: 28615328 DOI: 10.1152/jn.00037.2017] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2017] [Revised: 06/13/2017] [Accepted: 06/13/2017] [Indexed: 11/22/2022] Open
Abstract
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems. NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. The experiments address this using a multisensory integration task with eccentric gaze positions making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability.
Collapse
Affiliation(s)
- Benjamin T Crane
- Department of Otolaryngology, University of Rochester, Rochester, New York
| |
Collapse
|
93
|
Sensory cortical response to uncertainty and low salience during recognition of affective cues in musical intervals. PLoS One 2017; 12:e0175991. [PMID: 28422990 PMCID: PMC5396975 DOI: 10.1371/journal.pone.0175991] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2016] [Accepted: 04/04/2017] [Indexed: 01/07/2023] Open
Abstract
Previous neuroimaging studies have shown an increased sensory cortical response (i.e., heightened weight on sensory evidence) under higher levels of predictive uncertainty. The signal enhancement theory proposes that attention improves the quality of the stimulus representation, and therefore reduces uncertainty by increasing the gain of the sensory signal. The present study employed functional magnetic resonance imaging (fMRI) to investigate the neural correlates for ambiguous valence inferences signaled by auditory information within an emotion recognition paradigm. Participants categorized sound stimuli of three distinct levels of consonance/dissonance controlled by interval content. Separate behavioural and neuroscientific experiments were conducted. Behavioural results revealed that, compared with the consonance condition (perfect fourths, fifths and octaves) and the strong dissonance condition (minor/major seconds and tritones), the intermediate dissonance condition (minor thirds) was the most ambiguous, least salient and more cognitively demanding category (slowest reaction times). The neuroscientific findings were consistent with a heightened weight on sensory evidence whilst participants were evaluating intermediate dissonances, which was reflected in an increased neural response of the right Heschl’s gyrus. The results support previous studies that have observed enhanced precision of sensory evidence whilst participants attempted to represent and respond to higher degrees of uncertainty, and converge with evidence showing preferential processing of complex spectral information in the right primary auditory cortex. These findings are discussed with respect to music-theoretical concepts and recent Bayesian models of perception, which have proposed that attention may heighten the weight of information coming from sensory channels to stimulate learning about unknown predictive relationships.
Collapse
|
95
|
Ursino M, Cuppini C, Magosso E. Multisensory Bayesian Inference Depends on Synapse Maturation during Training: Theoretical Analysis and Neural Modeling Implementation. Neural Comput 2017; 29:735-782. [DOI: 10.1162/neco_a_00935] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Recent theoretical and experimental studies suggest that in multisensory conditions, the brain performs a near-optimal Bayesian estimate of external events, giving more weight to the more reliable stimuli. However, the neural mechanisms responsible for this behavior, and its progressive maturation in a multisensory environment, are still insufficiently understood. The aim of this letter is to analyze this problem with a neural network model of audiovisual integration, based on probabilistic population coding—the idea that a population of neurons can encode probability functions to perform Bayesian inference. The model consists of two chains of unisensory neurons (auditory and visual) topologically organized. They receive the corresponding input through a plastic receptive field and reciprocally exchange plastic cross-modal synapses, which encode the spatial co-occurrence of visual-auditory inputs. A third chain of multisensory neurons performs a simple sum of auditory and visual excitations. The work includes a theoretical part and a computer simulation study. We show how a simple rule for synapse learning (consisting of Hebbian reinforcement and a decay term) can be used during training to shrink the receptive fields and encode the unisensory likelihood functions. Hence, after training, each unisensory area realizes a maximum likelihood estimate of stimulus position (auditory or visual). In cross-modal conditions, the same learning rule can encode information on prior probability into the cross-modal synapses. Computer simulations confirm the theoretical results and show that the proposed network can realize a maximum likelihood estimate of auditory (or visual) positions in unimodal conditions and a Bayesian estimate, with moderate deviations from optimality, in cross-modal conditions. Furthermore, the model explains the ventriloquism illusion and, looking at the activity in the multimodal neurons, explains the automatic reweighting of auditory and visual inputs on a trial-by-trial basis, according to the reliability of the individual cues.
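The probabilistic population coding idea at the core of this model can be illustrated compactly: with Poisson spiking and Gaussian tuning, the maximum-likelihood position estimate reduces to a spike-weighted average of preferred positions. The tuning parameters below are assumptions for illustration, not the model's.

```python
import numpy as np

rng = np.random.default_rng(2)

pref = np.linspace(-50, 50, 101)          # preferred positions (deg), assumed
def rates(s, gain=20.0, width=10.0):      # Gaussian tuning curves
    return gain * np.exp(-0.5 * ((s - pref) / width)**2)

spikes = rng.poisson(rates(s=12.0))       # one simulated trial at s = 12 deg
# For dense, equal-width Gaussian tuning with Poisson noise, the maximum-
# likelihood estimate is the center of mass of the population activity.
s_ml = np.sum(spikes * pref) / np.sum(spikes)
print(f"maximum-likelihood position estimate: {s_ml:.1f} deg")
```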
Collapse
Affiliation(s)
- Mauro Ursino
- Department of Electrical, Electronic and Information Engineering, University of Bologna, I-40136 Bologna, Italy
| | - Cristiano Cuppini
- Department of Electrical, Electronic and Information Engineering, University of Bologna, I-40136 Bologna, Italy
| | - Elisa Magosso
- Department of Electrical, Electronic and Information Engineering, University of Bologna, I-40136 Bologna, Italy
| |
Collapse
|
96
|
Kabbaligere R, Lee BC, Layne CS. Balancing sensory inputs: Sensory reweighting of ankle proprioception and vision during a bipedal posture task. Gait Posture 2017; 52:244-250. [PMID: 27978501 DOI: 10.1016/j.gaitpost.2016.12.009] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/04/2016] [Revised: 12/01/2016] [Accepted: 12/06/2016] [Indexed: 02/02/2023]
Abstract
During multisensory integration, it has been proposed that the central nervous system (CNS) assigns a weight to each sensory input through a process called sensory reweighting. The outcome of this integration process is a single percept that is used to control posture. The main objective of this study was to determine the interaction between ankle proprioception and vision during sensory integration when the two inputs provide conflicting sensory information pertaining to the direction of body sway. Sensory conflict was created by using bilateral Achilles tendon vibration and contracting visual flow, which produced body sway in opposing directions when applied independently. Vibration was applied at 80 Hz with 1 mm amplitude, and the visual flow consisted of a virtual reality scene with concentric rings retreating at 3 m/s. Body sway elicited by the stimuli individually and in combination was evaluated in 10 healthy young adults by analyzing center of pressure (COP) displacement and lower limb kinematics. The magnitude of COP displacement produced when vibration and visual flow were combined was found to be less than the algebraic sum of the COP displacements produced by the stimuli when applied individually. This suggests that multisensory integration is not merely an algebraic summation of individual cues. Instead, the observed response might result from a weighted combination process in which the weight attached to each cue is directly proportional to the cue's relative reliability. The moderating effect of visual flow on the postural instability produced by vibration points to the potential use of controlled visual flow for balance training.
Collapse
Affiliation(s)
- Rakshatha Kabbaligere
- Department of Health and Human Performance, University of Houston, Houston, TX, United States; Center for Neuromotor and Biomechanics Research, University of Houston, Houston, TX, United States.
| | - Beom-Chan Lee
- Department of Health and Human Performance, University of Houston, Houston, TX, United States; Center for Neuromotor and Biomechanics Research, University of Houston, Houston, TX, United States
| | - Charles S Layne
- Department of Health and Human Performance, University of Houston, Houston, TX, United States; Center for Neuromotor and Biomechanics Research, University of Houston, Houston, TX, United States; Center for Neuro-Engineering and Cognitive Science, University of Houston, Houston, TX, United States
| |
Collapse
|
97
|
Fischer BJ, Peña JL. Optimal nonlinear cue integration for sound localization. J Comput Neurosci 2017; 42:37-52. [PMID: 27714569 PMCID: PMC5253079 DOI: 10.1007/s10827-016-0626-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2016] [Revised: 08/10/2016] [Accepted: 09/06/2016] [Indexed: 10/20/2022]
Abstract
Integration of multiple sensory cues can improve performance in detection and estimation tasks. There is an open theoretical question of the conditions under which linear or nonlinear cue combination is Bayes-optimal. We demonstrate that a neural population decoded by a population vector requires nonlinear cue combination to approximate Bayesian inference. Specifically, if cues are conditionally independent, multiplicative cue combination is optimal for the population vector. The model was tested on neural and behavioral responses in the barn owl's sound localization system, where space-specific neurons owe their selectivity to multiplicative tuning to the sound localization cues of interaural phase difference (IPD) and interaural level difference (ILD). We found that IPD and ILD cues are approximately conditionally independent. As a result, the multiplicative selectivity to IPD and ILD of midbrain space-specific neurons permits a population vector to perform Bayesian cue combination. We further show that this model describes the owl's localization behavior in azimuth and elevation. This work provides theoretical justification and experimental evidence supporting the optimality of nonlinear cue combination.
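A compact sketch of the central claim, multiplicative cue combination read out by a population vector, is given below; every tuning parameter is invented for illustration.

```python
import numpy as np

pref = np.linspace(-90, 90, 37)     # preferred azimuths (deg), assumed grid

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s)**2)

def population_vector(ipd_azimuth, ild_azimuth):
    # Conditionally independent cues: each neuron multiplies its Gaussian
    # tuning to the IPD-implied and ILD-implied azimuths, approximating a
    # product of likelihoods; the readout is the response-weighted mean.
    r = gauss(ipd_azimuth, pref, 30.0) * gauss(ild_azimuth, pref, 40.0)
    return np.sum(r * pref) / np.sum(r)

# Conflicting cues: the decoded azimuth lands between them, closer to the
# sharper (more reliable) IPD cue.
print(f"{population_vector(ipd_azimuth=20.0, ild_azimuth=30.0):.1f} deg")
```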
Collapse
Affiliation(s)
- Brian J Fischer
- Department of Mathematics, Seattle University, 901 12th Ave, Seattle, WA, 98122, USA.
| | - Jose Luis Peña
- Department of Neuroscience, Albert Einstein College of Medicine, 1410 Pelham Parkway South, Bronx, NY, 10461, USA
| |
Collapse
|
98
|
Chotsrisuparat C, Koning A, Jacobs R, van Lier R. Auditory Rhythms Influence Judged Time to Contact of an Occluded Moving Object. Multisens Res 2017. [DOI: 10.1163/22134808-00002592] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
We studied the expected moment of reappearance of a moving object after it disappeared from sight. In particular, we investigated whether auditory rhythms influence time to contact (TTC) judgments. Using displays in which a moving disk disappears behind an occluder, we examined whether an accompanying auditory rhythm influences the expected TTC of the occluded moving object. We manipulated a baseline auditory rhythm — consisting of equal sound and pause durations — in two ways: either the pause durations or the sound durations were increased to create slower rhythms. Participants had to press a button at the moment they expected the disk to reappear. Variations in pause duration (Experiments 1 and 2) affected expected TTC, in contrast to variations in sound duration (Experiment 3). These results show that auditory rhythms affect the expected reappearance of an occluded moving object and suggest that temporal auditory grouping is an important factor in TTC judgments.
Collapse
Affiliation(s)
- Chayada Chotsrisuparat
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
| | - Arno Koning
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
| | - Richard Jacobs
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
| | - Rob van Lier
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
| |
Collapse
|
99
|
Orendorff EE, Kalesinskas L, Palumbo RT, Albert MV. Bayesian Analysis of Perceived Eye Level. Front Comput Neurosci 2016; 10:135. [PMID: 28018204 PMCID: PMC5156681 DOI: 10.3389/fncom.2016.00135] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2016] [Accepted: 12/01/2016] [Indexed: 12/03/2022] Open
Abstract
To accurately perceive the world, people must efficiently combine internal beliefs and external sensory cues. We introduce a Bayesian framework that explains the role of internal balance cues and visual stimuli on perceived eye level (PEL)—a self-reported measure of elevation angle. This framework provides a single, coherent model explaining a set of experimentally observed PEL over a range of experimental conditions. Further, it provides a parsimonious explanation for the additive effect of low fidelity cues as well as the averaging effect of high fidelity cues, as also found in other Bayesian cue combination psychophysical studies. Our model accurately estimates the PEL and explains the form of previous equations used in describing PEL behavior. Most importantly, the proposed Bayesian framework for PEL is more powerful than previous behavioral modeling; it permits behavioral estimation in a wider range of cue combination and perceptual studies than models previously reported.
Collapse
Affiliation(s)
- Elaine E Orendorff
- École des Neurosciences de Paris, Université Pierre et Marie Curie, Paris, France; Department of Biology, Loyola University Chicago, Chicago, IL, USA
| | - Laurynas Kalesinskas
- Department of Biology, Loyola University Chicago, Chicago, IL, USA; Bioinformatics Program, Loyola University Chicago, Chicago, IL, USA
| | - Robert T Palumbo
- Department of Psychology, Loyola University Chicago, Chicago, IL, USA; Department of Medical and Social Sciences, Northwestern University, Chicago, IL, USA
| | - Mark V Albert
- Bioinformatics Program, Loyola University Chicago, Chicago, IL, USA; Department of Computer Science, Loyola University Chicago, Chicago, IL, USA
| |
Collapse
|
100
|
Mendonça C, Mandelli P, Pulkki V. Modeling the Perception of Audiovisual Distance: Bayesian Causal Inference and Other Models. PLoS One 2016; 11:e0165391. [PMID: 27959919 PMCID: PMC5154506 DOI: 10.1371/journal.pone.0165391] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2016] [Accepted: 10/11/2016] [Indexed: 11/23/2022] Open
Abstract
Studies of audiovisual perception of distance are rare. Here, visual and auditory cue interactions in distance perception are tested against several multisensory models, including a modified causal inference model that predicts the full distributions of estimates. In our study, the audiovisual perception of distance was better explained overall by Bayesian causal inference than by other traditional models, such as sensory dominance, mandatory integration, and no interaction. Causal inference resolved with probability matching yielded the best fit to the data. Finally, we propose that sensory weights can also be estimated from causal inference. The analysis of the sensory weights allows us to obtain windows within which there is an interaction between the audiovisual stimuli. We find that the visual stimulus always contributes more than 80% to the perception of visual distance. The visual stimulus also contributes more than 50% to the perception of auditory distance, but only within a mobile window of interaction, which ranges from 1 to 4 m.
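For readers who want the model class in executable form, here is a minimal Bayesian causal-inference sketch with probability matching, following the standard formulation (Körding et al., 2007); all noise and prior parameters are assumptions, not the paper's fits.

```python
import numpy as np

rng = np.random.default_rng(1)

def ci_estimate(x_a, x_v, s_a=1.0, s_v=0.3, mu_p=2.0, s_p=2.0, p_c=0.5):
    """Auditory distance estimate under causal inference with probability matching."""
    # Likelihood of the measurement pair under a single common cause
    var1 = s_a**2 * s_v**2 + s_a**2 * s_p**2 + s_v**2 * s_p**2
    L1 = np.exp(-0.5 * ((x_a - x_v)**2 * s_p**2 + (x_a - mu_p)**2 * s_v**2
                        + (x_v - mu_p)**2 * s_a**2) / var1) / (2 * np.pi * np.sqrt(var1))
    # Likelihood under two independent causes
    var_a, var_v = s_a**2 + s_p**2, s_v**2 + s_p**2
    L2 = (np.exp(-0.5 * (x_a - mu_p)**2 / var_a - 0.5 * (x_v - mu_p)**2 / var_v)
          / (2 * np.pi * np.sqrt(var_a * var_v)))
    post_c = L1 * p_c / (L1 * p_c + L2 * (1 - p_c))    # P(common cause | data)
    # Conditional estimates of the auditory distance
    s_hat_c = ((x_a / s_a**2 + x_v / s_v**2 + mu_p / s_p**2)
               / (1 / s_a**2 + 1 / s_v**2 + 1 / s_p**2))
    s_hat_i = (x_a / s_a**2 + mu_p / s_p**2) / (1 / s_a**2 + 1 / s_p**2)
    # Probability matching: sample the causal structure trial by trial
    return s_hat_c if rng.random() < post_c else s_hat_i

print(ci_estimate(x_a=3.0, x_v=2.8))   # nearby cues: usually integrated
print(ci_estimate(x_a=3.0, x_v=0.5))   # distant cues: usually segregated
```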
Collapse
Affiliation(s)
- Catarina Mendonça
- Department of Signal Processing and Acoustics, Aalto University, Espoo, Finland
| | - Pietro Mandelli
- School of Industrial and Information Engineering, Polytechnic University of Milan, Milan, Italy
| | - Ville Pulkki
- Department of Signal Processing and Acoustics, Aalto University, Espoo, Finland
| |
Collapse
|