51. Niederleitner B, Gutierrez-Ibanez C, Krabichler Q, Weigel S, Luksch H. A novel relay nucleus between the inferior colliculus and the optic tectum in the chicken (Gallus gallus). J Comp Neurol 2016; 525:513-534. [PMID: 27434677] [DOI: 10.1002/cne.24082]
Abstract
Processing multimodal sensory information is vital for behaving animals in many contexts. The barn owl, an auditory specialist, is a classic model for studying multisensory integration. In the barn owl, spatial auditory information is conveyed to the optic tectum (TeO) by a direct projection from the external nucleus of the inferior colliculus (ICX). In contrast, evidence of an integration of visual and auditory information in auditory generalist avian species is completely lacking. In particular, it is not known whether in auditory generalist species the ICX projects to the TeO at all. Here we use various retrograde and anterograde tracing techniques both in vivo and in vitro, intracellular fillings of neurons in vitro, and whole-cell patch recordings to characterize the connectivity between ICX and TeO in the chicken. We found that there is a direct projection from ICX to the TeO in the chicken, although this is small and only to the deeper layers (layers 13-15) of the TeO. However, we found a relay area interposed among the IC, the TeO, and the isthmic complex that receives strong synaptic input from the ICX and projects broadly upon the intermediate and deep layers of the TeO. This area is an external portion of the formatio reticularis lateralis (FRLx). In addition to the projection to the TeO, cells in FRLx send, via collaterals, descending projections through tectopontine-tectoreticular pathways. This newly described connection from the inferior colliculus to the TeO provides a solid basis for visual-auditory integration in an auditory generalist bird. J. Comp. Neurol. 525:513-534, 2017. © 2016 Wiley Periodicals, Inc.
Affiliation(s)
- Bertram Niederleitner
- Lehrstuhl für Zoologie, Technische Universität München, 85354, Freising-Weihenstephan, Germany
- Quirin Krabichler
- Lehrstuhl für Zoologie, Technische Universität München, 85354, Freising-Weihenstephan, Germany
- Stefan Weigel
- Lehrstuhl für Zoologie, Technische Universität München, 85354, Freising-Weihenstephan, Germany
- Harald Luksch
- Lehrstuhl für Zoologie, Technische Universität München, 85354, Freising-Weihenstephan, Germany
52. Cue Reliability Represented in the Shape of Tuning Curves in the Owl's Sound Localization System. J Neurosci 2016; 36:2101-2110. [PMID: 26888922] [DOI: 10.1523/JNEUROSCI.3753-15.2016]
Abstract
Optimal use of sensory information requires that the brain estimates the reliability of sensory cues, but the neural correlate of cue reliability relevant for behavior is not well defined. Here, we addressed this issue by examining how the reliability of a spatial cue influences neuronal responses and behavior in the owl's auditory system. We show that the firing rate and spatial selectivity changed with cue reliability due to the mechanisms generating the tuning to the sound localization cue. We found that the correlated variability among neurons strongly depended on the shape of the tuning curves. Finally, we demonstrated that the change in the neurons' selectivity was necessary and sufficient for a network of stochastic neurons to predict behavior when sensory cues were corrupted with noise. This study demonstrates that the shape of tuning curves can stand alone as a coding dimension of environmental statistics.

SIGNIFICANCE STATEMENT In natural environments, sensory cues are often corrupted by noise and are therefore unreliable. To make the best decisions, the brain must estimate the degree to which a cue can be trusted. The behaviorally relevant neural correlates of cue reliability are debated. In this study, we used the barn owl's sound localization system to address this question. We demonstrated that the mechanisms that account for spatial selectivity also explained how neural responses changed with degraded signals. This allowed for the neurons' selectivity to capture cue reliability, influencing the population readout commanding the owl's sound-orienting behavior.
53. Tong J, Ngo V, Goldreich D. Tactile length contraction as Bayesian inference. J Neurophysiol 2016; 116:369-379. [PMID: 27121574] [DOI: 10.1152/jn.00029.2016]
Abstract
To perceive, the brain must interpret stimulus-evoked neural activity. This is challenging: The stochastic nature of the neural response renders its interpretation inherently uncertain. Perception would be optimized if the brain used Bayesian inference to interpret inputs in light of expectations derived from experience. Bayesian inference would improve perception on average but cause illusions when stimuli violate expectation. Intriguingly, tactile, auditory, and visual perception are all prone to length contraction illusions, characterized by the dramatic underestimation of the distance between punctate stimuli delivered in rapid succession; the origin of these illusions has been mysterious. We previously proposed that length contraction illusions occur because the brain interprets punctate stimulus sequences using Bayesian inference with a low-velocity expectation. A novel prediction of our Bayesian observer model is that length contraction should intensify if stimuli are made more difficult to localize. Here we report a tactile psychophysical study that tested this prediction. Twenty humans compared two distances on the forearm: a fixed reference distance defined by two taps with 1-s temporal separation and an adjustable comparison distance defined by two taps with temporal separation t ≤ 1 s. We observed significant length contraction: As t was decreased, participants perceived the two distances as equal only when the comparison distance was made progressively greater than the reference distance. Furthermore, the use of weaker taps significantly enhanced participants' length contraction. These findings confirm the model's predictions, supporting the view that the spatiotemporal percept is a best estimate resulting from a Bayesian inference process.
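The Bayesian observer model tested here reduces, under Gaussian assumptions, to a closed-form shrinkage rule. The sketch below is an illustrative reconstruction, not the authors' implementation; `sigma_s` (tap localization noise) and `sigma_v` (width of the low-velocity prior) are invented values.

```python
# Minimal Gaussian sketch of the Bayesian observer described above; the
# noise parameters are illustrative, not the paper's fitted values.
# Each tap position is sensed with noise sigma_s, so the measured
# separation has variance 2*sigma_s**2. The prior expects low speed
# v = L/t (zero-mean Gaussian with width sigma_v), i.e. L ~ N(0, sigma_v*t).
# The product of these two Gaussians yields a shrunken MAP estimate:
#   L_hat = L_measured / (1 + 2*sigma_s**2 / (sigma_v**2 * t**2))

def perceived_length(l_measured, t, sigma_s=1.0, sigma_v=10.0):
    """MAP estimate of tap separation under a low-velocity prior."""
    shrinkage = 1.0 + 2.0 * sigma_s**2 / (sigma_v**2 * t**2)
    return l_measured / shrinkage

l = 10.0  # measured separation (cm)
print(perceived_length(l, t=1.0))                # mild contraction
print(perceived_length(l, t=0.1))                # stronger at short intervals
print(perceived_length(l, t=0.1, sigma_s=2.0))   # weaker taps: noisier, stronger still
```

This toy rule reproduces both qualitative predictions the study confirms: contraction intensifies as the inter-tap interval shrinks and as the taps become harder to localize.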
Affiliation(s)
- Jonathan Tong
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
- Vy Ngo
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
- Daniel Goldreich
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada; McMaster Integrative Neuroscience Discovery and Study, Hamilton, Ontario, Canada; and McMaster University Origins Institute, Hamilton, Ontario, Canada
54. Thakur CS, Afshar S, Wang RM, Hamilton TJ, Tapson J, van Schaik A. Bayesian Estimation and Inference Using Stochastic Electronics. Front Neurosci 2016; 10:104. [PMID: 27047326] [PMCID: PMC4796016] [DOI: 10.3389/fnins.2016.00104]
Abstract
In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the probability of external distractors (noise) interfering with the observations. In the second implementation, referred to as Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers, due to low noise margins, the effect of high-energy cosmic rays, and low supply voltages. In our framework, the flipping of random individual bits does not affect system performance because information is encoded in a bit stream.
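The Bayesian recursive equation that the BEAST hardware solves can be sketched in software; the grid size, transition model, and sensor noise below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Software sketch of the recursive Bayesian tracking step that the BEAST
# hardware implements; all model parameters here are illustrative.
N = 20                                   # discrete target positions
positions = np.arange(N)

# Transition model: target mostly stays put, sometimes steps left/right.
T = np.zeros((N, N))
for i in range(N):
    for j, p in ((i - 1, 0.15), (i, 0.7), (i + 1, 0.15)):
        T[i, j % N] += p

def observation_likelihood(z, sigma=1.5):
    """Noisy sensor: Gaussian likelihood around the reported position z."""
    L = np.exp(-0.5 * ((positions - z) / sigma) ** 2)
    return L / L.sum()

def bayes_step(belief, z):
    """One predict/update cycle of the Bayesian recursive equation."""
    predicted = belief @ T                 # predict through transition model
    posterior = predicted * observation_likelihood(z)
    return posterior / posterior.sum()     # normalize

belief = np.full(N, 1.0 / N)               # uniform initial belief
for z in [5, 6, 6, 7, 8]:                  # noisy observations of a moving target
    belief = bayes_step(belief, z)

print(positions[np.argmax(belief)])        # MAP estimate near the last observation
```

The hardware version additionally learns `T` and the sensor model online from bit streams; this sketch shows only the inference step they feed.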
Affiliation(s)
- Chetan Singh Thakur
- Biomedical Engineering and Neuroscience, The MARCS Institute, Western Sydney University, Sydney, NSW, Australia
- Saeed Afshar
- Biomedical Engineering and Neuroscience, The MARCS Institute, Western Sydney University, Sydney, NSW, Australia
- Runchun M Wang
- Biomedical Engineering and Neuroscience, The MARCS Institute, Western Sydney University, Sydney, NSW, Australia
- Tara J Hamilton
- Biomedical Engineering and Neuroscience, The MARCS Institute, Western Sydney University, Sydney, NSW, Australia
- Jonathan Tapson
- Biomedical Engineering and Neuroscience, The MARCS Institute, Western Sydney University, Sydney, NSW, Australia
- André van Schaik
- Biomedical Engineering and Neuroscience, The MARCS Institute, Western Sydney University, Sydney, NSW, Australia
55. Odegaard B, Wozny DR, Shams L. Biases in Visual, Auditory, and Audiovisual Perception of Space. PLoS Comput Biol 2015; 11:e1004649. [PMID: 26646312] [PMCID: PMC4672909] [DOI: 10.1371/journal.pcbi.1004649]
Abstract
Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. 
Therefore, multisensory integration not only improves the precision of perceptual estimates, but also the accuracy.
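When the causal-inference model infers a common cause, the fused estimate is the familiar reliability-weighted average, which is why the more reliable visual cue dominates. A minimal sketch (the noise values are hypothetical, not the paper's fitted parameters):

```python
# Sketch of the common-cause branch of Bayesian causal inference:
# minimum-variance (reliability-weighted) combination of two cues.
# sigma_v and sigma_a below are illustrative, not fitted values.

def fuse(x_v, x_a, sigma_v=2.0, sigma_a=8.0):
    """Reliability-weighted average of a visual and an auditory estimate."""
    w_v = 1.0 / sigma_v**2
    w_a = 1.0 / sigma_a**2
    return (w_v * x_v + w_a * x_a) / (w_v + w_a)

# Vision is more reliable here, so the fused estimate sits close to the
# visual cue, mirroring the (attenuated) visual dominance the study reports.
print(fuse(x_v=0.0, x_a=10.0))
```

Because each unisensory estimate already carries its own bias, fusing them pulls the percept toward the more reliable, visually biased estimate while diluting it with the auditory one, consistent with the reduced bias observed on integrated trials.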
Affiliation(s)
- Brian Odegaard
- Department of Psychology, University of California, Los Angeles, Los Angeles, California, United States of America
- David R. Wozny
- Department of Psychology, University of California, Los Angeles, Los Angeles, California, United States of America
- Ladan Shams
- Department of Psychology, University of California, Los Angeles, Los Angeles, California, United States of America
- Department of BioEngineering, University of California, Los Angeles, Los Angeles, California, United States of America
- Neuroscience Interdepartmental Program, University of California, Los Angeles, Los Angeles, California, United States of America
56. Bao S. Perceptual learning in the developing auditory cortex. Eur J Neurosci 2015; 41:718-724. [PMID: 25728188] [DOI: 10.1111/ejn.12826]
Abstract
A hallmark of the developing auditory cortex is the heightened plasticity in the critical period, during which acoustic inputs can indelibly alter cortical function. However, not all sounds in the natural acoustic environment are ethologically relevant. How does the auditory system resolve relevant sounds from the acoustic environment in such an early developmental stage when most associative learning mechanisms are not yet fully functional? What can the auditory system learn from one of the most important classes of sounds, animal vocalizations? How does naturalistic acoustic experience shape cortical sound representation and perception? To answer these questions, we need to consider an unusual strategy, statistical learning, where what the system needs to learn is embedded in the sensory input. Here, I will review recent findings on how certain statistical structures of natural animal vocalizations shape auditory cortical acoustic representations, and how cortical plasticity may underlie learned categorical sound perception. These results will be discussed in the context of human speech perception.
Affiliation(s)
- Shaowen Bao
- Department of Physiology, University of Arizona, Tucson, AZ, 85724, USA
57.
Abstract
Capturing nature's statistical structure in behavioral responses is at the core of the ability to function adaptively in the environment. Bayesian statistical inference describes how sensory and prior information can be combined optimally to guide behavior. An outstanding open question is how neural coding supports Bayesian inference, including how sensory cues are optimally integrated over time. Here we address what neural response properties allow a neural system to perform Bayesian prediction, i.e., predicting where a source will be in the near future given sensory information and prior assumptions. The work here shows that the population vector decoder will perform Bayesian prediction when the receptive fields of the neurons encode the target dynamics with shifting receptive fields. We test the model using the system that underlies sound localization in barn owls. Neurons in the owl's midbrain show shifting receptive fields for moving sources that are consistent with the predictions of the model. We predict that neural populations can be specialized to represent the statistics of dynamic stimuli to allow for a vector read-out of Bayes-optimal predictions.
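The population-vector read-out central to this model can be sketched as follows; the tuning width and receptive-field shift are invented values, with the shift standing in for the target-dynamics encoding described in the abstract.

```python
import numpy as np

# Sketch of a population-vector read-out performing prediction when
# receptive fields shift with target motion. All parameters illustrative.
prefs = np.linspace(-90, 90, 181)        # preferred directions (deg)
sigma = 10.0                             # tuning width (deg)

def responses(stim, shift=0.0):
    """Gaussian tuning; a nonzero shift displaces each receptive field
    ahead of the moving stimulus (as reported for owl midbrain neurons)."""
    return np.exp(-0.5 * ((stim + shift - prefs) / sigma) ** 2)

def population_vector(r):
    """Vector read-out: response-weighted mean of preferred directions."""
    return np.sum(r * prefs) / np.sum(r)

stim = 20.0       # current source direction
shift = 5.0       # receptive-field shift induced by target motion
print(population_vector(responses(stim, shift)))  # read-out leads the stimulus
```

With shifted receptive fields the same linear read-out returns a position ahead of the current one, which is the sense in which the decoder outputs a prediction rather than the instantaneous location.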
Affiliation(s)
- Weston Cox
- Department of Electrical and Computer Engineering, Seattle University, Seattle, Washington, United States of America
- Brian J. Fischer
- Department of Mathematics, Seattle University, Seattle, Washington, United States of America
58. Benichoux V, Fontaine B, Franken TP, Karino S, Joris PX, Brette R. Neural tuning matches frequency-dependent time differences between the ears. eLife 2015; 4:e06072. [PMID: 25915620] [PMCID: PMC4439524] [DOI: 10.7554/eLife.06072]
Abstract
The time it takes a sound to travel from source to ear differs between the ears and creates an interaural delay. It varies systematically with spatial direction and is generally modeled as a pure time delay, independent of frequency. In acoustical recordings, we found that interaural delay varies with frequency at a fine scale. In physiological recordings of midbrain neurons sensitive to interaural delay, we found that preferred delay also varies with sound frequency. Similar observations reported earlier were not incorporated in a functional framework. We find that the frequency dependences of acoustical and physiological interaural delays are matched in key respects. This suggests that binaural neurons are tuned to acoustical features of ecological environments, rather than to fixed interaural delays. Using recordings from the nerve and brainstem we show that this tuning may emerge from neurons detecting coincidences between input fibers that are mistuned in frequency.

When you hear a sound, such as someone calling your name, it is often possible to make a good estimate of where that sound came from. If the sound came from the left, it would reach your left ear before your right ear, and vice versa if the sound originated from your right. The time that passes between the sound reaching each ear is known as the ‘interaural time difference’. Previous research has suggested that specific neurons in the brain respond to specific interaural time differences, and the brain then uses this interaural time difference to locate the sound. Sounds come in various frequencies from high-pitched alarms to low bass tones, and how a neuron responds to interaural time differences appears to change according to the frequency of the sound being played.
For example, a given neuron may respond to a 200-microsecond interaural time difference when a tone is played at a high frequency, but show no response to this time difference when the tone is played at a low frequency. To date, researchers had been unable to explain why this occurs. Here, Benichoux et al. investigated this topic by playing a variety of sounds to anaesthetized cats. Electrodes were used to record the responses of individual neurons in the cats' brains, and the properties of the sound waves that reached the cats' ears were also recorded. These experiments revealed that the time it took a sound to travel from a location to each of the cats' ears, and consequently the interaural time difference, depended on whether it was a high-pitched or a low-pitched sound. This happened because different properties of the environment, such as the angle of the cat's head, affected specific frequencies in different ways. As expected, the neurons' responses were also affected by sound frequency. Indeed, the neurons' behaviour mirrored that of the sound waves themselves. This shows that neurons do not, as previously thought, simply react to specific interaural differences. Instead, these neurons use both sound frequency and interaural time differences to produce a thorough approximation of the sound's location. The precise mechanisms that generate this brain adaptation to the animal's environment remain to be determined.
Affiliation(s)
- Victor Benichoux
- Institut d'Etudes de la Cognition, Ecole Normale Supérieure, Paris, France
- Bertrand Fontaine
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, New York, United States
- Tom P Franken
- Laboratory of Auditory Neurophysiology, University of Leuven, Leuven, Belgium
- Shotaro Karino
- Laboratory of Auditory Neurophysiology, University of Leuven, Leuven, Belgium
- Philip X Joris
- Laboratory of Auditory Neurophysiology, University of Leuven, Leuven, Belgium
- Romain Brette
- Institut d'Etudes de la Cognition, Ecole Normale Supérieure, Paris, France
59. A Bayesian approach to person perception. Conscious Cogn 2015; 36:406-413. [PMID: 25864593] [DOI: 10.1016/j.concog.2015.03.015]
Abstract
Here we propose a Bayesian approach to person perception, outlining the theoretical position and a methodological framework for testing the predictions experimentally. We use the term person perception to refer not only to the perception of others' personal attributes such as age and sex but also to the perception of social signals such as direction of gaze and emotional expression. The Bayesian approach provides a formal description of the way in which our perception combines current sensory evidence with prior expectations about the structure of the environment. Such expectations can lead to unconscious biases in our perception that are particularly evident when sensory evidence is uncertain. We illustrate the ideas with reference to our recent studies on gaze perception which show that people have a bias to perceive the gaze of others as directed towards themselves. We also describe a potential application to the study of the perception of a person's sex, in which a bias towards perceiving males is typically observed.
60.
Abstract
Organisms must act in the face of sensory, motor, and reward uncertainty stemming from a pandemonium of stochasticity and missing information. In many tasks, organisms can make better decisions if they have at their disposal a representation of the uncertainty associated with task-relevant variables. We formalize this problem using Bayesian decision theory and review recent behavioral and neural evidence that the brain may use knowledge of uncertainty, confidence, and probability.
Affiliation(s)
- Wei Ji Ma
- Center for Neural Science and Department of Psychology, New York University, New York, New York 10003
61. Anchisi D, Zanon M. A Bayesian perspective on sensory and cognitive integration in pain perception and placebo analgesia. PLoS One 2015; 10:e0117270. [PMID: 25664586] [PMCID: PMC4321992] [DOI: 10.1371/journal.pone.0117270]
Abstract
The placebo effect is a component of any response to a treatment (effective or inert), but we still do not know why it exists. We propose that placebo analgesia is a facet of pain perception, others being the modulating effects of emotions, cognition, and past experience, and we suggest that a computational understanding of pain may provide a unifying explanation of these phenomena. Here we show how Bayesian decision theory can account for such features, and we describe a model of pain that we tested against experimental data. Our model not only accounts for placebo analgesia, but also predicts that learning can affect pain perception in other unexpected ways, which experimental evidence supports. Finally, the model can also reflect the strategies used by pain perception, showing that modulation by disparate factors is intrinsic to the pain process.
Affiliation(s)
- Davide Anchisi
- Department of Medical and Biological Sciences, Università degli Studi di Udine, Udine, Italy
- Marco Zanon
- Department of Medical and Biological Sciences, Università degli Studi di Udine, Udine, Italy
62. Neural representation of probabilities for Bayesian inference. J Comput Neurosci 2015; 38:315-323. [PMID: 25561333] [DOI: 10.1007/s10827-014-0545-1]
Abstract
Bayesian models are often successful in describing perception and behavior, but the neural representation of probabilities remains in question. There are several distinct proposals for the neural representation of probabilities, but they have not been directly compared in an example system. Here we consider three models: a non-uniform population code where the stimulus-driven activity and distribution of preferred stimuli in the population represent a likelihood function and a prior, respectively; the sampling hypothesis which proposes that the stimulus-driven activity over time represents a posterior probability and that the spontaneous activity represents a prior; and the class of models which propose that a population of neurons represents a posterior probability in a distributed code. It has been shown that the non-uniform population code model matches the representation of auditory space generated in the owl's external nucleus of the inferior colliculus (ICx). However, the alternative models have not been tested, nor have the three models been directly compared in any system. Here we tested the three models in the owl's ICx. We found that spontaneous firing rate and the average stimulus-driven response of these neurons were not consistent with predictions of the sampling hypothesis. We also found that neural activity in ICx under varying levels of sensory noise did not reflect a posterior probability. On the other hand, the responses of ICx neurons were consistent with the non-uniform population code model. We further show that Bayesian inference can be implemented in the non-uniform population code model using one spike per neuron when the population is large and is thus able to support the rapid inference that is necessary for sound localization.
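The non-uniform population code model that the data supported can be demonstrated in a few lines: when preferred stimuli are sampled non-uniformly, their density acts as a prior on a simple spike-weighted read-out. The tuning parameters and prior width below are invented for illustration.

```python
import numpy as np

# Demo of the non-uniform population code idea: the density of preferred
# stimuli acts as a prior. All parameters are illustrative, not fitted.
rng = np.random.default_rng(0)

prefs = rng.normal(0.0, 20.0, size=500)   # preferred directions (deg), dense near 0
sigma_tc = 15.0                           # tuning-curve width (deg)

def population_estimate(stim):
    """Response-weighted read-out from Poisson spike counts."""
    rates = 5.0 * np.exp(-0.5 * ((stim - prefs) / sigma_tc) ** 2)
    spikes = rng.poisson(rates)
    return np.sum(spikes * prefs) / np.sum(spikes)

# A peripheral source (40 deg) is read out closer to the front, as if a
# frontal prior were applied; the Gaussian product predicts roughly
# 40 * 20**2 / (20**2 + 15**2) ~= 25.6 deg.
est = float(np.mean([population_estimate(40.0) for _ in range(200)]))
print(est)
```

The pull toward the densely sampled region emerges from the code's geometry alone, with no explicit prior term, which is the sense in which the distribution of preferred stimuli "represents" the prior.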
63. Bar NS, Skogestad S, Marçal JM, Ulanovsky N, Yovel Y. A sensory-motor control model of animal flight explains why bats fly differently in light versus dark. PLoS Biol 2015; 13:e1002046. [PMID: 25629809] [PMCID: PMC4309566] [DOI: 10.1371/journal.pbio.1002046]
Abstract
Animal flight requires fine motor control. However, it is unknown how flying animals rapidly transform noisy sensory information into adequate motor commands. Here we developed a sensorimotor control model that explains vertebrate flight guidance with high fidelity. This simple model accurately reconstructed complex trajectories of bats flying in the dark. The model implies that in order to apply appropriate motor commands, bats have to estimate not only the angle-to-target, as was previously assumed, but also the angular velocity ("proportional-derivative" controller). Next, we conducted experiments in which bats flew in light conditions. When using vision, bats altered their movements, reducing the flight curvature. This change was explained by the model via reduction in sensory noise under vision versus pure echolocation. These results imply a surprising link between sensory noise and movement dynamics. We propose that this sensory-motor link is fundamental to motion control in rapidly moving animals under different sensory conditions, on land, sea, or air.
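The "proportional-derivative" controller the model proposes can be illustrated with a toy closed-loop simulation: the turning command depends on both the angle-to-target and its rate of change, and noisier sensing leaves a larger residual error. Gains, noise level, and dynamics below are invented for illustration, not the paper's fitted values.

```python
import numpy as np

# Toy sketch of a proportional-derivative steering law; all numbers
# are illustrative, not taken from the paper.
rng = np.random.default_rng(1)

def simulate(kp=4.0, kd=1.0, sensory_noise=0.0, steps=1000, dt=0.01):
    """Return the final angle-to-target error after closed-loop pursuit."""
    theta, omega = 0.8, 0.0                  # angle-to-target (rad), its rate
    for _ in range(steps):
        measured = theta + rng.normal(0.0, sensory_noise)  # noisy sensing
        command = -kp * measured - kd * omega              # PD control
        omega += command * dt
        theta += omega * dt
    return abs(theta)

print(simulate())                    # error driven near zero without noise
print(simulate(sensory_noise=0.3))   # noisier senses leave residual error
```

In the paper's account, vision reduces the sensory noise term relative to echolocation alone, so the same controller produces straighter, lower-curvature flight; this toy loop shows the corresponding noise-to-error link.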
Affiliation(s)
- Nadav S. Bar
- Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Sigurd Skogestad
- Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Jose M. Marçal
- Institute for Telecommunications, University of Lisbon, Lisbon, Portugal
- Nachum Ulanovsky
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Yossi Yovel
- Department of Zoology, Faculty of Life Sciences, and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
64. Cazettes F, Fischer BJ, Pena JL. Spatial cue reliability drives frequency tuning in the barn owl's midbrain. eLife 2014; 3:e04854. [PMID: 25531067] [PMCID: PMC4291741] [DOI: 10.7554/eLife.04854]
Abstract
The robust representation of the environment from unreliable sensory cues is vital for the efficient function of the brain. However, how the neural processing captures the most reliable cues is unknown. The interaural time difference (ITD) is the primary cue to localize sound in horizontal space. ITD is encoded in the firing rate of neurons that detect interaural phase difference (IPD). Due to the filtering effect of the head, IPD for a given location varies depending on the environmental context. We found that, in barn owls, at each location there is a frequency range where the head filtering yields the most reliable IPDs across contexts. Remarkably, the frequency tuning of space-specific neurons in the owl's midbrain varies with their preferred sound location, matching the range that carries the most reliable IPD. Thus, frequency tuning in the owl's space-specific neurons reflects a higher-order feature of the code that captures cue reliability.
Affiliation(s)
- Fanny Cazettes
- Department of Neuroscience, Albert Einstein College of Medicine, New York, United States
- Brian J Fischer
- Department of Mathematics, Seattle University, Seattle, United States
- Jose L Pena
- Department of Neuroscience, Albert Einstein College of Medicine, New York, United States
65. McColgan T, Shah S, Köppl C, Carr C, Wagner H. A functional circuit model of interaural time difference processing. J Neurophysiol 2014; 112:2850-2864. [PMID: 25185809] [PMCID: PMC4254871] [DOI: 10.1152/jn.00484.2014]
Abstract
Inputs from the two sides of the brain interact to create maps of interaural time difference (ITD) in the nucleus laminaris of birds. How inputs from each side are matched with high temporal precision in ITD-sensitive circuits is unknown, given the differences in input path lengths from each side. To understand this problem in birds, we modeled the geometry of the input axons and their corresponding conduction velocities and latencies. Consistent with existing physiological data, we assumed a common latency up to the border of nucleus laminaris. We analyzed two biological implementations of the model, the single ITD map in chickens and the multiple maps of ITD in barn owls. For binaural inputs, since ipsi- and contralateral initial common latencies were very similar, we could restrict adaptive regulation of conduction velocity to within the nucleus. Other model applications include the simultaneous derivation of multiple conduction velocities from one set of measurements and the demonstration that contours with the same ITD cannot be parallel to the border of nucleus laminaris in the owl. Physiological tests of the predictions of the model demonstrate its validity and robustness. This model may have relevance not only for auditory processing but also for other computational tasks that require adaptive regulation of conduction velocity.
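The geometry that this model formalizes can be illustrated in a few lines, using made-up lengths and velocities rather than the fitted parameters: with equal common latencies up to the borders of nucleus laminaris, the ITD represented at a depth d inside the nucleus is set by the conduction times the two inputs accumulate within it.

```python
import numpy as np

# Hypothetical geometry: ipsilateral axons enter NL at depth 0, contralateral
# axons at depth L, and both carry the same common latency up to the border.
L_mm = 1.0          # assumed thickness of nucleus laminaris (mm)
v_mm_per_ms = 3.0   # assumed conduction velocity inside NL (mm/ms = m/s)

depth = np.linspace(0.0, L_mm, 11)                 # position inside NL
t_ipsi = depth / v_mm_per_ms                       # ms from ipsi border
t_contra = (L_mm - depth) / v_mm_per_ms            # ms from contra border
itd_map_us = 1000.0 * (t_contra - t_ipsi)          # ITD mapped at each depth

# A linear map of ITD across depth, spanning +/- L/v (about +/-333 us here);
# rescaling v rescales the whole map, which is why adaptive regulation of
# conduction velocity inside the nucleus can tune the represented ITD range.
```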
Affiliation(s)
- Thomas McColgan, Institute for Biology II, Rheinisch-Westfaelische Technische Hochschule (RWTH) Aachen, Aachen, Germany; Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Sahil Shah, Department of Biology, University of Maryland, College Park, Maryland
- Christine Köppl, Cluster of Excellence "Hearing4all" and Research Center Neurosensory Science, Department of Neuroscience, School of Medicine and Health Science, Carl von Ossietzky University Oldenburg, Oldenburg, Germany
- Catherine Carr, Department of Biology, University of Maryland, College Park, Maryland

66
Kettler L, Wagner H. Influence of double stimulation on sound-localization behavior in barn owls. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2014; 200:1033-44. [PMID: 25352361 DOI: 10.1007/s00359-014-0953-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Received: 07/04/2014] [Revised: 09/01/2014] [Accepted: 10/08/2014] [Indexed: 11/28/2022]
Abstract
Barn owls do not immediately approach a source after they hear a sound, but wait for a second sound before they strike. This represents a gain in striking behavior, because it avoids responses to random incidents. However, the first stimulus is also expected to change the threshold for perceiving the subsequent second sound, thus possibly introducing some costs. We mimicked this situation in a behavioral double-stimulus paradigm utilizing saccadic head turns of owls. The first stimulus served as an adapter, was presented in frontal space, and did not elicit a head turn. The second stimulus, emitted from a peripheral source, elicited the head turn. The time interval between the two stimuli was varied. Data obtained with double stimulation were compared with data collected with a single stimulus presented from the same positions as the second stimulus in the double-stimulus paradigm. Sound-localization performance was quantified by the response latency, accuracy, and precision of the head turns. Response latency increased with double stimuli, while accuracy and precision decreased. The effect depended on the inter-stimulus interval. These results suggest that waiting for a second stimulus may indeed impose costs on sound localization through adaptation, and this reduces the gain obtained by waiting for a second stimulus.
Affiliation(s)
- Lutz Kettler, Department of Zoology and Animal Physiology, Aachen University, Worringerweg 3, 52074 Aachen, Germany

67
Abstract
Binaural sound localization is usually considered a discrimination task, in which interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are used to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. The statistics of binaural cues therefore depend on the acoustic properties and the spatial configuration of the environment. The distributions of naturally encountered cues, and their dependence on the physical properties of an auditory scene, had not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed the empirical cue distributions from each scene. We found that certain properties, such as the spread of the IPD distributions and the overall shape of the ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values, than can be predicted from head filtering properties. To understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by Independent Component Analysis (ICA). The properties of the learned basis functions indicate that in natural conditions the sound waves at each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
Affiliation(s)
- Wiktor Młynarski, Max-Planck Institute for Mathematics in the Sciences, Leipzig, Germany
- Jürgen Jost, Max-Planck Institute for Mathematics in the Sciences, Leipzig, Germany; Santa Fe Institute, Santa Fe, New Mexico, United States of America

68
Fischer BJ, Seidl AH. Resolution of interaural time differences in the avian sound localization circuit - a modeling study. Front Comput Neurosci 2014; 8:99. [PMID: 25206329 PMCID: PMC4143899 DOI: 10.3389/fncom.2014.00099] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Received: 01/23/2014] [Accepted: 08/01/2014] [Indexed: 11/13/2022]
Abstract
Interaural time differences (ITDs) are a main cue for sound localization and sound segregation. A dominant model to study ITD detection is the sound localization circuitry in the avian auditory brainstem. Neurons in nucleus laminaris (NL) receive auditory information from both ears via the avian cochlear nucleus magnocellularis (NM) and compare the relative timing of these inputs. Timing of these inputs is crucial, as ITDs in the microsecond range must be discriminated and encoded. We modeled ITD sensitivity of single NL neurons based on previously published data and determined the minimum resolvable ITD for neurons in NL. The minimum resolvable ITD is too large to allow for discrimination by single NL neurons of naturally occurring ITDs for very low frequencies. For high frequency NL neurons (>1 kHz) our calculated ITD resolutions fall well within the natural range of ITDs and approach values of below 10 μs. We show that different parts of the ITD tuning function offer different resolution in ITD coding, suggesting that information derived from both parts may be used for downstream processing. A place code may be used for sound location at frequencies above 500 Hz, but our data suggest the slope of the ITD tuning curve ought to be used for ITD discrimination by single NL neurons at the lowest frequencies. Our results provide an important measure of the necessary temporal window of binaural inputs for future studies on the mechanisms and development of neuronal computation of temporally precise information in this important system. In particular, our data establish the temporal precision needed for conduction time regulation along NM axons.
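One way to get a feel for the minimum-resolvable-ITD measure is a slope-over-noise sketch with invented numbers (not the published model's parameters): for a cosine-shaped ITD tuning curve, the smallest discriminable ITD step at a d'-of-one criterion is the response standard deviation divided by the local tuning-curve slope, which is why the flanks outperform the peak.

```python
import numpy as np

best_freq_hz = 2000.0                    # hypothetical NL neuron
period_us = 1e6 / best_freq_hz           # 500 us tuning period
itd_us = np.linspace(-240.0, 240.0, 961)
rate = 25.0 * (1.0 + np.cos(2.0 * np.pi * itd_us / period_us))  # spikes/s
rate_sd = 5.0                            # assumed trial-to-trial SD (spikes/s)

slope = np.gradient(rate, itd_us)        # spikes/s per microsecond
with np.errstate(divide="ignore"):
    min_resolvable_us = rate_sd / np.abs(slope)   # d' = 1 criterion

best_resolution_us = float(np.nanmin(min_resolvable_us))
# Resolution is best on the steep flanks of the tuning curve and diverges at
# the peak and trough, where the slope goes to zero.
```

With these toy values the best resolution lands in the tens of microseconds, the order of magnitude the abstract reports for high-frequency neurons.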
Affiliation(s)
- Brian J Fischer, Department of Mathematics, Seattle University, Seattle, WA, USA
- Armin H Seidl, Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology - Head and Neck Surgery, University of Washington, Seattle, WA, USA; Department of Neurology, University of Washington, Seattle, WA, USA

69
Ganguli D, Simoncelli EP. Efficient sensory encoding and Bayesian inference with heterogeneous neural populations. Neural Comput 2014; 26:2103-34. [PMID: 25058702 PMCID: PMC4167880 DOI: 10.1162/neco_a_00638] [Citation(s) in RCA: 96] [Impact Index Per Article: 9.6] [Indexed: 11/04/2022]
Abstract
The efficient coding hypothesis posits that sensory systems maximize information transmitted to the brain about the environment. We develop a precise and testable form of this hypothesis in the context of encoding a sensory variable with a population of noisy neurons, each characterized by a tuning curve. We parameterize the population with two continuous functions that control the density and amplitude of the tuning curves, assuming that the tuning widths vary inversely with the cell density. This parameterization allows us to solve, in closed form, for the information-maximizing allocation of tuning curves as a function of the prior probability distribution of sensory variables. For the optimal population, the cell density is proportional to the prior, such that more cells with narrower tuning are allocated to encode higher-probability stimuli and that each cell transmits an equal portion of the stimulus probability mass. We also compute the stimulus discrimination capabilities of a perceptual system that relies on this neural representation and find that the best achievable discrimination thresholds are inversely proportional to the sensory prior. We examine how the prior information that is implicitly encoded in the tuning curves of the optimal population may be used for perceptual inference and derive a novel decoder, the Bayesian population vector, that closely approximates a Bayesian least-squares estimator that has explicit access to the prior. Finally, we generalize these results to sigmoidal tuning curves, correlated neural variability, and a broader class of objective functions. These results provide a principled embedding of sensory prior information in neural populations and yield predictions that are readily testable with environmental, physiological, and perceptual data.
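The allocation rule at the heart of this result has a compact constructive form, sketched here under simplifying assumptions (a Gaussian prior and equal-probability-mass bins; the paper's objective is more general): placing each cell's preferred stimulus at a quantile of the prior makes cell density proportional to prior probability, with tuning widths inversely proportional to local density.

```python
from statistics import NormalDist
import numpy as np

N = 20
prior = NormalDist(mu=0.0, sigma=1.0)          # hypothetical stimulus prior

# Put each preferred stimulus at the center of an equal-probability-mass bin:
# the density of centers then tracks the prior, so each cell "covers" an
# equal portion (1/N) of the stimulus probability mass.
quantiles = (np.arange(N) + 0.5) / N
centers = np.array([prior.inv_cdf(q) for q in quantiles])

spacing = np.diff(centers)                     # small where the prior is high
widths = np.append(spacing, spacing[-1])       # crude width ~ 1 / density
```

Centers cluster near the prior's mode with narrow tuning, while the tails get few, broad cells, matching the qualitative allocation the abstract describes.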
Affiliation(s)
- Deep Ganguli, Howard Hughes Medical Institute, Center for Neural Science and Courant Institute of Mathematical Sciences, New York University, New York, NY 10003, U.S.A.

70
Wang Y, Gutfreund Y, Peña JL. Coding space-time stimulus dynamics in auditory brain maps. Front Physiol 2014; 5:135. [PMID: 24782781 PMCID: PMC3986518 DOI: 10.3389/fphys.2014.00135] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Received: 12/18/2013] [Accepted: 03/19/2014] [Indexed: 11/21/2022]
Abstract
Sensory maps are often distorted representations of the environment, where ethologically-important ranges are magnified. The implication of a biased representation extends beyond increased acuity for having more neurons dedicated to a certain range. Because neurons are functionally interconnected, non-uniform representations influence the processing of high-order features that rely on comparison across areas of the map. Among these features are time-dependent changes of the auditory scene generated by moving objects. How sensory representation affects high order processing can be approached in the map of auditory space of the owl's midbrain, where locations in the front are over-represented. In this map, neurons are selective not only to location but also to location over time. The tuning to space over time leads to direction selectivity, which is also topographically organized. Across the population, neurons tuned to peripheral space are more selective to sounds moving into the front. The distribution of direction selectivity can be explained by spatial and temporal integration on the non-uniform map of space. Thus, the representation of space can induce biased computation of a second-order stimulus feature. This phenomenon is likely observed in other sensory maps and may be relevant for behavior.
Affiliation(s)
- Yunyan Wang, Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Yoram Gutfreund, The Rappaport Research Institute and Faculty of Medicine, The Technion, Haifa, Israel
- José L Peña, Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA

71
Vonderschen K, Wagner H. Detecting interaural time differences and remodeling their representation. Trends Neurosci 2014; 37:289-300. [DOI: 10.1016/j.tins.2014.03.002] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Received: 12/31/2012] [Revised: 03/06/2014] [Accepted: 03/11/2014] [Indexed: 10/25/2022]
72
Lee DD, Ortega PA, Stocker AA. Dynamic belief state representations. Curr Opin Neurobiol 2014; 25:221-7. [DOI: 10.1016/j.conb.2014.01.018] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Received: 11/07/2013] [Revised: 01/27/2014] [Accepted: 01/31/2014] [Indexed: 10/25/2022]
73
Abstract
The activity of sensory neural populations carries information about the environment. This may be extracted from neural activity using different strategies. In the auditory brainstem, a recent theory proposes that sound location in the horizontal plane is decoded from the relative summed activity of two populations, one in each hemisphere, whereas earlier theories hypothesized that the location was decoded from the identity of the most active cells. We tested the performance of various decoders of neural responses in increasingly complex acoustical situations, including spectrum variations, noise, and sound diffraction. We demonstrate that there is insufficient information in the pooled activity of each hemisphere to estimate sound direction in a reliable way consistent with behavior, whereas robust estimates can be obtained from neural activity by taking into account the heterogeneous tuning of cells. These estimates can still be obtained when only contralateral neural responses are used, consistent with unilateral lesion studies. [DOI: 10.7554/eLife.01312.001]
Having two ears allows animals to localize the source of a sound. For example, barn owls can snatch their prey in complete darkness by relying on sound alone. It has been known for a long time that this ability depends on tiny differences in the sounds that arrive at each ear, including differences in the time of arrival: in humans, for example, sound will arrive at the ear closer to the source up to half a millisecond earlier than it arrives at the other ear. These differences are called interaural time differences. However, the way that the brain processes this information to figure out where the sound came from has been the source of much debate. Several theories have been proposed for how the brain calculates position from interaural time differences. According to the hemispheric theory, the activities of particular binaurally sensitive neurons on each side of the brain are added together: adding signals in this way has been shown to maximize sensitivity to time differences under simple, controlled circumstances. The peak decoding theory proposes that the brain can work out the location of a sound on the basis of which neurons responded most strongly to the sound. Both theories have their potential advantages, and there is evidence in support of each. Now, Goodman et al. have used computational simulations to compare the models under ecologically relevant circumstances. The simulations show that the results predicted by both models are inconsistent with those observed in real animals, and they propose that the brain must use the full pattern of neural responses to calculate the location of a sound. One of the parts of the brain that is responsible for locating sounds is the inferior colliculus. Studies in cats and humans have shown that damage to the inferior colliculus on one side of the brain prevents accurate localization of sounds on the opposite side of the body, but the animals are still able to locate sounds on the same side. This finding is difficult to explain using the hemispheric model, but Goodman et al. show that it can be explained with pattern-based models. [DOI: 10.7554/eLife.01312.002]
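The comparison can be caricatured in a few lines of simulation (a toy model with invented tuning curves, not the article's detailed acoustical setup): a decoder that pools each hemisphere into a single summed rate discards the heterogeneous response pattern that a maximum-likelihood read-out exploits.

```python
import numpy as np

rng = np.random.default_rng(2)

itds = np.linspace(-300.0, 300.0, 61)          # candidate source ITDs (us)
n_cells = 40
best = rng.uniform(-300.0, 300.0, n_cells)     # heterogeneous preferred ITDs
width = rng.uniform(80.0, 160.0, n_cells)
tuning = 1.0 + 20.0 * np.exp(-0.5 * ((itds[:, None] - best) / width) ** 2)

true_idx = 45                                  # true ITD = +150 us
left = best < 0                                # crude hemisphere assignment
expected_diff = tuning[:, ~left].sum(axis=1) - tuning[:, left].sum(axis=1)

ml_err, hemi_err = [], []
for _ in range(200):
    counts = rng.poisson(tuning[true_idx])     # one noisy "trial"
    # Pattern decoder: Poisson maximum likelihood over the full population.
    loglik = np.log(tuning) @ counts - tuning.sum(axis=1)
    ml_err.append(itds[np.argmax(loglik)] - itds[true_idx])
    # Hemispheric decoder: invert the summed right-minus-left rate.
    diff = counts[~left].sum() - counts[left].sum()
    hemi_err.append(itds[np.argmin(np.abs(expected_diff - diff))] - itds[true_idx])

ml_rmse = float(np.sqrt(np.mean(np.square(ml_err))))
hemi_rmse = float(np.sqrt(np.mean(np.square(hemi_err))))
```

In this toy setting the pattern decoder's RMS error is substantially smaller than the hemispheric decoder's, mirroring the article's conclusion that the full pattern of activity carries the usable information.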
Affiliation(s)
- Dan F M Goodman, Laboratoire de Psychologie de la Perception, CNRS and Université Paris Descartes, Paris, France

74
Decoding sound source location and separation using neural population activity patterns. J Neurosci 2013; 33:15837-47. [PMID: 24089491 DOI: 10.1523/jneurosci.2034-13.2013] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.4] [Indexed: 11/21/2022]
Abstract
The strategies by which the central nervous system decodes the properties of sensory stimuli, such as sound source location, from the responses of a population of neurons are a matter of debate. We show, using the average firing rates of neurons in the inferior colliculus (IC) of awake rabbits, that prevailing decoding models of sound localization (summed population activity and the population vector) fail to localize sources accurately due to heterogeneity in azimuth tuning across the population. In contrast, a maximum-likelihood decoder operating on the pattern of activity across the population of neurons in one IC accurately localized sound sources in the contralateral hemifield, consistent with lesion studies, and did so with a precision consistent with rabbit psychophysical performance. The pattern decoder also predicts behavior in response to incongruent localization cues consistent with the long-standing "duplex" theory of sound localization. We further show that the pattern decoder accurately distinguishes two concurrent, spatially separated sources from a single source, consistent with human behavior. Decoder detection of small amounts of source separation directly in front is due to neural sensitivity to the interaural decorrelation of sound, at both low and high frequencies. The distinct patterns of IC activity between single and separated sound sources thereby provide a neural correlate for the ability to segregate and localize sources in everyday, multisource environments.
75
Seriès P, Seitz AR. Learning what to expect (in visual perception). Front Hum Neurosci 2013; 7:668. [PMID: 24187536 PMCID: PMC3807544 DOI: 10.3389/fnhum.2013.00668] [Citation(s) in RCA: 86] [Impact Index Per Article: 7.8] [Received: 03/08/2013] [Accepted: 09/24/2013] [Indexed: 11/25/2022]
Abstract
Expectations are known to greatly affect our experience of the world. A growing theory in computational neuroscience is that perception can be successfully described using Bayesian inference models and that the brain is “Bayes-optimal” under some constraints. In this context, expectations are particularly interesting, because they can be viewed as prior beliefs in the statistical inference process. A number of questions remain unsolved, however, for example: How fast do priors change over time? Are there limits in the complexity of the priors that can be learned? How do an individual’s priors compare to the true scene statistics? Can we unlearn priors that are thought to correspond to natural scene statistics? Where and what are the neural substrate of priors? Focusing on the perception of visual motion, we here review recent studies from our laboratories and others addressing these issues. We discuss how these data on motion perception fit within the broader literature on perceptual Bayesian priors, perceptual expectations, and statistical and perceptual learning and review the possible neural basis of priors.
Affiliation(s)
- Peggy Seriès, Department of Informatics, University of Edinburgh, Edinburgh, UK

76
New perspectives on the owl's map of auditory space. Curr Opin Neurobiol 2013; 24:55-62. [PMID: 24492079 DOI: 10.1016/j.conb.2013.08.008] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Received: 06/08/2013] [Revised: 08/07/2013] [Accepted: 08/13/2013] [Indexed: 11/20/2022]
Abstract
A map of sound direction was found in the owl's midbrain more than three decades ago. This finding suggested that the brain reconstructs spatial coordinates to represent them. Subsequent research elucidated the variables used to compute the map. Here we provide a review of the processes leading to its emergence and an updated perspective on how and what information is represented.
77
Cazettes F, Fischer BJ, Peña JL. Likelihood representation in the owl's sound localization system. BMC Neurosci 2013. [PMCID: PMC3704434 DOI: 10.1186/1471-2202-14-s1-p128] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/12/2022]
78
Fischer BJ. Neural computation with efficient population codes. BMC Neurosci 2013. [PMCID: PMC3704384 DOI: 10.1186/1471-2202-14-s1-p129] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/10/2022]
79
Cuturi LF, MacNeilage PR. Systematic biases in human heading estimation. PLoS One 2013; 8:e56862. [PMID: 23457631 PMCID: PMC3574054 DOI: 10.1371/journal.pone.0056862] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.2] [Received: 10/23/2012] [Accepted: 01/15/2013] [Indexed: 11/18/2022]
Abstract
Heading estimation is vital to everyday navigation and locomotion. Despite extensive behavioral and physiological research on both visual and vestibular heading estimation over more than two decades, the accuracy of heading estimation has not yet been systematically evaluated. Therefore, human visual and vestibular heading estimation was assessed in the horizontal plane using a motion platform and stereo visual display. Heading angle was overestimated during forward movements and underestimated during backward movements in response to both visual and vestibular stimuli, indicating an overall multimodal bias toward lateral directions. Lateral biases are consistent with the overrepresentation of lateral preferred directions observed in neural populations that carry visual and vestibular heading information, including MSTd and otolith afferent populations. Due to this overrepresentation, population vector decoding yields patterns of bias remarkably similar to those observed behaviorally. Lateral biases are inconsistent with standard Bayesian accounts, which predict that estimates should be biased toward the most common straight-forward heading direction. Nevertheless, lateral biases may be functionally relevant. They effectively constitute a perceptual scale expansion around straight ahead which could allow for more precise estimation and provide a high-gain feedback signal to facilitate maintenance of straight-forward heading during everyday navigation and locomotion.
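The population-vector mechanism invoked here is easy to reproduce in miniature (all numbers are illustrative assumptions, not fits to MSTd or otolith data): when preferred directions over-represent lateral headings, a rate-weighted vector sum decodes a near-forward heading with a lateral overshoot.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical population whose preferred headings cluster near +/-90 deg,
# i.e. lateral directions are over-represented.
n = 400
pref_deg = np.concatenate([rng.normal(90.0, 30.0, n // 2),
                           rng.normal(-90.0, 30.0, n // 2)])
pref = np.deg2rad(pref_deg)

true_heading_deg = 10.0                       # slightly rightward of forward
rates = 1.0 + np.cos(pref - np.deg2rad(true_heading_deg))  # cosine tuning

# Population vector: rate-weighted sum of unit vectors along preferences.
pop_vec = np.sum(rates * np.exp(1j * pref))
decoded_deg = float(np.rad2deg(np.angle(pop_vec)))
# The decoded heading overshoots the true 10 deg toward the lateral cluster.
```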
Affiliation(s)
- Luigi F. Cuturi, German Center for Vertigo and Balance Disorders, University Hospital of Munich, Munich, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians University, Munich, Germany
- Paul R. MacNeilage, German Center for Vertigo and Balance Disorders, University Hospital of Munich, Munich, Germany

80
Otte RJ, Agterberg MJH, Van Wanrooij MM, Snik AFM, Van Opstal AJ. Age-related hearing loss and ear morphology affect vertical but not horizontal sound-localization performance. J Assoc Res Otolaryngol 2013; 14:261-73. [PMID: 23319012 DOI: 10.1007/s10162-012-0367-7] [Citation(s) in RCA: 57] [Impact Index Per Article: 5.2] [Received: 08/30/2012] [Accepted: 12/14/2012] [Indexed: 10/27/2022]
Abstract
Several studies have attributed deterioration of sound localization in the horizontal (azimuth) and vertical (elevation) planes to an age-related decline in binaural processing and high-frequency hearing loss (HFHL). The latter might underlie decreased elevation performance of older adults. However, as the pinnae keep growing throughout life, we hypothesized that larger ears might enable older adults to localize sounds in elevation on the basis of lower frequencies, thus (partially) compensating their HFHL. In addition, it is not clear whether sound localization has already matured at a very young age, when the body is still growing, and the binaural and monaural sound-localization cues change accordingly. The present study investigated sound-localization performance of children (7-11 years), young adults (20-34 years), and older adults (63-80 years) under open-loop conditions in the two-dimensional frontal hemifield. We studied the effect of age-related hearing loss and ear size on localization responses to brief broadband sound bursts with different bandwidths. We found similar localization abilities in azimuth for all listeners, including the older adults with HFHL. Sound localization in elevation for the children and young adult listeners with smaller ears improved when stimuli contained frequencies above 7 kHz. Subjects with larger ears could also judge the elevation of sound sources restricted to lower frequency content. Despite increasing ear size, sound localization in elevation deteriorated in older adults with HFHL. We conclude that the binaural localization cues are successfully used well into later stages of life, but that pinna growth cannot compensate the more profound HFHL with age.
Affiliation(s)
- Rik J Otte, Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, P.O. Box 9101, 6500 HB Nijmegen, The Netherlands

81
Neural correlates of prior expectations of motion in the lateral intraparietal and middle temporal areas. J Neurosci 2012; 32:10063-74. [PMID: 22815520 DOI: 10.1523/jneurosci.5948-11.2012] [Citation(s) in RCA: 61] [Impact Index Per Article: 5.1] [Indexed: 11/21/2022]
Abstract
Successful decision making involves combining observations of the external world with prior knowledge. Recent studies suggest that neural activity in macaque lateral intraparietal area (LIP) provides a useful window into this process. This study examines how rapidly changing prior knowledge about an upcoming sensory stimulus influences the computations that convert sensory signals into plans for action. Two monkeys performed a cued direction discrimination task, in which an arrow cue presented at the start of each trial communicated the prior probability of the direction of stimulus motion. We hypothesized that the cue would either shift the initial level of LIP activity before sensory evidence arrived, or it would scale sensory responses according to the prior probability of each stimulus, manifesting as a change in slope of LIP firing rates. Neural recordings demonstrated a clear shift in the activity level of LIP neurons following the arrow cue, which persisted into the presentation of the motion stimulus. No significant change in slope of responses was observed, suggesting that sensory gain was not strongly modulated. To confirm the latter observation, middle temporal area (MT) neurons were recorded during a version of the cued direction discrimination task, and we found no change in MT responses resulting from the presentation of the directional cue. These results suggest that information about an immediately upcoming stimulus does not scale the sensory response, but rather changes the amount of evidence that must be accumulated to reach a decision in areas that are involved in planning action.
82
Population-wide bias of surround suppression in auditory spatial receptive fields of the owl's midbrain. J Neurosci 2012; 32:10470-8. [PMID: 22855796 DOI: 10.1523/jneurosci.0047-12.2012] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Indexed: 11/21/2022]
Abstract
The physical arrangement of receptive fields (RFs) within neural structures is important for local computations. Nonuniform distribution of tuning within populations of neurons can influence emergent tuning properties, causing bias in local processing. This issue was studied in the auditory system of barn owls. The owl's external nucleus of the inferior colliculus (ICx) contains a map of auditory space in which the frontal region is overrepresented. We measured spatiotemporal RFs of ICx neurons using spatial white noise. We found a population-wide bias in surround suppression such that suppression from frontal space was stronger. This asymmetry increased with laterality in spatial tuning. The bias could be explained by a model of lateral inhibition based on the overrepresentation of frontal space observed in ICx. The model predicted trends in surround suppression across ICx that matched the data. Thus, the uneven distribution of spatial tuning within the map could explain the topography of time-dependent tuning properties. This mechanism may have significant implications for the analysis of natural scenes by sensory systems.
83
Vilares I, Howard JD, Fernandes HL, Gottfried JA, Kording KP. Differential representations of prior and likelihood uncertainty in the human brain. Curr Biol 2012; 22:1641-8. [PMID: 22840519 DOI: 10.1016/j.cub.2012.07.010] [Citation(s) in RCA: 101] [Impact Index Per Article: 8.4] [Received: 01/27/2012] [Revised: 06/11/2012] [Accepted: 07/03/2012] [Indexed: 11/16/2022]
Abstract
BACKGROUND: Uncertainty shapes our perception of the world and the decisions we make. Two aspects of uncertainty are commonly distinguished: uncertainty in previously acquired knowledge (prior) and uncertainty in current sensory information (likelihood). Previous studies have established that humans can take both types of uncertainty into account, often in a way predicted by Bayesian statistics. However, the neural representations underlying these parameters remain poorly understood.
RESULTS: By varying prior and likelihood uncertainty in a decision-making task while performing neuroimaging in humans, we found that prior and likelihood uncertainty had quite distinct representations. Whereas likelihood uncertainty activated brain regions along the early stages of the visuomotor pathway, representations of prior uncertainty were identified in specialized brain areas outside this pathway, including putamen, amygdala, insula, and orbitofrontal cortex. Furthermore, the magnitude of brain activity in the putamen predicted individuals' personal tendencies to rely more on either prior or current information.
CONCLUSIONS: Our results suggest different pathways by which prior and likelihood uncertainty map onto the human brain and provide a potential neural correlate for higher reliance on current or prior knowledge. Overall, these findings offer insights into the neural pathways that may allow humans to make decisions close to the optimal defined by a Bayesian statistical framework.
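The prior/likelihood combination manipulated in this task has a standard textbook form worth making explicit (the numbers below are hypothetical, not taken from the study): with Gaussian prior and likelihood, the Bayesian estimate weights each source by its inverse variance, i.e. its reliability.

```python
# Gaussian prior-likelihood combination: the Bayesian estimate weights
# prior and sensory likelihood by their inverse variances (precisions).
prior_mean, prior_var = 0.0, 4.0      # broad prior: low reliability
like_mean, like_var = 2.0, 1.0        # sharper likelihood: high reliability

w_like = (1.0 / like_var) / (1.0 / like_var + 1.0 / prior_var)
posterior_mean = w_like * like_mean + (1.0 - w_like) * prior_mean
posterior_var = 1.0 / (1.0 / like_var + 1.0 / prior_var)
# w_like = 0.8, so the estimate (1.6) sits close to the sensory evidence;
# increasing likelihood uncertainty would pull it back toward the prior.
```

Varying `prior_var` and `like_var` independently is the computational analogue of the study's two experimental manipulations.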
Affiliation(s)
- Iris Vilares
- Department of Physical Medicine and Rehabilitation, Northwestern University and Rehabilitation Institute of Chicago, Chicago, IL 60611, USA.
|
84
|
Wagner H, Kettler L, Orlowski J, Tellers P. Neuroethology of prey capture in the barn owl (Tyto alba L.). J Physiol Paris 2012; 107:51-61. [PMID: 22510644 DOI: 10.1016/j.jphysparis.2012.03.004] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2012] [Revised: 03/12/2012] [Accepted: 03/30/2012] [Indexed: 11/19/2022]
Abstract
Barn owls are a model system for studying prey capture. These animals can catch mice by hearing alone, but use vision whenever light conditions allow it. The silent flight, the frontally oriented eyes, and the facial ruff are specializations that evolved to optimize prey capture. The auditory system is characterized by high absolute sensitivity, the use of interaural time difference for azimuthal sound localization over almost the entire hearing range up to at least 9 kHz, and the use of interaural level difference for elevational sound localization in the upper frequency range. Response latencies towards auditory targets were shortened by covert attention, while overt attention helped to orient towards salient visual objects. However, only 20% of the fixation movements could be explained by the saliency of the fixated objects, suggesting a top-down control of attention. In a visual-search experiment the birds turned earlier and more often towards salient objects and spent more time at them. The visual system also exhibits high absolute sensitivity, while the spatial resolution is not particularly high. Last but not least, head movements may be classified as fixations, translations, and rotations combined with translations. These motion primitives may be combined into complex head-movement patterns. With the expected availability of genetic techniques for specialists in the near future, and the possibility of applying the findings in biomimetic devices, prey capture in barn owls will remain an exciting field.
Affiliation(s)
- Hermann Wagner
- Department of Zoology, RWTH Aachen University, Mies-van-der-Rohe-Strasse 15, D-52074 Aachen, Germany.
- Lutz Kettler
- Department of Zoology, RWTH Aachen University, Mies-van-der-Rohe-Strasse 15, D-52074 Aachen, Germany.
- Julius Orlowski
- Department of Zoology, RWTH Aachen University, Mies-van-der-Rohe-Strasse 15, D-52074 Aachen, Germany.
- Philipp Tellers
- Department of Zoology, RWTH Aachen University, Mies-van-der-Rohe-Strasse 15, D-52074 Aachen, Germany.
|
85
|