1
Bae A, Peña JL. Barn owls' specialized sound-driven behavior: Lessons in optimal processing and coding by the auditory system. Hear Res 2024; 443:108952. PMID: 38242019. DOI: 10.1016/j.heares.2024.108952.
Abstract
The barn owl, a nocturnal raptor with remarkably efficient prey-capture abilities, was one of the first animal models used to study the brain mechanisms underlying sound localization. Seminal findings from its specialized sound-localizing auditory system include the discovery of a midbrain map of auditory space, mechanisms of spatial cue detection underlying sound-driven orienting behavior, and circuit-level changes supporting development and experience-dependent plasticity. These findings have explained properties of vital hearing functions and inspired theories of spatial hearing that extend across diverse animal species, cementing the barn owl's legacy as a powerful experimental system for elucidating fundamental brain mechanisms. This concise review provides an overview of the insights through which the barn owl model system has exemplified the strength of investigating the diversity and similarity of brain mechanisms across species. First, we discuss key findings in the barn owl's specialized system that elucidated brain mechanisms for detecting auditory cues used in spatial hearing. We then examine how the barn owl has validated mathematical computations and theories underlying optimal hearing across species. Lastly, we describe how the barn owl has advanced investigations of developmental and experience-dependent plasticity in sound localization, and outline avenues for future research toward bridging commonalities across species. Just as astrophysics pursues an understanding of nature through diverse exploration of planets, stars, and galaxies across the universe, research across different animal species pursues a broad understanding of natural brain mechanisms and behavior.
Affiliation(s)
- Andrea Bae
- Albert Einstein College of Medicine, NY, USA
- Jose L Peña
- Albert Einstein College of Medicine, NY, USA
2
Maldarelli G, Firzlaff U, Kettler L, Ondracek JM, Luksch H. Two Types of Auditory Spatial Receptive Fields in Different Parts of the Chicken's Midbrain. J Neurosci 2022; 42:4669-4680. PMID: 35508384. PMCID: PMC9186802. DOI: 10.1523/jneurosci.2204-21.2022.
Abstract
The optic tectum (OT) is an avian midbrain structure involved in the integration of visual and auditory stimuli. Studies in the barn owl, an auditory specialist, have shown that spatial auditory information is topographically represented in the OT. Little is known about how auditory space is represented in the midbrain of birds with generalist hearing, i.e., most avian species, which lack peripheral adaptations such as facial ruffs or asymmetric ears. We therefore conducted in vivo extracellular recordings of single neurons in the OT and in the external portion of the formatio reticularis lateralis (FRLx), a brain structure located between the inferior colliculus (IC) and the OT, in anaesthetized chickens of either sex. We found that most auditory spatial receptive fields (aSRFs) were spatially confined in both azimuth and elevation and fell into two main classes: round aSRFs, mainly present in the OT, and annular aSRFs, with a ring-like shape around the interaural axis, mainly present in the FRLx. Our data further indicate that interaural time difference (ITD) and interaural level difference (ILD) play a role in the formation of both aSRF classes. These results suggest that, unlike mammals and owls, which have a congruent representation of visual and auditory space in the OT, generalist birds separate the computation of auditory space into two different midbrain structures. We hypothesize that the FRLx annular aSRFs define the distance of a sound source from the axis of the lateral visual fovea, whereas the OT round aSRFs are involved in multimodal integration of stimuli around the lateral fovea.
SIGNIFICANCE STATEMENT Previous studies implied that auditory spatial receptive fields (aSRFs) in the midbrain of generalist birds are confined only along azimuth. Interestingly, we found aSRFs in the chicken to be confined along both azimuth and elevation. Moreover, the auditory receptive fields are arranged concentrically around the overlapping interaural and visual axes. These data suggest that in generalist birds, which mainly rely on vision, the auditory system mainly serves to align auditory stimuli with the visual axis, whereas auditory specialists like the barn owl compute sound sources more precisely and integrate sound positions into the multimodal space map of the optic tectum (OT).
Affiliation(s)
- Gianmarco Maldarelli
- Chair of Zoology, School of Life Sciences, Technical University of Munich, Freising-Weihenstephan 85354, Germany
- Uwe Firzlaff
- Chair of Zoology, School of Life Sciences, Technical University of Munich, Freising-Weihenstephan 85354, Germany
- Lutz Kettler
- Chair of Zoology, School of Life Sciences, Technical University of Munich, Freising-Weihenstephan 85354, Germany
- Janie M Ondracek
- Chair of Zoology, School of Life Sciences, Technical University of Munich, Freising-Weihenstephan 85354, Germany
- Harald Luksch
- Chair of Zoology, School of Life Sciences, Technical University of Munich, Freising-Weihenstephan 85354, Germany
3
Effect of Stimulus-Dependent Spike Timing on Population Coding of Sound Location in the Owl's Auditory Midbrain. eNeuro 2020; 7:ENEURO.0244-19.2020. PMID: 32188709. PMCID: PMC7189487. DOI: 10.1523/eneuro.0244-19.2020.
Abstract
In the auditory system, the spectrotemporal structure of acoustic signals determines the temporal pattern of spikes. Here, we investigated this effect in neurons of the barn owl's (Tyto furcata) auditory midbrain that are selective for auditory space, and asked whether it can influence the coding of sound direction. We found that in the nucleus where neurons first become selective to combinations of sound-localization cues, the reproducibility of spike trains across repeated trials of identical sounds, a metric of the across-trial temporal fidelity of stimulus-evoked spiking patterns, was maximal at the sound direction that elicited the highest firing rate. We then tested the hypothesis that this stimulus-dependent patterning resulted in rate co-modulation of cells with similar frequency and spatial selectivity, driving stimulus-dependent synchrony of population responses. Tetrodes were used to simultaneously record multiple nearby units in the optic tectum (OT), where auditory space is topographically represented. While spiking of neurons in OT showed lower reproducibility across trials compared with upstream nuclei, spike-time synchrony between nearby OT neurons was highest for sounds at their preferred direction. A model of the midbrain circuit explained the relationship between stimulus-dependent reproducibility and synchrony, and demonstrated that this effect can improve the decoding of sound location from the OT output. Thus, stimulus-dependent spiking patterns in the auditory midbrain can have an effect on spatial coding. This study reports a functional connection between spike patterning elicited by the spectrotemporal features of a sound and the coding of its location.
4
Sadeghi M, Zhai X, Stevenson IH, Escabí MA. A neural ensemble correlation code for sound category identification. PLoS Biol 2019; 17:e3000449. PMID: 31574079. PMCID: PMC6788721. DOI: 10.1371/journal.pbio.3000449.
Abstract
Humans and other animals effortlessly identify natural sounds and categorize them into behaviorally relevant categories. Yet, the acoustic features and neural transformations that enable sound recognition and the formation of perceptual categories are largely unknown. Here, using multichannel neural recordings in the auditory midbrain of unanesthetized female rabbits, we first demonstrate that neural ensemble activity in the auditory midbrain displays highly structured correlations that vary with distinct natural sound stimuli. These stimulus-driven correlations can be used to accurately identify individual sounds using single-response trials, even when the sounds do not differ in their spectral content. Combining neural recordings and an auditory model, we then show how correlations between frequency-organized auditory channels can contribute to discrimination of not just individual sounds but sound categories. For both the model and neural data, spectral and temporal correlations achieved similar categorization performance and appear to contribute equally. Moreover, both the neural and model classifiers achieve their best task performance when they accumulate evidence over a time frame of approximately 1-2 seconds, mirroring human perceptual trends. These results together suggest that time-frequency correlations in sounds may be reflected in the correlations between auditory midbrain ensembles and that these correlations may play an important role in the identification and categorization of natural sounds.
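The correlation-based readout described above can be illustrated with a short sketch. This is not the authors' pipeline; the feature construction (a flattened across-channel correlation matrix) and the nearest-centroid classifier are simplifying assumptions:

```python
import numpy as np

def correlation_features(responses):
    """Flatten the upper triangle of the across-channel correlation
    matrix of a (channels x time) response into a feature vector."""
    c = np.corrcoef(responses)
    return c[np.triu_indices_from(c, k=1)]

def nearest_centroid(train_feats, train_labels, test_feat):
    """Assign the label of the closest class-mean feature vector."""
    labels = sorted(set(train_labels))
    centroids = {
        l: np.mean([f for f, y in zip(train_feats, train_labels) if y == l], axis=0)
        for l in labels
    }
    return min(labels, key=lambda l: np.linalg.norm(test_feat - centroids[l]))
```

Two stimuli with identical channel-wise power but different inter-channel correlation structure become separable in this feature space, mirroring the paper's point that correlations can identify sounds even when spectral content does not differ.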
Affiliation(s)
- Mina Sadeghi
- Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Xiu Zhai
- Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Ian H. Stevenson
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, United States of America
- Monty A. Escabí
- Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, United States of America
5
A Physiologically Inspired Model for Solving the Cocktail Party Problem. J Assoc Res Otolaryngol 2019; 20:579-593. PMID: 31392449. PMCID: PMC6889086. DOI: 10.1007/s10162-019-00732-4.
Abstract
At a cocktail party, we can broadly monitor the entire acoustic scene to detect important cues (e.g., our names being called, or the fire alarm going off), or selectively listen to a target sound source (e.g., a conversation partner). It has recently been observed that individual neurons in the avian field L (analog to the mammalian auditory cortex) can display broad spatial tuning to single targets and selective tuning to a target embedded in spatially distributed sound mixtures. Here, we describe a model inspired by these experimental observations and apply it to process mixtures of human speech sentences. This processing is realized in the neural spiking domain. It converts binaural acoustic inputs into cortical spike trains using a multi-stage model composed of a cochlear filter-bank, a midbrain spatial-localization network, and a cortical network. The output spike trains of the cortical network are then converted back into an acoustic waveform, using a stimulus reconstruction technique. The intelligibility of the reconstructed output is quantified using an objective measure of speech intelligibility. We apply the algorithm to single and multi-talker speech to demonstrate that the physiologically inspired algorithm is able to achieve intelligible reconstruction of an “attended” target sentence embedded in two other non-attended masker sentences. The algorithm is also robust to masker level and displays performance trends comparable to humans. The ideas from this work may help improve the performance of hearing assistive devices (e.g., hearing aids and cochlear implants), speech-recognition technology, and computational algorithms for processing natural scenes cluttered with spatially distributed acoustic objects.
6
Combination of Interaural Level and Time Difference in Azimuthal Sound Localization in Owls. eNeuro 2018; 4:eN-NWR-0238-17. PMID: 29379866. PMCID: PMC5779116. DOI: 10.1523/eneuro.0238-17.2017.
Abstract
A function of the auditory system is to accurately determine the location of a sound source. The main cues for sound location are interaural time (ITD) and level (ILD) differences. Humans use both ITD and ILD to determine azimuth. The prevailing view of sound localization in barn owls has been that their facial ruff and asymmetrical ears generate a two-dimensional grid of ITD for azimuth and ILD for elevation. We show that barn owls also use ILD for azimuthal sound localization when ITDs are ambiguous. For high-frequency narrowband sounds, midbrain neurons can signal multiple locations, leading to the perception of an auditory illusion called a phantom source. Owls respond to such an illusory percept by orienting toward it instead of the true source. Acoustical measurements close to the eardrum reveal a small ILD component that changes with azimuth, suggesting that ITD and ILD information could be combined to eliminate the illusion. Our behavioral data confirm that perception was robust against ambiguities when ITD and ILD information was combined. Electrophysiological recordings of ILD sensitivity in the owl's midbrain support the behavioral findings, indicating that rival brain hemispheres drive the decision to orient to either true or phantom sources. Thus, disambiguation and reliable detection of sound-source azimuth rely on similar cues across species, as similar responses to combinations of ILD and narrowband ITD have been observed in humans.
7
Fischer BJ, Peña JL. Optimal nonlinear cue integration for sound localization. J Comput Neurosci 2017; 42:37-52. PMID: 27714569. PMCID: PMC5253079. DOI: 10.1007/s10827-016-0626-4.
Abstract
Integration of multiple sensory cues can improve performance in detection and estimation tasks. It remains an open theoretical question under which conditions linear or nonlinear cue combination is Bayes-optimal. We demonstrate that a neural population decoded by a population vector requires nonlinear cue combination to approximate Bayesian inference. Specifically, if cues are conditionally independent, multiplicative cue combination is optimal for the population vector. The model was tested on neural and behavioral responses in the barn owl's sound-localization system, where space-specific neurons owe their selectivity to multiplicative tuning to the sound-localization cues interaural phase difference (IPD) and interaural level difference (ILD). We found that IPD and ILD cues are approximately conditionally independent. As a result, the multiplicative selectivity of midbrain space-specific neurons to IPD and ILD permits a population vector to perform Bayesian cue combination. We further show that this model describes the owl's localization behavior in azimuth and elevation. This work provides theoretical justification and experimental evidence supporting the optimality of nonlinear cue combination.
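The core claim, that multiplying conditionally independent cue-driven responses lets a population vector approximate Bayesian fusion, can be sketched as follows. This is a toy illustration with hypothetical Gaussian tuning and made-up widths, not the paper's fitted model:

```python
import numpy as np

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2)

# Hypothetical population tuned to azimuth; tuning widths are toy values.
prefs = np.linspace(-90.0, 90.0, 37)      # preferred azimuths (deg)

def responses(itd_az, ild_az, sig_itd=20.0, sig_ild=30.0):
    """Each unit multiplies its ITD- and ILD-driven components,
    mirroring the multiplicative tuning of space-specific neurons."""
    return gauss(prefs, itd_az, sig_itd) * gauss(prefs, ild_az, sig_ild)

def population_vector(r):
    """Readout: response-weighted average of preferred directions."""
    return float(np.sum(r * prefs) / np.sum(r))
```

With conflicting cues (e.g., ITD pointing to -20 deg and ILD to +10 deg), the readout lands near the inverse-variance-weighted compromise that a Bayesian product of the two Gaussian likelihoods predicts.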
Affiliation(s)
- Brian J Fischer
- Department of Mathematics, Seattle University, 901 12th Ave, Seattle, WA, 98122, USA.
- Jose Luis Peña
- Department of Neuroscience, Albert Einstein College of Medicine, 1410 Pelham Parkway South, Bronx, NY, 10461, USA
8
Gosmann J, Eliasmith C. Optimizing Semantic Pointer Representations for Symbol-Like Processing in Spiking Neural Networks. PLoS One 2016; 11:e0149928. PMID: 26900931. PMCID: PMC4762696. DOI: 10.1371/journal.pone.0149928.
Abstract
The Semantic Pointer Architecture (SPA) is a proposal specifying the computations and architectural elements needed to account for cognitive functions. By means of the Neural Engineering Framework (NEF), this proposal can be realized in a spiking neural network. However, in any such network, each SPA transformation accumulates noise. By increasing the accuracy of common SPA operations, overall network performance can be improved considerably. Moreover, the representations in such networks present a trade-off between representing all possible values and representing only the most likely values, but with high accuracy. We derive a heuristic to find the near-optimal point in this trade-off, which allows us to improve the accuracy of common SPA operations by up to 25 times. Ultimately, this permits a reduction in neuron number and a more efficient use of both traditional and neuromorphic hardware, as we demonstrate here.
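SPA represents structured knowledge as high-dimensional vectors combined by circular convolution. A minimal numpy sketch of this binding/unbinding step, separate from the NEF spiking implementation the paper optimizes, shows where the accumulating noise comes from:

```python
import numpy as np

def cconv(a, b):
    """Circular convolution: the SPA binding operator."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def approx_inverse(a):
    """Involution: an approximate inverse under circular convolution."""
    return np.concatenate(([a[0]], a[1:][::-1]))

rng = np.random.default_rng(1)
d = 256
role = rng.standard_normal(d) / np.sqrt(d)
filler = rng.standard_normal(d) / np.sqrt(d)

bound = cconv(role, filler)                     # bind role and filler
recovered = cconv(bound, approx_inverse(role))  # noisy copy of `filler`
```

Unbinding returns only a noisy approximation of the stored vector; each chained transformation adds more such noise, which is exactly what the paper's accuracy optimizations target.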
Affiliation(s)
- Jan Gosmann
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, Canada
- Chris Eliasmith
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, Canada
9
Crawford E, Gingerich M, Eliasmith C. Biologically Plausible, Human-Scale Knowledge Representation. Cogn Sci 2015; 40:782-821. DOI: 10.1111/cogs.12261.
Affiliation(s)
- Eric Crawford
- Computational Neuroscience Research Group, University of Waterloo
- Chris Eliasmith
- Computational Neuroscience Research Group, University of Waterloo
10
Fontaine B, Peña JL, Brette R. Spike-threshold adaptation predicted by membrane potential dynamics in vivo. PLoS Comput Biol 2014; 10:e1003560. PMID: 24722397. PMCID: PMC3983065. DOI: 10.1371/journal.pcbi.1003560.
Abstract
Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how a neuron's combined input is encoded in its spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in auditory neuron responses recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential on a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered out by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike-threshold variability in vivo.
Neurons spike when their membrane potential exceeds a threshold value, but this value has been shown to vary in the same neuron recorded in vivo. This variability could reflect noise, or deterministic processes that make the threshold vary with the membrane potential. The second alternative would have important functional consequences. Here, we show that threshold variability is a genuine feature of neurons, reflecting adaptation to the membrane potential on a short timescale, with little contribution from noise. This demonstrates that a deterministic model can predict spikes based only on the membrane potential.
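The adaptive-threshold idea can be sketched with a simple discrete-time model. All parameter values below are illustrative, not the values fitted in the paper:

```python
import numpy as np

def adaptive_threshold_spikes(v, dt=0.1, tau=5.0, offset=2.0):
    """Spike when V crosses a threshold that itself tracks V.
    The threshold relaxes toward V + offset with time constant tau (ms),
    so slow depolarizations are filtered out and only fast, coincident
    input can reach it. Parameter values are illustrative only."""
    theta = v[0] + offset
    spike_times = []
    for t, vt in enumerate(v):
        if vt >= theta:
            spike_times.append(t)
            theta = vt + offset                  # keep threshold above V after a spike
        else:
            theta += (dt / tau) * (vt + offset - theta)
    return spike_times
```

A slow 5 mV ramp never catches up with the adapting threshold, while the same depolarization delivered as a step does, which is the coincidence-detection behavior the abstract describes.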
Affiliation(s)
- Bertrand Fontaine
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
- José Luis Peña
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Romain Brette
- Laboratoire Psychologie de la Perception, CNRS and Université Paris Descartes, Paris, France
- Département d'Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Sorbonne Universités, UPMC Univ. Paris 06, UMR_S 968, Institut de la Vision, Paris, France
- INSERM, U968, Paris, France
- CNRS, UMR_7210, Paris, France
11
Eliasmith C, Stewart TC, Choo X, Bekolay T, DeWolf T, Tang Y, Tang C, Rasmussen D. A large-scale model of the functioning brain. Science 2012. PMID: 23197532. DOI: 10.1126/science.1225266.
Abstract
A central challenge for cognitive and systems neuroscience is to relate the incredibly complex behavior of animals to the equally complex activity of their brains. Recently described large-scale neural models have not bridged this gap between neural activity and biological function. In this work, we present a 2.5-million-neuron model of the brain (called "Spaun") that bridges this gap by exhibiting many different behaviors. The model is presented only with visual image sequences, and it draws all of its responses with a physically modeled arm. Although simplified, the model captures many aspects of neuroanatomy, neurophysiology, and psychological behavior, which we demonstrate via eight diverse tasks.
Affiliation(s)
- Chris Eliasmith
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON N2J 3G1, Canada.
12
Singheiser M, Gutfreund Y, Wagner H. The representation of sound localization cues in the barn owl's inferior colliculus. Front Neural Circuits 2012; 6:45. PMID: 22798945. PMCID: PMC3394089. DOI: 10.3389/fncir.2012.00045.
Abstract
The barn owl is a well-known model system for studying auditory processing and sound localization. This article reviews the morphological and functional organization, as well as the role of the underlying microcircuits, of the barn owl's inferior colliculus (IC). We focus on the processing of frequency and interaural time (ITD) and level differences (ILD). We first summarize the morphology of the sub-nuclei belonging to the IC and their differentiation by antero- and retrograde labeling and by staining with various antibodies. We then focus on the response properties of neurons in the three major sub-nuclei of IC [core of the central nucleus of the IC (ICCc), lateral shell of the central nucleus of the IC (ICCls), and the external nucleus of the IC (ICX)]. ICCc projects to ICCls, which in turn sends its information to ICX. The responses of neurons in ICCc are sensitive to changes in ITD but not to changes in ILD. The distribution of ITD sensitivity with frequency in ICCc can only partly be explained by optimal coding. We continue with the tuning properties of ICCls neurons, the first station in the midbrain where the ITD and ILD pathways merge after they have split at the level of the cochlear nucleus. The ICCc and ICCls share similar ITD and frequency tuning. By contrast, ICCls shows sigmoidal ILD tuning which is absent in ICCc. Both ICCc and ICCls project to the forebrain, and ICCls also projects to ICX, where space-specific neurons are found. Space-specific neurons exhibit side peak suppression in ITD tuning, bell-shaped ILD tuning, and are broadly tuned to frequency. These neurons respond only to restricted positions of auditory space and form a map of two-dimensional auditory space. Finally, we briefly review major IC features, including multiplication-like computations, correlates of echo suppression, plasticity, and adaptation.
13
Abstract
Multiplication is an operation fundamental in mathematics, and it is also relevant to many sensory computations in the nervous system. Nevertheless, despite a number of suggestions in the literature, it is not known how multiplication is implemented in neural circuitry. We propose a simple feedforward circuit that combines a rate model of neural activity with a realistic neural input-output relation to accurately and efficiently implement multiplication of two rate-coded quantities. By simulating a network of integrate-and-fire neurons, we demonstrate the functional efficiency of the circuit. Finally, we discuss how the model can be tested experimentally.
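One textbook route to multiplication with plausible neural nonlinearities is a compressive (logarithmic) transform at the inputs followed by an expansive (exponential) transform at the summing stage, since exp(log a + log b) = a·b. The sketch below illustrates that general principle only; the specific feedforward circuit and input-output relation proposed in the paper may differ:

```python
import numpy as np

def multiply_via_rates(a, b):
    """Two input stages with compressive (logarithmic) transfer feed a
    summing unit with expansive (exponential) transfer, so the output
    rate equals the product of the two input rates."""
    ra, rb = np.log(a), np.log(b)   # compressive input nonlinearity
    return float(np.exp(ra + rb))   # expansive output nonlinearity
```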
Affiliation(s)
- Panagiotis Nezis
- Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, UK
14
Abstract
The human brain has accumulated many useful building blocks over its evolutionary history, and the best knowledge of these has often derived from experiments performed in animal species that display finely honed abilities. In this article we review a model system at the forefront of investigation into the neural bases of information processing, plasticity, and learning: the barn owl auditory localization pathway. In addition to the broadly applicable principles gleaned from three decades of work in this system, there are good reasons to believe that continued exploration of the owl brain will be invaluable for further advances in understanding of how neuronal networks give rise to behavior.
Affiliation(s)
- Jose L Pena
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, USA
15
Witten IB, Knudsen PF, Knudsen EI. A dominance hierarchy of auditory spatial cues in barn owls. PLoS One 2010; 5:e10396. PMID: 20442852. PMCID: PMC2861002. DOI: 10.1371/journal.pone.0010396.
Abstract
BACKGROUND: Barn owls integrate spatial information across frequency channels to localize sounds in space.
METHODOLOGY/PRINCIPAL FINDINGS: We presented barn owls with synchronous sounds that contained different bands of frequencies (3-5 kHz and 7-9 kHz) from different locations in space. When the owls were confronted with conflicting localization cues from two synchronous sounds of equal level, their orienting responses were dominated by one of the sounds: they oriented toward the location of the low-frequency sound when the sources were separated in azimuth; in contrast, they oriented toward the location of the high-frequency sound when the sources were separated in elevation. We identified neural correlates of this behavioral effect in the optic tectum (OT, superior colliculus in mammals), which contains a map of auditory space and is involved in generating orienting movements to sounds. We found that low-frequency cues dominate the representation of sound azimuth in the OT space map, whereas high-frequency cues dominate the representation of sound elevation.
CONCLUSIONS/SIGNIFICANCE: We argue that the dominance hierarchy of localization cues reflects several factors: 1) the relative amplitude of the sound providing the cue, 2) the resolution with which the auditory system measures the value of a cue, and 3) the spatial ambiguity in interpreting the cue. These same factors may contribute to the relative weighting of sound-localization cues in other species, including humans.
Affiliation(s)
- Ilana B. Witten
- Neurobiology Department, Stanford University Medical Center, Stanford, California, United States of America
- Phyllis F. Knudsen
- Neurobiology Department, Stanford University Medical Center, Stanford, California, United States of America
- Eric I. Knudsen
- Neurobiology Department, Stanford University Medical Center, Stanford, California, United States of America
16
Fischer BJ, Anderson CH, Peña JL. Multiplicative auditory spatial receptive fields created by a hierarchy of population codes. PLoS One 2009; 4:e8015. PMID: 19956693. PMCID: PMC2776990. DOI: 10.1371/journal.pone.0008015.
Abstract
A multiplicative combination of tuning to interaural time difference (ITD) and interaural level difference (ILD) contributes to the generation of spatially selective auditory neurons in the owl's midbrain. Previous analyses of multiplicative responses in the owl have not taken into consideration the frequency-dependence of ITD and ILD cues that occur under natural listening conditions. Here, we present a model for the responses of ITD- and ILD-sensitive neurons in the barn owl's inferior colliculus which satisfies constraints raised by experimental data on frequency convergence, multiplicative interaction of ITD and ILD, and response properties of afferent neurons. We propose that multiplication between ITD- and ILD-dependent signals occurs only within frequency channels and that frequency integration occurs using a linear-threshold mechanism. The model reproduces the experimentally observed nonlinear responses to ITD and ILD in the inferior colliculus, with greater accuracy than previous models. We show that linear-threshold frequency integration allows the system to represent multiple sound sources with natural sound localization cues, whereas multiplicative frequency integration does not. Nonlinear responses in the owl's inferior colliculus can thus be generated using a combination of cellular and network mechanisms, showing that multiple elements of previous theories can be combined in a single system.
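The proposed scheme, multiplication within frequency channels followed by linear-threshold integration across channels, can be sketched as follows (toy activity values and threshold, not the paper's fitted parameters):

```python
import numpy as np

def space_specific_response(itd_resp, ild_resp, threshold=0.5):
    """itd_resp, ild_resp: per-frequency-channel activity driven by each
    cue. Multiply the cues within each channel, then integrate across
    channels with a linear-threshold (rectified sum) rather than another
    product. Values are illustrative only."""
    per_channel = itd_resp * ild_resp          # within-channel multiplication
    return max(0.0, float(np.sum(per_channel)) - threshold)
```

The design point of the abstract is visible here: a multiplicative rule across frequencies would output zero whenever any single channel is silenced (say, by a competing sound source in that band), while the rectified sum degrades gracefully and so can represent multiple sources.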
Affiliation(s)
- Brian J. Fischer
- Department of Mathematics, Occidental College, Los Angeles, California, United States of America
- Division of Biology, California Institute of Technology, Pasadena, California, United States of America
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri, United States of America
- Charles H. Anderson
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri, United States of America
- José Luis Peña
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
17
Vonderschen K, Wagner H. Tuning to Interaural Time Difference and Frequency Differs Between the Auditory Arcopallium and the External Nucleus of the Inferior Colliculus. J Neurophysiol 2009; 101:2348-61. [DOI: 10.1152/jn.91196.2008] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Barn owls process sound-localization information in two parallel pathways, the midbrain and the forebrain pathway. Extracellular recordings of neural responses to auditory stimuli from advanced stations of these pathways, the auditory arcopallium in the forebrain and the external nucleus of the inferior colliculus in the midbrain, demonstrated that the representations of interaural time difference and frequency in the forebrain pathway differ from those in the midbrain pathway. Specifically, low-frequency representation was conserved in the forebrain pathway, while it was lost in the midbrain pathway. Variation of interaural time difference yielded symmetrical tuning curves in the midbrain pathway. By contrast, the typical forebrain tuning curve was asymmetric, with a steep slope crossing zero time difference and a less-steep slope toward larger contralateral time disparities. Low sound frequencies contributed sensitivity to contralateral leading sounds underlying these asymmetries, whereas high frequencies enhanced the steepness of slopes at small interaural time differences. Furthermore, the peaks of time-disparity tuning curves were wider in the forebrain than in the midbrain. The distribution of the steepest slopes of best interaural time differences in the auditory arcopallium, but not in the external nucleus of the inferior colliculus, was centered at zero time difference. The distribution observed in the auditory arcopallium is reminiscent of the situation observed in small mammals. We speculate that the forebrain representation may serve as a population code supporting fine discrimination of central interaural time differences and coarse indication of laterality of a stimulus for large interaural time differences.
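The asymmetric forebrain tuning described above can be illustrated with a toy curve: a steep sigmoid crossing zero ITD (the high-frequency contribution) added to a broad Gaussian centered on a contralateral delay (the low-frequency contribution). All shapes and parameters below are hypothetical; the point is only that the steepest slope of the summed curve falls near zero ITD, where a slope-based population code discriminates best.

```python
import numpy as np

def forebrain_curve(itd_us):
    # Steep sigmoid crossing zero ITD: hypothetical high-frequency component.
    steep = 1.0 / (1.0 + np.exp(-itd_us / 20.0))
    # Broad Gaussian at a contralateral delay: hypothetical low-frequency
    # component giving sensitivity to contralateral-leading sounds.
    broad = 0.6 * np.exp(-0.5 * ((itd_us - 120.0) / 120.0) ** 2)
    return steep + broad

itds = np.arange(-250.0, 251.0, 5.0)        # ITD axis in microseconds
resp = forebrain_curve(itds)
slopes = np.gradient(resp, itds)            # local slope of the tuning curve
steepest_itd = itds[int(np.argmax(np.abs(slopes)))]
```

Discriminability of a small ITD change is proportional to the local slope, so a curve whose steepest region is centered at zero ITD supports fine discrimination there while its shallow contralateral tail still signals coarse laterality.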
18
Fischer BJ, Konishi M. Variability reduction in interaural time difference tuning in the barn owl. J Neurophysiol 2008; 100:708-15. [PMID: 18509071 DOI: 10.1152/jn.90358.2008] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The interaural time difference (ITD) is the primary auditory cue used by the barn owl for localization in the horizontal direction. ITD is initially computed by circuits consisting of axonal delay lines from one of the cochlear nuclei and coincidence detector neurons in the nucleus laminaris (NL). NL projects directly to the anterior part of the dorsal lateral lemniscal nucleus (LLDa), and this area projects to the core of the central nucleus of the inferior colliculus (ICcc) in the midbrain. To show the selectivity of an NL neuron for ITD requires averaging of responses over several stimulus presentations for each ITD. In contrast, ICcc neurons detect their preferred ITD in a single stimulus presentation. We recorded the responses of LLDa neurons to ITD extracellularly in anesthetized barn owls and show that this single-trial reliability is already present in LLDa, raising the possibility that ICcc inherits its noise-reduction property from LLDa.
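The delay-line/coincidence-detector circuit summarized above can be sketched as a cross-correlation over a bank of internal delays: each model detector sums the product of the right-ear input with a copy of the left-ear input shifted by its own axonal delay, and the detector with the compensating delay fires most. The sampling rate, noise stimulus, and delay range are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def jeffress_itd_estimate(left, right, fs_hz, max_delay_us=200.0):
    # Bank of coincidence detectors spanning internal delays of
    # -max_delay_us .. +max_delay_us, in steps of one sample.
    max_shift = int(round(max_delay_us * 1e-6 * fs_hz))
    delays = np.arange(-max_shift, max_shift + 1)
    # Each detector multiplies the delayed left input against the right
    # input and sums (coincidence counting).
    counts = [np.sum(np.roll(left, d) * right) for d in delays]
    best = delays[int(np.argmax(counts))]
    # Positive result: the right ear lags, compensated by delaying the left.
    return best / fs_hz * 1e6  # estimated ITD in microseconds

fs = 100_000.0
rng = np.random.default_rng(0)
sig = rng.standard_normal(5000)           # broadband noise stimulus
left_sig = sig
right_sig = np.roll(sig, 5)               # right-ear copy delayed 5 samples (50 us)
```

Broadband noise is used here because its autocorrelation has a single sharp peak, so the coincidence count is unambiguous; with a pure tone the same bank shows the well-known phase-ambiguous multiple peaks.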
Affiliation(s)
- Brian J Fischer
- Division of Biology, California Institute of Technology, Mail code 216-76, 1200 E. California, Pasadena, CA 91125, USA.