1. Drotos AC, Wajdi SZ, Malina M, Silveira MA, Williamson RS, Roberts MT. Neurons in the inferior colliculus use multiplexing to encode features of frequency-modulated sweeps. bioRxiv 2025:2025.02.10.637492. PMID: 39990317; PMCID: PMC11844360; DOI: 10.1101/2025.02.10.637492.
Abstract
Within the central auditory pathway, the inferior colliculus (IC) is a critical integration center for ascending sound information. Previous studies have shown that many IC neurons exhibit receptive fields for individual features of auditory stimuli, such as sound frequency, intensity, and location, but growing evidence suggests that some IC neurons may multiplex features of sound. Here, we used in vivo juxtacellular recordings in awake, head-fixed mice to examine how IC neurons responded to frequency-modulated (FM) sweeps that varied in speed, direction, intensity, and frequency range. We then applied machine learning methods to determine how individual IC neurons encode features of FM sweeps. We found that individual IC neurons multiplex FM sweep features using various strategies, including spike timing, distribution of inter-spike intervals, and first spike latency. In addition, we found that decoding accuracy for sweep direction can vary with sweep speed and frequency range, suggesting the presence of mixed selectivity in single neurons. Accordingly, using static receptive fields for direction alone yielded poor predictions of neuron responses to vocalizations that contain simple frequency changes. Lastly, we showed that encoding strategies varied across individual neurons, resulting in a highly informative population response for FM sweep features. Together, our results suggest that multiplexing sound features is a common mechanism used by IC neurons to represent complex sounds.
Affiliation(s)
- Audrey C. Drotos
- Kresge Hearing Research Institute, Department of Otolaryngology – Head and Neck Surgery, University of Michigan, Ann Arbor, Michigan 48109
- Sarah Z. Wajdi
- Kresge Hearing Research Institute, Department of Otolaryngology – Head and Neck Surgery, University of Michigan, Ann Arbor, Michigan 48109
- Michael Malina
- Departments of Otolaryngology-Head & Neck Surgery and Neurobiology, University of Pittsburgh, Pittsburgh, PA 15260
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213
- Marina A. Silveira
- Kresge Hearing Research Institute, Department of Otolaryngology – Head and Neck Surgery, University of Michigan, Ann Arbor, Michigan 48109
- Department of Neuroscience, Developmental and Regenerative Biology, University of Texas at San Antonio, San Antonio, Texas 78249
- Ross S. Williamson
- Departments of Otolaryngology-Head & Neck Surgery and Neurobiology, University of Pittsburgh, Pittsburgh, PA 15260
- Michael T. Roberts
- Kresge Hearing Research Institute, Department of Otolaryngology – Head and Neck Surgery, University of Michigan, Ann Arbor, Michigan 48109
- Department of Molecular and Integrative Physiology, University of Michigan, Ann Arbor, Michigan 48109
2. Vaziri PA, McDougle SD, Clark DA. Humans can use positive and negative spectrotemporal correlations to detect rising and falling pitch. bioRxiv 2024:2024.08.03.606481. PMID: 39131316; PMCID: PMC11312537; DOI: 10.1101/2024.08.03.606481.
Abstract
To discern speech or appreciate music, the human auditory system detects how pitch increases or decreases over time. However, the algorithms used to detect changes in pitch, or pitch motion, are incompletely understood. Here, using psychophysics, computational modeling, functional neuroimaging, and analysis of recorded speech, we ask if humans can detect pitch motion using computations analogous to those used by the visual system. We adapted stimuli from studies of vision to create novel auditory correlated noise stimuli that elicited robust pitch motion percepts. Crucially, these stimuli are inharmonic and possess no persistent features across frequency or time, but do possess positive or negative local spectrotemporal correlations in intensity. In psychophysical experiments, we found clear evidence that humans can judge pitch direction based only on positive or negative spectrotemporal intensity correlations. The key behavioral result, robust sensitivity to the negative spectrotemporal correlations, is a direct analogue of illusory "reverse-phi" motion in vision, and thus constitutes a new auditory illusion. Our behavioral results and computational modeling led us to hypothesize that human auditory processing may employ pitch direction opponency. fMRI measurements in auditory cortex supported this hypothesis. To link our psychophysical findings to real-world pitch perception, we analyzed recordings of English and Mandarin speech and found that pitch direction was robustly signaled by both positive and negative spectrotemporal correlations, suggesting that sensitivity to both types of correlations confers ecological benefits. Overall, this work reveals how motion detection algorithms sensitive to local correlations are deployed by the central nervous system across disparate modalities (vision and audition) and dimensions (space and frequency).
3. Salles A, Loscalzo E, Montoya J, Mendoza R, Boergens KM, Moss CF. Auditory processing of communication calls in interacting bats. iScience 2024; 27:109872. PMID: 38827399; PMCID: PMC11141141; DOI: 10.1016/j.isci.2024.109872.
Abstract
There is strong evidence that social context plays a role in the processing of acoustic signals. Yet, the circuits and mechanisms that govern this process are still not fully understood. The insectivorous big brown bat, Eptesicus fuscus, emits a wide array of communication calls, including food-claiming calls, aggressive calls, and appeasement calls. We implemented a competitive foraging task to explore the influence of behavioral context on auditory midbrain responses to conspecific social calls. We recorded neural population responses from the inferior colliculus (IC) of freely interacting bats and analyzed data with respect to social context. Analysis of our neural recordings from the IC shows stronger population responses to individual calls during social events. For the first time, neural recordings from the IC of a copulating bat were obtained. Our results indicate that social context enhances neuronal population responses to social vocalizations in the bat IC.
Affiliation(s)
- Angeles Salles
- Department of Biological Sciences, University of Illinois Chicago, Chicago, IL 60607, USA
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA
- Emely Loscalzo
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA
- Jessica Montoya
- Department of Biological Sciences, University of Illinois Chicago, Chicago, IL 60607, USA
- Rosa Mendoza
- Department of Biological Sciences, University of Illinois Chicago, Chicago, IL 60607, USA
- Kevin M. Boergens
- Department of Physics, University of Illinois Chicago, Chicago, IL 60607, USA
- Cynthia F. Moss
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA
4. Salles A, Neunuebel J. What do mammals have to say about the neurobiology of acoustic communication? Molecular Psychology: Brain, Behavior, and Society 2023; 2:5. PMID: 38827277; PMCID: PMC11141777; DOI: 10.12688/molpsychol.17539.1.
Abstract
Auditory communication is crucial across taxa, including humans, because it enables individuals to convey information about threats, food sources, mating opportunities, and other social cues necessary for survival. Comparative approaches to auditory communication will help bridge gaps across taxa and facilitate our understanding of the neural mechanisms underlying this complex task. In this work, we briefly review the field of auditory communication processing and the classical champion animal, the songbird. In addition, we discuss other mammalian species that are advancing the field. In particular, we emphasize mice and bats, highlighting the characteristics that may inform how we think about communication processing.
Affiliation(s)
- Angeles Salles
- Biological Sciences, University of Illinois Chicago, Chicago, Illinois, USA
- Joshua Neunuebel
- Psychological and Brain Sciences, University of Delaware, Newark, Delaware, USA
5. Macias S, Bakshi K, Troyer T, Smotherman M. The prefrontal cortex of the Mexican free-tailed bat is more selective to communication calls than primary auditory cortex. J Neurophysiol 2022; 128:634-648. PMID: 35975923; PMCID: PMC9448334; DOI: 10.1152/jn.00436.2021.
Abstract
In this study, we examined the auditory responses of a prefrontal area, the frontal auditory field (FAF), of an echolocating bat (Tadarida brasiliensis) and presented a comparative analysis of the neuronal response properties between the FAF and the primary auditory cortex (A1). We compared single-unit responses from the A1 and the FAF elicited by pure tones, downward frequency-modulated sweeps (dFMs), and species-specific vocalizations. Unlike A1 neurons, FAF neurons were not frequency tuned. However, progressive increases in dFM sweep rate elicited a systematic increase of response precision, a phenomenon that does not take place in the A1. Call selectivity was higher in the FAF than in the A1. We calculated the neuronal spectrotemporal receptive fields (STRFs) and spike-triggered averages (STAs) to predict responses to the communication calls and to explain the differences in call selectivity between the FAF and A1. In the A1, we found a high correlation between predicted and evoked responses. However, we could not generate reasonable STRFs in the FAF, and predictions based on the STAs showed a lower correlation coefficient than in the A1. This suggests nonlinear response properties in the FAF that are stronger than the linear response properties in the A1. Stimulating with a call sequence increased call selectivity in the A1, but it remained unchanged in the FAF. These data are consistent with a role for the FAF in assessing distinctive acoustic features downstream of A1, similar to the role proposed for primate ventrolateral prefrontal cortex.

NEW & NOTEWORTHY In this study, we examined the neuronal responses of a frontal cortical area in an echolocating bat to behaviorally relevant acoustic stimuli and compared them with those in the primary auditory cortex (A1). In contrast to the A1, neurons in the bat frontal auditory field are not frequency tuned but showed a higher selectivity for social signals such as communication calls. The results presented here indicate that the frontal auditory field may represent an additional processing center for behaviorally relevant sounds.
Affiliation(s)
- Silvio Macias
- Department of Biology, Texas A&M University, College Station, Texas
- Kushal Bakshi
- Institute for Neuroscience, Texas A&M University, College Station, Texas
- Todd Troyer
- Department of Neuroscience, Developmental and Regenerative Biology, University of Texas at San Antonio, San Antonio, Texas
- Michael Smotherman
- Department of Biology, Texas A&M University, College Station, Texas
- Institute for Neuroscience, Texas A&M University, College Station, Texas
6. Squadrani L, Curti N, Giampieri E, Remondini D, Blais B, Castellani G. Effectiveness of Biologically Inspired Neural Network Models in Learning and Patterns Memorization. Entropy (Basel) 2022; 24:682. PMID: 35626566; PMCID: PMC9141587; DOI: 10.3390/e24050682.
Abstract
Purpose: In this work, we propose an implementation of the Bienenstock-Cooper-Munro (BCM) model, obtained by a combination of the classical framework and modern deep learning methodologies. The BCM model remains one of the most promising approaches to modeling the synaptic plasticity of neurons, but its application has remained mainly confined to neuroscience simulations and few applications in data science. Methods: To improve the convergence efficiency of the BCM model, we combine the original plasticity rule with the optimization tools of modern deep learning. By numerical simulation on standard benchmark datasets, we prove the efficiency of the BCM model in learning, memorization capacity, and feature extraction. Results: In all the numerical simulations, the visualization of neuronal synaptic weights confirms the memorization of human-interpretable subsets of patterns. We numerically prove that the selectivity obtained by BCM neurons is indicative of an internal feature extraction procedure, useful for patterns clustering and classification. The introduction of competitiveness between neurons in the same BCM network allows the network to modulate the memorization capacity of the model and the consequent model selectivity. Conclusions: The proposed improvements make the BCM model a suitable alternative to standard machine learning techniques for both feature selection and classification tasks.
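The abstract builds on the classical Bienenstock-Cooper-Munro rule. As a point of reference, here is a minimal sketch of the textbook BCM formulation (a linear neuron with a sliding threshold tracking the mean squared activity); the learning rates, input patterns, and step counts are illustrative choices, not the paper's deep-learning implementation:

```python
import numpy as np

def bcm_update(w, x, theta, eta=0.01, tau=0.1):
    """One step of the classical BCM rule with a sliding modification threshold."""
    y = float(w @ x)                      # postsynaptic activity (linear neuron)
    w = w + eta * y * (y - theta) * x     # potentiation above theta, depression below
    theta = theta + tau * (y**2 - theta)  # threshold tracks a running average of y^2
    return w, theta

# Drive one neuron with two alternating input patterns; BCM develops selectivity.
rng = np.random.default_rng(0)
patterns = np.array([[1.0, 0.0], [0.0, 1.0]])
w, theta = rng.uniform(0.4, 0.6, size=2), 0.1
for t in range(5000):
    w, theta = bcm_update(w, patterns[t % 2], theta)
```

After training, the neuron responds strongly to one pattern and weakly to the other, which is the single-neuron selectivity that the paper scales up to networks with competition.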
Affiliation(s)
- Lorenzo Squadrani
- Department of Physics and Astronomy, University of Bologna, 40126 Bologna, Italy
- Nico Curti
- Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40126 Bologna, Italy
- Enrico Giampieri
- Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40126 Bologna, Italy
- Daniel Remondini
- Department of Physics and Astronomy, University of Bologna, 40126 Bologna, Italy
- INFN, 40127 Bologna, Italy
- Brian Blais
- Department of Science, Bryant University, Smithfield, RI 02917, USA
- Gastone Castellani
- Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40126 Bologna, Italy
7. Chitradurga Achutha A, Peremans H, Firzlaff U, Vanderelst D. Efficient encoding of spectrotemporal information for bat echolocation. PLoS Comput Biol 2021; 17:e1009052. PMID: 34181643; PMCID: PMC8270447; DOI: 10.1371/journal.pcbi.1009052.
Abstract
In most animals, natural stimuli are characterized by a high degree of redundancy, limiting the ensemble of ecologically valid stimuli to a significantly reduced subspace of the representation space. Neural encodings can exploit this redundancy and increase sensing efficiency by generating low-dimensional representations that retain all information essential to support behavior. In this study, we investigate whether such an efficient encoding can be found to support a broad range of echolocation tasks in bats. Starting from an ensemble of echo signals collected with a biomimetic sonar system in natural indoor and outdoor environments, we use independent component analysis to derive a low-dimensional encoding of the output of a cochlear model. We show that this compressive encoding retains all essential information. To this end, we simulate a range of psycho-acoustic experiments with bats. In these simulations, we train a set of neural networks to use the encoded echoes as input while performing the experiments. The results show that the neural networks' performance is at least as good as that of the bats. We conclude that efficient encoding of echo information is feasible and, given its many advantages, very likely to be employed by bats. Previous studies have demonstrated that low-dimensional encodings allow for task resolution at a relatively high level. In contrast to previous work in this area, we show that high performance across a range of tasks can also be achieved when low-dimensional filters are derived from a data set of realistic echo signals, not tailored to specific experimental conditions. We show that complex (and simple) echoes from real environments can be efficiently and effectively represented using a small set of filters. The redundancy in echoic information opens up the opportunity for efficient encoding, reducing the computational load of echo processing as well as the memory load for storing the information. Therefore, we predict that the auditory system of bats capitalizes on this opportunity for efficient coding by implementing filters with spectrotemporal properties akin to those hypothesized here. Indeed, the filters we obtain are similar to those found in other animals and other sensing modalities. Our results indicate that bats could exploit the redundancy in sonar signals to implement an efficient neural encoding of the relevant information.
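The compress-then-reconstruct logic behind this abstract can be illustrated with a toy ensemble. The sketch below uses SVD-derived filters as a stand-in for the paper's ICA-on-cochlear-output pipeline, and the signal dimensions and data are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "echo" ensemble: 500 redundant 64-sample signals spanning an 8-dim subspace.
latents = rng.laplace(size=(500, 8))
mixing = rng.normal(size=(8, 64))
echoes = latents @ mixing

# Derive 8 data-driven filters from the ensemble itself (SVD here; the paper uses ICA).
mean = echoes.mean(axis=0)
_, _, vt = np.linalg.svd(echoes - mean, full_matrices=False)
filters = vt[:8]                      # (8, 64) low-dimensional filter bank

codes = (echoes - mean) @ filters.T   # 8 numbers per echo instead of 64
recon = codes @ filters + mean        # near-lossless, because the ensemble is redundant
```

Because the ensemble truly occupies a low-dimensional subspace, the 8-number code loses essentially nothing; the paper's point is that real echo ensembles are redundant enough for the same trick to preserve task-relevant information.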
Affiliation(s)
- Adarsh Chitradurga Achutha
- Mechanical and Materials Engineering, University of Cincinnati, Cincinnati, Ohio, United States of America
- Herbert Peremans
- Department of Engineering Management, University of Antwerp, Antwerp, Belgium
- Uwe Firzlaff
- Chair of Zoology, School of Life Sciences, Technical University of Munich, Freising, Germany
- Dieter Vanderelst
- Department of Biological Sciences, University of Cincinnati, Cincinnati, Ohio, United States of America
8. Salles A, Park S, Sundar H, Macías S, Elhilali M, Moss CF. Neural Response Selectivity to Natural Sounds in the Bat Midbrain. Neuroscience 2020; 434:200-211. PMID: 31918008; DOI: 10.1016/j.neuroscience.2019.11.047.
Abstract
Little is known about the neural mechanisms that mediate differential action-selection responses to communication and echolocation calls in bats. For example, in the big brown bat, frequency modulated (FM) food-claiming communication calls closely resemble FM echolocation calls, which guide social and orienting behaviors, respectively. Using advanced signal processing methods, we identified fine differences in temporal structure of these natural sounds that appear key to auditory discrimination and behavioral decisions. We recorded extracellular potentials from single neurons in the midbrain inferior colliculus (IC) of passively listening animals, and compared responses to playbacks of acoustic signals used by bats for social communication and echolocation. We combined information obtained from spike number and spike triggered averages (STA) to reveal a robust classification of neuron selectivity for communication or echolocation calls. These data highlight the importance of temporal acoustic structure for differentiating echolocation and food-claiming social calls and point to general mechanisms of natural sound processing across species.
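The spike-triggered average this study combines with spike counts is simple to compute: it is the mean stimulus segment preceding each spike. A minimal sketch on a synthetic stimulus follows; the embedded motif and all parameters are invented for illustration, not taken from the study's recordings:

```python
import numpy as np

def spike_triggered_average(stimulus, spike_idx, win):
    """Mean stimulus segment over the `win` samples preceding each spike."""
    segs = [stimulus[i - win:i] for i in spike_idx if i >= win]
    return np.mean(segs, axis=0)

# Toy check: spikes reliably follow a fixed 5-sample motif embedded in noise.
rng = np.random.default_rng(1)
motif = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
stim = rng.normal(scale=0.1, size=1000)
spikes = np.arange(50, 1000, 50)          # 19 regularly spaced spike times
for i in spikes:
    stim[i - 5:i] += motif                # each spike is preceded by the motif
sta = spike_triggered_average(stim, spikes, 5)
```

Averaging across spikes suppresses the noise and recovers the motif, which is why the STA exposes the temporal structure a neuron is selective for.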
Affiliation(s)
- Angeles Salles
- Department of Psychological and Brain Sciences, Johns Hopkins University, United States
- Sangwook Park
- Department of Electrical and Computer Engineering, Johns Hopkins University, United States
- Harshavardhan Sundar
- Department of Electrical and Computer Engineering, Johns Hopkins University, United States
- Silvio Macías
- Department of Psychological and Brain Sciences, Johns Hopkins University, United States
- Mounya Elhilali
- Department of Electrical and Computer Engineering, Johns Hopkins University, United States
- Cynthia F Moss
- Department of Psychological and Brain Sciences, Johns Hopkins University, United States
9. Auditory Selectivity for Spectral Contrast in Cortical Neurons and Behavior. J Neurosci 2019; 40:1015-1027. PMID: 31826944; DOI: 10.1523/jneurosci.1200-19.2019.
Abstract
Vocal communication relies on the ability of listeners to identify, process, and respond to vocal sounds produced by others in complex environments. To accurately recognize these signals, animals' auditory systems must robustly represent acoustic features that distinguish vocal sounds from other environmental sounds. Vocalizations typically have spectral structure; power regularly fluctuates along the frequency axis, creating spectral contrast. Spectral contrast is closely related to harmonicity, which refers to spectral power peaks occurring at integer multiples of a fundamental frequency. Although both spectral contrast and harmonicity typify natural sounds, they may differ in salience for communication behavior and engage distinct neural mechanisms. Therefore, it is important to understand which of these properties of vocal sounds underlie the neural processing and perception of vocalizations. Here, we test the importance of vocalization-typical spectral features in behavioral recognition and neural processing of vocal sounds, using male zebra finches. We show that behavioral responses to natural and synthesized vocalizations rely on the presence of discrete frequency components, but not on harmonic ratios between frequencies. We identify a specific population of neurons in primary auditory cortex that are sensitive to the spectral resolution of vocal sounds. We find that behavioral and neural response selectivity is explained by sensitivity to spectral contrast rather than harmonicity. This selectivity emerges within the cortex; it is absent in the thalamorecipient region and present in the deep output region. Further, deep-region neurons that are contrast-sensitive show distinct temporal responses and selectivity for modulation density compared with unselective neurons.

SIGNIFICANCE STATEMENT Auditory coding and perception are critical for vocal communication. Auditory neurons must encode acoustic features that distinguish vocalizations from other sounds in the environment and generate percepts that direct behavior. The acoustic features that drive neural and behavioral selectivity for vocal sounds are unknown, however. Here, we show that vocal response behavior scales with stimulus spectral contrast but not with harmonicity, in songbirds. We identify a distinct population of auditory cortex neurons in which response selectivity parallels behavioral selectivity. This neural response selectivity is explained by sensitivity to spectral contrast rather than to harmonicity. Our findings inform the understanding of how the auditory system encodes socially relevant signals via detection of an acoustic feature that is ubiquitous in vocalizations.
10. Wong AB, Borst JGG. Tonotopic and non-auditory organization of the mouse dorsal inferior colliculus revealed by two-photon imaging. eLife 2019; 8:e49091. PMID: 31612853; PMCID: PMC6834370; DOI: 10.7554/eLife.49091.
Abstract
The dorsal (DCIC) and lateral cortices (LCIC) of the inferior colliculus are major targets of the auditory and non-auditory cortical areas, suggesting a role in complex multimodal information processing. However, relatively little is known about their functional organization. We utilized in vivo two-photon Ca2+ imaging in awake mice expressing GCaMP6s in GABAergic or non-GABAergic neurons in the IC to investigate their spatial organization. We found different classes of temporal responses, which we confirmed with simultaneous juxtacellular electrophysiology. Both GABAergic and non-GABAergic neurons showed spatial microheterogeneity in their temporal responses. In contrast, a robust, double rostromedial-caudolateral gradient of frequency tuning was conserved between the two groups, and even among the subclasses. This, together with the existence of a subset of neurons sensitive to spontaneous movements, provides functional evidence for redefining the border between DCIC and LCIC.
Affiliation(s)
- Aaron Benson Wong
- Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, Netherlands
- J Gerard G Borst
- Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, Netherlands
11. Chen C, Read HL, Escabí MA. A temporal integration mechanism enhances frequency selectivity of broadband inputs to inferior colliculus. PLoS Biol 2019; 17:e2005861. PMID: 31233489; PMCID: PMC6611646; DOI: 10.1371/journal.pbio.2005861.
Abstract
Accurately resolving frequency components in sounds is essential for sound recognition, yet there is little direct evidence for how frequency selectivity is preserved or newly created across auditory structures. We demonstrate that prepotentials (PPs) with physiological properties resembling presynaptic potentials from broadly tuned brainstem inputs can be recorded concurrently with postsynaptic action potentials in inferior colliculus (IC). These putative brainstem inputs (PBIs) are broadly tuned and exhibit delayed and spectrally interleaved excitation and inhibition not present in the simultaneously recorded IC neurons (ICNs). A sharpening of tuning is accomplished locally at the expense of spike-timing precision through nonlinear temporal integration of broadband inputs. A neuron model replicates the finding and demonstrates that temporal integration alone can degrade timing precision while enhancing frequency tuning through interference of spectrally in- and out-of-phase inputs. These findings suggest that, in contrast to current models that require local inhibition, frequency selectivity can be sharpened through temporal integration, thus supporting an alternative computational strategy to quickly refine frequency selectivity.
Affiliation(s)
- Chen Chen
- Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Heather L. Read
- Biomedical Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, United States of America
- Monty A. Escabí
- Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Biomedical Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, United States of America
12. Hörpel SG, Firzlaff U. Processing of fast amplitude modulations in bat auditory cortex matches communication call-specific sound features. J Neurophysiol 2019; 121:1501-1512. PMID: 30785811; DOI: 10.1152/jn.00748.2018.
Abstract
Bats use a large repertoire of calls for social communication. In the bat Phyllostomus discolor, social communication calls are often characterized by sinusoidal amplitude and frequency modulations with modulation frequencies in the range of 100-130 Hz. However, peaks in mammalian auditory cortical modulation transfer functions are typically limited to modulation frequencies below 100 Hz. We investigated the coding of sinusoidally amplitude-modulated sounds in auditory cortical neurons in P. discolor by constructing rate and temporal modulation transfer functions. Neuronal responses to playbacks of various communication calls were additionally recorded and compared with the neurons' responses to sinusoidally amplitude-modulated sounds. Cortical neurons in the posterior dorsal field of the auditory cortex were tuned to unusually high modulation frequencies: rate modulation transfer functions often peaked around 130 Hz (median: 87 Hz), and the median of the highest modulation frequency that evoked significant phase-locking was also 130 Hz. Both values are much higher than reported from the auditory cortex of other mammals, with more than 51% of the units preferring modulation frequencies exceeding 100 Hz. Conspicuously, the fast modulations preferred by the neurons match the fast amplitude and frequency modulations of prosocial, and mostly of aggressive, communication calls in P. discolor. We suggest that the preference for fast amplitude modulations in the P. discolor dorsal auditory cortex serves to reliably encode the fast modulations seen in their communication calls.

NEW & NOTEWORTHY Neural processing of temporal sound features is crucial for the analysis of communication calls. In bats, these calls are often characterized by fast temporal envelope modulations. Because auditory cortex neurons typically encode only low modulation frequencies, it is unclear how species-specific vocalizations are cortically processed. We show that auditory cortex neurons in the bat Phyllostomus discolor encode fast temporal envelope modulations. This property improves response specificity to communication calls and thus might support species-specific communication.
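The sinusoidally amplitude-modulated (SAM) stimulus behind these modulation transfer functions is one line of math, s(t) = (1 + m sin(2 pi fm t)) sin(2 pi fc t). A sketch follows; the 130 Hz modulation rate echoes the abstract, but the carrier frequency, depth, duration, and sample rate are illustrative assumptions, not the study's values:

```python
import numpy as np

def sam_tone(fc, fm, depth, dur, fs):
    """SAM tone: a carrier at fc whose envelope is modulated at fm with given depth."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)
    return envelope * np.sin(2 * np.pi * fc * t)

# 130 Hz modulation, the rate many P. discolor cortical neurons preferred.
stim = sam_tone(fc=40_000.0, fm=130.0, depth=1.0, dur=0.2, fs=192_000.0)
```

Sweeping fm while holding the carrier fixed, and counting spikes (rate transfer function) or measuring phase-locking (temporal transfer function), yields the tuning curves the study reports.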
Affiliation(s)
- Stephen Gareth Hörpel
- Chair of Zoology, Department of Animal Sciences, Technical University of Munich, Freising, Germany
- Uwe Firzlaff
- Chair of Zoology, Department of Animal Sciences, Technical University of Munich, Freising, Germany
13. Westö J, May PJC. Describing complex cells in primary visual cortex: a comparison of context and multifilter LN models. J Neurophysiol 2018; 120:703-719. PMID: 29718805; PMCID: PMC6139451; DOI: 10.1152/jn.00916.2017.
Abstract
Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multifilter linear-nonlinear (LN) models and context models. Models are, however, never correct, and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: 1) we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions, and 2) we evaluate context models and multifilter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multifilter LN models of equal complexity (measured in terms of number of parameters), with the best increase in performance being achieved by the novel context models. Consequently, our results suggest that the multifilter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantizations of neural behavior. NEW & NOTEWORTHY We used data from complex cells in primary visual cortex to estimate a wide variety of receptive field models from two frameworks that have previously not been compared with each other. The models included traditionally used multifilter linear-nonlinear models and novel variants of context models. Using mutual information and correlation coefficients as performance measures, we showed that context models are superior for describing complex cells and that the novel context models performed the best.
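The LN framework compared here maps a stimulus through one or more linear filters followed by a static nonlinearity. A minimal single-filter sketch with a logistic output, one of the nonlinearity shapes the abstract mentions; the filter and stimuli are hypothetical toy values:

```python
import math

def ln_response(stimulus, filt, threshold=0.0, gain=4.0):
    """Single-filter linear-nonlinear (LN) model: project the stimulus onto
    a linear filter, then map the drive through a logistic nonlinearity to a
    spiking probability."""
    drive = sum(s * w for s, w in zip(stimulus, filt))
    return 1.0 / (1.0 + math.exp(-gain * (drive - threshold)))

# Hypothetical filter preferring an alternating pattern: a matching stimulus
# drives the model near saturation, an orthogonal one sits at chance (0.5).
filt = [1.0, -1.0, 1.0, -1.0]
preferred = [1.0, -1.0, 1.0, -1.0]
orthogonal = [1.0, 1.0, 1.0, 1.0]
print(ln_response(preferred, filt) > ln_response(orthogonal, filt))  # → True
```

Multifilter LN models sum several such filter outputs before the nonlinearity; context models instead let the effective filter weights depend on the surrounding stimulus.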
Affiliation(s)
- Johan Westö
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Patrick J C May
- Department of Psychology, Lancaster University, Lancaster, United Kingdom
14
Beetz MJ, Kordes S, García-Rosales F, Kössl M, Hechavarría JC. Processing of Natural Echolocation Sequences in the Inferior Colliculus of Seba's Fruit Eating Bat, Carollia perspicillata. eNeuro 2017; 4:ENEURO.0314-17.2017. [PMID: 29242823] [PMCID: PMC5729038] [DOI: 10.1523/eneuro.0314-17.2017]
Abstract
For the purpose of orientation, echolocating bats emit highly repetitive and spatially directed sonar calls. Echoes arising from call reflections are used to create an acoustic image of the environment. The inferior colliculus (IC) represents an important auditory stage for initial processing of echolocation signals. The present study addresses the following questions: (1) how does the temporal context of an echolocation sequence mimicking an approach flight of an animal affect neuronal processing of distance information to echo delays? (2) how does the IC process complex echolocation sequences containing echo information from multiple objects (multiobject sequence)? Here, we conducted neurophysiological recordings from the IC of ketamine-anaesthetized bats of the species Carollia perspicillata and compared the results from the IC with those from the auditory cortex (AC). Neuronal responses to an echolocation sequence were suppressed when compared to the responses to temporally isolated and randomized segments of the sequence. The neuronal suppression was weaker in the IC than in the AC. In contrast to the cortex, the time course of the acoustic events is reflected by IC activity. In the IC, suppression sharpens the neuronal tuning to specific call-echo elements and increases the signal-to-noise ratio in the units' responses. When presenting multiple-object sequences, despite collicular suppression, the neurons responded to each object-specific echo. The latter allows parallel processing of multiple echolocation streams at the IC level. Altogether, our data suggest that temporally precise neuronal responses in the IC could allow fast and parallel processing of multiple acoustic streams.
Affiliation(s)
- M. Jerome Beetz
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt am Main 60438, Germany
- Department of Behavioral Physiology and Sociobiology, Biozentrum, University of Würzburg, Am Hubland, Würzburg 97074, Germany
- Sebastian Kordes
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt am Main 60438, Germany
- Francisco García-Rosales
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt am Main 60438, Germany
- Manfred Kössl
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt am Main 60438, Germany
- Julio C. Hechavarría
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt am Main 60438, Germany
15
Holdgraf CR, Rieger JW, Micheli C, Martin S, Knight RT, Theunissen FE. Encoding and Decoding Models in Cognitive Electrophysiology. Front Syst Neurosci 2017; 11:61. [PMID: 29018336] [PMCID: PMC5623038] [DOI: 10.3389/fnsys.2017.00061]
Abstract
Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of "Encoding" models, in which stimulus features are used to model brain activity, and "Decoding" models, in which neural features are used to generate a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial on these approaches and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in Python. The aim is to provide a practical understanding of predictive modeling of human brain data and to propose best practices in conducting these analyses.
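A linear encoding model of the kind this tutorial covers predicts neural activity as a weighted sum of stimulus features. A minimal sketch fit by plain least-squares gradient descent on synthetic data (the feature values and ground-truth weights are made up; the paper's notebooks use regularized estimators on real recordings):

```python
def fit_encoding_model(features, responses, lr=0.02, steps=3000):
    """Fit responses ≈ w · features by mean-squared-error gradient descent."""
    n_feat = len(features[0])
    w = [0.0] * n_feat
    for _ in range(steps):
        grad = [0.0] * n_feat
        for x, y in zip(features, responses):
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            for j in range(n_feat):
                grad[j] += err * x[j]
        w = [wj - lr * gj / len(features) for wj, gj in zip(w, grad)]
    return w

# Synthetic "neuron": activity = 2 x feature 0 + 1 x feature 1, no noise.
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]
resp = [2.0 * x0 + 1.0 * x1 for x0, x1 in feats]
w = fit_encoding_model(feats, resp)
print([round(wi, 2) for wi in w])  # → [2.0, 1.0]
```

When the features are time-lagged spectrogram bins, the recovered weight vector is exactly a spectro-temporal receptive field; a decoding model simply swaps the roles of stimulus and response.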
Affiliation(s)
- Christopher R. Holdgraf
- Department of Psychology, Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, United States
- Office of the Vice Chancellor for Research, Berkeley Institute for Data Science, University of California, Berkeley, Berkeley, CA, United States
- Jochem W. Rieger
- Department of Psychology, Carl-von-Ossietzky University, Oldenburg, Germany
- Cristiano Micheli
- Department of Psychology, Carl-von-Ossietzky University, Oldenburg, Germany
- Institut des Sciences Cognitives Marc Jeannerod, Lyon, France
- Stephanie Martin
- Department of Psychology, Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, United States
- Defitech Chair in Brain-Machine Interface, Center for Neuroprosthetics, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Robert T. Knight
- Department of Psychology, Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, United States
- Frederic E. Theunissen
- Department of Psychology, Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, United States
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
16
Bach JH, Kollmeier B, Anemüller J. Matching Pursuit Analysis of Auditory Receptive Fields' Spectro-Temporal Properties. Front Syst Neurosci 2017; 11:4. [PMID: 28232791] [PMCID: PMC5299023] [DOI: 10.3389/fnsys.2017.00004]
Abstract
Gabor filters have long been proposed as models for spectro-temporal receptive fields (STRFs), with their specific spectral and temporal rate of modulation qualitatively replicating characteristics of STRF filters estimated from responses to auditory stimuli in physiological data. The present study builds on the Gabor-STRF model by proposing a methodology to quantitatively decompose STRFs into a set of optimally matched Gabor filters through matching pursuit, and by quantitatively evaluating spectral and temporal characteristics of STRFs in terms of the derived optimal Gabor-parameters. To summarize a neuron's spectro-temporal characteristics, we introduce a measure for the “diagonality,” i.e., the extent to which an STRF exhibits spectro-temporal transients which cannot be factorized into a product of a spectral and a temporal modulation. With this methodology, it is shown that approximately half of 52 analyzed zebra finch STRFs can each be well approximated by a single Gabor or a linear combination of two Gabor filters. Moreover, the dominant Gabor functions tend to be oriented either in the spectral or in the temporal direction, with truly “diagonal” Gabor functions rarely being necessary for reconstruction of an STRF's main characteristics. As a toy example for the applicability of STRF and Gabor-STRF filters to auditory detection tasks, we use STRF filters as features in an automatic event detection task and compare them to idealized Gabor filters and mel-frequency cepstral coefficients (MFCCs). STRFs classify a set of six everyday sounds with an accuracy similar to reference Gabor features (94% recognition rate). Spectro-temporal STRF and Gabor features outperform reference spectral MFCCs in quiet and in low noise conditions (down to 0 dB signal to noise ratio).
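Matching pursuit itself is a greedy decomposition: repeatedly pick the dictionary atom most correlated with the current residual and subtract its projection. A one-dimensional sketch with a cosine dictionary standing in for the paper's 2-D Gabor atoms (all values synthetic):

```python
import math

def matching_pursuit(signal, atoms, n_iter):
    """Greedy matching pursuit: at each step pick the unit-norm atom with the
    largest inner product with the residual and subtract its projection."""
    residual = list(signal)
    decomposition = []
    for _ in range(n_iter):
        best = max(range(len(atoms)),
                   key=lambda i: abs(sum(r * a for r, a in zip(residual, atoms[i]))))
        coeff = sum(r * a for r, a in zip(residual, atoms[best]))
        residual = [r - coeff * a for r, a in zip(residual, atoms[best])]
        decomposition.append((best, coeff))
    return decomposition, residual

def unit_cosine(freq, n=64):
    """A unit-norm cosine atom (1-D stand-in for a Gabor function)."""
    atom = [math.cos(2 * math.pi * freq * t / n) for t in range(n)]
    norm = math.sqrt(sum(a * a for a in atom))
    return [a / norm for a in atom]

# Synthetic "STRF": 3x atom 2 plus 1x atom 5 from an 8-atom dictionary.
atoms = [unit_cosine(f) for f in range(1, 9)]
signal = [3 * a2 + a5 for a2, a5 in zip(atoms[2], atoms[5])]
decomp, resid = matching_pursuit(signal, atoms, n_iter=2)
print(sorted(idx for idx, _ in decomp))  # → [2, 5]
```

In the paper's setting, each recovered atom is a 2-D Gabor whose orientation (spectral, temporal, or diagonal) summarizes the STRF's modulation preferences.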
Affiliation(s)
- Jörg-Hendrik Bach
- Medizinische Physik, Universität Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, Germany
- Birger Kollmeier
- Medizinische Physik, Universität Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, Germany
- Jörn Anemüller
- Medizinische Physik, Universität Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, Germany
- *Correspondence: Jörn Anemüller
17
Meyer AF, Diepenbrock JP, Happel MFK, Ohl FW, Anemüller J. Discriminative learning of receptive fields from responses to non-Gaussian stimulus ensembles. PLoS One 2014; 9:e93062. [PMID: 24699631] [PMCID: PMC3974709] [DOI: 10.1371/journal.pone.0093062]
Abstract
Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge on smaller amounts of experimental recordings and with lower estimation variance than the generalized linear model and recent information theoretic methods. 
Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and in settings where rapid adaptation is induced by experimental design.
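The classification view of RF estimation can be sketched with a plain perceptron standing in for the paper's large-margin classifier: stimuli are labeled spike/no-spike, and the learned weight vector is read out as the RF estimate. All data here are synthetic and the learning rule is a simplification, not the CbRF method itself:

```python
import itertools

def estimate_rf_by_classification(stimuli, spiked, lr=0.1, epochs=100):
    """Learn a linear boundary between spike-eliciting and non-spike-eliciting
    stimuli with perceptron updates; the weight vector serves as the
    receptive-field estimate (a stand-in for a large-margin classifier)."""
    n = len(stimuli[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(stimuli, spiked):
            target = 1.0 if y else -1.0
            if target * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                # Misclassified: nudge the boundary toward this example.
                w = [wi + lr * target * xi for wi, xi in zip(w, x)]
                b += lr * target
    return w, b

# Toy "neuron": spikes whenever stimulus dimension 0 is positive, so the true
# RF loads only on dimension 0.
stimuli = [list(bits) for bits in itertools.product((-1.0, 1.0), repeat=4)]
spiked = [x[0] > 0 for x in stimuli]
w, b = estimate_rf_by_classification(stimuli, spiked)
print(w[0] > sum(abs(wj) for wj in w[1:]))  # → True
```

Because the update rule only needs correct classification rather than Gaussian stimulus statistics, this discriminative framing tolerates the non-Gaussian ensembles that defeat spike-triggered averaging.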
Affiliation(s)
- Arne F. Meyer
- Department of Medical Physics and Acoustics and Cluster of Excellence "Hearing4all", University of Oldenburg, Oldenburg, Germany
- Jan-Philipp Diepenbrock
- Department of Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Max F. K. Happel
- Department of Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Department of Neuroprosthetics, Institute of Biology, Otto-von-Guericke University, Magdeburg, Germany
- Frank W. Ohl
- Department of Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Department of Neuroprosthetics, Institute of Biology, Otto-von-Guericke University, Magdeburg, Germany
- Jörn Anemüller
- Department of Medical Physics and Acoustics and Cluster of Excellence "Hearing4all", University of Oldenburg, Oldenburg, Germany
18
Single neuron and population coding of natural sounds in auditory cortex. Curr Opin Neurobiol 2013; 24:103-10. [PMID: 24492086] [DOI: 10.1016/j.conb.2013.09.007]
Abstract
The auditory system drives behavior using information extracted from sounds. Early in the auditory hierarchy, circuits are highly specialized for detecting basic sound features. However, already at the level of the auditory cortex the functional organization of the circuits and the underlying coding principles become different. Here, we review some recent progress in our understanding of single neuron and population coding in primary auditory cortex, focusing on natural sounds. We discuss possible mechanisms explaining why single neuron responses to simple sounds cannot predict responses to natural stimuli. We describe recent work suggesting that structural features like local subnetworks rather than smoothly mapped tonotopy are essential components of population coding. Finally, we suggest a synthesis of how single neurons and subnetworks may be involved in coding natural sounds.
19
Bandyopadhyay S, Young ED. Nonlinear temporal receptive fields of neurons in the dorsal cochlear nucleus. J Neurophysiol 2013; 110:2414-25. [PMID: 23986561] [DOI: 10.1152/jn.00278.2013]
Abstract
Studies of the dorsal cochlear nucleus (DCN) have focused on spectral processing because of the complex spectral receptive fields of the DCN. However, temporal fluctuations in natural signals convey important information, including information about moving sound sources or movements of the external ear in animals like cats. Here, we investigate the temporal filtering properties of DCN principal neurons through the use of temporal weighting functions that allow flexible analysis of nonlinearities and time variation in temporal response properties. First-order temporal receptive fields derived from the neurons are sufficient to characterize their response properties to low-contrast (3-dB standard deviation) stimuli. Larger contrasts require the second-order terms. Allowing temporal variation of the parameters of the first-order model or adding a component representing refractoriness improves predictions by the model by relatively small amounts. The importance of second-order components of the model is shown through simulations of nonlinear envelope synchronization behavior across sound level. The temporal model can be combined with a spectral model to predict tuning to the speed and direction of moving sounds.
20
Ter-Mikaelian M, Semple MN, Sanes DH. Effects of spectral and temporal disruption on cortical encoding of gerbil vocalizations. J Neurophysiol 2013; 110:1190-204. [PMID: 23761696] [DOI: 10.1152/jn.00645.2012]
Abstract
Animal communication sounds contain spectrotemporal fluctuations that provide powerful cues for detection and discrimination. Human perception of speech is influenced by both spectral and temporal acoustic features but is most critically dependent on envelope information. To investigate the neural coding principles underlying the perception of communication sounds, we explored the effect of disrupting the spectral or temporal content of five different gerbil call types on neural responses in the awake gerbil's primary auditory cortex (AI). The vocalizations were impoverished spectrally by reduction to 4 or 16 channels of band-passed noise. For this acoustic manipulation, the average firing rate of a neuron did not carry sufficient information to distinguish between call types. In contrast, the discharge patterns of individual AI neurons reliably associated vocalizations composed of only four spectral bands with the appropriate natural token. The pooled responses of small populations of AI cells classified spectrally disrupted and natural calls with an accuracy that paralleled human performance on an analogous speech task. To assess whether the discharge pattern was robust to temporal perturbations of an individual call, vocalizations were disrupted by time-reversing segments of variable duration. For this acoustic manipulation, cortical neurons were relatively insensitive to short reversal lengths. Consistent with human perception of speech, these results indicate that the stable representation of communication sounds in AI is more dependent on sensitivity to slow temporal envelopes than on spectral detail.
Affiliation(s)
- Maria Ter-Mikaelian
- Center for Neural Science, New York University, New York, New York 10003, USA
21
Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain. Hear Res 2013; 305:45-56. [PMID: 23726970] [DOI: 10.1016/j.heares.2013.05.005]
Abstract
The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
22
Yu JJ, Young ED. Frequency response areas in the inferior colliculus: nonlinearity and binaural interaction. Front Neural Circuits 2013; 7:90. [PMID: 23675323] [PMCID: PMC3650518] [DOI: 10.3389/fncir.2013.00090]
Abstract
The tuning, binaural properties, and encoding characteristics of neurons in the central nucleus of the inferior colliculus (CNIC) were investigated to shed light on nonlinearities in the responses of these neurons. Results were analyzed for three types of neurons (I, O, and V) in the CNIC of decerebrate cats. Rate responses to binaural stimuli were characterized using a 1st- plus 2nd-order spectral integration model. Parameters of the model were derived using broadband stimuli with random spectral shapes (RSS). This method revealed four characteristics of CNIC neurons: (1) Tuning curves derived from broadband stimuli have fixed (i.e., level tolerant) bandwidths across a 50-60 dB range of sound levels; (2) 1st-order contralateral weights (particularly for type I and O neurons) were usually larger in magnitude than corresponding ipsilateral weights; (3) contralateral weights were more important than ipsilateral weights when using the model to predict responses to untrained noise stimuli; and (4) 2nd-order weight functions demonstrate frequency selectivity different from that of 1st-order weight functions. Furthermore, while the inclusion of 2nd-order terms in the model usually improved response predictions related to untrained RSS stimuli, they had limited impact on predictions related to other forms of filtered broadband noise [e.g., virtual-space stimuli (VS)]. The accuracy of the predictions varied considerably by response type. Predictions were most accurate for I neurons, and less accurate for O and V neurons, except at the lowest stimulus levels. These differences in prediction performance support the idea that type I, O, and V neurons encode different aspects of the stimulus: while type I neurons are most capable of producing linear representations of spectral shape, type O and V neurons may encode spectral features or temporal stimulus properties in a manner not easily explained with the low-order model. Supported by NIH grant DC00115.
Affiliation(s)
- Eric D. Young
- Center for Hearing and Balance, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
23
Pollak GD. The dominant role of inhibition in creating response selectivities for communication calls in the brainstem auditory system. Hear Res 2013; 305:86-101. [PMID: 23545427] [DOI: 10.1016/j.heares.2013.03.001]
Abstract
This review is concerned with how communication calls are processed and represented by populations of neurons in the inferior colliculus (IC), the auditory midbrain nucleus, and in the dorsal nucleus of the lateral lemniscus (DNLL), the nucleus just caudal to the IC. The review has five sections, each focusing on inhibition and its role in shaping response selectivity for communication calls. The first section presents the lack of response selectivity for calls in DNLL neurons and discusses why inhibition plays virtually no role in shaping selectivity there. In the second section, the lack of selectivity in the DNLL is contrasted with the high degree of response selectivity in the IC. The third section then reviews how inhibition in the IC shapes response selectivities for calls, and how those selectivities can create a population response with a distinctive response profile to a particular call, which differs from the population profile evoked by any other call. The fourth section is concerned with the specifics of inhibition in the IC, and how the interaction of excitation and inhibition creates directional selectivities for frequency modulations, one of the principal acoustic features of communication signals. The two major hypotheses for directional selectivity are presented. One is the timing hypothesis, which holds that the precise timing of excitation relative to inhibition is the feature that shapes directionality. The other hypothesis is that the relative magnitudes of excitation and inhibition are the dominant features that shape directionality, with timing being relatively unimportant. The final section then turns to the role of serotonin, a neuromodulator that can markedly change responses to calls in the IC. Serotonin provides a linkage between behavioral states and processing. This linkage is discussed in the final section together with the hypothesis that serotonin acts to enhance the contrast in the population responses to various calls over and above the distinctive population responses that were created by inhibition. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
Affiliation(s)
- George D Pollak
- Section of Neurobiology and Center for Perceptual Systems, 337 Patterson Laboratory Building, The University of Texas at Austin, Austin, TX 78712, USA.
24
Geis HRAP, Borst JGG. Intracellular responses to frequency modulated tones in the dorsal cortex of the mouse inferior colliculus. Front Neural Circuits 2013; 7:7. [PMID: 23386812] [PMCID: PMC3560375] [DOI: 10.3389/fncir.2013.00007]
Abstract
Frequency modulations occur in many natural sounds, including vocalizations. The neuronal response to frequency modulated (FM) stimuli has been studied extensively in different brain areas, with an emphasis on the auditory cortex and the central nucleus of the inferior colliculus. Here, we measured the responses to FM sweeps in whole-cell recordings from neurons in the dorsal cortex of the mouse inferior colliculus. Both up- and downward logarithmic FM sweeps were presented at two different speeds to both the ipsi- and the contralateral ear. Based on the number of action potentials that were fired, between 10 and 24% of cells were selective for rate or direction of the FM sweeps. A somewhat lower percentage of cells, 6–21%, showed selectivity based on EPSP size. To study the mechanisms underlying the generation of FM selectivity, we compared FM responses with responses to simple tones in the same cells. We found that if pairs of neurons responded in a similar way to simple tones, they generally also responded in a similar way to FM sweeps. Further evidence that FM selectivity can be generated within the dorsal cortex was obtained by reconstructing FM sweeps from the response to simple tones using three different models. In about half of the direction selective neurons the selectivity was generated by spectrally asymmetric synaptic inhibition. In addition, evidence for direction selectivity based on the timing of excitatory responses was also obtained in some cells. No clear evidence for the local generation of rate selectivity was obtained. We conclude that FM direction selectivity can be generated within the dorsal cortex of the mouse inferior colliculus by multiple mechanisms.
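Direction selectivity of the kind scored here is commonly summarized by an index comparing responses to up- and downward sweeps. A sketch using one common convention (the paper's own criteria are based on spike counts and EPSP sizes and may apply different thresholds):

```python
def direction_selectivity_index(rate_up, rate_down):
    """Direction selectivity index for FM sweeps:
    +1 = responds only to upward sweeps, -1 = only downward, 0 = unselective."""
    total = rate_up + rate_down
    if total == 0:
        return 0.0
    return (rate_up - rate_down) / total

print(direction_selectivity_index(12, 4))  # → 0.5 (prefers upward sweeps)
print(direction_selectivity_index(5, 5))   # → 0.0 (unselective)
```

A neuron is then typically called direction selective when the index magnitude exceeds some criterion (the hypothetical 12 and 4 spikes per sweep here are illustrative values, not data from the paper).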
Affiliation(s)
- H-Rüdiger A P Geis
- Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, Netherlands
25
Hurley LM, Sullivan MR. From behavioral context to receptors: serotonergic modulatory pathways in the IC. Front Neural Circuits 2012; 6:58. [PMID: 22973195] [PMCID: PMC3434355] [DOI: 10.3389/fncir.2012.00058]
Abstract
In addition to ascending, descending, and lateral auditory projections, inputs extrinsic to the auditory system also influence neural processing in the inferior colliculus (IC). These types of inputs often have an important role in signaling salient factors such as behavioral context or internal state. One route for such extrinsic information is through centralized neuromodulatory networks like the serotonergic system. Serotonergic inputs to the IC originate from centralized raphe nuclei, release serotonin in the IC, and activate serotonin receptors expressed by auditory neurons. Different types of serotonin receptors act as parallel pathways regulating specific features of circuitry within the IC. This results from variation in subcellular localizations and effector pathways of different receptors, which consequently influence auditory responses in distinct ways. Serotonin receptors may regulate GABAergic inhibition, influence response gain, alter spike timing, or have effects that are dependent on the level of activity. Serotonin receptor types additionally interact in nonadditive ways to produce distinct combinatorial effects. This array of effects of serotonin is likely to depend on behavioral context, since the levels of serotonin in the IC transiently increase during behavioral events including stressful situations and social interaction. These studies support a broad model of serotonin receptors as a link between behavioral context and reconfiguration of circuitry in the IC, and the resulting possibility that plasticity at the level of specific receptor types could alter the relationship between context and circuit function.
Affiliation(s)
- Laura M Hurley
- Department of Biology, Center for the Integrative Study of Animal Behavior, Indiana University Bloomington, IN, USA
|
26
|
Williams AJ, Fuzessery ZM. Multiple mechanisms shape FM sweep rate selectivity: complementary or redundant? Front Neural Circuits 2012; 6:54. [PMID: 22912604 PMCID: PMC3421451 DOI: 10.3389/fncir.2012.00054] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2012] [Accepted: 07/30/2012] [Indexed: 11/16/2022] Open
Abstract
Auditory neurons in the inferior colliculus (IC) of the pallid bat have highly rate-selective responses to downward frequency-modulated (FM) sweeps, attributable to the spectrotemporal pattern of their echolocation call (a brief FM pulse). Several mechanisms are known to shape FM rate selectivity within the pallid bat IC. Here we explore how two mechanisms, stimulus duration and high-frequency inhibition (HFI), can interact to shape FM rate selectivity within the same neuron. Results from extracellular recordings indicated that a derived duration-rate function (based on tonal response) was highly predictive of the shape of the FM rate response. Longpass duration selectivity for tones was predictive of slowpass rate selectivity for FM sweeps, both of which required long stimulus durations and remained intact following iontophoretic blockade of inhibitory input. Bandpass duration selectivity for tones, sensitive to only a narrow range of tone durations, was predictive of bandpass rate selectivity for FM sweeps. Conversion of the tone duration response from bandpass to longpass after blocking inhibition was coincident with a change in FM rate selectivity from bandpass to slowpass, indicating an active inhibitory component in the formation of bandpass selectivity. Independent of the effect of duration tuning on FM rate selectivity, the presence of HFI acted as a fastpass FM rate filter by suppressing slow FM sweep rates. In cases where both mechanisms were present, both had to be eliminated, by removing inhibition, before bandpass FM rate selectivity was affected. It is unknown why the auditory system utilizes multiple mechanisms capable of shaping identical forms of FM rate selectivity; this redundancy may represent distinct but convergent modes of neural signaling directed at shaping response selectivity for biologically relevant sounds.
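The link between duration tuning and rate selectivity in this abstract rests on a simple relation: for a linear FM sweep of fixed bandwidth, sweep rate is inversely proportional to stimulus duration, so a filter on tone duration translates directly into a filter on sweep rate (long durations correspond to slow rates). A minimal toy sketch of that mapping, not taken from the paper itself (function names, thresholds, and the 20 ms cutoff are illustrative assumptions):

```python
# Toy illustration (hypothetical parameters, not from the paper):
# for a linear FM sweep of fixed bandwidth,
#     rate (kHz/ms) = bandwidth (kHz) / duration (ms),
# so a longpass duration filter on tones is equivalent to a
# slowpass rate filter on FM sweeps.

def sweep_rate(bandwidth_khz: float, duration_ms: float) -> float:
    """Sweep rate in kHz/ms of a linear FM sweep."""
    return bandwidth_khz / duration_ms

def longpass_duration(duration_ms: float, min_ms: float = 20.0) -> bool:
    """Longpass duration selectivity: respond only to long stimuli."""
    return duration_ms >= min_ms

def slowpass_rate(bandwidth_khz: float, duration_ms: float,
                  min_ms: float = 20.0) -> bool:
    """The same filter expressed over sweep rate: respond only to
    rates at or below bandwidth / min_ms (i.e., slow sweeps)."""
    return sweep_rate(bandwidth_khz, duration_ms) <= bandwidth_khz / min_ms

# A 40 kHz sweep delivered over 40 ms (1 kHz/ms) passes both filters;
# the same sweep compressed into 10 ms (4 kHz/ms) fails both.
```

The equivalence of the two predicates for any fixed bandwidth is the arithmetic behind the abstract's observation that longpass duration selectivity for tones predicts slowpass rate selectivity for FM sweeps.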
Affiliation(s)
- Anthony J Williams
- Department of Zoology and Physiology, University of Wyoming Laramie, WY, USA
|