1. Peng F, Harper NS, Mishra AP, Auksztulewicz R, Schnupp JWH. Dissociable Roles of the Auditory Midbrain and Cortex in Processing the Statistical Features of Natural Sound Textures. J Neurosci 2024; 44:e1115232023. [PMID: 38267259] [PMCID: PMC10919253] [DOI: 10.1523/jneurosci.1115-23.2023]
Abstract
Sound texture perception takes advantage of a hierarchy of time-averaged statistical features of acoustic stimuli, but much remains unclear about how these statistical features are processed along the auditory pathway. Here, we compared the neural representation of sound textures in the inferior colliculus (IC) and auditory cortex (AC) of anesthetized female rats. We recorded responses to texture morph stimuli that gradually add statistical features of increasingly higher complexity. For each texture, several different exemplars were synthesized using different random seeds. An analysis of transient and ongoing multiunit responses showed that the IC units were sensitive to every type of statistical feature, albeit to a varying extent. In contrast, only a small proportion of AC units were overtly sensitive to any statistical features. Differences in texture types explained more of the variance of IC neural responses than did differences in exemplars, indicating a degree of "texture type tuning" in the IC, but the same was, perhaps surprisingly, not the case for AC responses. We also evaluated the accuracy of texture type classification from single-trial population activity and found that IC responses became more informative as more summary statistics were included in the texture morphs, while for AC population responses, classification performance remained consistently very low. These results argue against the idea that AC neurons encode sound type via an overt sensitivity in neural firing rate to fine-grain spectral and temporal statistical features.
Affiliation(s)
- Fei Peng: Department of Neuroscience, City University of Hong Kong, Hong Kong, China
- Nicol S Harper: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford OX1 2JD, United Kingdom
- Ambika P Mishra: Department of Neuroscience, City University of Hong Kong, Hong Kong, China
- Ryszard Auksztulewicz: Department of Neuroscience, City University of Hong Kong, Hong Kong, China; Center for Cognitive Neuroscience Berlin, Free University Berlin, Berlin 14195, Germany
- Jan W H Schnupp: Department of Neuroscience, City University of Hong Kong, Hong Kong, China

2. Mechanisms of auditory masking in marine mammals. Anim Cogn 2022; 25:1029-1047. [PMID: 36018474] [DOI: 10.1007/s10071-022-01671-z]
Abstract
Anthropogenic noise is an increasing threat to marine mammals that rely on sound for communication, navigation, detecting prey and predators, and finding mates. Auditory masking is one consequence of anthropogenic noise, the study of which is approached from multiple disciplines including field investigations of animal behavior, noise characterization from in-situ recordings, computational modeling of communication space, and hearing experiments conducted in the laboratory. This paper focuses on laboratory hearing experiments applying psychophysical methods, with an emphasis on the mechanisms that govern auditory masking. Topics include tone detection in simple, complex, and natural noise; mechanisms for comodulation masking release and other forms of release from masking; the role of temporal resolution in auditory masking; and energetic vs informational masking.

3. Distinct timescales for the neuronal encoding of vocal signals in a high-order auditory area. Sci Rep 2021; 11:19672. [PMID: 34608248] [PMCID: PMC8490347] [DOI: 10.1038/s41598-021-99135-w]
Abstract
The ability of the auditory system to selectively recognize natural sound categories, while maintaining a certain degree of tolerance towards variations within these categories (variations that may themselves have functional roles), is thought to be crucial for vocal communication. To date, it is still largely unknown how the balance between tolerance and sensitivity to variations in acoustic signals is coded at the neuronal level. Here, we investigate whether neurons in a high-order auditory area of zebra finches, a songbird species, are sensitive to natural variations in vocal signals by recording their responses to repeated exposures to identical and variant sound sequences. We used the songs of male birds, which tend to be highly repetitive with only subtle variations between renditions. When playing these songs to both anesthetized and awake birds, we found that variations between songs did not affect neuronal firing rates but rather the temporal reliability of responses. This suggests that auditory processing operates on a range of distinct timescales: a short one to detect variations in vocal signals, and longer ones that allow the birds to tolerate variations in vocal signal structure and to encode the global context.

4. Wang Z, Chacron MJ. Synergistic population coding of natural communication stimuli by hindbrain electrosensory neurons. Sci Rep 2021; 11:10840. [PMID: 34035395] [PMCID: PMC8149419] [DOI: 10.1038/s41598-021-90413-1]
Abstract
Understanding how neural populations encode natural stimuli with complex spatiotemporal structure to give rise to perception remains a central problem in neuroscience. Here we investigated population coding of natural communication stimuli by hindbrain neurons within the electrosensory system of the weakly electric fish Apteronotus leptorhynchus. Overall, we found that simultaneously recorded neural activities were correlated: signal but not noise correlations varied with the stimulus waveform as well as with the distance between neurons. Combining the neural activities using an equal-weight sum yielded discrimination performance between different stimulus waveforms that was limited by the redundancy introduced by noise correlations. However, using an evolutionary algorithm to assign different weights to individual neurons before combining their activities (i.e., a weighted sum) increased discrimination performance by revealing synergistic interactions between neural activities. Our results thus demonstrate that correlations between the activities of hindbrain electrosensory neurons can enhance information about the structure of natural communication stimuli, allowing reliable discrimination between different waveforms by downstream brain areas.
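To make the equal-weight versus weighted-sum comparison concrete, here is a minimal simulation sketch. It is not the authors' code: the response statistics, the shared-noise model, and the crude random-mutation search standing in for their evolutionary algorithm are all illustrative assumptions.

```python
# Illustrative sketch: equal-weight vs. optimized weighted summation of
# simulated population responses for discriminating two stimulus waveforms.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 8, 200

# Hypothetical single-trial responses to two stimuli, with a shared noise
# source producing noise correlations across neurons.
mean_a = rng.uniform(10, 20, n_neurons)
mean_b = mean_a + rng.uniform(-2, 2, n_neurons)   # weak, mixed-sign signal
resp_a = mean_a + rng.normal(0, 2, (n_trials, 1)) + rng.normal(0, 2, (n_trials, n_neurons))
resp_b = mean_b + rng.normal(0, 2, (n_trials, 1)) + rng.normal(0, 2, (n_trials, n_neurons))

def dprime(w):
    """Discriminability of the weighted population sum between the stimuli."""
    sa, sb = resp_a @ w, resp_b @ w
    return abs(sa.mean() - sb.mean()) / np.sqrt(0.5 * (sa.var() + sb.var()))

equal = np.ones(n_neurons) / n_neurons
best_w, best_d = equal.copy(), dprime(equal)
for _ in range(2000):               # evolutionary search: mutate, keep if better
    cand = best_w + rng.normal(0, 0.1, n_neurons)
    d = dprime(cand)
    if d > best_d:
        best_w, best_d = cand, d

print(f"equal-weight d': {dprime(equal):.2f}   optimized d': {best_d:.2f}")
```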
Affiliation(s)
- Ziqi Wang: Department of Physiology, McGill University, Montreal, Canada

5. Yao JD, Sanes DH. Temporal Encoding is Required for Categorization, But Not Discrimination. Cereb Cortex 2021; 31:2886-2897. [PMID: 33429423] [DOI: 10.1093/cercor/bhaa396]
Abstract
Core auditory cortex (AC) neurons encode slow fluctuations of acoustic stimuli with temporally patterned activity. However, whether temporal encoding is necessary to explain auditory perceptual skills remains uncertain. Here, we recorded from gerbil AC neurons while they discriminated between a 4-Hz amplitude modulation (AM) broadband noise and AM rates >4 Hz. We found that a proportion of neurons possessed neural thresholds, based on either spike pattern or spike count, that were better than the behavioral threshold of the recorded session, suggesting that spike count could provide sufficient information for this perceptual task. A population decoder that relied on temporal information outperformed a decoder that relied on spike count alone, but the spike-count decoder still remained sufficient to explain average behavioral performance. This leaves open the possibility that more demanding perceptual judgments require temporal information. Thus, we asked whether accurate classification of different AM rates between 4 and 12 Hz required the information contained in AC temporal discharge patterns. Indeed, accurate classification of these AM stimuli depended on the inclusion of temporal information rather than spike count alone. Overall, our results compare two different representations of time-varying acoustic features that can be accessed by downstream circuits required for perceptual judgments.
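A minimal sketch of the two decoder types compared here: classifying AM rate from either total spike count or the binned temporal response pattern, with a leave-one-out nearest-centroid rule. The simulated Poisson responses and the centroid classifier are assumed stand-ins, not the authors' decoder.

```python
# Illustrative sketch: spike-count vs. temporal-pattern classification of AM rate.
import numpy as np

rng = np.random.default_rng(1)
rates_hz = [4, 6, 8, 10, 12]          # AM rates to classify
n_trials, dur, dt = 40, 1.0, 0.01     # 10-ms bins
t = np.arange(0, dur, dt)

def simulate(am):
    """Poisson spike counts phase-locked to the AM envelope (same mean count)."""
    lam = 20 * (1 + np.sin(2 * np.pi * am * t)) * dt
    return rng.poisson(lam, (n_trials, t.size))

data = {am: simulate(am) for am in rates_hz}

def classify(feature):
    """Leave-one-out nearest-centroid accuracy for a per-trial feature."""
    feats = {am: np.array([feature(tr) for tr in trials])
             for am, trials in data.items()}
    correct = 0
    for am, f in feats.items():
        for i in range(n_trials):
            cents = {a: (np.delete(g, i, 0) if a == am else g).mean(0)
                     for a, g in feats.items()}
            pred = min(cents, key=lambda a: np.linalg.norm(f[i] - cents[a]))
            correct += pred == am
    return correct / (len(rates_hz) * n_trials)

count_acc = classify(lambda tr: np.array([tr.sum()]))   # rate code: near chance
temporal_acc = classify(lambda tr: tr.astype(float))    # 10-ms pattern code
print(f"spike count: {count_acc:.2f}   temporal pattern: {temporal_acc:.2f}")
```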
Affiliation(s)
- Justin D Yao: Center for Neural Science, New York University, New York, NY 10003, USA
- Dan H Sanes: Center for Neural Science, New York University, New York, NY 10003, USA; Department of Psychology, New York University, New York, NY 10003, USA; Department of Biology, New York University, New York, NY 10003, USA; Neuroscience Institute, NYU Langone Medical Center, New York University, New York, NY 10016, USA

6. Johnson JS, Niwa M, O'Connor KN, Sutter ML. Amplitude modulation encoding in the auditory cortex: comparisons between the primary and middle lateral belt regions. J Neurophysiol 2020; 124:1706-1726. [PMID: 33026929] [DOI: 10.1152/jn.00171.2020]
Abstract
In macaques, the middle lateral auditory cortex (ML) is a belt region adjacent to the primary auditory cortex (A1) and believed to be at a hierarchically higher level. Although ML single-unit responses have been studied for several auditory stimuli, the ability of ML cells to encode amplitude modulation (AM), an ability that has been widely studied in A1, has not yet been characterized. Here, we compared the responses of A1 and ML neurons to amplitude-modulated (AM) noise in awake macaques. Although several of the basic properties of A1 and ML responses to AM noise were similar, we found several key differences. ML neurons were less likely to phase lock, did not phase lock as strongly, and were more likely to respond in a nonsynchronized fashion than A1 cells, consistent with a temporal-to-rate transformation as information ascends the auditory hierarchy. ML neurons tended to have lower temporally based (phase-locking) best modulation frequencies than A1 neurons. Neurons that decreased their firing rate in response to AM noise, relative to their firing rate in response to unmodulated noise, became more common at the level of ML than they were in A1. In both A1 and ML, we found a prevalent class of neurons whose rate responses, relative to responses to unmodulated noise, were typically enhanced at lower modulation frequencies and suppressed at middle modulation frequencies.

NEW & NOTEWORTHY: ML neurons synchronized less than A1 neurons, consistent with a hierarchical temporal-to-rate transformation. Both A1 and ML had a class of modulation transfer functions previously unreported in the cortex, with a low-modulation-frequency (MF) peak, a middle-MF trough, and responses similar to unmodulated-noise responses at high MFs. The results support a hierarchical shift toward a two-pool opponent code, in which subtraction of neural activity between two populations of oppositely tuned neurons encodes AM.
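For reference, a small sketch of two standard AM metrics from this literature: vector strength (phase locking) and one point of a rate-based modulation transfer function. The simulated spike times are assumptions; the 2nVS² Rayleigh statistic with the 13.8 criterion is the conventional significance test for phase locking.

```python
# Illustrative sketch: vector strength and rate response at one AM frequency.
import numpy as np

rng = np.random.default_rng(2)
am_hz, dur = 16.0, 5.0
n_cyc = int(am_hz * dur)

# Simulated spikes: half locked to the AM phase (small jitter), half random.
locked = (np.arange(n_cyc) + rng.normal(0, 0.05, n_cyc)) / am_hz
random = rng.uniform(0, dur, n_cyc)
spikes = np.sort(np.concatenate([locked, random]))

phases = 2 * np.pi * (spikes * am_hz % 1.0)          # spike phase within AM cycle
vs = np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / spikes.size
rayleigh = 2 * spikes.size * vs**2                   # > 13.8 ~ significant locking
rate = spikes.size / dur                             # rate-MTF point at this AM rate

print(f"vector strength: {vs:.2f}   Rayleigh: {rayleigh:.1f}   rate: {rate:.1f} sp/s")
```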
Affiliation(s)
- Jeffrey S Johnson: Center for Neuroscience, University of California, Davis, California
- Mamiko Niwa: Center for Neuroscience, University of California, Davis, California
- Kevin N O'Connor: Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Mitchell L Sutter: Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California

7. Zhu N, Luo H, Zhang J. Evaluating Auditory Neural Activities and Information Transfer Using Phase and Spike Train Correlation Algorithms. IEEE Trans Neural Syst Rehabil Eng 2020; 28:1548-1555. [PMID: 32634093] [DOI: 10.1109/tnsre.2020.2998980]
Abstract
The coherence of neural activities among different areas of the brain has received great attention because it is valuable for understanding the functional mechanisms of brain structures. While many methodologies, such as time-frequency and entropy analyses, have been applied to evaluate relations between neural signals, these techniques have not been effective enough at assessing neural communication to support firm conclusions: across measurement settings, their results can be influenced by the type of neural signal and its amplitude, which affects their reliability and consistency. In this study, we introduce two new methods, phase-phase and spike train correlations, to analyze neural signal communication among various areas of the brain, aiming to decipher neural information transfer between brain structures of normal rats and of rats with noise-induced tinnitus, a ringing sensation in the ear or head. To test the proposed methodologies, a set of electrophysiological recordings of tinnitus-related spontaneous activity was conducted in the auditory cortex (AC), inferior colliculus (IC), and dorsal cochlear nucleus (DCN). Results from the two proposed algorithms were compared with those obtained by the transfer entropy (TE) method on the same experimental data set. Both algorithms yielded results on a consistent scale from zero to one, indicating the strength of correlation, and showed trends similar to the TE results. The experimental results in rats showed information flow within and between most structures, with stronger correlations at lower frequencies.
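The abstract does not spell out the algorithms, so the following is an assumed minimal form of a spike-train correlation on the described zero-to-one scale: smooth both trains, scan the normalized cross-correlation over lags, and report the peak. All spike data are simulated.

```python
# Illustrative sketch: a [0, 1]-scaled spike-train correlation between two sites.
import numpy as np

rng = np.random.default_rng(3)
dt, dur = 0.001, 10.0                  # 1-ms bins, 10 s of activity
n = int(dur / dt)

# Simulated spontaneous activity at two sites with a partially shared drive;
# site b lags the shared drive by 5 ms.
drive = rng.random(n) < 0.02
a = (drive | (rng.random(n) < 0.01)).astype(float)
b = (np.roll(drive, 5) | (rng.random(n) < 0.01)).astype(float)

kernel = np.exp(-np.arange(0, 0.05, dt) / 0.01)     # ~10-ms smoothing kernel
sa = np.convolve(a, kernel, mode="same")
sb = np.convolve(b, kernel, mode="same")

# Peak of the normalized cross-correlogram over +/-50 ms lags.
lags = range(-50, 51)
cc = [np.corrcoef(np.roll(sa, k), sb)[0, 1] for k in lags]
strength = max(0.0, max(cc))                        # correlation strength in [0, 1]
print(f"peak correlation {strength:.2f} at lag {list(lags)[int(np.argmax(cc))]} ms")
```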

8. Spiking network optimized for word recognition in noise predicts auditory system hierarchy. PLoS Comput Biol 2020; 16:e1007558. [PMID: 32559204] [PMCID: PMC7329140] [DOI: 10.1371/journal.pcbi.1007558]
Abstract
The auditory neural code is resilient to acoustic variability and capable of recognizing sounds amongst competing sound sources, yet the transformations enabling these noise-robust abilities are largely unknown. We report that a hierarchical spiking neural network (HSNN), optimized to maximize word recognition accuracy in noise and across multiple talkers, predicts the organizational hierarchy of the ascending auditory pathway. Comparisons with data from the auditory nerve, midbrain, thalamus, and cortex reveal that the optimal HSNN reproduces several transformations of the ascending auditory pathway, including a sequential loss of temporal resolution and synchronization ability and increasing sparseness and selectivity. The optimal organizational scheme enhances performance by selectively filtering out noise and fast temporal cues, such as voicing periodicity, that are not directly relevant to the word recognition task. An identical network arranged to enable high information transfer fails to predict auditory pathway organization and has substantially poorer performance. Furthermore, conventional single-layer linear and nonlinear receptive field networks that capture the overall feature extraction of the HSNN fail to achieve similar performance. The findings suggest that the auditory pathway hierarchy and its sequential nonlinear feature extraction computations enhance relevant cues while removing non-informative sources of noise, thus enhancing the representation of sounds in noise-impoverished conditions.

The brain's ability to recognize sounds in the presence of competing sounds or background noise is essential for everyday hearing tasks. How the brain accomplishes noise resiliency, however, is poorly understood. Using neural recordings from the ascending auditory pathway and an auditory spiking network model trained for sound recognition in noise, we explore the computational strategies that enable noise robustness. Our results suggest that the hierarchical feature organization of the ascending auditory pathway, and the resulting computations, are critical for sound recognition in the presence of noise.

9. Noise-Sensitive But More Precise Subcortical Representations Coexist with Robust Cortical Encoding of Natural Vocalizations. J Neurosci 2020; 40:5228-5246. [PMID: 32444386] [DOI: 10.1523/jneurosci.2731-19.2020]
Abstract
Humans and animals maintain accurate sound discrimination in the presence of loud sources of background noise. It is commonly assumed that this ability relies on the robustness of auditory cortex responses. However, only a few attempts have been made to characterize neural discrimination of communication sounds masked by noise at each stage of the auditory system and to quantify the noise effects on neuronal discrimination in terms of alterations in amplitude modulations. Here, we measured neural discrimination between communication sounds masked by a vocalization-shaped stationary noise from multiunit responses recorded in the cochlear nucleus, inferior colliculus, auditory thalamus, and primary and secondary auditory cortex at several signal-to-noise ratios (SNRs) in anesthetized male or female guinea pigs. Masking noise decreased sound discrimination by neuronal populations in each auditory structure, but collicular and thalamic populations showed better performance than cortical populations at each SNR. In contrast, in each auditory structure, discrimination by neuronal populations was only slightly decreased when tone-vocoded vocalizations were tested. These results shed new light on the specific contributions of subcortical structures to robust sound encoding, and suggest that the distortion of slow amplitude modulation cues conveyed by communication sounds is one of the factors constraining neuronal discrimination at subcortical and cortical levels.

SIGNIFICANCE STATEMENT: Dissecting how auditory neurons discriminate communication sounds in noise is a major goal in auditory neuroscience. Robust sound coding in noise is often viewed as a specific property of cortical networks, although this remains to be demonstrated. Here, we tested the discrimination performance of neuronal populations at five levels of the auditory system in response to conspecific vocalizations masked by noise. In each acoustic condition, subcortical neurons discriminated target vocalizations better than cortical ones, and in each structure the reduction in discrimination performance was related to the reduction in slow amplitude modulation cues.

10.
Abstract
Natural sounds contain acoustic dynamics ranging from tens to hundreds of milliseconds. How does the human auditory system encode acoustic information over wide-ranging timescales to achieve sound recognition? Previous work (Teng et al. 2017) demonstrated a temporal coding preference for the theta and gamma ranges, but it remains unclear how acoustic dynamics between these two ranges are coded. Here, we generated artificial sounds with temporal structures over timescales from ~200 to ~30 ms and investigated temporal coding on different timescales. Participants discriminated sounds with temporal structures at different timescales while undergoing magnetoencephalography recording. Although considerable intertrial phase coherence can be induced by acoustic dynamics of all the timescales, classification analyses reveal that the acoustic information of all timescales is preferentially differentiated through the theta and gamma bands, but not through the alpha and beta bands; stimulus reconstruction shows that the acoustic dynamics in the theta and gamma ranges are preferentially coded. We demonstrate that the theta and gamma bands show the generality of temporal coding with comparable capacity. Our findings provide a novel perspective: acoustic information of all timescales is discretised into two discrete temporal chunks for further perceptual analysis.
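A minimal sketch of intertrial phase coherence (ITPC), the measure referred to here, computed per frequency band from band-limited analytic signals. It assumes SciPy is available; the simulated trials (a shared 5-Hz component plus noise) stand in for the MEG sensor time series used in the study.

```python
# Illustrative sketch: intertrial phase coherence in theta vs. alpha bands.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(4)
fs, dur, n_trials = 500, 2.0, 60
t = np.arange(0, dur, 1 / fs)

# Trials share a 5-Hz (theta-range) component with consistent phase, plus noise.
trials = np.sin(2 * np.pi * 5 * t) + rng.normal(0, 2, (n_trials, t.size))

def itpc(x, band):
    """Length of the mean phase vector across trials, per time point."""
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], "bandpass")
    phase = np.angle(hilbert(filtfilt(b, a, x, axis=1), axis=1))
    return np.abs(np.exp(1j * phase).mean(axis=0))

print("theta ITPC:", itpc(trials, (4, 7)).mean().round(2))   # high: locked phase
print("alpha ITPC:", itpc(trials, (8, 12)).mean().round(2))  # low: noise floor
```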
Affiliation(s)
- Xiangbin Teng: Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, 60322 Frankfurt, Germany
- David Poeppel: Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, 60322 Frankfurt, Germany; Department of Psychology, New York University, New York, NY 10003, USA

11. Sihn D, Kim SP. A Spike Train Distance Robust to Firing Rate Changes Based on the Earth Mover's Distance. Front Comput Neurosci 2020; 13:82. [PMID: 31920607] [PMCID: PMC6914768] [DOI: 10.3389/fncom.2019.00082]
Abstract
Neural spike train analysis methods are mainly used for understanding the temporal aspects of neural information processing. One approach is to measure the dissimilarity between the spike trains of a pair of neurons, often referred to as the spike train distance, which is frequently used to classify neuronal units with similar temporal patterns. Several methods for computing spike train distance have been developed. Intuitively, a desirable distance should be the shortest length between two objects. The Earth Mover's Distance (EMD) computes spike train distance as the genuine shortest length between two spike trains, obtained by shifting fractions of spikes from one train to the other. The Victor and Purpura (1996) distance likewise measures a minimum transformation cost between two spike trains, but its output varies with a time-scale parameter, whereas the EMD requires no such parameter. The EMD accurately captures spike timing differences, temporal similarity, and spike time synchrony; it outperforms existing spike train distance methods in measuring various aspects of the temporal characteristics of spike trains; and it measures the shortest length between spike trains without being considerably affected by the overall firing rate difference between them. This robustness to firing rate changes makes it suitable for the pure temporal coding that is the predominant premise underlying the present study.
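For one-dimensional spike-time distributions of equal total mass, the EMD reduces to the area between the two normalized cumulative spike-count functions. The sketch below uses that identity; it is an assumed minimal form, not the authors' implementation, and illustrates the robustness to firing rate changes that the abstract emphasizes.

```python
# Illustrative sketch: EMD between spike trains via cumulative distributions.
import numpy as np

def spike_train_emd(s1, s2, t_end, dt=0.001):
    """EMD between two spike trains, each treated as a distribution of mass 1."""
    grid = np.arange(0.0, t_end, dt)
    cdf1 = np.searchsorted(np.sort(s1), grid) / len(s1)  # fraction of spikes <= t
    cdf2 = np.searchsorted(np.sort(s2), grid) / len(s2)
    return np.abs(cdf1 - cdf2).sum() * dt                # area between the CDFs

a = np.array([0.10, 0.30, 0.50, 0.70])
b = a + 0.02                                             # same pattern, shifted 20 ms
c = np.sort(np.concatenate([a, a + 0.02]))               # doubled firing rate

print(spike_train_emd(a, b, 1.0))   # ~0.02: the mean spike shift
print(spike_train_emd(a, c, 1.0))   # small: insensitive to the 2x rate change
```

Normalizing each train to unit mass is what makes the measure insensitive to overall spike count, which is the property motivating its use for pure temporal coding.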
Affiliation(s)
- Duho Sihn: Department of Human Factors Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan, South Korea
- Sung-Phil Kim: Department of Human Factors Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan, South Korea

12. Gourévitch B, Mahrt EJ, Bakay W, Elde C, Portfors CV. GABAA receptors contribute more to rate than temporal coding in the IC of awake mice. J Neurophysiol 2020; 123:134-148. [PMID: 31721644] [DOI: 10.1152/jn.00377.2019]
Abstract
Speech is our most important form of communication, yet we have a poor understanding of how communication sounds are processed by the brain. Mice make great model organisms to study neural processing of communication sounds because of their rich repertoire of social vocalizations and because they have brain structures analogous to humans, such as the auditory midbrain nucleus inferior colliculus (IC). Although the combined roles of GABAergic and glycinergic inhibition on vocalization selectivity in the IC have been studied to a limited degree, the discrete contributions of GABAergic inhibition have only rarely been examined. In this study, we examined how GABAergic inhibition contributes to shaping responses to pure tones as well as selectivity to complex sounds in the IC of awake mice. In our set of long-latency neurons, we found that GABAergic inhibition extends the evoked firing rate range of IC neurons by lowering the baseline firing rate but maintaining the highest probability of firing rate. GABAergic inhibition also prevented IC neurons from bursting in a spontaneous state. Finally, we found that although GABAergic inhibition shaped the spectrotemporal response to vocalizations in a nonlinear fashion, it did not affect the neural code needed to discriminate vocalizations, based either on spiking patterns or on firing rate. Overall, our results emphasize that even if GABAergic inhibition generally decreases the firing rate, it does so while maintaining or extending the abilities of neurons in the IC to code the wide variety of sounds that mammals are exposed to in their daily lives.

NEW & NOTEWORTHY: GABAergic inhibition adds nonlinearity to neuronal response curves. This increases the neuronal range of evoked firing rate by reducing baseline firing. GABAergic inhibition prevents bursting responses from neurons in a spontaneous state, reducing noise in the temporal coding of the neuron. This could result in improved signal transmission to the cortex.
Affiliation(s)
- Boris Gourévitch: Institut de l'Audition, Institut Pasteur, INSERM, Sorbonne Université, F-75012 Paris, France; CNRS, France
- Elena J Mahrt: School of Biological Sciences, Washington State University, Vancouver, Washington
- Warren Bakay: Institut de l'Audition, Institut Pasteur, INSERM, Sorbonne Université, F-75012 Paris, France
- Cameron Elde: School of Biological Sciences, Washington State University, Vancouver, Washington
- Christine V Portfors: School of Biological Sciences, Washington State University, Vancouver, Washington

13. Sadeghi M, Zhai X, Stevenson IH, Escabí MA. A neural ensemble correlation code for sound category identification. PLoS Biol 2019; 17:e3000449. [PMID: 31574079] [PMCID: PMC6788721] [DOI: 10.1371/journal.pbio.3000449]
Abstract
Humans and other animals effortlessly identify natural sounds and categorize them into behaviorally relevant categories. Yet, the acoustic features and neural transformations that enable sound recognition and the formation of perceptual categories are largely unknown. Here, using multichannel neural recordings in the auditory midbrain of unanesthetized female rabbits, we first demonstrate that neural ensemble activity in the auditory midbrain displays highly structured correlations that vary with distinct natural sound stimuli. These stimulus-driven correlations can be used to accurately identify individual sounds using single-response trials, even when the sounds do not differ in their spectral content. Combining neural recordings and an auditory model, we then show how correlations between frequency-organized auditory channels can contribute to discrimination of not just individual sounds but sound categories. For both the model and neural data, spectral and temporal correlations achieved similar categorization performance and appear to contribute equally. Moreover, both the neural and model classifiers achieve their best task performance when they accumulate evidence over a time frame of approximately 1-2 seconds, mirroring human perceptual trends. These results together suggest that time-frequency correlations in sounds may be reflected in the correlations between auditory midbrain ensembles and that these correlations may play an important role in the identification and categorization of natural sounds.
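A toy sketch in the spirit of the correlation code described here: each sound is represented by the pairwise correlations between frequency-organized response channels, and categories are read out by correlation-pattern similarity. The channel signals and category structure are simulated placeholders, not the authors' model or data.

```python
# Illustrative sketch: classifying sounds from cross-channel correlation patterns.
import numpy as np

rng = np.random.default_rng(5)
n_ch, n_t = 12, 2000
mix_a = rng.normal(size=(n_ch, n_ch))   # category A's cross-channel structure
mix_b = rng.normal(size=(n_ch, n_ch))   # category B's cross-channel structure

def exemplar(mix):
    """New exemplar: same category structure, fresh random 'source' signal."""
    return mix @ rng.normal(size=(n_ch, n_t))

def corr_feature(x):
    """Vectorized upper triangle of the channel correlation matrix."""
    return np.corrcoef(x)[np.triu_indices(n_ch, k=1)]

tmpl_a = corr_feature(exemplar(mix_a))  # one training exemplar per category
tmpl_b = corr_feature(exemplar(mix_b))

hits = 0
for mix, label in [(mix_a, "A")] * 10 + [(mix_b, "B")] * 10:
    f = corr_feature(exemplar(mix))
    pred = "A" if np.corrcoef(f, tmpl_a)[0, 1] > np.corrcoef(f, tmpl_b)[0, 1] else "B"
    hits += pred == label
print(f"correlation-code accuracy: {hits}/20")
```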
Affiliation(s)
- Mina Sadeghi: Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Xiu Zhai: Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut, United States of America; Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, United States of America
- Ian H. Stevenson: Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, United States of America; Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, United States of America
- Monty A. Escabí: Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut, United States of America; Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, United States of America; Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, United States of America

14. A Physiologically Inspired Model for Solving the Cocktail Party Problem. J Assoc Res Otolaryngol 2019; 20:579-593. [PMID: 31392449] [PMCID: PMC6889086] [DOI: 10.1007/s10162-019-00732-4]
Abstract
At a cocktail party, we can broadly monitor the entire acoustic scene to detect important cues (e.g., our names being called, or the fire alarm going off), or selectively listen to a target sound source (e.g., a conversation partner). It has recently been observed that individual neurons in the avian field L (analog to the mammalian auditory cortex) can display broad spatial tuning to single targets and selective tuning to a target embedded in spatially distributed sound mixtures. Here, we describe a model inspired by these experimental observations and apply it to process mixtures of human speech sentences. This processing is realized in the neural spiking domain. It converts binaural acoustic inputs into cortical spike trains using a multi-stage model composed of a cochlear filter-bank, a midbrain spatial-localization network, and a cortical network. The output spike trains of the cortical network are then converted back into an acoustic waveform, using a stimulus reconstruction technique. The intelligibility of the reconstructed output is quantified using an objective measure of speech intelligibility. We apply the algorithm to single and multi-talker speech to demonstrate that the physiologically inspired algorithm is able to achieve intelligible reconstruction of an “attended” target sentence embedded in two other non-attended masker sentences. The algorithm is also robust to masker level and displays performance trends comparable to humans. The ideas from this work may help improve the performance of hearing assistive devices (e.g., hearing aids and cochlear implants), speech-recognition technology, and computational algorithms for processing natural scenes cluttered with spatially distributed acoustic objects.

15. Noel A, Makrakis D, Eckford AW. Distortion Distribution of Neural Spike Train Sequence Matching With Optogenetics. IEEE Trans Biomed Eng 2018; 65:2814-2826. [PMID: 29993402] [DOI: 10.1109/tbme.2018.2819200]
Abstract
Objective: This paper uses a simple optogenetic model to compare the timing distortion between a randomly generated target spike sequence and an externally stimulated neuron spike sequence. Optogenetics is an emerging field of neuroscience in which neurons are genetically modified to express light-sensitive receptors that enable external control of when the neurons fire.
Methods: Two different measures are studied to determine the timing distortion. The first measure is the delay in externally stimulated spikes. The second measure is the root-mean-square error between the filtered outputs of the target and stimulated spike sequences.
Results: The mean and the distribution of the distortion are derived in closed form when the target sequence generation rate is sufficiently low. The derived results are verified with simulations.
Conclusion: The proposed model and distortion measures can be used to quantify the deviation between prescribed neuron spike sequences and those achievable via external stimulation.
Significance: Given the prominence of neuronal signaling within the brain and throughout the body, optogenetics has significant potential to improve the understanding of the nervous system and to develop treatments for neurological diseases. This work is a step towards an analytical model to predict whether different spike trains were observed from the same external stimulus, and towards the broader goal of understanding the quantity and reliability of information that can be carried by neurons.
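A minimal sketch of the second distortion measure as the abstract describes it: the root-mean-square error between filtered versions of a target and a stimulated spike sequence. The exponential kernel and the spike times are assumptions for illustration.

```python
# Illustrative sketch: RMSE distortion between filtered spike sequences.
import numpy as np

def filtered(spikes, t_end, dt=0.001, tau=0.01):
    """Spike train convolved with an exponential (synaptic-like) kernel."""
    train = np.zeros(int(t_end / dt))
    train[(np.asarray(spikes) / dt).astype(int)] = 1.0
    kernel = np.exp(-np.arange(0, 5 * tau, dt) / tau)
    return np.convolve(train, kernel)[: train.size]

target = [0.020, 0.135, 0.300, 0.480]                 # prescribed spike times (s)
stimulated = [0.026, 0.140, 0.309, 0.488]             # externally evoked, delayed

f_t, f_s = filtered(target, 0.6), filtered(stimulated, 0.6)
rmse = np.sqrt(np.mean((f_t - f_s) ** 2))
print(f"RMSE distortion: {rmse:.4f}")
```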

16. Yao JD, Sanes DH. Developmental deprivation-induced perceptual and cortical processing deficits in awake-behaving animals. eLife 2018; 7:e33891. [PMID: 29873632] [PMCID: PMC6005681] [DOI: 10.7554/eLife.33891]
Abstract
Sensory deprivation during development induces lifelong changes to central nervous system function that are associated with perceptual impairments. However, the relationship between neural and behavioral deficits is uncertain due to a lack of simultaneous measurements during task performance. Therefore, we telemetrically recorded from auditory cortex neurons in gerbils reared with developmental conductive hearing loss as they performed an auditory task in which they detected rapid fluctuations in amplitude. These data were compared to a measure of auditory brainstem temporal processing from each animal. We found that developmental hearing loss diminished behavioral performance but did not alter brainstem temporal processing. However, the simultaneous assessment of neural and behavioral processing revealed that perceptual deficits were associated with a degraded cortical population code that could be explained by greater trial-to-trial response variability. Our findings suggest that the perceptual limitations that attend early hearing loss are best explained by an encoding deficit in auditory cortex.
Affiliation(s)
- Justin D Yao: Center for Neural Science, New York University, New York, United States
- Dan H Sanes: Center for Neural Science, New York University, New York, United States; Department of Psychology, New York University, New York, United States; Department of Biology, New York University, New York, United States; Neuroscience Institute, NYU Langone Medical Center, New York, United States

17.
Abstract
The brain has no direct access to physical stimuli but only to the spiking activity evoked in sensory organs. It is unclear how the brain can learn representations of the stimuli based on those noisy, correlated responses alone. Here we show how to build an accurate distance map of responses solely from the structure of the population activity of retinal ganglion cells. We introduce the Temporal Restricted Boltzmann Machine to learn the spatiotemporal structure of the population activity and use this model to define a distance between spike trains. We show that this metric outperforms existing neural distances at discriminating pairs of stimuli that are barely distinguishable. The proposed method provides a generic and biologically plausible way to learn to associate similar stimuli based on their spiking responses, without any other knowledge of these stimuli.
Affiliation(s)
- Christophe Gardella: Laboratoire de physique statistique, Centre National de la Recherche Scientifique, Sorbonne University, University Paris-Diderot, École normale supérieure, PSL University, 75005 Paris, France; Institut de la Vision, Institut National de la Santé et de la Recherche Médicale, Centre National de la Recherche Scientifique, Sorbonne University, 75012 Paris, France
- Olivier Marre: Institut de la Vision, Institut National de la Santé et de la Recherche Médicale, Centre National de la Recherche Scientifique, Sorbonne University, 75012 Paris, France
- Thierry Mora: Laboratoire de physique statistique, Centre National de la Recherche Scientifique, Sorbonne University, University Paris-Diderot, École normale supérieure, PSL University, 75005 Paris, France

18. Van Ruijssevelt L, Chen Y, von Eugen K, Hamaide J, De Groof G, Verhoye M, Güntürkün O, Woolley SC, Van der Linden A. fMRI Reveals a Novel Region for Evaluating Acoustic Information for Mate Choice in a Female Songbird. Curr Biol 2018; 28:711-721.e6. [PMID: 29478859] [DOI: 10.1016/j.cub.2018.01.048]
Abstract
Selection of sexual partners is among the most critical decisions that individuals make and is therefore strongly shaped by evolution. In social species, where communication signals can convey substantial information about the identity, state, or quality of the signaler, accurate interpretation of communication signals for mate choice is crucial. Despite the importance of social information processing, to date, relatively little is known about the neurobiological mechanisms that contribute to sexual decision making and preferences. In this study, we used a combination of whole-brain functional magnetic resonance imaging (fMRI), immediate early gene expression, and behavior tests to identify the circuits that are important for the perception and evaluation of courtship songs in a female songbird, the zebra finch (Taeniopygia guttata). Female zebra finches are sensitive to subtle differences in male song performance and strongly prefer the longer, faster, and more stereotyped courtship songs to non-courtship renditions. Using BOLD fMRI and EGR1 expression assays, we uncovered a novel region involved in auditory perceptual decision making located in a sensory integrative region of the avian central nidopallium outside the traditionally studied auditory forebrain pathways. Changes in activity in this region in response to acoustically similar but categorically divergent stimuli showed stronger parallels to behavioral responses than an auditory sensory region. These data highlight a potential role for the caudocentral nidopallium (NCC) as a novel node in the avian circuitry underlying the evaluation of acoustic signals and their use in mate choice.
Affiliation(s)
- Lisbeth Van Ruijssevelt: Bio-Imaging lab, Department of Biomedical Sciences, University of Antwerp, 2610 Antwerpen, Belgium
- Yining Chen: Department of Biology, McGill University, Montreal QC H3A 1B1, Canada
- Kaya von Eugen: AE Biopsychologie, Fakultät für Psychologie, Ruhr-Universität Bochum, 44801 Bochum, Germany
- Julie Hamaide: Bio-Imaging lab, Department of Biomedical Sciences, University of Antwerp, 2610 Antwerpen, Belgium
- Geert De Groof: Bio-Imaging lab, Department of Biomedical Sciences, University of Antwerp, 2610 Antwerpen, Belgium
- Marleen Verhoye: Bio-Imaging lab, Department of Biomedical Sciences, University of Antwerp, 2610 Antwerpen, Belgium
- Onur Güntürkün: AE Biopsychologie, Fakultät für Psychologie, Ruhr-Universität Bochum, 44801 Bochum, Germany
- Sarah C Woolley: Department of Biology, McGill University, Montreal QC H3A 1B1, Canada
- Annemie Van der Linden: Bio-Imaging lab, Department of Biomedical Sciences, University of Antwerp, 2610 Antwerpen, Belgium

19. Teng X, Tian X, Rowland J, Poeppel D. Concurrent temporal channels for auditory processing: Oscillatory neural entrainment reveals segregation of function at different scales. PLoS Biol 2017; 15:e2000812. [PMID: 29095816] [PMCID: PMC5667736] [DOI: 10.1371/journal.pbio.2000812]
Abstract
Natural sounds convey perceptually relevant information over multiple timescales, and the necessary extraction of multi-timescale information requires the auditory system to work over distinct ranges. The simplest hypothesis suggests that temporal modulations are encoded in an equivalent manner within a reasonable intermediate range. We show that the human auditory system selectively and preferentially tracks acoustic dynamics concurrently at 2 timescales corresponding to the neurophysiological theta band (4-7 Hz) and gamma band ranges (31-45 Hz) but, contrary to expectation, not at the timescale corresponding to alpha (8-12 Hz), which has also been found to be related to auditory perception. Listeners heard synthetic acoustic stimuli with temporally modulated structures at 3 timescales (approximately 190-, approximately 100-, and approximately 30-ms modulation periods) and identified the stimuli while undergoing magnetoencephalography recording. There was strong intertrial phase coherence in the theta band for stimuli of all modulation rates and in the gamma band for stimuli with corresponding modulation rates. The alpha band did not respond in a similar manner. Classification analyses also revealed that oscillatory phase reliably tracked temporal dynamics but not equivalently across rates. Finally, mutual information analyses quantifying the relation between phase and cochlear-scaled correlations also showed preferential processing in 2 distinct regimes, with the alpha range again yielding different patterns. The results support the hypothesis that the human auditory system employs (at least) a 2-timescale processing mode, in which lower and higher perceptual sampling scales are segregated by an intermediate temporal regime in the alpha band that likely reflects different underlying computations.
Affiliation(s)
- Xing Tian: New York University Shanghai, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science, NYU Shanghai, Shanghai, China
- Jess Rowland: School of Visual Arts, New York, New York, United States of America; Department of Psychology, New York University, New York, New York, United States of America
- David Poeppel: Max-Planck-Institute, Frankfurt, Germany; Department of Psychology, New York University, New York, New York, United States of America

20. Auditory evoked BOLD responses in awake compared to lightly anaesthetized zebra finches. Sci Rep 2017; 7:13563. [PMID: 29051552] [PMCID: PMC5648849] [DOI: 10.1038/s41598-017-13014-x]
Abstract
Functional magnetic resonance imaging (fMRI) is increasingly used in cognitive neuroscience and has become a valuable tool in the study of auditory processing in zebra finches, a well-established model of learned vocal communication. Due to its sensitivity to head motion, most fMRI studies in animals are performed in anaesthetized conditions, which might significantly impact neural activity evoked by stimuli and cognitive tasks. In this study, we (1) demonstrate the feasibility of fMRI in awake zebra finches and (2) explore how light anaesthesia regimes affect auditory-evoked BOLD responses to biologically relevant songs. After an acclimation procedure, we show that fMRI can be successfully performed during wakefulness, enabling the detection of reproducible BOLD responses to sound. Additionally, two light anaesthesia protocols were tested (isoflurane and a combination of medetomidine and isoflurane), of which isoflurane alone appeared to be the most promising given the high success rate, non-invasive induction, and quick recovery. By comparing auditory evoked BOLD responses in awake versus lightly anaesthetized conditions, we observed overall effects of anaesthetics on cerebrovascular reactivity as reflected in the extent of positive and negative BOLD responses. Further, our results indicate that light anaesthesia has limited effects on selective BOLD responses to natural versus synthetic sounds.

21. Malvestio I, Kreuz T, Andrzejak RG. Robustness and versatility of a nonlinear interdependence method for directional coupling detection from spike trains. Phys Rev E 2017; 96:022203. [PMID: 28950642] [DOI: 10.1103/physreve.96.022203]
Abstract
The detection of directional couplings between dynamics based on measured spike trains is a crucial problem in the understanding of many different systems. In particular, in neuroscience it is important to assess the connectivity between neurons. One of the approaches that can estimate directional coupling from the analysis of point processes is the nonlinear interdependence measure L. Although its efficacy has already been demonstrated, it still needs to be tested under more challenging and realistic conditions prior to an application to real data. Thus, in this paper we use the Hindmarsh-Rose model system to test the method in the presence of noise and for different spiking regimes. We also examine the influence of different parameters and spike train distances. Our results show that the measure L is versatile and robust to various types of noise, and thus suitable for application to experimental data.
Affiliation(s)
- Irene Malvestio: Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08018 Barcelona, Spain; Department of Physics and Astronomy, University of Florence, 50119 Sesto Fiorentino, Italy; Institute for Complex Systems, CNR, 50119 Sesto Fiorentino, Italy
- Thomas Kreuz: Institute for Complex Systems, CNR, 50119 Sesto Fiorentino, Italy
- Ralph G Andrzejak: Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08018 Barcelona, Spain; Institute for Bioengineering of Catalonia (IBEC), The Barcelona Institute of Science and Technology, 08028 Barcelona, Spain

22. A Decline in Response Variability Improves Neural Signal Detection during Auditory Task Performance. J Neurosci 2017; 36:11097-11106. [PMID: 27798189] [DOI: 10.1523/jneurosci.1302-16.2016]
Abstract
The detection of a sensory stimulus arises from a significant change in neural activity, but a sensory neuron's response is rarely identical to successive presentations of the same stimulus. Large trial-to-trial variability would limit the central nervous system's ability to reliably detect a stimulus, presumably affecting perceptual performance. However, if response variability were to decrease while firing rate remained constant, then neural sensitivity could improve. Here, we asked whether engagement in an auditory detection task can modulate response variability, thereby increasing neural sensitivity. We recorded telemetrically from the core auditory cortex of gerbils, both while they engaged in an amplitude-modulation detection task and while they sat quietly listening to the identical stimuli. Using a signal detection theory framework, we found that neural sensitivity was improved during task performance, and this improvement was closely associated with a decrease in response variability. Moreover, units with the greatest change in response variability had absolute neural thresholds most closely aligned with simultaneously measured perceptual thresholds. Our findings suggest that the limitations imposed by response variability diminish during task performance, thereby improving the sensitivity of neural encoding and potentially leading to better perceptual sensitivity.

SIGNIFICANCE STATEMENT: The detection of a sensory stimulus arises from a significant change in neural activity. However, trial-to-trial variability of the neural response may limit perceptual performance. If the neural response to a stimulus is quite variable, then the response on a given trial could be confused with the pattern of neural activity generated when the stimulus is absent. Therefore, a neural mechanism that served to reduce response variability would allow for better stimulus detection. By recording from the cortex of freely moving animals engaged in an auditory detection task, we found that variability of the neural response becomes smaller during task performance, thereby improving neural detection thresholds.
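A small sketch of the signal detection theory framework mentioned here: neural d' computed from single-trial spike counts, showing how a drop in response variability alone raises sensitivity when mean rates are unchanged. All numbers are simulated, not the study's data.

```python
# Illustrative sketch: neural d' from spike counts under two variability regimes.
import numpy as np

rng = np.random.default_rng(6)

def neural_dprime(mu_noise, mu_signal, sd, n=500):
    """d' between 'stimulus absent' and 'stimulus present' count distributions."""
    noise = rng.normal(mu_noise, sd, n)     # counts on stimulus-absent trials
    signal = rng.normal(mu_signal, sd, n)   # counts on stimulus-present trials
    pooled = np.sqrt(0.5 * (noise.var() + signal.var()))
    return (signal.mean() - noise.mean()) / pooled

# Same mean rates; only the trial-to-trial variability differs.
print(f"passive (sd=6): d' = {neural_dprime(20, 24, 6):.2f}")
print(f"task-engaged (sd=3): d' = {neural_dprime(20, 24, 3):.2f}")
```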

23. Single Neurons in the Avian Auditory Cortex Encode Individual Identity and Propagation Distance in Naturally Degraded Communication Calls. J Neurosci 2017; 37:3491-3510. [PMID: 28235893] [PMCID: PMC5373131] [DOI: 10.1523/jneurosci.2220-16.2017]
Abstract
One of the most complex tasks performed by sensory systems is "scene analysis": the interpretation of complex signals as behaviorally relevant objects. The study of this problem, universal to species and sensory modalities, is particularly challenging in audition, where sounds from various sources and localizations, degraded by propagation through the environment, sum to form a single acoustical signal. Here we investigated in a songbird model, the zebra finch, the neural substrate for ranging and identifying a single source. We relied on ecologically and behaviorally relevant stimuli, contact calls, to investigate the neural discrimination of individual vocal signature as well as sound source distance when calls have been degraded through propagation in a natural environment. Performing electrophysiological recordings in anesthetized birds, we found neurons in the auditory forebrain that discriminate individual vocal signatures despite long-range degradation, as well as neurons discriminating propagation distance, with varying degrees of multiplexing between both information types. Moreover, the neural discrimination performance of individual identity was not affected by propagation-induced degradation beyond what was induced by the decreased intensity. For the first time, neurons with distance-invariant identity discrimination properties as well as distance-discriminant neurons are revealed in the avian auditory cortex. Because these neurons were recorded in animals that had prior experience neither with the vocalizers of the stimuli nor with long-range propagation of calls, we suggest that this neural population is part of a general-purpose system for vocalizer discrimination and ranging.

SIGNIFICANCE STATEMENT: Understanding how the brain makes sense of the multitude of stimuli that it continually receives in natural conditions is a challenge for scientists. Here we provide a new understanding of how the auditory system extracts behaviorally relevant information, the vocalizer identity and its distance to the listener, from acoustic signals that have been degraded by long-range propagation in natural conditions. We show, for the first time, that single neurons, in the auditory cortex of zebra finches, are capable of discriminating the individual identity and sound source distance in conspecific communication calls. The discrimination of identity in propagated calls relies on a neural coding that is robust to intensity changes, signals' quality, and decreases in the signal-to-noise ratio.

24. Vocal sequences suppress spiking in the bat auditory cortex while evoking concomitant steady-state local field potentials. Sci Rep 2016; 6:39226. [PMID: 27976691] [PMCID: PMC5156950] [DOI: 10.1038/srep39226]
Abstract
The mechanisms by which the mammalian brain copes with information from natural vocalization streams remain poorly understood. This article shows that in highly vocal animals, such as the bat species Carollia perspicillata, the spike activity of auditory cortex neurons does not track the temporal information flow enclosed in fast time-varying vocalization streams emitted by conspecifics. For example, leading syllables of so-called distress sequences (produced by bats subjected to duress) suppress cortical spiking to lagging syllables. Local field potentials (LFPs) recorded simultaneously with cortical spiking evoked by distress sequences carry multiplexed information, with response suppression occurring in low-frequency LFPs (i.e., 2-15 Hz) and steady-state LFPs occurring at frequencies that match the rate of energy fluctuations in the incoming sound streams (i.e., >50 Hz). Such steady-state LFPs could reflect underlying synaptic activity that does not necessarily lead to cortical spiking in response to natural fast time-varying vocal sequences.

25. Anesthesia and brain sensory processing: impact on neuronal responses in a female songbird. Sci Rep 2016; 6:39143. [PMID: 27966648] [PMCID: PMC5155427] [DOI: 10.1038/srep39143]
Abstract
Whether anesthesia impacts brain sensory processing is a highly debated and important issue. There is a general agreement that anesthesia tends to diminish neuronal activity, but its potential impact on neuronal “tuning” is still an open question. Here we show, based on electrophysiological recordings in the primary auditory area of a female songbird, that anesthesia induces neuronal responses towards biologically irrelevant sounds and prevents the seasonal neuronal tuning towards functionally relevant species-specific song elements.
Collapse
|
26
|
Teng X, Tian X, Poeppel D. Testing multi-scale processing in the auditory system. Sci Rep 2016; 6:34390. [PMID: 27713546 PMCID: PMC5054370 DOI: 10.1038/srep34390] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2016] [Accepted: 09/06/2016] [Indexed: 11/30/2022] Open
Abstract
Natural sounds contain information on multiple timescales, so the auditory system must analyze and integrate acoustic information on those different scales to extract behaviorally relevant information. However, multi-scale processing in the auditory system has not been widely investigated, and existing models of temporal integration are mainly built upon detection or recognition tasks on a single timescale. Here we use a paradigm requiring processing on relatively 'local' and 'global' scales and provide evidence suggesting that the auditory system extracts fine-detail acoustic information using short temporal windows and abstracts global acoustic patterns using long temporal windows. Behavioral performance on tasks that require processing fine-detail information does not improve with longer stimulus length, contrary to the predictions of previous temporal integration models such as the multiple-looks and the spectro-temporal excitation pattern models. Moreover, the perceptual construction of putatively 'unitary' auditory events requires several hundred milliseconds. These findings support the hypothesis of dual-scale processing, likely implemented in the auditory cortex.
Collapse
Affiliation(s)
- Xiangbin Teng
- Department of Psychology, New York University, New York, NY, USA
| | - Xing Tian
- New York University Shanghai, Shanghai 200122, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200122, China
| | - David Poeppel
- Department of Psychology, New York University, New York, NY, USA
- Department of Neuroscience, Max-Planck Institute, Frankfurt, Germany
| |
Collapse
|
27
|
Gadziola MA, Shanbhag SJ, Wenstrup JJ. Two distinct representations of social vocalizations in the basolateral amygdala. J Neurophysiol 2015; 115:868-86. [PMID: 26538612 DOI: 10.1152/jn.00953.2015] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2015] [Accepted: 11/04/2015] [Indexed: 11/22/2022] Open
Abstract
Acoustic communication signals carry information related to the types of social interactions by means of their "acoustic context," the sequencing and temporal emission pattern of vocalizations. Here we describe responses to natural vocal sequences in adult big brown bats (Eptesicus fuscus). We first assessed how vocal sequences modify the internal affective state of a listener (via heart rate). The heart rate of listening bats was differentially modulated by vocal sequences, showing significantly greater elevation in response to moderately aggressive sequences than appeasement or neutral sequences. Next, we characterized single-neuron responses in the basolateral amygdala (BLA) of awake, restrained bats to isolated syllables and vocal sequences. Two populations of neurons distinguished by background firing rates also differed in acoustic stimulus selectivity. Low-background neurons (<1 spike/s) were highly selective, responding on average to one tested stimulus. These may participate in a sparse code of vocal stimuli, in which each neuron responds to one or a few stimuli and the population responds to the range of vocalizations across behavioral contexts. Neurons with higher background rates (≥1 spike/s) responded broadly to tested stimuli and better represented the timing of syllables within sequences. We found that spike timing information improved the ability of these neurons to discriminate among vocal sequences and among the behavioral contexts associated with sequences compared with a rate code alone. These findings demonstrate that the BLA contains multiple robust representations of vocal stimuli that can provide the basis for emotional/physiological responses to these stimuli.
Collapse
Affiliation(s)
- Marie A Gadziola
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio; and School of Biomedical Sciences, Kent State University, Kent, Ohio
| | - Sharad J Shanbhag
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio
| | - Jeffrey J Wenstrup
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio; and School of Biomedical Sciences, Kent State University, Kent, Ohio
| |
Collapse
|
28
|
Zhao Z, Sato Y, Qin L. Response properties of neurons in the cat's putamen during auditory discrimination. Behav Brain Res 2015; 292:448-62. [PMID: 26162752 DOI: 10.1016/j.bbr.2015.07.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2015] [Revised: 06/27/2015] [Accepted: 07/02/2015] [Indexed: 11/30/2022]
Abstract
The striatum integrates diverse convergent input and plays a critical role in goal-directed behaviors. To date, the auditory functions of the striatum have been little studied. Recently, it was demonstrated that auditory cortico-striatal projections influence behavioral performance during a frequency discrimination task. To reveal the functions of striatal neurons in auditory discrimination, we recorded single-unit spike activities in the putamen (dorsal striatum) of free-moving cats while they performed a Go/No-go task to discriminate sounds with different modulation rates (12.5 Hz vs. 50 Hz) or envelopes (damped vs. ramped). We found that the putamen neurons could be broadly divided into four groups according to their contributions to sound discrimination. First, 40% of neurons showed vigorous responses synchronized to the sound envelope and could precisely discriminate the different sounds. Second, 18% of neurons showed a strong preference for ramped over damped sounds, but no preference for modulation rate; they could discriminate only changes of sound envelope. Third, 27% of neurons rapidly adapted to the sound stimuli and showed no ability to discriminate the sounds. Fourth, 15% of neurons discriminated the sounds in a manner dependent on reward prediction. Compared to a passive listening condition, the activities of putamen neurons were significantly enhanced by engagement in the auditory tasks, but were not modulated by the cat's behavioral choice. The coexistence of multiple types of neurons suggests that the putamen is involved in the transformation from auditory representation to stimulus-reward association.
Collapse
Affiliation(s)
- Zhenling Zhao
- Department of Physiology, Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Chuo, Yamanashi 409-3898, Japan; Jinan Biomedicine R&D Center, School of Life Science and Technology, Jinan University, Guangzhou 510632, People's Republic of China
| | - Yu Sato
- Department of Physiology, Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Chuo, Yamanashi 409-3898, Japan
| | - Ling Qin
- Department of Physiology, China Medical University, Shenyang 110001, People's Republic of China.
| |
Collapse
|
29
|
Abstract
Vertebrate audition is a dynamic process, capable of exhibiting both short- and long-term adaptations to varying listening conditions. Precise spike timing has long been known to play an important role in auditory encoding, but its role in sensory plasticity remains largely unexplored. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a songbird that shows pronounced seasonal fluctuations in circulating levels of sex-steroid hormones, which are known to be potent neuromodulators of auditory function. We recorded extracellular single-unit activity in the auditory forebrain of males and females under different breeding conditions and used a computational approach to explore two potential strategies for the neural discrimination of sound level: one based on spike counts and one based on spike timing reliability. We report that breeding condition has robust sex-specific effects on spike timing. Specifically, in females, breeding condition increases the proportion of cells that rely solely on spike timing information and increases the temporal resolution required for optimal intensity encoding. Furthermore, in a functionally distinct subset of cells that are particularly well suited for amplitude encoding, female breeding condition enhances spike timing-based discrimination accuracy. No effects of breeding condition were observed in males. Our results suggest that high-resolution temporal discharge patterns may provide a plastic neural substrate for sensory coding.
Collapse
|
30
|
Chait M, Greenberg S, Arai T, Simon JZ, Poeppel D. Multi-time resolution analysis of speech: evidence from psychophysics. Front Neurosci 2015; 9:214. [PMID: 26136650 PMCID: PMC4468943 DOI: 10.3389/fnins.2015.00214] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2015] [Accepted: 05/28/2015] [Indexed: 11/13/2022] Open
Abstract
How speech signals are analyzed and represented remains a foundational challenge both for cognitive science and neuroscience. A growing body of research, employing various behavioral and neurobiological experimental techniques, now points to the perceptual relevance of both phoneme-sized (10-40 Hz modulation frequency) and syllable-sized (2-10 Hz modulation frequency) units in speech processing. However, it is not clear how information associated with such different time scales interacts in a manner relevant for speech perception. We report behavioral experiments on speech intelligibility employing a stimulus that allows us to investigate how distinct temporal modulations in speech are treated separately and whether they are combined. We created sentences in which the slow (~4 Hz; S_low) and rapid (~33 Hz; S_high) modulations (corresponding to ~250 and ~30 ms, the average duration of syllables and certain phonetic properties, respectively) were selectively extracted. Although S_low and S_high have low intelligibility when presented separately, dichotic presentation of S_high with S_low results in supra-additive performance, suggesting a synergistic relationship between low- and high-modulation frequencies. A second experiment desynchronized presentation of the S_low and S_high signals. Desynchronizing signals relative to one another had no impact on intelligibility when delays were less than ~45 ms. Longer delays resulted in a steep intelligibility decline, providing further evidence of integration or binding of information within restricted temporal windows. Our data suggest that human speech perception uses multi-time resolution processing. Signals are concurrently analyzed on at least two separate time scales, the intermediate representations of these analyses are integrated, and the resulting bound percept has significant consequences for speech intelligibility, a view compatible with recent insights from neuroscience implicating multi-timescale auditory processing.
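The S_low/S_high decomposition amounts to band-limiting the temporal modulation spectrum of the speech envelope. A rough sketch of that idea, assuming a mono signal and illustrative band edges (the actual stimulus construction and resynthesis in the study are more involved):

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def modulation_band(signal, fs, lo_hz, hi_hz):
        """Extract the temporal envelope, then band-pass its modulations."""
        env = np.abs(hilbert(signal))                  # amplitude envelope
        sos = butter(4, [lo_hz, hi_hz], btype="band", fs=fs, output="sos")
        return sosfiltfilt(sos, env)                   # zero-phase filtering

    fs = 16000
    t = np.arange(fs) / fs
    x = np.random.randn(fs) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))  # 4 Hz modulated noise
    s_low = modulation_band(x, fs, 2.0, 10.0)    # syllable-scale modulations (~4 Hz)
    s_high = modulation_band(x, fs, 10.0, 40.0)  # phoneme-scale modulations (~33 Hz)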
Collapse
Affiliation(s)
- Maria Chait
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD, USA; Department of Linguistics, University of Maryland, College Park, MD, USA
| | | | - Takayuki Arai
- Department of Information and Communication Sciences, Sophia University Tokyo, Japan
| | - Jonathan Z Simon
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD, USA; Department of Biology, University of Maryland, College Park, MD, USA; Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, USA; Institute for Systems Research, University of Maryland, College Park, MD, USA
| | - David Poeppel
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD, USA; Department of Linguistics, University of Maryland, College Park, MD, USA; Department of Psychology, New York University, New York, NY, USA; Department of Neuroscience, Max-Planck-Institute, Frankfurt, Germany
| |
Collapse
|
31
|
Decoding speech perception from single cell activity in humans. Neuroimage 2015; 117:151-9. [PMID: 25976925 DOI: 10.1016/j.neuroimage.2015.05.001] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2014] [Revised: 03/27/2015] [Accepted: 05/02/2015] [Indexed: 10/23/2022] Open
Abstract
Deciphering the content of continuous speech is a challenging task performed daily by the human brain. Here, we tested whether the activity of single cells in auditory cortex could be used to support such a task. We recorded neural activity from the auditory cortex of two neurosurgical patients while they were presented with a short video segment containing speech. Population spiking activity (~20 cells per patient) allowed detection of word onsets and decoding of the identity of perceived words at accuracy levels significantly above chance. The oscillation phase of local field potentials (8-12 Hz) also allowed decoding of word identity, although with lower accuracy. Our results provide evidence that the spiking activity of a relatively small population of cells in human primary auditory cortex contains significant information for the classification of words in ongoing speech. Given previous evidence for overlapping neural representations during speech perception and production, this may have implications for developing brain-machine interfaces for patients with deficits in speech production.
Collapse
|
32
|
Behavioral relevance helps untangle natural vocal categories in a specific subset of core auditory cortical pyramidal neurons. J Neurosci 2015; 35:2636-45. [PMID: 25673855 DOI: 10.1523/jneurosci.3803-14.2015] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Sound categorization is essential for auditory behaviors like acoustic communication, but its genesis within the auditory pathway is not well understood, especially for learned natural categories like vocalizations, which often share overlapping acoustic features that must be distinguished (e.g., speech). We use electrophysiological mapping and single-unit recordings in mice to investigate how representations of natural vocal categories within core auditory cortex are modulated when one category acquires enhanced behavioral relevance. Taking advantage of a maternal mouse model of acoustic communication, we found no long-term auditory cortical map expansion to represent a behaviorally relevant pup vocalization category, contrary to expectations from the cortical plasticity literature on conditioning with pure tones. Instead, we observed plasticity that improved the separation between acoustically similar pup and adult vocalization categories among a physiologically defined subset of late-onset, putative pyramidal neurons, but not among putative interneurons. Additionally, a larger proportion of these putative pyramidal neurons in maternal animals compared with nonmaternal animals responded to the individual pup call exemplars having combinations of acoustic features most typical of that category. Together, these data suggest that higher-order representations of acoustic categories arise from a subset of core auditory cortical pyramidal neurons that become biased toward the combination of acoustic features statistically predictive of membership in a behaviorally relevant sound category.
Collapse
|
33
|
Tang C, Chehayeb D, Srivastava K, Nemenman I, Sober SJ. Millisecond-scale motor encoding in a cortical vocal area. PLoS Biol 2014; 12:e1002018. [PMID: 25490022 PMCID: PMC4260785 DOI: 10.1371/journal.pbio.1002018] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2014] [Accepted: 10/24/2014] [Indexed: 12/28/2022] Open
Abstract
Analyzing brain activity in songbirds suggests that the nervous system controls behavior by precisely modulating the timing pattern of electrical events. Studies of motor control have almost universally examined firing rates to investigate how the brain shapes behavior. In principle, however, neurons could encode information through the precise temporal patterning of their spike trains as well as (or instead of) through their firing rates. Although the importance of spike timing has been demonstrated in sensory systems, it is largely unknown whether timing differences in motor areas could affect behavior. We tested the hypothesis that significant information about trial-by-trial variations in behavior is represented by spike timing in the songbird vocal motor system. We found that neurons in motor cortex convey information via spike timing far more often than via spike rate and that the amount of information conveyed at the millisecond timescale greatly exceeds the information available from spike counts. These results demonstrate that information can be represented by spike timing in motor circuits and suggest that timing variations evoke differences in behavior.
A central question in neuroscience is how neurons use patterns of electrical events to represent sensory information and control behavior. Neurons might use two different codes to transmit information. First, signals might be conveyed by the total number of electrical events (called “action potentials”) that a neuron produces. Alternatively, the timing pattern of action potentials, as distinct from the total number of action potentials produced, might be used to transmit information. Although many studies have shown that timing can convey information about sensory inputs, such as visual scenery or sound waveforms, the role of action potential timing in the control of complex, learned behaviors is largely unknown. Here, by analyzing the pattern of action potentials produced in a songbird's brain as it precisely controls vocal behavior, we demonstrate that far more information about upcoming behavior is present in spike timing than in the total number of spikes fired. This work suggests that timing can be as important in motor systems as in sensory systems, or more so.
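Comparing information carried by spike counts with information carried by millisecond-scale timing is commonly done by treating each response as a discrete symbol, either a count or a binary timing "word", and estimating mutual information with the behavioral variable. A bare-bones sketch under that framing (plug-in estimator only; the bias corrections required in practice are omitted):

    import numpy as np
    from collections import Counter

    def plugin_mi(symbols, labels):
        """Plug-in mutual information (bits) between response symbols and labels.
        Biased upward for small samples; real analyses need bias correction."""
        n = len(symbols)
        p_sy = Counter(zip(symbols, labels))
        p_s, p_y = Counter(symbols), Counter(labels)
        mi = 0.0
        for (s, y), c in p_sy.items():
            mi += (c / n) * np.log2(c * n / (p_s[s] * p_y[y]))
        return mi

    def count_code(train, t0, t1):
        return sum(t0 <= t < t1 for t in train)

    def timing_code(train, t0, t1, bin_ms):
        """Binary word marking which bins of width bin_ms contain a spike."""
        word, _ = np.histogram(train, bins=np.arange(t0, t1 + bin_ms, bin_ms))
        return tuple((word > 0).astype(int))

    # with per-trial spike trains and behavioral labels:
    # mi_rate = plugin_mi([count_code(tr, 0, 100) for tr in trains], labels)
    # mi_time = plugin_mi([timing_code(tr, 0, 100, 2.0) for tr in trains], labels)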
Collapse
Affiliation(s)
- Claire Tang
- Neuroscience Graduate Program, University of California, San Francisco, San Francisco, California, United States of America
- Department of Biology, Emory University, Atlanta, Georgia, United States of America
| | - Diala Chehayeb
- Department of Biology, Emory University, Atlanta, Georgia, United States of America
| | - Kyle Srivastava
- Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, Georgia, United States of America
| | - Ilya Nemenman
- Department of Biology, Emory University, Atlanta, Georgia, United States of America
- Department of Physics, Emory University, Atlanta, Georgia, United States of America
| | - Samuel J. Sober
- Department of Biology, Emory University, Atlanta, Georgia, United States of America
- * E-mail:
| |
Collapse
|
34
|
Neural correlates of auditory streaming in an objective behavioral task. Proc Natl Acad Sci U S A 2014; 111:10738-43. [PMID: 25002519 DOI: 10.1073/pnas.1321487111] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Segregating streams of sounds from sources in complex acoustic scenes is crucial for perception in real-world situations. We analyzed an objective psychophysical measure of stream segregation, obtained while simultaneously recording forebrain neurons in European starlings, to investigate neural correlates of segregating a stream of A tones from a stream of B tones presented at one-half the rate. The objective measure, sensitivity for detecting a time shift of the B tone, was higher when the A and B tones were of the same frequency (one stream) than when there was a 6- or 12-semitone difference between them (two streams). The sensitivity for representing time shifts in spiking patterns was correlated with the behavioral sensitivity. The spiking patterns reflected the stimulus characteristics but not the behavioral response, indicating that the birds' primary cortical field represents the segregated streams, but not the decision process.
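The objective sensitivity measure used here is the signal-detection index d'. A minimal sketch, with a log-linear correction (an illustrative choice) to keep hit and false-alarm rates away from 0 and 1:

    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
        hr = (hits + 0.5) / (hits + misses + 1.0)                  # log-linear rule
        fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(hr) - norm.ppf(fa)

    print(d_prime(hits=45, misses=5, false_alarms=10, correct_rejections=40))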
Collapse
|
35
|
Dimitrov AG, Cummins GI, Mayko ZM, Portfors CV. Inhibition does not affect the timing code for vocalizations in the mouse auditory midbrain. Front Physiol 2014; 5:140. [PMID: 24795640 PMCID: PMC3997027 DOI: 10.3389/fphys.2014.00140] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2013] [Accepted: 03/23/2014] [Indexed: 11/13/2022] Open
Abstract
Many animals use a diverse repertoire of complex acoustic signals to convey different types of information to other animals. The information in each vocalization therefore must be coded by neurons in the auditory system. One way in which the auditory system may discriminate among different vocalizations is by having highly selective neurons, where only one or two different vocalizations evoke a strong response from a single neuron. Another strategy is to have specific spike timing patterns for particular vocalizations, such that each neural response can be matched to a specific vocalization. Both of these strategies seem to occur in the auditory midbrain of mice. The neural mechanisms underlying rate and time coding are unclear; however, it is likely that inhibition plays a role. Here, we examined whether inhibition is involved in shaping neural selectivity to vocalizations via rate and/or time coding in the mouse inferior colliculus (IC). We examined extracellular single-unit responses to vocalizations before and after iontophoretically blocking GABAA and glycine receptors in the IC of awake mice. We then applied a number of neurometrics to examine the rate and timing information of individual neurons. We initially evaluated the neuronal responses using inspection of the raster plots, spike-counting measures of response rate and stimulus preference, and a measure of maximum available stimulus-response mutual information. Subsequently, we used two different event sequence distance measures, one based on vector space embedding and one derived from the Victor/Purpura D_q metric, to direct hierarchical clustering of responses. In general, we found that the most salient feature of pharmacologically blocking inhibitory receptors in the IC was the lack of major effects on the functional properties of IC neurons. Blocking inhibition did increase response rates to vocalizations, as expected. However, it did not significantly affect spike timing or stimulus selectivity of the studied neurons. We observed two main effects when inhibition was locally blocked: (1) highly selective neurons maintained their selectivity and the information about the stimuli did not change, but response rate increased slightly; (2) neurons that responded to multiple vocalizations in the control condition also responded to the same stimuli in the test condition, with similar timing and pattern, but with a greater number of spikes. For some neurons the information rate increased, but the information per spike decreased. In many of these neurons, vocalizations that generated no response in the control condition generated some response in the test condition. Overall, we found that inhibition in the IC does not play a substantial role in creating the distinguishable and reliable neuronal temporal spike patterns evoked by different vocalizations.
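The Victor/Purpura D_q metric mentioned above is an edit distance between spike trains, computable by dynamic programming: deleting or inserting a spike costs 1, and moving a spike by dt costs q*|dt|, so the parameter q sets the timescale at which timing matters. A compact sketch:

    import numpy as np

    def victor_purpura(u, v, q):
        """Victor-Purpura spike-train distance (u, v: spike time sequences)."""
        nu, nv = len(u), len(v)
        D = np.zeros((nu + 1, nv + 1))
        D[:, 0] = np.arange(nu + 1)   # delete all spikes of u
        D[0, :] = np.arange(nv + 1)   # insert all spikes of v
        for i in range(1, nu + 1):
            for j in range(1, nv + 1):
                D[i, j] = min(D[i - 1, j] + 1,                         # delete
                              D[i, j - 1] + 1,                         # insert
                              D[i - 1, j - 1] + q * abs(u[i - 1] - v[j - 1]))  # shift
        return D[nu, nv]

    print(victor_purpura([10.0, 40.0], [12.0, 80.0], q=0.1))  # -> 2.2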
Collapse
Affiliation(s)
- Alexander G Dimitrov
- Department of Mathematics, Washington State University Vancouver, Vancouver, WA, USA
| | - Graham I Cummins
- Department of Mathematics, Washington State University Vancouver, Vancouver, WA, USA
| | - Zachary M Mayko
- School of Biological Sciences, Washington State University Vancouver, Vancouver, WA, USA
| | - Christine V Portfors
- School of Biological Sciences, Washington State University Vancouver, Vancouver, WA, USA
| |
Collapse
|
36
|
Kim SY, Lim W. Realistic thermodynamic and statistical-mechanical measures for neural synchronization. J Neurosci Methods 2014; 226:161-170. [DOI: 10.1016/j.jneumeth.2013.12.013] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2013] [Revised: 12/27/2013] [Accepted: 12/29/2013] [Indexed: 10/25/2022]
|
37
|
Ahn J, Kreeger LJ, Lubejko ST, Butts DA, MacLeod KM. Heterogeneity of intrinsic biophysical properties among cochlear nucleus neurons improves the population coding of temporal information. J Neurophysiol 2014; 111:2320-31. [PMID: 24623512 DOI: 10.1152/jn.00836.2013] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Reliable representation of the spectrotemporal features of an acoustic stimulus is critical for sound recognition. However, if all neurons respond with identical firing to the same stimulus, redundancy in the activity patterns would reduce the information capacity of the population. We thus investigated spike reliability and temporal fluctuation coding in an ensemble of neurons recorded in vitro from the avian auditory brain stem. Sequential patch-clamp recordings were made from neurons of the cochlear nucleus angularis while injecting identical filtered Gaussian white noise currents, simulating synaptic drive. The spiking activity in neurons receiving these identically fluctuating stimuli was highly correlated, measured pairwise across neurons and as a pseudo-population. Two distinct uncorrelated noise stimuli could be discriminated using the temporal patterning, but not firing rate, of the spike trains in the neural ensemble, with best discrimination using information at time scales of 5-20 ms. Despite high cross-correlation values, the spike patterns observed in individual neurons were idiosyncratic, with notable heterogeneity across neurons. To investigate how temporal information is being encoded, we used optimal linear reconstruction to produce an estimate of the original current stimulus from the spike trains. Ensembles of trains sampled across the neural population could be used to predict >50% of the stimulus variation using optimal linear decoding, compared with ∼20% using the same number of spike trains recorded from single neurons. We conclude that heterogeneity in the intrinsic biophysical properties of cochlear nucleus neurons reduces firing pattern redundancy while enhancing representation of temporal information.
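Optimal linear reconstruction of the kind used here regresses the stimulus onto a lagged matrix of population spike counts. A ridge-regularized sketch (the lag window and penalty are illustrative, not the study's settings):

    import numpy as np

    def linear_decoder(spikes, stimulus, n_lags=20, ridge=1e-2):
        """Fit stimulus ~ lagged population spike counts by ridge regression.
        spikes: (n_neurons, T) binned counts; stimulus: (T,) signal."""
        n, T = spikes.shape
        X = np.zeros((T, n * n_lags))
        for lag in range(n_lags):                  # design matrix of lagged counts
            X[lag:, lag * n:(lag + 1) * n] = spikes[:, :T - lag].T
        w = np.linalg.solve(X.T @ X + ridge * np.eye(n * n_lags), X.T @ stimulus)
        estimate = X @ w
        fv = 1 - np.sum((stimulus - estimate) ** 2) / np.sum((stimulus - stimulus.mean()) ** 2)
        return w, fv                               # weights, fraction of variance explained

    # rng = np.random.default_rng(0)
    # w, fv = linear_decoder(rng.poisson(1.0, (8, 5000)), rng.standard_normal(5000))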
Collapse
Affiliation(s)
- J Ahn
- Department of Biology, University of Maryland, College Park, Maryland
| | - L J Kreeger
- Department of Biology, University of Maryland, College Park, Maryland
| | - S T Lubejko
- Department of Biology, University of Maryland, College Park, Maryland
| | - D A Butts
- Department of Biology, University of Maryland, College Park, Maryland; Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland
| | - K M MacLeod
- Department of Biology, University of Maryland, College Park, Maryland; Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland; and Center for the Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park, Maryland
| |
Collapse
|
38
|
Cortical inhibition reduces information redundancy at presentation of communication sounds in the primary auditory cortex. J Neurosci 2013; 33:10713-28. [PMID: 23804094 DOI: 10.1523/jneurosci.0079-13.2013] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
In all sensory modalities, intracortical inhibition shapes the functional properties of cortical neurons and also influences the responses to natural stimuli. Studies performed in various species have revealed that auditory cortex neurons respond to conspecific vocalizations with temporal spike patterns displaying a high trial-to-trial reliability, which might result from precise timing between excitation and inhibition. Studying the guinea pig auditory cortex, we show that partial blockade of GABAA receptors by gabazine (GBZ) application (10 μM, a concentration that promotes expansion of cortical receptive fields) increased the evoked firing rate and the spike-timing reliability during presentation of communication sounds (conspecific and heterospecific vocalizations), whereas GABAB receptor antagonists [10 μM saclofen; 10-50 μM CGP55845 (p-3-aminopropyl-p-diethoxymethyl phosphoric acid)] had nonsignificant effects. Computing mutual information (MI) from the responses to vocalizations, using either the evoked firing rate or the temporal spike patterns, revealed that GBZ application increased the MI derived from the activity of a single cortical site but did not change the MI derived from population activity. In addition, quantification of information redundancy showed that GBZ significantly increased redundancy at the population level. This result suggests that a potential role of intracortical inhibition is to reduce information redundancy during the processing of natural stimuli.
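Redundancy at the population level is commonly quantified as the sum of single-site informations minus the information carried jointly by all sites. A small sketch with discrete response symbols (plug-in estimates; the bias corrections used in real analyses are omitted):

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def mi_bits(responses, stimuli):
        """Discrete mutual information in bits (plug-in estimate)."""
        return mutual_info_score(stimuli, responses) / np.log(2)

    def redundancy(site_responses, stimuli):
        """Sum of single-site informations minus the joint population information.
        site_responses: one array of discrete response symbols per site."""
        joint = [''.join(map(str, r)) for r in zip(*site_responses)]
        return sum(mi_bits(r, stimuli) for r in site_responses) - mi_bits(joint, stimuli)

    # two sites that both copy the stimulus label are fully redundant (~1 bit)
    stim = [0, 1] * 50
    print(redundancy([stim[:], stim[:]], stim))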
Collapse
|
39
|
Ranasinghe KG, Vrana WA, Matney CJ, Kilgard MP. Increasing diversity of neural responses to speech sounds across the central auditory pathway. Neuroscience 2013; 252:80-97. [PMID: 23954862 DOI: 10.1016/j.neuroscience.2013.08.005] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2013] [Revised: 07/24/2013] [Accepted: 08/03/2013] [Indexed: 10/26/2022]
Abstract
Neurons at higher stations of each sensory system are responsive to feature combinations not present at lower levels. As a result, the activity of these neurons becomes less redundant than that at lower levels. We recorded responses to speech sounds from inferior colliculus and primary auditory cortex neurons of rats, and tested the hypothesis that primary auditory cortex neurons are more sensitive to combinations of multiple acoustic parameters than are inferior colliculus neurons. We independently eliminated periodicity information, spectral information and temporal information in each consonant and vowel sound using a noise vocoder. This technique made it possible to test several key hypotheses about speech sound processing. Our results demonstrate that inferior colliculus responses are spatially arranged and primarily determined by the spectral energy and the fundamental frequency of speech, whereas primary auditory cortex neurons generate widely distributed responses to multiple acoustic parameters and are not strongly influenced by the fundamental frequency of speech. We found no evidence that the inferior colliculus or the primary auditory cortex is specialized for speech features such as voice onset time or formants. The greater diversity of responses in primary auditory cortex compared with inferior colliculus may help explain how the auditory system can identify a wide range of speech sounds across a wide range of conditions without relying on any single acoustic cue.
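A noise vocoder of the general kind used here splits the signal into frequency bands, extracts each band's envelope, and reimposes it on band-limited noise, discarding the band's fine structure. A minimal sketch (band count and edges are illustrative; the parameter-elimination conditions in the study involve additional steps):

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(x, fs, band_edges_hz):
        """Replace each band's fine structure with noise, keeping its envelope."""
        rng = np.random.default_rng(0)
        out = np.zeros(len(x))
        for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
            sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
            env = np.abs(hilbert(sosfiltfilt(sos, x)))               # band envelope
            carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))  # band noise
            out += env * carrier
        return out

    # e.g. four log-spaced bands between 100 Hz and 6 kHz:
    # y = noise_vocode(x, fs=16000, band_edges_hz=np.geomspace(100.0, 6000.0, 5))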
Collapse
Affiliation(s)
- K G Ranasinghe
- The University of Texas at Dallas, School of Behavioral and Brain Sciences, 800 West Campbell Road, GR41, Richardson, TX 75080-3021, United States.
| | | | | | | |
Collapse
|
40
|
Behavioral modulation of neural encoding of click-trains in the primary and nonprimary auditory cortex of cats. J Neurosci 2013. [PMID: 23926266 DOI: 10.1523/jneurosci.1724-13] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/17/2023] Open
Abstract
The neural representation of acoustic stimuli in the mammalian auditory cortex (AC) has been extensively studied using anesthetized or awake nonbehaving animals. Recently, several studies have shown that active engagement in an auditory behavioral task can substantially change neuronal response properties compared with passive listening to the same sounds; however, these studies mainly investigated the effect of behavioral state on the primary auditory cortex, and the reported effects were inconsistent. Here, we examined single-unit spike activities in both primary and nonprimary areas along the dorsal-to-ventral axis of the cat's AC while the cat was actively discriminating click-trains at different repetition rates and while it was passively listening to the same stimuli. We found that the changes due to task engagement were heterogeneous in the primary AC: some neurons showed significant increases in driven firing rate, others showed decreases. In the nonprimary AC, however, task engagement predominantly enhanced the neural responses, resulting in a substantial improvement of the neural discriminability of click-trains. Additionally, our results revealed that neural responses synchronizing to click-trains gradually decreased along the dorsal-to-ventral axis of cat AC, whereas nonsynchronizing responses changed less. The present study provides new insights into the hierarchical organization of the AC along the dorsal-to-ventral axis and highlights the importance of using behaving animals to investigate the later stages of cortical processing.
Collapse
|
41
|
Zheng Y, Escabí MA. Proportional spike-timing precision and firing reliability underlie efficient temporal processing of periodicity and envelope shape cues. J Neurophysiol 2013; 110:587-606. [PMID: 23636724 DOI: 10.1152/jn.01080.2010] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Temporal sound cues are essential for sound recognition and for pitch, rhythm, and timbre perception, yet how auditory neurons encode such cues is the subject of ongoing debate. Rate coding theories propose that temporal sound features are represented by rate-tuned modulation filters. However, overwhelming evidence also suggests that precise spike timing is an essential attribute of the neural code. Here we demonstrate that single neurons in the auditory midbrain employ a proportional code in which spike-timing precision and firing reliability covary with the sound envelope cues to provide an efficient representation of the stimulus. Spike-timing precision varied systematically with the timescale and shape of the sound envelope yet was largely independent of the sound modulation frequency, a prominent cue for pitch. In contrast, spike-count reliability was strongly affected by the modulation frequency. Spike-timing precision extended from sub-millisecond for brief transient sounds up to tens of milliseconds for sounds with slowly varying envelopes. Information theoretic analysis further confirmed that spike-timing precision depends strongly on the sound envelope shape, while firing reliability is strongly affected by the sound modulation frequency. Both the information efficiency and the total information were limited by the firing reliability and spike-timing precision in a manner that reflected the sound structure. This result supports a temporal coding strategy in the auditory midbrain in which proportional changes in spike-timing precision and firing reliability can efficiently signal shape and periodicity temporal cues.
Collapse
Affiliation(s)
- Y Zheng
- Department of Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
| | | |
Collapse
|
42
|
Gaucher Q, Huetz C, Gourévitch B, Laudanski J, Occelli F, Edeline JM. How do auditory cortex neurons represent communication sounds? Hear Res 2013; 305:102-12. [PMID: 23603138 DOI: 10.1016/j.heares.2013.03.011] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/03/2012] [Revised: 03/18/2013] [Accepted: 03/26/2013] [Indexed: 11/30/2022]
Abstract
A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review investigates the role of the auditory cortex in the processing of speech, bird songs and other vocalizations, which are all spectrally and temporally highly structured sounds. Whereas earlier studies simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations than to modified, artificially synthesized versions, more recent studies have determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing-based coding strategies might set the foundations of our perceptual abilities. It is also now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalization envelope, they respond only at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporally sparse code of auditory cortex neurons can be considered a first step toward generating high-level representations of communication sounds independent of the acoustic characteristics of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
Collapse
Affiliation(s)
- Quentin Gaucher
- Centre de Neurosciences Paris-Sud (CNPS), CNRS UMR 8195, Université Paris-Sud, Bâtiment 446, 91405 Orsay cedex, France
| | | | | | | | | | | |
Collapse
|
43
|
Abstract
The ability to recognize auditory objects like words and bird songs is thought to depend on neural responses that are selective between categories of the objects and tolerant of variation within those categories. To determine whether a hierarchy of increasing selectivity and tolerance exists in the avian auditory system, we trained European starlings (Sturnus vulgaris) to differentially recognize sets of songs, then measured extracellular single unit responses under urethane anesthesia in six areas of the auditory cortex. Responses were analyzed with a novel, generalized linear mixed model that provides robust estimates of the variance in responses to different stimuli. There were significant differences between areas in selectivity, tolerance, and the effects of training. The L2b and L1 subdivisions of field L had the least selectivity and tolerance. The caudal nidopallium (NCM) and subdivision L3 of field L were more selective than other areas, whereas the medial and lateral caudal mesopallium were more tolerant than NCM or L2b. L3 had a multimodal distribution of tolerance. Sensitivity to songs that were familiar and those that were not also distinguished the responses of caudomedial mesopallium and NCM. There were significant differences across areas between neurons with wide and narrow spikes. Collectively these results do not fit the traditional hierarchical view of the avian auditory forebrain, but are consistent with emerging concepts homologizing avian cortical and neocortical circuitry. The results suggest a functional divergence within the cortex into processing streams that respond to complementary aspects of the variability in communicative sounds.
Collapse
|
44
|
Ma H, Qin L, Dong C, Zhong R, Sato Y. Comparison of neural responses to cat meows and human vowels in the anterior and posterior auditory field of awake cats. PLoS One 2013; 8:e52942. [PMID: 23301004 PMCID: PMC3534661 DOI: 10.1371/journal.pone.0052942] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2012] [Accepted: 11/23/2012] [Indexed: 11/19/2022] Open
Abstract
For humans and animals, the ability to discriminate speech and conspecific vocalizations is an important function of the auditory system. To reveal the underlying neural mechanisms, many electrophysiological studies have investigated the responses of the auditory cortex to conspecific vocalizations in monkeys. The data suggest that vocalizations may be hierarchically processed along an anterior/ventral stream from the primary auditory cortex (A1) to the ventral prefrontal cortex. To date, the organization of vocalization processing has not been well investigated in the auditory cortex of other mammals. In this study, we examined the spike activities of single neurons in two early auditory cortical regions with different anteroposterior locations, the anterior auditory field (AAF) and the posterior auditory field (PAF), in awake cats passively listening to forward and backward conspecific calls (meows) and human vowels. We found that the neural response patterns in PAF were more complex and had longer latencies than those in AAF. Selectivity for different vocalizations based on mean firing rate was low in both AAF and PAF, and not significantly different between them; however, more vocalization information was transmitted when the temporal response profiles were considered, and the maximum information transmitted by PAF neurons was higher than that by AAF neurons. Discrimination accuracy based on the activities of an ensemble of PAF neurons was also better than that of AAF neurons. Our results suggest that AAF and PAF are similar with regard to which vocalizations they represent but differ in how they represent them, and that there may be a complex processing stream between them.
Collapse
Affiliation(s)
- Hanlu Ma
- Department of Physiology, Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Chuo, Yamanashi, Japan
| | - Ling Qin
- Department of Physiology, Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Chuo, Yamanashi, Japan
- Department of Physiology, China Medical University, Shenyang, People’s Republic of China
- * E-mail:
| | - Chao Dong
- Department of Physiology, Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Chuo, Yamanashi, Japan
| | - Renjia Zhong
- Department of Physiology, Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Chuo, Yamanashi, Japan
- Department of Physiology, China Medical University, Shenyang, People’s Republic of China
| | - Yu Sato
- Department of Physiology, Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Chuo, Yamanashi, Japan
| |
Collapse
|
45
|
Grimsley JMS, Shanbhag SJ, Palmer AR, Wallace MN. Processing of communication calls in Guinea pig auditory cortex. PLoS One 2012; 7:e51646. [PMID: 23251604 PMCID: PMC3520958 DOI: 10.1371/journal.pone.0051646] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2011] [Accepted: 11/08/2012] [Indexed: 11/25/2022] Open
Abstract
Vocal communication is an important aspect of guinea pig behaviour and a large contributor to their acoustic environment. We postulated that some cortical areas have distinctive roles in processing conspecific calls. To test this hypothesis, we presented exemplars of all ten of their main adult vocalizations to urethane-anesthetised animals while recording from each of the eight areas of the auditory cortex. We demonstrate that the primary area (AI) and three adjacent auditory belt areas contain many units that give isomorphic responses to vocalizations. These are the ventrorostral belt (VRB), the transitional belt area (T) that is ventral to AI, and the small area (area S) that is rostral to AI. Area VRB has a denser representation of cells that are better at discriminating among calls, using either a rate code or a temporal code, than any other area. Furthermore, 10% of VRB cells responded to communication calls but did not respond to stimuli such as clicks, broadband noise or pure tones. Area S has a sparse distribution of call-responsive cells that showed excellent temporal locking, 31% of which responded selectively to a single call. AI responded well to all vocalizations and was much more responsive to vocalizations than the adjacent dorsocaudal core area. Areas VRB, AI and S contained units with the highest levels of mutual information about the call stimuli. Area T also responded well to some calls but seems to be specialized for low sound levels. The two dorsal belt areas are comparatively unresponsive to vocalizations and contain little information about the calls. AI projects to areas S, VRB and T, so there may be both rostral and ventral pathways for processing vocalizations in the guinea pig.
Collapse
Affiliation(s)
- Jasmine M. S. Grimsley
- Institute of Hearing Research, Medical Research Council, Nottingham, United Kingdom
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio, United States of America
| | - Sharad J. Shanbhag
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio, United States of America
| | - Alan R. Palmer
- Institute of Hearing Research, Medical Research Council, Nottingham, United Kingdom
| | - Mark N. Wallace
- Institute of Hearing Research, Medical Research Council, Nottingham, United Kingdom
- * E-mail:
| |
Collapse
|
46
|
Woolley SMN. Early experience shapes vocal neural coding and perception in songbirds. Dev Psychobiol 2012; 54:612-31. [PMID: 22711657 PMCID: PMC3404257 DOI: 10.1002/dev.21014] [Citation(s) in RCA: 56] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2010] [Accepted: 01/09/2012] [Indexed: 11/09/2022]
Abstract
Songbirds, like humans, are highly accomplished vocal learners. The many parallels between speech and birdsong and conserved features of mammalian and avian auditory systems have led to the emergence of the songbird as a model system for studying the perceptual mechanisms of vocal communication. Laboratory research on songbirds allows the careful control of early life experience and high-resolution analysis of brain function during vocal learning, production, and perception. Here, I review what songbird studies have revealed about the role of early experience in the development of vocal behavior, auditory perception, and the processing of learned vocalizations by auditory neurons. The findings of these studies suggest general principles for how exposure to vocalizations during development and into adulthood influences the perception of learned vocal signals.
Collapse
Affiliation(s)
- Sarah M N Woolley
- Department of Psychology, Columbia University, 406 Schermerhorn Hall, 1190 Amsterdam Ave., New York, NY 10027, USA.
| |
Collapse
|
47
|
Edeline JM. Beyond traditional approaches to understanding the functional role of neuromodulators in sensory cortices. Front Behav Neurosci 2012; 6:45. [PMID: 22866031 PMCID: PMC3407859 DOI: 10.3389/fnbeh.2012.00045] [Citation(s) in RCA: 49] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2012] [Accepted: 07/03/2012] [Indexed: 02/01/2023] Open
Abstract
Over the last two decades, a vast literature has described the influence of neuromodulatory systems on the responses of sensory cortex neurons (reviewed in Gu, 2002; Edeline, 2003; Weinberger, 2003; Metherate, 2004, 2011). At the single-cell level, facilitation of evoked responses, increases in signal-to-noise ratio, and improved functional properties of sensory cortex neurons have been reported in the visual, auditory, and somatosensory modalities. At the map level, massive cortical reorganizations have been described when repeated activation of a neuromodulatory system is associated with a particular sensory stimulus. In reviewing our knowledge of how the noradrenergic and cholinergic systems control sensory cortices, I will point out that the differences between the protocols used to reveal these effects most likely reflect different assumptions concerning the role of the neuromodulators. More importantly, a gap still exists between the descriptions of neuromodulatory effects and the concepts that are currently applied to decipher the neural code operating in sensory cortices. Key examples that bring this gap into focus are the concept of cell assemblies and the role played by spike-timing precision (i.e., by the temporal organization of spike trains at the millisecond time-scale), which are now recognized as essential in sensory physiology but are rarely considered in experiments describing the role of neuromodulators in sensory cortices. Thus, I will suggest that several lines of research, particularly in the field of computational neuroscience, should help us to go beyond traditional approaches and, ultimately, to understand how neuromodulators impact the cortical mechanisms underlying our perceptual abilities.
Collapse
Affiliation(s)
- Jean-Marc Edeline
- Centre de Neurosciences Paris-Sud, CNRS UMR 8195, Université Paris-Sud, Bâtiment 446, 91405 Orsay cedex, France
| |
Collapse
|
48
|
Pfeiffer M, Hartbauer M, Lang AB, Maass W, Römer H. Probing real sensory worlds of receivers with unsupervised clustering. PLoS One 2012; 7:e37354. [PMID: 22701566 PMCID: PMC3368931 DOI: 10.1371/journal.pone.0037354] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2011] [Accepted: 04/19/2012] [Indexed: 11/18/2022] Open
Abstract
The task of an organism to extract information about the external environment from sensory signals is based entirely on the analysis of ongoing afferent spike activity provided by the sense organs. We investigate the processing of auditory stimuli by an acoustic interneuron of insects. In contrast to most previous work, we do this using stimuli and neurophysiological recordings obtained directly in the nocturnal tropical rainforest, where the insect communicates. Unlike in typical recordings made in soundproof laboratories, strong environmental noise from multiple sound sources interferes with the perception of acoustic signals in these realistic scenarios. We apply a recently developed unsupervised machine learning algorithm based on probabilistic inference to find frequently occurring firing patterns in the response of the acoustic interneuron. We can thus ask how much information the central nervous system of the receiver can extract from bursts without ever being told which type and which variants of bursts are characteristic for particular stimuli. Our results show that the reliability of burst coding in the time domain is so high that identical stimuli lead to extremely similar spike pattern responses, even for different preparations on different dates, and even if one of the preparations is recorded outdoors and the other in the soundproof lab. Simultaneous recordings in two preparations exposed to the same acoustic environment reveal that the characteristics of burst patterns are largely preserved among individuals of the same species. Our study shows that burst coding can provide a reliable mechanism for acoustic insects to classify and discriminate signals under very noisy real-world conditions. This gives new insights into the neural mechanisms potentially used by bushcrickets to discriminate conspecific songs from sounds of predators in similar carrier frequency bands.
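At a much simpler level than the probabilistic inference algorithm applied in this study, recurring burst patterns can be sought by unsupervised clustering of per-burst feature vectors. A hedged sketch with hypothetical features (spike count, burst duration, mean inter-spike interval):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def burst_features(bursts):
        """One feature row per burst: count, duration, mean inter-spike interval."""
        feats = []
        for b in bursts:
            b = np.sort(np.asarray(b, dtype=float))
            isis = np.diff(b) if len(b) > 1 else np.array([0.0])
            feats.append([len(b), b[-1] - b[0], isis.mean()])
        return np.array(feats)

    # bursts: list of spike-time arrays, one per detected burst
    rng = np.random.default_rng(1)
    bursts = ([rng.uniform(0, 5, size=3) for _ in range(30)] +
              [rng.uniform(0, 20, size=8) for _ in range(30)])
    labels = GaussianMixture(n_components=2, random_state=0).fit_predict(burst_features(bursts))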
Collapse
Affiliation(s)
- Michael Pfeiffer
- Institute for Theoretical Computer Science, TU Graz, Graz, Austria.
| | | | | | | | | |
Collapse
|
49
|
Houghton C, Kreuz T. On the efficient calculation of van Rossum distances. Network (Bristol, England) 2012; 23:48-58. [PMID: 22568695 DOI: 10.3109/0954898x.2012.673048] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
The van Rossum metric measures the distance between two spike trains. Measuring a single van Rossum distance between one pair of spike trains is not a computationally expensive task; however, many applications require a matrix of distances between all the spike trains in a set, or the calculation of a multi-neuron distance between two populations of spike trains. Moreover, these calculations often need to be repeated for many different parameter values. An algorithm is presented here to render these calculations less computationally expensive, making the complexity linear in the number of spikes rather than quadratic.
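For the exponential kernel, the squared van Rossum distance reduces to sums of exp(-|t_i - t_j|/tau) over spike pairs, and the speed-up rests on a "markage"-style running sum that evaluates those sums in a single pass over sorted spike times. A minimal sketch of this idea, assuming sorted trains with no exact cross-train coincidences (not the authors' code):

    import numpy as np

    def _one_sided(a, b, tau):
        """Sum of exp(-(a_i - b_j)/tau) over all pairs with b_j <= a_i (a, b sorted)."""
        total, p, j, t_last = 0.0, 0.0, 0, None
        for t in a:
            while j < len(b) and b[j] <= t:
                if t_last is not None:
                    p *= np.exp(-(b[j] - t_last) / tau)   # decay running sum to b[j]
                p += 1.0                                  # markage update
                t_last = b[j]
                j += 1
            if t_last is not None:
                total += p * np.exp(-(t - t_last) / tau)  # decay to query time t
        return total

    def van_rossum_sq(u, v, tau):
        """Squared van Rossum distance (exponential kernel), linear in spike count."""
        s_uu = 2.0 * _one_sided(u, u, tau) - len(u)           # within-train sums
        s_vv = 2.0 * _one_sided(v, v, tau) - len(v)
        s_uv = _one_sided(u, v, tau) + _one_sided(v, u, tau)  # cross-train sum
        return 0.5 * (s_uu + s_vv - 2.0 * s_uv)

    # quadratic brute-force check of the linear-time result
    u, v, tau = [1.0, 3.0, 7.0], [2.0, 7.5], 2.0
    K = lambda a, b: np.exp(-np.abs(np.subtract.outer(a, b)) / tau).sum()
    assert np.isclose(van_rossum_sq(u, v, tau), 0.5 * (K(u, u) + K(v, v) - 2 * K(u, v)))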
Collapse
|
50
|
Maddox RK, Billimoria CP, Perrone BP, Shinn-Cunningham BG, Sen K. Competing sound sources reveal spatial effects in cortical processing. PLoS Biol 2012; 10:e1001319. [PMID: 22563301 PMCID: PMC3341327 DOI: 10.1371/journal.pbio.1001319] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2011] [Accepted: 03/20/2012] [Indexed: 11/18/2022] Open
Abstract
Why is spatial tuning in auditory cortex weak, even though location is important to object recognition in natural settings? This question continues to vex neuroscientists focused on linking physiological results to auditory perception. Here we show that the spatial locations of simultaneous, competing sound sources dramatically influence how well neural spike trains recorded from the zebra finch field L (an analog of mammalian primary auditory cortex) encode source identity. We find that the location of a birdsong played in quiet has little effect on the fidelity of the neural encoding of the song. However, when the song is presented along with a masker, spatial effects are pronounced. For each spatial configuration, a subset of neurons encodes song identity more robustly than others. As a result, competing sources from different locations dominate responses of different neural subpopulations, helping to separate neural responses into independent representations. These results help elucidate how cortical processing exploits spatial information to provide a substrate for selective spatial auditory attention.
Collapse
Affiliation(s)
- Ross K. Maddox
- Hearing Research Center, Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America
- Center for Biodynamics, Boston University, Boston, Massachusetts, United States of America
- * E-mail:
| | - Cyrus P. Billimoria
- Hearing Research Center, Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America
- Center for Biodynamics, Boston University, Boston, Massachusetts, United States of America
| | - Ben P. Perrone
- Hearing Research Center, Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America
- Center for Biodynamics, Boston University, Boston, Massachusetts, United States of America
| | - Barbara G. Shinn-Cunningham
- Hearing Research Center, Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, Massachusetts, United States of America
| | - Kamal Sen
- Hearing Research Center, Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America
- Center for Biodynamics, Boston University, Boston, Massachusetts, United States of America
| |
Collapse
|