1. Gilday OD, Praegel B, Maor I, Cohen T, Nelken I, Mizrahi A. Surround suppression in mouse auditory cortex underlies auditory edge detection. PLoS Comput Biol 2023; 19:e1010861. PMID: 36656876; PMCID: PMC9888713; DOI: 10.1371/journal.pcbi.1010861.
Abstract
Surround suppression (SS) is a fundamental property of sensory processing throughout the brain. In the auditory system, the early processing stream encodes sounds along a single physical dimension: frequency. Previous studies in the auditory system have shown SS to manifest as bandwidth tuning around the preferred frequency. We asked whether bandwidth tuning can be found around frequencies away from the preferred frequency. We exploited the simplicity of the spectral representation of sounds to study SS by manipulating both sound frequency and bandwidth. We recorded single-unit spiking activity from the auditory cortex (ACx) of awake mice in response to an array of broadband stimuli with varying central frequencies and bandwidths. Our recordings revealed that a significant portion of neuronal response profiles had a preferred bandwidth that varied in a regular way with the sound's central frequency. To gain insight into the possible mechanism underlying these responses, we modelled neuronal activity using a variation of the "Mexican hat" function often used to model SS. The model accounted for the response properties of single neurons with high accuracy. Our data and model show that these responses in ACx obey simple rules resulting from the presence of lateral inhibitory sidebands, mostly above the excitatory band of the neuron, which make neurons sensitive to the location of the top frequency edge of a sound, invariant to its other spectral attributes. Our work offers a simple explanation for auditory edge detection and possibly for other computations of spectral content in sounds.
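The "Mexican hat" profile with an upward-shifted inhibitory sideband can be illustrated in a few lines. This is a generic difference-of-Gaussians sketch, not the authors' fitted model; all parameter values (center frequency, sideband widths, the upward shift of the inhibitory band) are invented for illustration:

```python
import math

def mexican_hat(f, f0=8.0, sigma_e=0.5, sigma_i=1.0, a_e=1.0, a_i=0.6, inh_shift=0.7):
    """Difference-of-Gaussians ("Mexican hat") response at log-frequency f (octaves).
    The inhibitory Gaussian is shifted above the excitatory band (inh_shift > 0),
    mimicking the asymmetric sidebands described in the abstract."""
    exc = a_e * math.exp(-((f - f0) ** 2) / (2 * sigma_e ** 2))
    inh = a_i * math.exp(-((f - (f0 + inh_shift)) ** 2) / (2 * sigma_i ** 2))
    return exc - inh

def band_response(lo, hi, n=200):
    """Summed drive from a flat band of sound energy between log-frequencies lo and hi."""
    step = (hi - lo) / n
    return sum(mexican_hat(lo + (i + 0.5) * step) for i in range(n)) * step
```

Because the inhibitory Gaussian sits above the excitatory band, a band whose top edge stops below the sideband (e.g. `band_response(7.0, 8.3)`) drives the model harder than a wider band extending into it (e.g. `band_response(7.0, 9.5)`), reproducing the top-edge sensitivity described above.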
Affiliation(s)
- Omri David Gilday: The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Benedikt Praegel: The Edmond and Lily Safra Center for Brain Sciences and Department of Neurobiology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Ido Maor: The Edmond and Lily Safra Center for Brain Sciences and Department of Neurobiology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Tav Cohen: Department of Neurobiology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Israel Nelken: The Edmond and Lily Safra Center for Brain Sciences and Department of Neurobiology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Adi Mizrahi: The Edmond and Lily Safra Center for Brain Sciences and Department of Neurobiology, The Hebrew University of Jerusalem, Jerusalem, Israel
2. Souffi S, Varnet L, Zaidi M, Bathellier B, Huetz C, Edeline JM. Reduction in sound discrimination in noise is related to envelope similarity and not to a decrease in envelope tracking abilities. J Physiol 2023; 601:123-149. PMID: 36373184; DOI: 10.1113/JP283526.
Abstract
Humans and animals constantly face challenging acoustic environments, such as various background noises, that impair the detection, discrimination and identification of behaviourally relevant sounds. Here, we disentangled the role of temporal envelope tracking in the reduction in neuronal and behavioural discrimination between communication sounds under acoustic degradation. By collecting neuronal activity from six different levels of the auditory system, from the auditory nerve up to the secondary auditory cortex, in anaesthetized guinea-pigs, we found that tracking of slow changes in the temporal envelope is a general functional property of auditory neurons for encoding communication sounds, both in quiet and in adverse, challenging conditions. Results from a go/no-go sound discrimination task in mice support the idea that the loss of distinct slow envelope cues in noisy conditions impairs discrimination performance. Together, these results suggest that envelope tracking is potentially a universal mechanism operating in the central auditory system, which allows the detection of any between-stimulus difference in the slow envelope and thus copes with degraded conditions.
KEY POINTS:
- In quiet conditions, envelope tracking in the low amplitude-modulation range (<20 Hz) is correlated with the neuronal discrimination between communication sounds, as quantified by mutual information, from the cochlear nucleus up to the auditory cortex.
- At each level of the auditory system, auditory neurons retain their ability to track communication sound envelopes under acoustic degradation, such as vocoding and the addition of masking noise down to a signal-to-noise ratio of -10 dB.
- In noisy conditions, the increase in between-stimulus envelope similarity explains the reduction in both behavioural and neuronal discrimination in the auditory system.
- Envelope tracking can be viewed as a universal mechanism that allows neural and behavioural discrimination as long as the temporal envelopes of communication sounds display some differences.
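The slow-envelope computation can be sketched as follows. This is an illustrative approximation, not the authors' pipeline: a proper implementation would use a Hilbert transform and a real low-pass filter, whereas here the envelope is a full-wave rectified, moving-average estimate, and similarity is plain Pearson correlation:

```python
import math

def slow_envelope(signal, fs, cutoff_hz=20.0):
    """Crude slow-envelope estimate (<cutoff_hz): full-wave rectify, then smooth
    with a moving average whose window spans roughly one cutoff period."""
    win = max(1, int(fs / cutoff_hz))
    half = win // 2
    rect = [abs(x) for x in signal]
    env = []
    for i in range(len(rect)):
        seg = rect[max(0, i - half):i + half + 1]
        env.append(sum(seg) / len(seg))
    return env

def pearson(a, b):
    """Pearson correlation, used as an envelope-similarity measure."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)
```

Comparing `pearson` of the slow envelopes of two amplitude-modulated sounds before and after adding a common loud masker shows the key effect: the masker drags both envelopes toward its own, raising between-stimulus envelope similarity and thus reducing discriminability.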
Affiliation(s)
- Samira Souffi: Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
- Léo Varnet: Laboratoire des systèmes perceptifs, UMR CNRS 8248, Département d'Etudes Cognitives, Ecole Normale Supérieure, Université Paris Sciences & Lettres, Paris, France
- Meryem Zaidi: Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
- Brice Bathellier: Institut de l'Audition, Institut Pasteur, Université de Paris, INSERM, Paris, France
- Chloé Huetz: Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
- Jean-Marc Edeline: Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
3. Anandakumar DB, Liu RC. More than the end: OFF response plasticity as a mnemonic signature of a sound's behavioral salience. Front Comput Neurosci 2022; 16:974264. PMID: 36148326; PMCID: PMC9485674; DOI: 10.3389/fncom.2022.974264.
Abstract
In studying how neural populations in sensory cortex code dynamically varying stimuli to guide behavior, the role of spiking after stimuli have ended has been underappreciated. This is despite growing evidence that such activity can be tuned, experience- and context-dependent, and necessary for sensory decisions that play out on a slower timescale. Here we review recent studies, focusing on the auditory modality, demonstrating that this so-called OFF activity can have a more complex temporal structure than the purely phasic firing that has often been interpreted as merely marking the end of a stimulus. While diverse and still incompletely understood mechanisms are likely involved in generating phasic and tonic OFF firing, a growing number of studies point to this continuing post-stimulus activity serving a short-term, stimulus-specific mnemonic function that is enhanced when the stimuli are particularly salient. We summarize these results with a conceptual model highlighting how more neurons within the auditory cortical population fire for a longer duration after a sound's termination during active behavior, and can continue to do so even while the animal passively listens to behaviorally salient stimuli. Overall, these studies increasingly suggest that tonic auditory cortical OFF activity holds an echoic memory of specific, salient sounds to guide behavioral decisions.
Affiliation(s)
- Dakshitha B Anandakumar: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, United States; Department of Biology, Emory University, Atlanta, GA, United States
- Robert C Liu: Department of Biology, Emory University, Atlanta, GA, United States; Center for Translational Social Neuroscience, Emory University, Atlanta, GA, United States
4. Kang H, Auksztulewicz R, An H, Abi Chacra N, Sutter ML, Schnupp JWH. Neural Correlates of Auditory Pattern Learning in the Auditory Cortex. Front Neurosci 2021; 15:610978. PMID: 33790730; PMCID: PMC8005649; DOI: 10.3389/fnins.2021.610978.
Abstract
Learning of new auditory stimuli often requires repetitive exposure to the stimulus. Fast and implicit learning of sounds presented at random times enables efficient auditory perception. However, it is unclear how such sensory encoding is processed at the neural level. We investigated neural responses that develop from passive, repetitive exposure to a specific sound in the auditory cortex of anesthetized rats, using electrocorticography. We presented a series of random sequences that were generated afresh each time, except for a specific reference sequence that remained constant and re-appeared at random times across trials. We compared induced activity amplitudes between reference and fresh sequences. Neural responses from both primary and non-primary auditory cortical regions showed significantly decreased induced activity amplitudes for reference sequences compared to fresh sequences, especially in the beta band. This is the first study to show that neural correlates of auditory pattern learning can be evoked even in anesthetized, passively listening animals.
Affiliation(s)
- Hijee Kang: Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Ryszard Auksztulewicz: Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong; Neuroscience Department, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Hyunjung An: Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Nicolas Abi Chacra: Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Mitchell L Sutter: Center for Neuroscience and Section of Neurobiology, Physiology and Behavior, University of California, Davis, Davis, CA, United States
- Jan W H Schnupp: Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
5. Royer J, Huetz C, Occelli F, Cancela JM, Edeline JM. Enhanced Discriminative Abilities of Auditory Cortex Neurons for Pup Calls Despite Reduced Evoked Responses in C57BL/6 Mother Mice. Neuroscience 2020; 453:1-16. PMID: 33253823; DOI: 10.1016/j.neuroscience.2020.11.031.
Abstract
A fundamental task for the auditory system is to process communication sounds according to their behavioral significance. In many mammalian species, pup calls become more significant for mothers than other conspecific and heterospecific communication sounds. To study the cortical consequences of motherhood on the processing of communication sounds, we recorded neuronal responses in the primary auditory cortex of virgin and mother C57BL/6 mice, which had similar ABR thresholds. In mothers, the evoked firing rate in response to pure tones was decreased and the frequency receptive fields were narrower. The responses to pup and adult calls were also reduced, but the amount of mutual information (MI) per spike about the pup call's identity was increased in mother mice. The response latency to pup and adult calls was significantly shorter in mothers. Despite similarly decreased responses to guinea pig whistles, neither the response latency nor the MI per spike differed between virgins and mothers for these heterospecific vocalizations. Noise correlations between cortical recordings were decreased in mothers, suggesting that the firing rates of distant neurons were more independent of one another. Together, these results indicate that in the mouse strain most commonly used for behavioral studies, the discrimination of pup calls by auditory cortex neurons is more efficient during motherhood.
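Mutual information per spike, the discrimination metric used above, can be sketched with a naive plug-in estimator over binned spike counts. This is an illustrative computation, not the authors' exact estimator (real analyses typically add bias correction); the function names and data layout are assumptions:

```python
import math
from collections import Counter

def mi_bits(stims, counts):
    """Mutual information (bits) between stimulus identity and spike count,
    estimated with a naive plug-in estimator from paired trial observations."""
    n = len(stims)
    p_joint = Counter(zip(stims, counts))
    p_s = Counter(stims)
    p_c = Counter(counts)
    mi = 0.0
    for (s, c), k in p_joint.items():
        pj = k / n
        mi += pj * math.log2(pj / ((p_s[s] / n) * (p_c[c] / n)))
    return mi

def mi_per_spike(stims, counts):
    """Normalize information by the mean spike count, giving bits per spike."""
    mean_count = sum(counts) / len(counts)
    return mi_bits(stims, counts) / mean_count if mean_count > 0 else 0.0
```

A neuron that reliably fires one spike for call "a" and two for call "b" carries 1 bit at a mean count of 1.5 (about 0.67 bits/spike); a high-rate neuron whose counts do not depend on the call carries no information per spike, which is how a reduced firing rate can coexist with increased MI per spike.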
Affiliation(s)
- Juliette Royer: Université Paris-Saclay, CNRS UMR 9197, Institut des neurosciences Paris-Saclay, 91190 Gif-sur-Yvette, France
- Chloé Huetz: Université Paris-Saclay, CNRS UMR 9197, Institut des neurosciences Paris-Saclay, 91190 Gif-sur-Yvette, France
- Florian Occelli: Université Paris-Saclay, CNRS UMR 9197, Institut des neurosciences Paris-Saclay, 91190 Gif-sur-Yvette, France
- José-Manuel Cancela: Université Paris-Saclay, CNRS UMR 9197, Institut des neurosciences Paris-Saclay, 91190 Gif-sur-Yvette, France
- Jean-Marc Edeline: Université Paris-Saclay, CNRS UMR 9197, Institut des neurosciences Paris-Saclay, 91190 Gif-sur-Yvette, France
6. Gupta P, Balasubramaniam N, Chang HY, Tseng FG, Santra TS. A Single-Neuron: Current Trends and Future Prospects. Cells 2020; 9:E1528. PMID: 32585883; PMCID: PMC7349798; DOI: 10.3390/cells9061528.
Abstract
The brain is an intricate network with complex organizational principles facilitating concerted communication between single neurons, distinct neuronal populations, and remote brain areas. This communication, technically referred to as connectivity, is the focus of many investigations aimed at elucidating pathophysiology, anatomical differences, and structural and functional features. In comparison with bulk analysis, single-neuron analysis can provide precise information about electrophysiology, anatomy, pathophysiology, and structural and functional features at the level of individual neurons, or even subcellular compartments, in addition to their communication with other neurons, and can thus supply essential information for understanding the brain and its activity. This review highlights various single-neuron models and their behaviors, followed by different analysis methods. To elucidate cellular dynamics in terms of electrophysiology at the single-neuron level, we emphasize in detail the role of single-neuron mapping and electrophysiological recording. We also elaborate on recent developments in single-neuron isolation, manipulation, and therapeutic progress using advanced micro/nanofluidic devices, as well as microinjection, electroporation, microelectrode arrays, optical transfection, and optogenetic techniques. Further, developments in the field of artificial intelligence in relation to single neurons are highlighted. The review concludes with the limitations and future prospects of single-neuron analyses.
Affiliation(s)
- Pallavi Gupta: Department of Engineering Design, Indian Institute of Technology Madras, Tamil Nadu 600036, India
- Nandhini Balasubramaniam: Department of Engineering Design, Indian Institute of Technology Madras, Tamil Nadu 600036, India
- Hwan-You Chang: Department of Medical Science, National Tsing Hua University, Hsinchu 30013, Taiwan
- Fan-Gang Tseng: Department of Engineering and System Science, National Tsing Hua University, Hsinchu 30013, Taiwan
- Tuhin Subhra Santra: Department of Engineering Design, Indian Institute of Technology Madras, Tamil Nadu 600036, India
7. Levy RB, Marquarding T, Reid AP, Pun CM, Renier N, Oviedo HV. Circuit asymmetries underlie functional lateralization in the mouse auditory cortex. Nat Commun 2019; 10:2783. PMID: 31239458; PMCID: PMC6592910; DOI: 10.1038/s41467-019-10690-3.
Abstract
The left hemisphere's dominance in processing social communication has been known for over a century, but the mechanisms underlying this lateralized cortical function are poorly understood. Here, we compare the structure, function, and development of each auditory cortex (ACx) in the mouse to look for specializations that may underlie lateralization. Using Fos brain volume imaging, we found greater activation in the left ACx in response to vocalizations, while the right ACx responded more to frequency sweeps. In vivo recordings identified hemispheric differences in spectrotemporal selectivity, reinforcing these functional differences. We then compared the synaptic connectivity within each hemisphere and discovered lateralized circuit motifs that depend on hearing experience. Our results suggest a specialist role for the left ACx, focused on facilitating the detection of specific vocalization features, while the right ACx is a generalist with the ability to integrate spectrotemporal features more broadly.
Affiliation(s)
- Robert B Levy: Biology Department, The City College of New York, New York, NY, 10031, USA
- Tiemo Marquarding: Biology Department, The City College of New York, New York, NY, 10031, USA; Institute for Molecular and Cellular Cognition, Center for Molecular Neurobiology Hamburg, University Medical Center Hamburg-Eppendorf, Hamburg, 20251, Germany
- Ashlan P Reid: Biology Department, The City College of New York, New York, NY, 10031, USA
- Christopher M Pun: The City College of New York, Macaulay Honors College, New York, NY, 10031, USA
- Nicolas Renier: Institut du Cerveau et de la Moelle Epinière, Paris, 75013, France
- Hysell V Oviedo: Biology Department, The City College of New York, New York, NY, 10031, USA; CUNY Graduate Center, New York, NY, 10016, USA
8. Vasquez-Lopez SA, Weissenberger Y, Lohse M, Keating P, King AJ, Dahmen JC. Thalamic input to auditory cortex is locally heterogeneous but globally tonotopic. eLife 2017; 6:e25141. PMID: 28891466; PMCID: PMC5614559; DOI: 10.7554/elife.25141.
Abstract
Topographic representation of the receptor surface is a fundamental feature of sensory cortical organization. This is imparted by the thalamus, which relays information from the periphery to the cortex. To better understand the rules governing thalamocortical connectivity and the origin of cortical maps, we used in vivo two-photon calcium imaging to characterize the properties of thalamic axons innervating different layers of mouse auditory cortex. Although tonotopically organized at a global level, we found that the frequency selectivity of individual thalamocortical axons is surprisingly heterogeneous, even in layers 3b/4 of the primary cortical areas, where the thalamic input is dominated by the lemniscal projection. We also show that thalamocortical input to layer 1 includes collaterals from axons innervating layers 3b/4 and is largely in register with the main input targeting those layers. Such locally varied thalamocortical projections may be useful in enabling rapid contextual modulation of cortical frequency representations.
Affiliation(s)
- Yves Weissenberger: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Michael Lohse: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Peter Keating: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom; Ear Institute, University College London, London, United Kingdom
- Andrew J King: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Johannes C Dahmen: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
9. Eliades SJ, Wang X. Contributions of sensory tuning to auditory-vocal interactions in marmoset auditory cortex. Hear Res 2017; 348:98-111. PMID: 28284736; DOI: 10.1016/j.heares.2017.03.001.
Abstract
During speech, humans continuously listen to their own vocal output to ensure accurate communication. Such self-monitoring is thought to require the integration of information about the feedback of vocal acoustics with internal motor control signals. The neural mechanism of this auditory-vocal interaction remains largely unknown at the cellular level. Previous studies in naturally vocalizing marmosets have demonstrated diverse neural activities in auditory cortex during vocalization, dominated by a vocalization-induced suppression of neural firing. How underlying auditory tuning properties of these neurons might contribute to this sensory-motor processing is unknown. In the present study, we quantitatively compared marmoset auditory cortex neural activities during vocal production with those during passive listening. We found that neurons excited during vocalization were readily driven by passive playback of vocalizations and other acoustic stimuli. In contrast, neurons suppressed during vocalization exhibited more diverse playback responses, including responses that were not predictable by auditory tuning properties. These results suggest that vocalization-related excitation in auditory cortex is largely a sensory-driven response. In contrast, vocalization-induced suppression is not well predicted by a neuron's auditory responses, supporting the prevailing theory that internal motor-related signals contribute to the auditory-vocal interaction observed in auditory cortex.
Affiliation(s)
- Steven J Eliades: Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Xiaoqin Wang: Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
10. Issa JB, Haeffele BD, Young ED, Yue DT. Multiscale mapping of frequency sweep rate in mouse auditory cortex. Hear Res 2016; 344:207-222. PMID: 28011084; DOI: 10.1016/j.heares.2016.11.018.
Abstract
Functional organization is a key feature of the neocortex that often guides studies of sensory processing, development, and plasticity. Tonotopy, which arises from the transduction properties of the cochlea, is the most widely studied organizational feature in auditory cortex; however, in order to process complex sounds, cortical regions are likely specialized for higher order features. Here, motivated by the prevalence of frequency modulations in mouse ultrasonic vocalizations and aided by the use of a multiscale imaging approach, we uncover a functional organization across the extent of auditory cortex for the rate of frequency modulated (FM) sweeps. In particular, using two-photon Ca2+ imaging of layer 2/3 neurons, we identify a tone-insensitive region at the border of AI and AAF. This central sweep region behaves fundamentally differently from nearby neurons in AI and AII, responding preferentially to fast FM sweeps but not to tones or bandlimited noise. Together these findings define a second dimension of organization in the mouse auditory cortex for sweep rate complementary to that of tone frequency.
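A logarithmic FM sweep of the kind used to map sweep-rate preference can be synthesized by accumulating phase over an exponentially changing instantaneous frequency. This is a generic stimulus sketch; the sample rate and parameter names are assumptions, not taken from the paper's stimulus set:

```python
import math

def fm_sweep(f_start_hz, rate_oct_per_s, dur_s, fs=192_000):
    """Logarithmic FM sweep: instantaneous frequency f(t) = f_start * 2**(rate*t).
    Phase is accumulated by integrating f(t) sample by sample, so the waveform
    is continuous with no phase jumps."""
    n = int(dur_s * fs)
    phase = 0.0
    out = []
    for i in range(n):
        t = i / fs
        f_inst = f_start_hz * 2.0 ** (rate_oct_per_s * t)
        phase += 2.0 * math.pi * f_inst / fs
        out.append(math.sin(phase))
    return out
```

Sweep rate is expressed in octaves per second, so `fm_sweep(4000, 2.0, 0.5)` glides from 4 kHz to 8 kHz over half a second; varying `rate_oct_per_s` while holding the frequency range fixed is the stimulus dimension the central sweep region is selective for.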
Affiliation(s)
- John B Issa: Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Ross Building, Room 713, 720 Rutland Avenue, Baltimore, MD 21205, USA
- Benjamin D Haeffele: Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Ross Building, Room 713, 720 Rutland Avenue, Baltimore, MD 21205, USA
- Eric D Young: Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Ross Building, Room 713, 720 Rutland Avenue, Baltimore, MD 21205, USA; Solomon H. Snyder Department of Neuroscience, The Johns Hopkins University School of Medicine, 725 N. Wolfe Street, WBSB, Baltimore, MD 21205, USA
- David T Yue: Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Ross Building, Room 713, 720 Rutland Avenue, Baltimore, MD 21205, USA; Center for Cell Dynamics, The Johns Hopkins University School of Medicine, 720 Rutland Avenue, Baltimore, MD 21205, USA; Solomon H. Snyder Department of Neuroscience, The Johns Hopkins University School of Medicine, 725 N. Wolfe Street, WBSB, Baltimore, MD 21205, USA
11. Ni R, Bender DA, Shanechi AM, Gamble JR, Barbour DL. Contextual effects of noise on vocalization encoding in primary auditory cortex. J Neurophysiol 2016; 117:713-727. PMID: 27881720; DOI: 10.1152/jn.00476.2016.
Abstract
Robust auditory perception plays a pivotal role in processing behaviorally relevant sounds, particularly amid distractions from the environment. The neuronal coding enabling this ability, however, is still not well understood. In this study, we recorded single-unit activity from the primary auditory cortex (A1) of awake marmoset monkeys (Callithrix jacchus) while delivering conspecific vocalizations degraded by two different background noises: broadband white noise and vocalization babble. Noise effects on the neural representation of target vocalizations were quantified by measuring the responses' similarity to those elicited by natural vocalizations as a function of signal-to-noise ratio. A clustering approach was used to describe the range of response profiles by reducing the population responses to a summary of four response classes (robust, balanced, insensitive, and brittle) under both noise conditions. This clustering approach revealed that, on average, approximately two-thirds of the neurons changed their response class when encountering different noises. Therefore, the distortion induced by one particular masking background in single-unit responses is not necessarily predictable from that induced by another, suggesting that a unique group of noise-invariant neurons across different background conditions in A1 is unlikely. Regarding noise influence on neural activity, the brittle response group showed added spiking activity both within and between phrases of vocalizations relative to clean vocalizations, whereas the other groups generally showed suppression of spiking activity within phrases, and the alteration between phrases was noise dependent.
NEW & NOTEWORTHY The understanding of where and how auditory scene analysis is accomplished is of broad interest to neuroscientists. In this paper, we systematically investigated neuronal coding of multiple vocalizations degraded by two distinct noises at various signal-to-noise ratios in nonhuman primates. In the process, we uncovered heterogeneity of single-unit representations for different auditory scenes yet homogeneity of responses across the population.
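The classification of units into response classes can be illustrated with a nearest-prototype assignment over similarity-vs-SNR profiles. The paper's clustering was data-driven rather than prototype-based, and the prototype vectors and SNR grid below are hypothetical; this sketch only shows the shape of such a classification step:

```python
def classify_profile(similarity_by_snr, prototypes):
    """Assign a unit's similarity-vs-SNR profile to the nearest prototype
    (squared Euclidean distance); a stand-in for data-driven clustering."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda name: dist(similarity_by_snr, prototypes[name]))

# Hypothetical prototype profiles over SNRs [-10, 0, +10, clean] dB:
PROTOTYPES = {
    "robust":      [0.80, 0.85, 0.90, 0.95],  # similar to clean response at all SNRs
    "balanced":    [0.40, 0.60, 0.80, 0.95],  # degrades gracefully with noise level
    "brittle":     [0.10, 0.20, 0.40, 0.95],  # collapses once any noise is added
    "insensitive": [0.20, 0.20, 0.25, 0.30],  # never resembles the clean response
}
```

Running the classifier under each background noise separately and comparing the two labels per unit would reproduce the paper's observation that many units change class across noise types.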
Affiliation(s)
- Ruiye Ni, David A Bender, Amirali M Shanechi, Jeffrey R Gamble, and Dennis L Barbour: Laboratory of Sensory Neuroscience and Neuroengineering, Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri
12. Williamson RS, Ahrens MB, Linden JF, Sahani M. Input-Specific Gain Modulation by Local Sensory Context Shapes Cortical and Thalamic Responses to Complex Sounds. Neuron 2016; 91:467-481. PMID: 27346532; PMCID: PMC4961224; DOI: 10.1016/j.neuron.2016.05.041.
Abstract
Sensory neurons are customarily characterized by one or more linearly weighted receptive fields describing sensitivity in sensory space and time. We show that in auditory cortical and thalamic neurons, the weight of each receptive field element depends on the pattern of sound falling within a local neighborhood surrounding it in time and frequency. Accounting for this change in effective receptive field with spectrotemporal context improves predictions of both cortical and thalamic responses to stationary complex sounds. Although context dependence varies among neurons and across brain areas, there are strong shared qualitative characteristics. In a spectrotemporally rich soundscape, sound elements modulate neuronal responsiveness more effectively when they coincide with sounds at other frequencies, and less effectively when they are preceded by sounds at similar frequencies. This local-context-driven lability in the representation of complex sounds (a modulation of "input-specific gain" rather than "output gain") may be a widespread motif in sensory processing.
Highlights:
- Gain of neuronal responses to sound components varies with immediate acoustic context
- "Contextual gain fields" can be estimated from neuronal responses to complex sounds
- Coincident sound at different frequencies boosts gain in cortex and thalamus
- Preceding sound at similar frequency reduces gain for longer in cortex than thalamus
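The distinction between input-specific gain and output gain can be sketched as a receptive-field prediction in which each input bin is rescaled by its local context before being weighted. The gain rule and coefficients below are invented for illustration, and the receptive field is reduced to per-frequency weights with no time lags; this is not the authors' fitted contextual gain field model:

```python
def predict_response(spectrogram, weights, coincident_boost=0.5, preceding_suppress=0.5):
    """Receptive-field prediction with input-specific gain: each time-frequency
    bin's contribution is scaled by its local context before weighting.
    spectrogram[f][t] is energy at frequency bin f, time bin t; weights[f] is
    the (lag-free) receptive-field weight for bin f."""
    n_f = len(spectrogram)
    n_t = len(spectrogram[0])
    out = []
    for t in range(n_t):
        acc = 0.0
        for f in range(n_f):
            x = spectrogram[f][t]
            # coincident energy at other frequencies (same time bin) boosts gain
            coincident = sum(spectrogram[g][t] for g in range(n_f) if g != f)
            # energy at the same frequency in the preceding bin suppresses gain
            preceding = spectrogram[f][t - 1] if t > 0 else 0.0
            gain = 1.0 + coincident_boost * coincident - preceding_suppress * preceding
            acc += weights[f] * x * max(gain, 0.0)
        out.append(acc)
    return out
```

An output-gain model would instead rescale the summed prediction `acc` after the loop; scaling each input `x` individually, by its own neighborhood, is what makes the modulation input-specific.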
Affiliation(s)
- Ross S Williamson
- Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, UK; Centre for Mathematics and Physics in the Life Sciences and Experimental Biology, University College London, London WC1E 6BT, UK
- Misha B Ahrens
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK
- Jennifer F Linden
- Ear Institute, University College London, London WC1X 8EE, UK; Department of Neuroscience, Physiology and Pharmacology, University College London, London WC1E 6BT, UK.
- Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, UK.
13
Honey C, Schnupp J. Neural Resolution of Formant Frequencies in the Primary Auditory Cortex of Rats. PLoS One 2015; 10:e0134078. PMID: 26252382. PMCID: PMC4529216. DOI: 10.1371/journal.pone.0134078.
Abstract
Pulse-resonance sounds play an important role in animal communication and auditory object recognition, yet very little is known about the cortical representation of this class of sounds. In this study we address one simple aspect: how well does the firing rate of cortical neurons resolve the resonant ("formant") frequencies of vowel-like pulse-resonance sounds? We recorded neural responses in the primary auditory cortex (A1) of anesthetized rats to two-formant pulse-resonance sounds, and estimated their formant resolving power using a statistical kernel smoothing method which takes into account the natural variability of cortical responses. While formant-tuning functions were diverse in structure across different penetrations, most were sensitive to changes in formant frequency, with a frequency resolution comparable to that reported for rat cochlear filters.
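The kernel-smoothing step can be illustrated with a generic Nadaraya-Watson estimator (a standard choice, not necessarily the paper's exact method): estimate the mean firing rate as a smooth function of formant frequency from noisy per-trial spike counts. The synthetic tuning curve and all parameter values below are made up for the sketch:

```python
import numpy as np

def kernel_smooth(x, y, x_eval, bandwidth):
    """Nadaraya-Watson estimate of E[y | x] using a Gaussian kernel."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    est = np.empty(len(x_eval))
    for i, x0 in enumerate(x_eval):
        w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
        est[i] = np.sum(w * y) / np.sum(w)
    return est

# Synthetic "formant tuning": spike counts vary smoothly with formant frequency.
rng = np.random.default_rng(1)
formant_hz = rng.uniform(500, 3000, size=400)
true_rate = 10 + 8 * np.exp(-0.5 * ((formant_hz - 1500) / 300) ** 2)
counts = rng.poisson(true_rate)                   # trial-to-trial Poisson variability

grid = np.linspace(500, 3000, 50)
smoothed = kernel_smooth(formant_hz, counts, grid, bandwidth=150.0)
```

The bandwidth controls the trade-off the abstract alludes to: a smoother that respects trial-to-trial variability must average enough trials to suppress noise without blurring genuine changes in formant tuning.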
Affiliation(s)
- Jan Schnupp
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
14
Affiliation(s)
- Gideon Rothschild
- Department of Physiology and Center for Integrative Neuroscience, University of California, San Francisco, California 94158;
- Adi Mizrahi
- Department of Neurobiology, Institute of Life Sciences, The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Edmond J. Safra Campus, 91904 Givat Ram Jerusalem, Israel;
15
Becoming a mother – circuit plasticity underlying maternal behavior. Curr Opin Neurobiol 2015; 35:49-56. PMID: 26143475. DOI: 10.1016/j.conb.2015.06.007.
Abstract
The transition to motherhood is a dramatic event during the lifetime of many animals. In mammals, motherhood is accompanied by hormonal changes in the brain that start during pregnancy, followed by experience dependent plasticity after parturition. Together, these changes prime the nervous system of the mother for efficient nurturing of her offspring. Recent work has described how neural circuits are modified during the transition to motherhood. Here we discuss changes in the auditory cortex during motherhood as a model for maternal plasticity in sensory systems. We compare classical plasticity paradigms with changes that arise naturally in mothers, highlighting current efforts to establish a mechanistic understanding of plasticity and its different components in the context of maternal behavior.
16
Moshitch D, Nelken I. The Representation of Interaural Time Differences in High-Frequency Auditory Cortex. Cereb Cortex 2014; 26:656-68. DOI: 10.1093/cercor/bhu230.
17
Nataf S. The sensory immune system: a neural twist to the antigenic discontinuity theory. Nat Rev Immunol 2014; 14:280. PMID: 24662388. DOI: 10.1038/nri3521-c1.
Affiliation(s)
- Serge Nataf
- Lyon Neuroscience Research Center, INSERM 1028 CNRS UMR5292, University Lyon-1, Banque de tissus et de cellules, Hôpital Edouard Herriot, Lyon University Hospital (Hospices Civils de Lyon), Lyon F-69000, France
18
Rabinowitz NC, Willmore BDB, King AJ, Schnupp JWH. Constructing noise-invariant representations of sound in the auditory pathway. PLoS Biol 2013; 11:e1001710. PMID: 24265596. PMCID: PMC3825667. DOI: 10.1371/journal.pbio.1001710.
Abstract
Along the auditory pathway from auditory nerve to midbrain to cortex, individual neurons adapt progressively to sound statistics, enabling the discernment of foreground sounds, such as speech, over background noise. Identifying behaviorally relevant sounds in the presence of background noise is one of the most important and poorly understood challenges faced by the auditory system. An elegant solution to this problem would be for the auditory system to represent sounds in a noise-invariant fashion. Since a major effect of background noise is to alter the statistics of the sounds reaching the ear, noise-invariant representations could be promoted by neurons adapting to stimulus statistics. Here we investigated the extent of neuronal adaptation to the mean and contrast of auditory stimulation as one ascends the auditory pathway. We measured these forms of adaptation by presenting complex synthetic and natural sounds, recording neuronal responses in the inferior colliculus and primary fields of the auditory cortex of anaesthetized ferrets, and comparing these responses with a sophisticated model of the auditory nerve. We find that the strength of both forms of adaptation increases as one ascends the auditory pathway. To investigate whether this adaptation to stimulus statistics contributes to the construction of noise-invariant sound representations, we also presented complex, natural sounds embedded in stationary noise, and used a decoding approach to assess the noise tolerance of the neuronal population code. We find that the code for complex sounds in the periphery is affected more by the addition of noise than the cortical code. We also find that noise tolerance is correlated with adaptation to stimulus statistics, so that populations that show the strongest adaptation to stimulus statistics are also the most noise-tolerant. This suggests that the increase in adaptation to sound statistics from auditory nerve to midbrain to cortex is an important stage in the construction of noise-invariant sound representations in the higher auditory brain.

We rarely hear sounds (such as someone talking) in isolation, but rather against a background of noise. When mixtures of sounds and background noise reach the ears, peripheral auditory neurons represent the whole sound mixture. Previous evidence suggests, however, that the higher auditory brain represents just the sounds of interest, and is less affected by the presence of background noise. The neural mechanisms underlying this transformation are poorly understood. Here, we investigate these mechanisms by studying the representation of sound by populations of neurons at three stages along the auditory pathway; we simulate the auditory nerve and record from neurons in the midbrain and primary auditory cortex of anesthetized ferrets. We find that the transformation from noise-sensitive representations of sound to noise-tolerant processing takes place gradually along the pathway from auditory nerve to midbrain to cortex. Our results suggest that this results from neurons adapting to the statistics of heard sounds.
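The "decoding approach to assess noise tolerance" can be caricatured with synthetic data: classify which sound was presented from single-trial population responses using a nearest-template decoder, and compare accuracy when trial noise is small versus large (larger noise standing in for embedded background sound). Everything below — population size, templates, noise levels — is an assumption of the sketch, not the paper's decoder:

```python
import numpy as np

rng = np.random.default_rng(2)

n_neurons, n_sounds, n_trials = 40, 5, 20
# Mean population response ("template") evoked by each sound.
templates = rng.random((n_sounds, n_neurons)) * 10

def simulate_trials(noise_sd):
    """Single-trial population responses: template plus trial-to-trial noise."""
    X, labels = [], []
    for s in range(n_sounds):
        X.append(templates[s] + rng.normal(0.0, noise_sd, (n_trials, n_neurons)))
        labels += [s] * n_trials
    return np.vstack(X), np.array(labels)

def decode_accuracy(X, labels):
    """Nearest-template decoding: assign each trial to its closest template."""
    dists = np.linalg.norm(X[:, None, :] - templates[None, :, :], axis=2)
    return float(np.mean(dists.argmin(axis=1) == labels))

acc_clean = decode_accuracy(*simulate_trials(noise_sd=0.5))
acc_noisy = decode_accuracy(*simulate_trials(noise_sd=8.0))
```

The gap between `acc_clean` and `acc_noisy` is the toy analogue of the noise tolerance measure: a population whose responses are less corrupted by added noise loses less decoding accuracy.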
Affiliation(s)
- Neil C. Rabinowitz
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Center for Neural Science, New York University, New York, New York, United States of America
- * E-mail: (N.C.R.); (J.W.H.S.)
- Ben D. B. Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J. King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Jan W. H. Schnupp
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- * E-mail: (N.C.R.); (J.W.H.S.)
19
Maddox RK, Sen K, Billimoria CP. Auditory forebrain neurons track temporal features of time-warped natural stimuli. J Assoc Res Otolaryngol 2013; 15:131-8. PMID: 24129604. DOI: 10.1007/s10162-013-0418-8.
Abstract
A fundamental challenge for sensory systems is to recognize natural stimuli despite stimulus variations. A compelling example occurs in speech, where the auditory system can recognize words spoken at a wide range of speeds. To date, there have been more computational models for time-warp invariance than experimental studies that investigate responses to time-warped stimuli at the neural level. Here, we address this problem in the model system of zebra finches anesthetized with urethane. In behavioral experiments, we found high discrimination accuracy well beyond the observed natural range of song variations. We artificially sped up or slowed down songs (preserving pitch) and recorded auditory responses from neurons in field L, the avian primary auditory cortex homolog. We found that field L neurons responded robustly to time-warped songs, tracking the temporal features of the stimuli over a broad range of warp factors. Time-warp invariance was not observed per se, but there was sufficient information in the neural responses to reliably classify which of two songs was presented. Furthermore, the average spike rate was close to constant over the range of time warps, contrary to recent modeling predictions. We discuss how this response pattern is surprising given current computational models of time-warp invariance and how such a response could be decoded downstream to achieve time-warp-invariant recognition of sounds.
Affiliation(s)
- Ross K Maddox
- Institute for Learning and Brain Sciences, University of Washington, 1715 NE Columbia Rd, Box 357988, Seattle, WA, 98195, USA
20
Single neuron and population coding of natural sounds in auditory cortex. Curr Opin Neurobiol 2013; 24:103-10. PMID: 24492086. DOI: 10.1016/j.conb.2013.09.007.
Abstract
The auditory system drives behavior using information extracted from sounds. Early in the auditory hierarchy, circuits are highly specialized for detecting basic sound features. By the level of the auditory cortex, however, the functional organization of the circuits and the underlying coding principles change. Here, we review some recent progress in our understanding of single neuron and population coding in primary auditory cortex, focusing on natural sounds. We discuss possible mechanisms explaining why single neuron responses to simple sounds cannot predict responses to natural stimuli. We describe recent work suggesting that structural features such as local subnetworks, rather than smoothly mapped tonotopy, are essential components of population coding. Finally, we suggest a synthesis of how single neurons and subnetworks may be involved in coding natural sounds.
21
Schneider DM, Woolley SMN. Sparse and background-invariant coding of vocalizations in auditory scenes. Neuron 2013; 79:141-52. PMID: 23849201. DOI: 10.1016/j.neuron.2013.04.038.
Abstract
Vocal communicators such as humans and songbirds readily recognize individual vocalizations, even in distracting auditory environments. This perceptual ability is likely subserved by auditory neurons whose spiking responses to individual vocalizations are minimally affected by background sounds. However, auditory neurons that produce background-invariant responses to vocalizations in auditory scenes have not been found. Here, we describe a population of neurons in the zebra finch auditory cortex that represent vocalizations with a sparse code and that maintain their vocalization-like firing patterns in levels of background sound that permit behavioral recognition. These same neurons decrease or stop spiking in levels of background sound that preclude behavioral recognition. In contrast, upstream neurons represent vocalizations with dense and background-corrupted responses. We provide experimental evidence suggesting that sparse coding is mediated by feedforward suppression. Finally, we show through simulations that feedforward inhibition can transform a dense representation of vocalizations into a sparse and background-invariant representation.
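The final simulation result — feedforward inhibition turning a dense, background-corrupted representation into a sparse, background-resistant one — can be sketched in a few lines. This is a deliberately minimal caricature (a broadly tuned inhibitory signal tracking the population mean, plus a spike threshold), not the authors' circuit model; all magnitudes are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

n_inputs, n_time = 50, 300
# Sparse, strong "foreground" events embedded in dense background noise.
foreground = (rng.random((n_inputs, n_time)) > 0.97) * 5.0
background = rng.random((n_inputs, n_time))
dense_input = foreground + background             # upstream, background-corrupted drive

def feedforward_suppress(x, inh_gain=1.2, threshold=1.0):
    """Excitation minus a broadly tuned feedforward inhibitory signal, then a
    spike threshold. Because the inhibition tracks the population mean,
    widespread background drive is cancelled while strong, specific
    excitation survives."""
    inhibition = inh_gain * x.mean(axis=0, keepdims=True)
    return np.maximum(x - inhibition - threshold, 0.0)

sparse_output = feedforward_suppress(dense_input)

def active_fraction(r):
    """Fraction of (unit, time) bins with a nonzero response."""
    return float((r > 0).mean())
```

Comparing `active_fraction(dense_input)` with `active_fraction(sparse_output)` shows the dense-to-sparse transformation, while the foreground events remain represented in the output.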
Affiliation(s)
- David M Schneider
- Program in Neurobiology and Behavior, Columbia University, New York, NY 10032, USA
22
Understanding the neurophysiological basis of auditory abilities for social communication: a perspective on the value of ethological paradigms. Hear Res 2013; 305:3-9. PMID: 23994815. DOI: 10.1016/j.heares.2013.08.008.
Abstract
Acoustic communication between animals requires them to detect, discriminate, and categorize conspecific or heterospecific vocalizations in their natural environment. Laboratory studies of the auditory-processing abilities that facilitate these tasks have typically employed a broad range of acoustic stimuli, ranging from natural sounds like vocalizations to "artificial" sounds like pure tones and noise bursts. However, even when using vocalizations, laboratory studies often test abilities like categorization in relatively artificial contexts. Consequently, it is not clear whether neural and behavioral correlates of these tasks (1) reflect extensive operant training, which drives plastic changes in auditory pathways, or (2) the innate capacity of the animal and its auditory system. Here, we review a number of recent studies, which suggest that adopting more ethological paradigms utilizing natural communication contexts are scientifically important for elucidating how the auditory system normally processes and learns communication sounds. Additionally, since learning the meaning of communication sounds generally involves social interactions that engage neuromodulatory systems differently than laboratory-based conditioning paradigms, we argue that scientists need to pursue more ethological approaches to more fully inform our understanding of how the auditory system is engaged during acoustic communication. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
23
Takahashi H, Yokota R, Kanzaki R. Response variance in functional maps: neural Darwinism revisited. PLoS One 2013; 8:e68705. PMID: 23874733. PMCID: PMC3708906. DOI: 10.1371/journal.pone.0068705.
Abstract
The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.
Affiliation(s)
- Hirokazu Takahashi
- Graduate School of Information Science and Technology, The University of Tokyo, Bunkyo-ku, Tokyo, Japan.
24
Emergent categorical representation of natural, complex sounds resulting from the early post-natal sound environment. Neuroscience 2013; 248:30-42. PMID: 23747304. DOI: 10.1016/j.neuroscience.2013.05.056.
Abstract
Cortical sensory representations can be reorganized by sensory exposure during an early developmental epoch. The adaptive role of this type of plasticity for natural sounds in sensory development is, however, unclear. We reared rats in a naturalistic, complex acoustic environment and examined their auditory representations. We found that cortical neurons became more selective to spectrotemporal features in the experienced sounds. At the neuronal population level, more neurons were involved in representing the whole set of complex sounds, but fewer neurons actually responded to each individual sound, though with greater response magnitudes. A comparison of population-temporal responses to the experienced complex sounds revealed that cortical responses to different renderings of the same song motif were more similar, indicating that the cortical neurons became less sensitive to natural acoustic variations associated with stimulus context and sound renderings. By contrast, cortical responses to sounds of different motifs became more distinctive, suggesting that cortical neurons were tuned to the defining features of the experienced sounds. These effects lead to emergent "categorical" representations of the experienced sounds, which presumably facilitate their recognition.
25
Chang TR, Chiu TW, Sun X, Poon PWF. Modeling complex responses of FM-sensitive cells in the auditory midbrain using a committee machine. Brain Res 2013; 1536:44-52. PMID: 23665390. DOI: 10.1016/j.brainres.2013.04.058.
Abstract
Frequency modulation (FM) is an important building block of complex sounds that include speech signals. Exploring the neural mechanisms of FM coding with computer modeling could help understand how speech sounds are processed in the brain. Here, we modeled the single unit responses of auditory neurons recorded from the midbrain of anesthetized rats. These neurons displayed spectro-temporal receptive fields (STRFs) with multiple trigger features, and were more complex than those with single trigger features. Their responses have not been modeled satisfactorily with simple artificial neural networks, unlike those of neurons with simple trigger features. To improve model performance, we tested an approach based on the committee machine. For a given neuron, the peri-stimulus time histogram (PSTH) was first generated in response to a repeated random FM tone, and peaks in the PSTH were segregated into groups based on the similarity of their pre-spike FM trigger features. Each group was then modeled using an artificial neural network with a simple architecture and, when necessary, by increasing the number of neurons in the hidden layer. After initial training, the artificial neural networks with their optimized weighting coefficients were pooled into a committee machine for further training. Finally, model performance was tested by predicting the response of the same cell to a novel FM tone. The results showed improvement over simple artificial neural networks, supporting the idea that trigger-feature-based modeling can be extended to cells with complex responses. This article is part of a Special Issue entitled Neural Coding 2012.
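The group-then-pool strategy can be sketched with a toy regression problem. Here linear least-squares experts stand in for the paper's small neural networks, the two "trigger groups" are known by construction, and the pooled machine routes each time bin to the expert whose trigger feature dominates there — a simplification of the committee machine, with all stimulus and feature choices invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic task: predict a response trace from the recent history of a random
# stimulus, where the true response mixes two different pre-spike trigger
# features (two "PSTH-peak groups").
stim = rng.standard_normal(2000)
n_lag = 8

def lagged(x, n_lag):
    """Rows = short stimulus histories, one per time bin."""
    return np.stack([x[i:x.size - n_lag + i] for i in range(n_lag)], axis=1)

X = lagged(stim, n_lag)
w1 = np.array([0, 0, 0, 0, 1.0, 1.0, -1.0, -1.0])    # trigger feature, group 1
w2 = np.array([1.0, -1.0, 1.0, -1.0, 0, 0, 0, 0])    # trigger feature, group 2
response = np.maximum(X @ w1, 0) + np.maximum(X @ w2, 0)

# One simple expert per trigger group, fit on the bins its group dominates.
group1 = X @ w1 > X @ w2
experts = {}
for name, mask in (("g1", group1), ("g2", ~group1)):
    experts[name], *_ = np.linalg.lstsq(X[mask], response[mask], rcond=None)

# Pooled prediction routes each bin to its group's expert; compare against a
# single model of the same form fit to all bins at once.
committee_pred = np.where(group1, X @ experts["g1"], X @ experts["g2"])
single_coef, *_ = np.linalg.lstsq(X, response, rcond=None)
single_pred = X @ single_coef

def mse(a, b):
    return float(np.mean((a - b) ** 2))
```

Because each expert only has to capture one trigger feature's contribution, the pooled prediction fits the mixed response better than a single model of the same complexity, which is the intuition behind segregating PSTH peaks before modeling.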
Affiliation(s)
- T R Chang
- Department of Computer Science and Information Engineering, Southern Taiwan University of Science and Technology, Tainan, Taiwan.
26
Gaucher Q, Huetz C, Gourévitch B, Laudanski J, Occelli F, Edeline JM. How do auditory cortex neurons represent communication sounds? Hear Res 2013; 305:102-12. PMID: 23603138. DOI: 10.1016/j.heares.2013.03.011.
Abstract
A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalization envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristic of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
Affiliation(s)
- Quentin Gaucher
- Centre de Neurosciences Paris-Sud (CNPS), CNRS UMR 8195, Université Paris-Sud, Bâtiment 446, 91405 Orsay cedex, France
27
Rajan R, Dubaj V, Reser DH, Rosa MGP. Auditory cortex of the marmoset monkey - complex responses to tones and vocalizations under opiate anaesthesia in core and belt areas. Eur J Neurosci 2012; 37:924-41. PMID: 23278961. DOI: 10.1111/ejn.12092.
Abstract
Many anaesthetics commonly used in auditory research severely depress cortical responses, particularly in the supragranular layers of the primary auditory cortex and in non-primary areas. This is particularly true when stimuli other than simple tones are presented. Although awake preparations allow better preservation of the neuronal responses, there is an inherent limitation to this approach whenever the physiological data need to be combined with histological reconstruction or anatomical tracing. Here we tested the efficacy of an opiate-based anaesthetic regime to study physiological responses in the primary auditory cortex and middle lateral belt area. Adult marmosets were anaesthetized using a combination of sufentanil (8 μg/kg/h, i.v.) and N2O (70%). Unit activity was recorded throughout the cortical layers, in response to auditory stimuli presented binaurally. Stimuli consisted of a battery of tones presented at different intensities, as well as two marmoset calls ('Tsik' and 'Twitter'). In addition to robust monotonic and non-monotonic responses to tones, we found that the neuronal activity reflected various aspects of the calls, including 'on' and 'off' components, and temporal fluctuations. Both phasic and tonic activities, as well as excitatory and inhibitory components, were observed. Furthermore, a late component (100-250 ms post-offset) was apparent. Our results indicate that the sufentanil/N2O combination allows better preservation of response patterns in both the core and belt auditory cortex, in comparison with anaesthetics usually employed in auditory physiology. This anaesthetic regime holds promise in enabling the physiological study of complex auditory responses in acute preparations, combined with detailed anatomical and histological investigation.
Affiliation(s)
- Ramesh Rajan
- Department of Physiology, Monash University, Clayton, Vic., 3800, Australia.
28
Auditory abstraction from spectro-temporal features to coding auditory entities. Proc Natl Acad Sci U S A 2012; 109:18968-73. PMID: 23112145. DOI: 10.1073/pnas.1111242109.
Abstract
The auditory system extracts behaviorally relevant information from acoustic stimuli. The average activity in auditory cortex is known to be sensitive to spectro-temporal patterns in sounds. However, it is not known whether the auditory cortex also processes more abstract features of sounds, which may be more behaviorally relevant than spectro-temporal patterns. Using recordings from three stations of the auditory pathway, the inferior colliculus (IC), the ventral division of the medial geniculate body (MGB) of the thalamus, and the primary auditory cortex (A1) of the cat in response to natural sounds, we compared the amount of information that spikes contained about two aspects of the stimuli: spectro-temporal patterns, and abstract entities present in the same stimuli such as a bird chirp, its echoes, and the ambient noise. IC spikes conveyed on average approximately the same amount of information about spectro-temporal patterns as they conveyed about abstract auditory entities, but A1 and the MGB neurons conveyed on average three times more information about abstract auditory entities than about spectro-temporal patterns. Thus, the majority of neurons in auditory thalamus and cortex coded well the presence of abstract entities in the sounds without containing much information about their spectro-temporal structure, suggesting that they are sensitive to abstract features in these sounds.
29
Larson E, Maddox RK, Perrone BP, Sen K, Billimoria CP. Neuron-specific stimulus masking reveals interference in spike timing at the cortical level. J Assoc Res Otolaryngol 2011; 13:81-9. PMID: 21964794. DOI: 10.1007/s10162-011-0292-1.
Abstract
The auditory system is capable of robust recognition of sounds in the presence of competing maskers (e.g., other voices or background music). This capability arises despite the fact that masking stimuli can disrupt neural responses at the cortical level. Since the origins of such interference effects remain unknown, in this study, we work to identify and quantify neural interference effects that originate due to masking occurring within and outside receptive fields of neurons. We record from single and multi-unit auditory sites from field L, the auditory cortex homologue in zebra finches. We use a novel method called spike timing-based stimulus filtering that uses the measured response of each neuron to create an individualized stimulus set. In contrast to previous adaptive experimental approaches, which have typically focused on the average firing rate, this method uses the complete pattern of neural responses, including spike timing information, in the calculation of the receptive field. When we generate and present novel stimuli for each neuron that mask the regions within the receptive field, we find that the time-varying information in the neural responses is disrupted, degrading neural discrimination performance and decreasing spike timing reliability and sparseness. We also find that, while removing stimulus energy from frequency regions outside the receptive field does not significantly affect neural responses for many sites, adding a masker in these frequency regions can nonetheless have a significant impact on neural responses and discriminability without a significant change in the average firing rate. These findings suggest that maskers can interfere with neural responses by disrupting stimulus timing information with power either within or outside the receptive fields of neurons.
Affiliation(s)
- Eric Larson
- Department of Biomedical Engineering, Hearing Research Center, Boston University, Boston, MA 02215, USA.
30
Sharpee TO, Nagel KI, Doupe AJ. Two-dimensional adaptation in the auditory forebrain. J Neurophysiol 2011; 106:1841-61. PMID: 21753019. PMCID: PMC3296429. DOI: 10.1152/jn.00905.2010.
Abstract
Sensory neurons exhibit two universal properties: sensitivity to multiple stimulus dimensions, and adaptation to stimulus statistics. How adaptation affects encoding along primary dimensions is well characterized for most sensory pathways, but if and how it affects secondary dimensions is less clear. We studied these effects for neurons in the avian equivalent of primary auditory cortex, responding to temporally modulated sounds. We showed that the firing rate of single neurons in field L was affected by at least two components of the time-varying sound log-amplitude. When overall sound amplitude was low, neural responses were based on nonlinear combinations of the mean log-amplitude and its rate of change (first time differential). At high mean sound amplitude, the two relevant stimulus features became the first and second time derivatives of the sound log-amplitude. Thus a strikingly systematic relationship between dimensions was conserved across changes in stimulus intensity, whereby one of the relevant dimensions approximated the time differential of the other dimension. In contrast to stimulus mean, increases in stimulus variance did not change relevant dimensions, but selectively increased the contribution of the second dimension to neural firing, illustrating a new adaptive behavior enabled by multidimensional encoding. Finally, we demonstrated theoretically that inclusion of time differentials as additional stimulus features, as seen so prominently in the single-neuron responses studied here, is a useful strategy for encoding naturalistic stimuli, because it can lower the necessary sampling rate while maintaining the robustness of stimulus reconstruction to correlated noise.
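The feature pair described above — the sound's log-amplitude and its time differentials — can be sketched numerically as follows. This is a minimal illustration of extracting such stimulus dimensions; the sampling step and toy envelope are assumptions for illustration, not the stimuli used in the study.

```python
import numpy as np

def log_amplitude_features(envelope, dt, eps=1e-6):
    """Return the log-amplitude of a sound envelope together with its
    first and second time derivatives -- the kind of stimulus
    dimensions discussed above (eps avoids log(0))."""
    log_amp = np.log(envelope + eps)   # log-amplitude dimension
    d1 = np.gradient(log_amp, dt)      # first time differential
    d2 = np.gradient(d1, dt)           # second time differential
    return log_amp, d1, d2

# toy amplitude-modulated envelope (arbitrary parameters)
dt = 0.001                             # 1 ms sampling step (assumed)
t = np.arange(0.0, 0.5, dt)
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 10 * t)
log_amp, d1, d2 = log_amplitude_features(envelope, dt)
```

Note that `d1` approximates the time differential of `log_amp`, mirroring the systematic relationship between the two relevant dimensions reported above.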
Affiliation(s)
- Tatyana O Sharpee
- The Crick-Jacobs Center for Theoretical and Computational Biology, Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, and the Center for Theoretical Biological Physics, University of California, San Diego, La Jolla, CA, USA.
31
Chang TR, Chiu TW, Sun X, Poon PWF. Modeling frequency modulated responses of midbrain auditory neurons based on trigger features and artificial neural networks. Brain Res 2011; 1434:90-101. [PMID: 22035565 DOI: 10.1016/j.brainres.2011.09.042] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2011] [Revised: 09/20/2011] [Accepted: 09/21/2011] [Indexed: 11/25/2022]
Abstract
Frequency modulation (FM) is an important building block of communication signals for animals and humans. Attempts to predict the response of central neurons to FM sounds have not been very successful, though achieving successful results could bring insights regarding the underlying neural mechanisms. Here we proposed a new method to predict responses of FM-sensitive neurons in the auditory midbrain. First we recorded single unit responses in anesthetized rats using a random FM tone to construct their spectro-temporal receptive fields (STRFs). Training of neurons in the artificial neural network to respond to a second random FM tone was based on the temporal information derived from the STRF. Specifically, the time window covered by the presumed trigger feature and its delay time to spike occurrence were used to train a finite impulse response neural network (FIRNN) to respond to this random FM. Finally we tested the model performance in predicting the response to another similar FM stimulus (a third random FM tone). We found good performance in predicting the time of responses if not also the response magnitudes. Furthermore, the weighting function of the FIRNN showed temporal 'bumps' suggesting temporal integration of synaptic inputs from different frequency laminae. This article is part of a Special Issue entitled: Neural Coding.
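The tapped-delay-line unit that a finite impulse response neural network is built from can be sketched as follows. The weights, the tanh nonlinearity, and the single-unit scope are illustrative assumptions, not the authors' trained FIRNN.

```python
import numpy as np

def fir_unit(stimulus, weights, bias=0.0):
    """Minimal finite-impulse-response (FIR) unit: the output at time t
    is a nonlinearity applied to a weighted sum over the last
    len(weights) stimulus samples (a tapped delay line).
    weights[k] multiplies the stimulus at lag k."""
    T, K = len(stimulus), len(weights)
    out = np.zeros(T)
    for t in range(K - 1, T):
        window = stimulus[t - K + 1 : t + 1]   # delay-line contents
        out[t] = np.tanh(window @ weights[::-1] + bias)
    return out
```

A full FIRNN stacks such units in layers and fits the weights by gradient descent; the time window covered by `weights` plays the role of the trigger-feature window described above.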
Affiliation(s)
- T R Chang
- Dept. of Computer Sciences and Information Engineering, Southern Taiwan University, Tainan, Taiwan.
32
Sarko D, Nidiffer A, III A, Ghose D, Hillock-Dunn R, Fister M, Krueger J, Wallace M. Spatial and Temporal Features of Multisensory Processes. Front Neurosci 2011. [DOI: 10.1201/9781439812174-15] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
33
Sarko D, Nidiffer A, III A, Ghose D, Hillock-Dunn R, Fister M, Krueger J, Wallace M. Spatial and Temporal Features of Multisensory Processes. Front Neurosci 2011. [DOI: 10.1201/b11092-15] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
34
Sharpee TO, Atencio CA, Schreiner CE. Hierarchical representations in the auditory cortex. Curr Opin Neurobiol 2011; 21:761-7. [PMID: 21704508 DOI: 10.1016/j.conb.2011.05.027] [Citation(s) in RCA: 70] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2011] [Revised: 05/18/2011] [Accepted: 05/20/2011] [Indexed: 11/20/2022]
Abstract
Understanding the neural mechanisms of invariant object recognition remains one of the major unsolved problems in neuroscience. A common solution that is thought to be employed by diverse sensory systems is to create hierarchical representations of increasing complexity and tolerance. However, in the mammalian auditory system many aspects of this hierarchical organization remain undiscovered, including the prominent classes of high-level representations (that would be analogous to face selectivity in the visual system or selectivity to bird's own song in the bird) and the dominant types of invariant transformations. Here we review the recent progress that begins to probe the hierarchy of auditory representations, and the computational approaches that can be helpful in achieving this feat.
Affiliation(s)
- Tatyana O Sharpee
- The Salk Institute for Biological Studies, La Jolla CA 92037, United States.
35
Jaramillo S, Zador AM. The auditory cortex mediates the perceptual effects of acoustic temporal expectation. Nat Neurosci 2010; 14:246-51. [PMID: 21170056 PMCID: PMC3152437 DOI: 10.1038/nn.2688] [Citation(s) in RCA: 194] [Impact Index Per Article: 13.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2010] [Accepted: 10/12/2010] [Indexed: 11/23/2022]
Abstract
When events occur at predictable instants, anticipation improves performance. Knowledge of event timing modulates motor circuits, improving response speed. By contrast, the neuronal mechanisms underlying changes in sensory perception due to expectation are not well understood. We have developed a novel behavioral paradigm for rats in which we manipulated expectations about sound timing. Valid expectations improved both the speed and the accuracy of subjects’ performance, indicating not only improved motor preparedness but also enhanced perception. Single neuron recordings in primary auditory cortex revealed enhanced representation of sounds during periods of heightened expectation. Furthermore, we found that activity in auditory cortex was causally linked to the performance of the task, and that changes in the neuronal representation of sounds predicted performance on a trial-by-trial basis. Our results indicate that changes in neuronal representation as early as primary sensory cortex mediate the perceptual advantage conferred by temporal expectation.
36
The functional asymmetry of auditory cortex is reflected in the organization of local cortical circuits. Nat Neurosci 2010; 13:1413-20. [PMID: 20953193 PMCID: PMC3140463 DOI: 10.1038/nn.2659] [Citation(s) in RCA: 82] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2010] [Accepted: 09/07/2010] [Indexed: 11/08/2022]
Abstract
The primary auditory cortex (A1) is organized tonotopically, with neurons sensitive to high and low frequencies arranged in a rostro-caudal gradient. We used laser scanning photostimulation in acute slices to study the organization of local excitatory connections onto layers 2 and 3 (L2/3) of the mouse A1. Consistent with the organization of other cortical regions, synaptic inputs along the isofrequency axis (orthogonal to the tonotopic axis) arose predominantly within a column. By contrast, we found that local connections along the tonotopic axis differed from those along the isofrequency axis: some input pathways to L3 (but not L2) arose predominantly out-of-column. In vivo cell-attached recordings revealed differences between the sound-responsiveness of neurons in L2 and L3. Our results are consistent with the hypothesis that auditory cortical microcircuitry is specialized to the one-dimensional representation of frequency in the auditory cortex.
37
Lin FG, Liu RC. Subset of thin spike cortical neurons preserve the peripheral encoding of stimulus onsets. J Neurophysiol 2010; 104:3588-99. [PMID: 20943946 DOI: 10.1152/jn.00295.2010] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
An important question in auditory neuroscience concerns how the neural representation of sound features changes from the periphery to the cortex. Here we focused on the encoding of sound onsets and we used a modeling approach to explore the degree to which auditory cortical neurons follow a similar envelope integration mechanism found at the auditory periphery. Our "forward" model was able to predict relatively accurately the timing of first spikes evoked by natural communication calls in the auditory cortex of awake, head-restrained mice, but only for a subset of cortical neurons. These neurons were systematically different in their encoding of the calls, exhibiting less call selectivity, shorter latency, greater precision, and more transient spiking compared with the same factors of their poorly predicted counterparts. Importantly, neurons that fell into this best-predicted group all had thin spike waveforms, suggestive of suspected interneurons conveying feedforward inhibition. Indeed, our population of call-excited thin spike neurons had significantly higher spontaneous rates and larger frequency tuning bandwidths than those of thick spike neurons. Thus the fidelity of our model's first spike predictions segregated neurons into one earlier responding subset, potentially dominated by suspected interneurons, which preserved a peripheral mechanism for encoding sound onsets and another longer latency subset that reflected higher, likely centrally constructed nonlinearities. These results therefore provide support for the hypothesis that physiologically distinct subclasses of neurons in the auditory cortex may contribute hierarchically to the representation of natural stimuli.
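The peripheral envelope-integration idea behind the "forward" model can be illustrated with a minimal leaky integrate-to-threshold sketch: the predicted first-spike time is when the integrated envelope first crosses threshold. The time constant, threshold, and Euler step below are illustrative assumptions, not the fitted model parameters.

```python
import numpy as np

def first_spike_time(envelope, dt, tau=0.01, threshold=1.0):
    """Leaky integration of a stimulus envelope to a threshold:
    dv/dt = -v/tau + envelope(t), integrated by forward Euler.
    Returns the time of the first threshold crossing, or None."""
    v = 0.0
    for i, e in enumerate(envelope):
        v += dt * (e - v / tau)      # leaky integration step
        if v >= threshold:
            return i * dt            # predicted first-spike time
    return None
```

As expected for such a mechanism, stronger envelopes cross threshold earlier, so first-spike latency falls with stimulus onset energy.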
Affiliation(s)
- Frank G Lin
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
38
Brimijoin WO, O'Neill WE. Patterned tone sequences reveal non-linear interactions in auditory spectrotemporal receptive fields in the inferior colliculus. Hear Res 2010; 267:96-110. [PMID: 20430078 PMCID: PMC3978381 DOI: 10.1016/j.heares.2010.04.005] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/21/2009] [Revised: 04/06/2010] [Accepted: 04/06/2010] [Indexed: 11/28/2022]
Abstract
Linear measures of auditory receptive fields do not always fully account for a neuron's response to spectrotemporally-complex signals such as frequency-modulated sweeps (FM) and communication sounds. A possible source of this discrepancy is cross-frequency interactions, common response properties which may be missed by linear receptive fields but captured using two-tone masking. Using a patterned tonal sequence that included a balanced set of all possible tone-to-tone transitions, we have here combined the spectrotemporal receptive field with two-tone masking to measure spectrotemporal response maps (STRM). Recording from single units in the mustached bat inferior colliculus, we found significant non-linear interactions between sequential tones in all sampled units. In particular, tone-pair STRMs revealed three common features not visible in linear single-tone STRMs: 1) two-tone facilitative interactions, 2) frequency-specific suppression, and 3) post-stimulatory suppression in the absence of spiking. We also found a correlative relationship between these nonlinear receptive field features and sensitivity for different rates and directions of FM sweeps, dynamic features found in many vocalizations, including speech. The overwhelming prevalence of cross-frequency interactions revealed by this technique provides further evidence of the central auditory system's role as a pattern-detector, and underscores the need to include nonlinearity in measures of the receptive field.
Affiliation(s)
- W Owen Brimijoin
- Department of Brain and Cognitive Sciences, College of Arts, Science, and Engineering, University of Rochester, Rochester, NY 14627, USA.
39
Context dependence of spectro-temporal receptive fields with implications for neural coding. Hear Res 2010; 271:123-32. [PMID: 20123121 DOI: 10.1016/j.heares.2010.01.014] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/16/2009] [Revised: 01/25/2010] [Accepted: 01/27/2010] [Indexed: 11/23/2022]
Abstract
The spectro-temporal receptive field (STRF) is frequently used to characterize the linear frequency-time filter properties of the auditory system up to the neuron recorded from. STRFs are extremely stimulus dependent, reflecting the strong non-linearities in the auditory system. Changes in the STRF with stimulus type (tonal, noise-like, vocalizations), sound level and spectro-temporal sound density are reviewed here. Effects on STRF shape of task and attention are also briefly reviewed. Models to account for these changes, potential improvements to STRF analysis, and implications for neural coding are discussed.
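The baseline linear STRF whose stimulus dependence is reviewed above is, in the simplest case, a spike-triggered average of the stimulus spectrogram. The sketch below is that minimal estimator only; it deliberately ignores the nonlinearities and normalization corrections the review discusses.

```python
import numpy as np

def strf_sta(spectrogram, spikes, n_lags):
    """Estimate an STRF as the spike-triggered average: for every
    spike, collect the preceding n_lags columns of the
    (frequency x time) spectrogram and average them."""
    F, T = spectrogram.shape
    windows = [spectrogram[:, t - n_lags : t]
               for t in np.flatnonzero(spikes) if t >= n_lags]
    return np.mean(windows, axis=0) if windows else np.zeros((F, n_lags))
```

Because the estimate is an average over the stimuli actually presented, changing the stimulus ensemble (tonal vs. noise-like vs. vocalizations) changes the estimate whenever the neuron is nonlinear — which is exactly the context dependence at issue.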
40
Huetz C, Gourévitch B, Edeline JM. Neural codes in the thalamocortical auditory system: from artificial stimuli to communication sounds. Hear Res 2010; 271:147-58. [PMID: 20116422 DOI: 10.1016/j.heares.2010.01.010] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/16/2009] [Revised: 01/22/2010] [Accepted: 01/22/2010] [Indexed: 10/19/2022]
Abstract
Over the last 15 years, an increasing number of studies have described the responsiveness of thalamic and cortical neurons to communication sounds. Whereas initial studies have simply looked for neurons exhibiting higher firing rate to conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determine the relative contribution of "rate coding" and "temporal coding" to the information transmitted by spike trains. In this article, we aim at reviewing the different strategies employed by thalamic and cortical neurons to encode information about acoustic stimuli, from artificial to natural sounds. Considering data obtained with simple stimuli, we first illustrate that different facets of temporal code, ranging from a strict correspondence between spike-timing and stimulus temporal features to more complex coding strategies, do already exist with artificial stimuli. We then review lines of evidence indicating that spike-timing provides an efficient code for discriminating communication sounds from thalamus, primary and non-primary auditory cortex up to frontal areas. As the neural code probably developed, and became specialized, over evolution to allow precise and reliable processing of sounds that are of survival value, we argue that spike-timing based coding strategies might set the foundations of our perceptive abilities.
Affiliation(s)
- Chloé Huetz
- Centre de Neurosciences Paris Sud, UMR CNRS 8195, Université Paris-Sud, 91405 Orsay Cedex, France
41
Meliza CD, Chi Z, Margoliash D. Representations of conspecific song by starling secondary forebrain auditory neurons: toward a hierarchical framework. J Neurophysiol 2009; 103:1195-208. [PMID: 20032245 DOI: 10.1152/jn.00464.2009] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The functional organization giving rise to stimulus selectivity in higher-order auditory neurons remains under active study. We explored the selectivity for motifs, spectrotemporally distinct perceptual units in starling song, recording the responses of 96 caudomedial mesopallium (CMM) neurons in European starlings (Sturnus vulgaris) under awake-restrained and urethane-anesthetized conditions. A subset of neurons was highly selective between motifs. Selectivity was correlated with low spontaneous firing rates and high spike timing precision, and all but one of the selective neurons had similar spike waveforms. Neurons were further tested with stimuli in which the notes comprising the motifs were manipulated. Responses to most of the isolated notes were similar in amplitude, duration, and temporal pattern to the responses elicited by those notes in the context of the motif. For these neurons, we could accurately predict the responses to motifs from the sum of the responses to notes. Some notes were suppressed by the motif context, such that removing other notes from motifs unmasked additional excitation. Models of linear summation of note responses consistently outperformed spectrotemporal receptive field models in predicting responses to song stimuli. Tests with randomized sequences of notes confirmed the predictive power of these models. Whole notes gave better predictions than did note fragments. Thus in CMM, auditory objects (motifs) can be represented by a linear combination of excitation and suppression elicited by the note components of the object. We hypothesize that the receptive fields arise from selective convergence by inputs responding to specific spectrotemporal features of starling notes.
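The note-summation model that outperformed STRF models above can be sketched as a shifted linear sum of note responses. Time is in arbitrary bins, and the function name and inputs are hypothetical conveniences, not the authors' code.

```python
import numpy as np

def predict_motif_response(note_responses, note_onsets, motif_len):
    """Predict a motif response as the linear sum of the responses to
    its component notes, each shifted to that note's onset time
    within the motif (responses past the motif end are truncated)."""
    pred = np.zeros(motif_len)
    for resp, onset in zip(note_responses, note_onsets):
        end = min(onset + len(resp), motif_len)
        pred[onset:end] += resp[: end - onset]
    return pred
```

Suppressive context effects like those reported above would appear as systematic over-prediction by this purely additive sum, which is why the study also fits suppression terms.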
Affiliation(s)
- C Daniel Meliza
- Dept. of Organismal Biology and Anatomy, Univ. of Chicago, 1027 E 57th St., Chicago, IL 60637, USA.
42
Abstract
Spectrotemporal receptive fields of nonlinear neurons in primary auditory cortex are stimulus dependent or context dependent. Here we show that a variant of stimulus-specific adaptation also contributes to this context dependence. Responses to sound stimulus frequencies close to the neuron's best frequency adapt with an average time constant of approximately 7 s. In contrast, responses away from the best frequency do not adapt, but in fact slightly increase over our 30-s observation window. Such stimulus-specific adaptation could function in enhancing stimulus discrimination and in maximizing neural information transmission by reducing redundancy. It also needs to be taken into account when comparing spectrotemporal receptive fields measured under adapted and nonadapted conditions.
43
Asari H, Zador AM. Long-lasting context dependence constrains neural encoding models in rodent auditory cortex. J Neurophysiol 2009; 102:2638-56. [PMID: 19675288 DOI: 10.1152/jn.00577.2009] [Citation(s) in RCA: 57] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Acoustic processing requires integration over time. We have used in vivo intracellular recording to measure neuronal integration times in anesthetized rats. Using natural sounds and other stimuli, we found that synaptic inputs to auditory cortical neurons showed a rather long context dependence, up to ≥4 s (τ ≈ 1 s), even though sound-evoked excitatory and inhibitory conductances per se rarely lasted ≳100 ms. Thalamic neurons showed only a much faster form of adaptation with a decay constant τ < 100 ms, indicating that the long-lasting form originated from presynaptic mechanisms in the cortex, such as synaptic depression. Restricting knowledge of the stimulus history to only a few hundred milliseconds reduced the predictable response component to about half that of the optimal infinite-history model. Our results demonstrate the importance of long-range temporal effects in auditory cortex and suggest a potential neural substrate for auditory processing that requires integration over timescales of seconds or longer, such as stream segregation.
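A toy gain-adaptation mechanism with a seconds-long time constant illustrates how such slow context dependence could shape responses. The rectified-gain form, the adaptation strength, and the one-second time constant are assumptions for illustration, not the study's fitted encoding model.

```python
import numpy as np

def adapted_response(drive, dt, tau=1.0, strength=0.5):
    """Toy gain adaptation: the gain on the instantaneous drive is
    depressed by a leaky integral of the recent response, which
    recovers with time constant tau (seconds-long, as above)."""
    a = 0.0                                  # adaptation state
    out = np.zeros_like(drive)
    for i, d in enumerate(drive):
        out[i] = max(d * (1.0 - strength * a), 0.0)  # depressed, rectified
        a += dt * (out[i] - a) / tau                 # slow leaky integral
    return out
```

For a sustained drive the response sags from its onset value toward a lower steady state over several seconds, so any model ignoring stimulus history longer than a few hundred milliseconds misses much of the predictable response, consistent with the result above.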
Affiliation(s)
- Hiroki Asari
- Cold Spring Harbor Laboratory, Watson School of Biological Sciences, Cold Spring Harbor, New York 11724, USA
44
Pienkowski M, Shaw G, Eggermont JJ. Wiener-Volterra characterization of neurons in primary auditory cortex using poisson-distributed impulse train inputs. J Neurophysiol 2009; 101:3031-41. [PMID: 19321635 DOI: 10.1152/jn.91242.2008] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
An extension of the Wiener-Volterra theory to a Poisson-distributed impulse train input was used to characterize the temporal response properties of neurons in primary auditory cortex (AI) of the ketamine-anesthetized cat. Both first- and second-order "Poisson-Wiener" (PW) models were tested on their predictions of temporal modulation transfer functions (tMTFs), which were derived from extracellular spike responses to periodic click trains with click repetition rates of 2-64 Hz. Second-order (i.e., nonlinear) PW fits to the measured tMTFs could be described as very good in a majority of cases (e.g., predictability ≥80%) and were almost always superior to first-order (i.e., linear) fits. In all sampled neurons, second-order PW kernels showed strong compressive nonlinearities (i.e., a depression of the impulse response) but never expansive nonlinearities (i.e., a facilitation of the impulse response). In neurons with low-pass tMTFs, the depression decayed exponentially with the interstimulus lag, whereas in neurons with band-pass tMTFs, the depression was typically double-peaked, and the second peak occurred at a lag that correlated with the neuron's best modulation frequency. It appears that modulation-tuning in AI arises in part from an interplay of two nonlinear processes with distinct time courses.
Affiliation(s)
- Martin Pienkowski
- Department of Physiology, University of Calgary, Calgary, Alberta, Canada, T2N 1N4
45
Qin L, Wang J, Sato Y. Heterogeneous Neuronal Responses to Frequency-Modulated Tones in the Primary Auditory Cortex of Awake Cats. J Neurophysiol 2008; 100:1622-34. [DOI: 10.1152/jn.90364.2008] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Previous studies in anesthetized animals reported that the primary auditory cortex (A1) showed homogeneous phasic responses to FM tones, namely a transient response to a particular instantaneous frequency when FM sweeps traversed a neuron's tone-evoked receptive field (TRF). Here, in awake cats, we report that A1 cells exhibit heterogeneous FM responses, consisting of three patterns. The first is continuous firing when a slow FM sweep traverses the receptive field of a cell with a sustained tonal response. The duration and amplitude of the FM response decrease with increasing sweep speed. The second pattern is transient firing corresponding to the cell's phasic tonal response. This response could be evoked only by a fast FM sweep through the cell's TRF, suggesting a preference for fast FM. The third pattern was associated with the off response to pure tones and was composed of several discrete response peaks during slow FM stimulation. These peaks were not predictable from the cell's tonal response but reliably reflected the time when the FM swept across specific frequencies. Our A1 samples often exhibited a complex response pattern, combining two or three of the basic patterns above, resulting in a heterogeneous response population. The diversity of FM responses suggests that A1 uses multiple mechanisms to fully represent the whole range of FM parameters, including frequency extent, sweep speed, and direction.
46
Qin L, Wang JY, Sato Y. Representations of Cat Meows and Human Vowels in the Primary Auditory Cortex of Awake Cats. J Neurophysiol 2008; 99:2305-19. [DOI: 10.1152/jn.01125.2007] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Previous investigation of neural responses to cat meows in the primary auditory cortex (A1) of the anesthetized cat revealed a preponderance of phasic responses aligned to stimulus onset, offset, or envelope peaks. Sustained responses during stationary components of the stimulus were rarely seen. This observation motivates further investigation into how stationary components of naturalistic auditory stimuli are encoded by A1 neurons. We therefore explored neuronal response patterns in A1 of the awake cat using natural meows, time-reversed meows, and human vowels as stimuli. We found heterogeneous response types: ∼2/3 of units classified as “phasic cells” responding only to amplitude envelope variations and the remaining 1/3 were “phasic-tonic cells” with continuous responses during the stationary components. The classification was upheld across all stimuli tested for a given cell. The differences of phasic responses were correlated with amplitude-envelope differences in the early stimulus portion (<100 ms), whereas the differences between tonic responses were correlated with ongoing spectral differences in the later stimulus portion. Phasic-tonic cells usually had a characteristic frequency (CF) <5 kHz, which corresponded to the dominant spectral range of vocalizations, suggesting that the cells encode spectral information. Phasic cells had CFs across the tested frequency range (<16 kHz). Instantaneous firing rates for natural and time-reversed meows were different, but mean rates for different categories of stimuli were similar. Evidence for cat's A1 preferring conspecific meows was not found. These functionally heterogeneous responses may serve to encode ongoing changes in sound spectra or amplitude envelope occurring throughout the entirety of the sound stimulus.
47
Young ED. Neural representation of spectral and temporal information in speech. Philos Trans R Soc Lond B Biol Sci 2008; 363:923-45. [PMID: 17827107 PMCID: PMC2606788 DOI: 10.1098/rstb.2007.2151] [Citation(s) in RCA: 50] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Speech is the most interesting and one of the most complex sounds dealt with by the auditory system. The neural representation of speech needs to capture those features of the signal on which the brain depends in language communication. Here we describe the representation of speech in the auditory nerve and in a few sites in the central nervous system from the perspective of the neural coding of important aspects of the signal. The representation is tonotopic, meaning that the speech signal is decomposed by frequency and different frequency components are represented in different populations of neurons. Essential to the representation are the properties of frequency tuning and nonlinear suppression. Tuning creates the decomposition of the signal by frequency, and nonlinear suppression is essential for maintaining the representation across sound levels. The representation changes in central auditory neurons by becoming more robust against changes in stimulus intensity and more transient. However, it is probable that the form of the representation at the auditory cortex is fundamentally different from that at lower levels, in that stimulus features other than the distribution of energy across frequency are analysed.
Affiliation(s)
- Eric D Young
- Department of Biomedical Engineering, Centre for Hearing and Balance, Johns Hopkins University, 720 Rutland Avenue, Baltimore, MD 21205, USA.
48
49
Nelken I, Bizley JK, Nodal FR, Ahmed B, King AJ, Schnupp JWH. Responses of auditory cortex to complex stimuli: functional organization revealed using intrinsic optical signals. J Neurophysiol 2008; 99:1928-41. [PMID: 18272880 DOI: 10.1152/jn.00469.2007] [Citation(s) in RCA: 55] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We used optical imaging of intrinsic signals to study the large-scale organization of ferret auditory cortex in response to complex sounds. Cortical responses were collected during continuous stimulation by sequences of sounds with varying frequency, period, or interaural level differences. We used a set of stimuli that differ in spectral structure, but have the same periodicity and therefore evoke the same pitch percept (click trains, sinusoidally amplitude modulated tones, and iterated ripple noise). These stimuli failed to reveal a consistent periodotopic map across the auditory fields imaged. Rather, gradients of period sensitivity differed for the different types of periodic stimuli. Binaural interactions were studied both with single contralateral, ipsilateral, and diotic broadband noise bursts and with sequences of broadband noise bursts with varying level presented contralaterally, ipsilaterally, or in opposite phase to both ears. Contralateral responses were generally largest and ipsilateral responses were smallest when using single noise bursts, but the extent of the activated area was large and comparable in all three aural configurations. Modulating the amplitude in counter phase to the two ears generally produced weaker modulation of the optical signals than the modulation produced by the monaural stimuli. These results suggest that binaural interactions seen in cortex are most likely predominantly due to subcortical processing. Thus our optical imaging data do not support the theory that the primary or nonprimary cortical fields imaged are topographically organized to form consistent maps of systematically varying sensitivity either to stimulus pitch or to simple binaural properties of the acoustic stimuli.
Affiliation(s)
- Israel Nelken
- Department of Neurobiology, Interdisciplinary Center for Neural Computation, The Hebrew University, Jerusalem, Israel.
50
Kajikawa Y, de la Mothe LA, Blumell S, Sterbing-D'Angelo SJ, D'Angelo W, Camalier CR, Hackett TA. Coding of FM sweep trains and twitter calls in area CM of marmoset auditory cortex. Hear Res 2008; 239:107-25. [PMID: 18342463 DOI: 10.1016/j.heares.2008.01.015] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/18/2006] [Revised: 01/28/2008] [Accepted: 01/31/2008] [Indexed: 11/18/2022]
Abstract
The primate auditory cortex contains three interconnected regions (core, belt, parabelt), which are further subdivided into discrete areas. The caudomedial area (CM) is one of about seven areas in the belt region that has been the subject of recent anatomical and physiological studies conducted to define the functional organization of auditory cortex. The main goal of the present study was to examine temporal coding in area CM of marmoset monkeys using two related classes of acoustic stimuli: (1) marmoset twitter calls; and (2) frequency-modulated (FM) sweep trains modeled after the twitter call. The FM sweep trains were presented at repetition rates between 1 and 24 Hz, overlapping the natural phrase frequency of the twitter call (6-8 Hz). Multiunit recordings in CM revealed robust phase-locked responses to twitter calls and FM sweep trains. For the latter, phase-locking quantified by vector strength (VS) was best at repetition rates between 2 and 8 Hz, with a mean of about 5 Hz. Temporal response patterns were not strictly phase-locked, but exhibited dynamic features that varied with the repetition rate. To examine these properties, classification of the repetition rate from the temporal response pattern evoked by twitter calls and FM sweep trains was examined by Fisher's linear discrimination analysis (LDA). Response classification by LDA revealed that information was encoded not only by phase-locking, but also other components of the temporal response pattern. For FM sweep trains, classification was best for repetition rates from 2 to 8 Hz. Thus, the majority of neurons in CM can accurately encode the envelopes of temporally complex stimuli over the behaviorally-relevant range of the twitter call. This suggests that CM could be engaged in processing that requires relatively precise temporal envelope discrimination, and supports the hypothesis that CM is positioned at an early stage of processing in the auditory cortex of primates.
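The vector strength (VS) measure used above to quantify phase-locking is the standard Goldberg–Brown statistic: each spike is mapped to a phase within the stimulus period, and VS is the length of the mean resultant vector of those phases (1.0 for perfect locking, near 0 for uniformly scattered spikes). A minimal sketch, assuming spike times and the repetition period are given in the same time units (`spike_times` and `period` are illustrative names, not from the study's analysis code):

```python
import numpy as np

def vector_strength(spike_times, period):
    """Goldberg-Brown vector strength: length of the mean resultant
    vector of spike phases relative to a periodic stimulus.
    Returns 1.0 for perfect phase-locking, ~0.0 for uniform phases."""
    spike_times = np.asarray(spike_times, dtype=float)
    n = spike_times.size
    if n == 0:
        return 0.0
    # Map each spike time to a phase angle in [0, 2*pi)
    phases = 2.0 * np.pi * (spike_times % period) / period
    # Length of the summed unit vectors, normalized by spike count
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / n

# Spikes locked to every cycle of a 1 s period give VS = 1.0;
# spikes spread evenly across the cycle give VS near 0.
locked = vector_strength([0.0, 1.0, 2.0, 3.0], period=1.0)
uniform = vector_strength(np.linspace(0.0, 1.0, 8, endpoint=False), period=1.0)
```

In the study's terms, a mean best repetition rate near 5 Hz corresponds to VS peaking when `period` is roughly 0.125–0.5 s (2–8 Hz trains).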
|