1. Singh R, Bharadwaj HM. Cortical temporal integration can account for limits of temporal perception: investigations in the binaural system. Commun Biol 2023; 6:981. PMID: 37752215; PMCID: PMC10522716; DOI: 10.1038/s42003-023-05361-5. Received 01/06/2022; accepted 09/15/2023. Open access.
Abstract
The auditory system has exquisite temporal coding in the periphery, which is transformed into a rate-based code in central auditory structures such as auditory cortex. However, the cortex can still synchronize, albeit at lower modulation rates, to acoustic fluctuations. The perceptual significance of this cortical synchronization is unknown. We estimated physiological synchronization limits of cortex (in humans with electroencephalography) and brainstem neurons (in chinchillas) to dynamic binaural cues using a novel system-identification technique, along with parallel perceptual measurements. We find that cortex can synchronize to dynamic binaural cues up to approximately 10 Hz, which aligns well with our measured limits of perceiving dynamic spatial information and utilizing dynamic binaural cues for spatial unmasking, i.e., measures of binaural sluggishness. We also find that the tracking limit for frequency modulation (FM) is similar to the limit for spatial tracking, demonstrating that this sluggish tracking is a more general perceptual limit that can be accounted for by cortical temporal integration limits.
Affiliation(s)
- Ravinderjit Singh
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Hari M Bharadwaj
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA.
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, USA.
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA.
2. Osmanski MS, Wang X. Perceptual specializations for processing species-specific vocalizations in the common marmoset (Callithrix jacchus). Proc Natl Acad Sci U S A 2023; 120:e2221756120. PMID: 37276391; PMCID: PMC10268253; DOI: 10.1073/pnas.2221756120. Received 12/23/2022; accepted 05/03/2023. Open access.
Abstract
How humans and animals segregate sensory information into discrete, behaviorally meaningful categories is one of the hallmark questions in neuroscience. Much of the research on this topic in the auditory system has centered on human speech perception, in which categorical processes result in an enhanced sensitivity for acoustically meaningful differences and a reduced sensitivity for nonmeaningful distinctions. Much less is known about whether nonhuman primates process their species-specific vocalizations in a similar manner. We address this question in the common marmoset, a small arboreal New World primate with a rich vocal repertoire produced across a range of behavioral contexts. We first show that marmosets perceptually categorize their vocalizations in ways that correspond to previously defined call types for this species. Next, we show that marmosets are differentially sensitive to changes in particular acoustic features of their most common call types and that these sensitivity differences are matched to the population statistics of their vocalizations in ways that likely maximize category formation. Finally, we show that marmosets are less sensitive to changes in these acoustic features when those changes fall within the natural range of variability of their calls, which possibly reflects perceptual specializations that maintain existing call categories. These findings suggest specializations for categorical vocal perception in a New World primate species and pave the way for future studies examining their underlying neural mechanisms.
Affiliation(s)
- Michael S. Osmanski
- Department of Biomedical Engineering, Laboratory of Auditory Neurophysiology, The Johns Hopkins University School of Medicine, Baltimore, MD 21205
- Xiaoqin Wang
- Department of Biomedical Engineering, Laboratory of Auditory Neurophysiology, The Johns Hopkins University School of Medicine, Baltimore, MD 21205
3. Schultheiß H, Zulfiqar I, Verardo C, Jolivet RB, Moerel M. Modelling homeostatic plasticity in the auditory cortex results in neural signatures of tinnitus. Neuroimage 2023; 271:119987. PMID: 36940510; DOI: 10.1016/j.neuroimage.2023.119987. Received 08/08/2022; revised 12/23/2022; accepted 02/25/2023. Open access.
Abstract
Tinnitus is a clinical condition where a sound is perceived without an external sound source. Homeostatic plasticity (HSP), serving to increase neural activity as compensation for the reduced input to the auditory pathway after hearing loss, has been proposed as a mechanism underlying tinnitus. In support, animal models of tinnitus show evidence of increased neural activity after hearing loss, including increased spontaneous and sound-driven firing rate, as well as increased neural noise throughout the auditory processing pathway. Bridging these findings to human tinnitus, however, has proven to be challenging. Here we implement hearing loss-induced HSP in a Wilson-Cowan Cortical Model of the auditory cortex to predict how homeostatic principles operating at the microscale translate to the meso- to macroscale accessible through human neuroimaging. We observed HSP-induced response changes in the model that were previously proposed as neural signatures of tinnitus, but that have also been reported as correlates of hearing loss and hyperacusis. As expected, HSP increased spontaneous and sound-driven responsiveness in hearing-loss affected frequency channels of the model. We furthermore observed evidence of increased neural noise and the appearance of spatiotemporal modulations in neural activity, which we discuss in light of recent human neuroimaging findings. Our computational model makes quantitative predictions that require experimental validation, and may thereby serve as the basis of future human studies of hearing loss, tinnitus, and hyperacusis.
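The homeostatic principle described in this abstract can be illustrated with a minimal single-population reduction of a Wilson-Cowan firing-rate unit, in which a compensatory gain increase partially restores activity after input loss. This is only a sketch: the response function, weights, and gain value below are invented for illustration and are not the paper's model or parameters.

```python
import math

def f(x, theta=4.0):
    """Sigmoidal population response function (illustrative parameters)."""
    return 1.0 / (1.0 + math.exp(-(x - theta)))

def steady_state(ext_input, gain=1.0, w=2.0, tau=0.010, dt=0.001, t_end=1.0):
    """Integrate tau*dE/dt = -E + f(gain*(w*E + ext_input)) to steady state.
    `gain` stands in for homeostatic up-scaling of synaptic efficacy."""
    E = 0.0
    for _ in range(int(t_end / dt)):
        E += (dt / tau) * (-E + f(gain * (w * E + ext_input)))
    return E

normal = steady_state(ext_input=3.0)                  # intact peripheral input
loss = steady_state(ext_input=1.0)                    # reduced input (hearing loss)
compensated = steady_state(ext_input=1.0, gain=1.5)   # loss + homeostatic gain
print(loss < compensated < normal)  # True: gain partially restores activity
```

With these toy parameters the gain increase raises the steady-state rate above the hearing-loss level without fully recovering the intact level, mirroring the compensation-with-side-effects picture the abstract describes.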
Affiliation(s)
- Hannah Schultheiß
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Master Systems Biology, Faculty of Science and Engineering, Maastricht University, Maastricht, the Netherlands
- Isma Zulfiqar
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Claudio Verardo
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, the Netherlands; The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
- Renaud B Jolivet
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, the Netherlands
- Michelle Moerel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, the Netherlands; Maastricht Centre for Systems Biology, Maastricht University, Maastricht, the Netherlands
4. Sadagopan S, Kar M, Parida S. Quantitative models of auditory cortical processing. Hear Res 2023; 429:108697. PMID: 36696724; PMCID: PMC9928778; DOI: 10.1016/j.heares.2023.108697. Received 10/18/2022; revised 12/17/2022; accepted 01/12/2023.
Abstract
To generate insight from experimental data, it is critical to understand the inter-relationships between individual data points and place them in context within a structured framework. Quantitative modeling can provide the scaffolding for such an endeavor. Our main objective in this review is to provide a primer on the range of quantitative tools available to experimental auditory neuroscientists. Quantitative modeling is advantageous because it can provide a compact summary of observed data, make underlying assumptions explicit, and generate predictions for future experiments. Quantitative models may be developed to characterize or fit observed data, to test theories of how a task may be solved by neural circuits, to determine how observed biophysical details might contribute to measured activity patterns, or to predict how an experimental manipulation would affect neural activity. In complexity, quantitative models can range from those that are highly biophysically realistic and that include detailed simulations at the level of individual synapses, to those that use abstract and simplified neuron models to simulate entire networks. Here, we survey the landscape of recently developed models of auditory cortical processing, highlighting a small selection of models to demonstrate how they help generate insight into the mechanisms of auditory processing. We discuss examples ranging from models that use details of synaptic properties to explain the temporal pattern of cortical responses to those that use modern deep neural networks to gain insight into human fMRI data. We conclude by discussing a biologically realistic and interpretable model that our laboratory has developed to explore aspects of vocalization categorization in the auditory pathway.
Affiliation(s)
- Srivatsun Sadagopan
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA.
- Manaswini Kar
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Satyabrata Parida
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
5. A Redundant Cortical Code for Speech Envelope. J Neurosci 2023; 43:93-112. PMID: 36379706; PMCID: PMC9838705; DOI: 10.1523/jneurosci.1616-21.2022. Received 08/06/2021; revised 08/19/2022; accepted 10/23/2022. Open access.
Abstract
Animal communication sounds exhibit complex temporal structure because of the amplitude fluctuations that comprise the sound envelope. In human speech, envelope modulations drive synchronized activity in auditory cortex (AC), which correlates strongly with comprehension (Giraud and Poeppel, 2012; Peelle and Davis, 2012; Haegens and Zion Golumbic, 2018). Studies of envelope coding in single neurons, performed in nonhuman animals, have focused on periodic amplitude modulation (AM) stimuli and use response metrics that are not easy to juxtapose with data from humans. In this study, we sought to bridge these fields. Specifically, we looked directly at the temporal relationship between stimulus envelope and spiking, and we assessed whether the apparent diversity across neurons' AM responses contributes to the population representation of speech-like sound envelopes. We gathered responses from single neurons to vocoded speech stimuli and compared them to sinusoidal AM responses in AC of alert, freely moving Mongolian gerbils of both sexes. While AC neurons displayed heterogeneous tuning to AM rate, their temporal dynamics were stereotyped. Preferred response phases accumulated near the onsets of sinusoidal AM periods for slower rates (<8 Hz), and an over-representation of amplitude edges was apparent in population responses to both sinusoidal AM and vocoded speech envelopes. Crucially, this encoding bias imparted a decoding benefit: a classifier could discriminate vocoded speech stimuli using summed population activity, while higher frequency modulations required a more sophisticated decoder that tracked spiking responses from individual cells. Together, our results imply that the envelope structure relevant to parsing an acoustic stream could be read out from a distributed, redundant population code.

Significance Statement: Animal communication sounds have rich temporal structure and are often produced in extended sequences, including the syllabic structure of human speech. Although the auditory cortex (AC) is known to play a crucial role in representing speech syllables, the contribution of individual neurons remains uncertain. Here, we characterized the representations of both simple, amplitude-modulated sounds and complex, speech-like stimuli within a broad population of cortical neurons, and we found an overrepresentation of amplitude edges. Thus, a phasic, redundant code in auditory cortex can provide a mechanistic explanation for segmenting acoustic streams like human speech.
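The sinusoidal AM (SAM) stimuli discussed above are straightforward to construct. The sketch below generates SAM noise and marks the rising-envelope segments, i.e., the "amplitude edges" the population responses over-represent; all parameters (rate, depth, sampling rate) are illustrative, not the study's exact stimuli.

```python
import numpy as np

def sam_noise(fm_hz, depth=1.0, dur_s=1.0, fs=16_000, seed=0):
    """Sinusoidally amplitude-modulated noise:
    x(t) = (1 + depth*sin(2*pi*fm*t)) * noise carrier, peak-normalized."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur_s * fs)) / fs
    env = 1.0 + depth * np.sin(2 * np.pi * fm_hz * t)
    x = env * rng.standard_normal(t.size)
    return x / np.abs(x).max(), env

x, env = sam_noise(fm_hz=4.0)
# "Amplitude edges": samples on the rising portion of the envelope, where
# the abstract reports an over-representation in population responses.
rising = np.flatnonzero(np.diff(env) > 0)
```

At full modulation depth the envelope swings between 0 and 2 before normalization, so each modulation cycle contains one amplitude "valley" followed by a rising edge.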
6. Souffi S, Varnet L, Zaidi M, Bathellier B, Huetz C, Edeline JM. Reduction in sound discrimination in noise is related to envelope similarity and not to a decrease in envelope tracking abilities. J Physiol 2023; 601:123-149. PMID: 36373184; DOI: 10.1113/jp283526. Received 06/29/2022; accepted 11/08/2022. Open access.
Abstract
Humans and animals constantly face challenging acoustic environments, such as various background noises, that impair the detection, discrimination and identification of behaviourally relevant sounds. Here, we disentangled the role of temporal envelope tracking in the reduction in neuronal and behavioural discrimination between communication sounds in situations of acoustic degradations. By collecting neuronal activity from six different levels of the auditory system, from the auditory nerve up to the secondary auditory cortex, in anaesthetized guinea-pigs, we found that tracking of slow changes of the temporal envelope is a general functional property of auditory neurons for encoding communication sounds in quiet conditions and in adverse, challenging conditions. Results from a go/no-go sound discrimination task in mice support the idea that the loss of distinct slow envelope cues in noisy conditions impacted the discrimination performance. Together, these results suggest that envelope tracking is potentially a universal mechanism operating in the central auditory system, which allows the detection of any between-stimulus difference in the slow envelope and thus copes with degraded conditions.

Key points:
- In quiet conditions, envelope tracking in the low amplitude modulation range (<20 Hz) is correlated with the neuronal discrimination between communication sounds as quantified by mutual information from the cochlear nucleus up to the auditory cortex.
- At each level of the auditory system, auditory neurons retain their abilities to track the communication sound envelopes in situations of acoustic degradation, such as vocoding and the addition of masking noises up to a signal-to-noise ratio of -10 dB.
- In noisy conditions, the increase in between-stimulus envelope similarity explains the reduction in both behavioural and neuronal discrimination in the auditory system.
- Envelope tracking can be viewed as a universal mechanism that allows neural and behavioural discrimination as long as the temporal envelope of communication sounds displays some differences.
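The notion of between-stimulus envelope similarity can be made concrete with a toy metric: correlate the slow (<20 Hz) envelopes of two sounds. The rectify-and-smooth envelope extractor and all parameters below are an illustrative stand-in, not the paper's exact analysis.

```python
import numpy as np

def slow_envelope(x, fs, cutoff_hz=20.0):
    """Crude slow envelope: rectify, then smooth with a moving average
    roughly matched to the cutoff (illustrative method)."""
    win = max(1, int(fs / cutoff_hz))
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

def envelope_similarity(x, y, fs):
    """Pearson correlation between two sounds' slow envelopes."""
    return np.corrcoef(slow_envelope(x, fs), slow_envelope(y, fs))[0, 1]

fs = 8_000
t = np.arange(fs) / fs  # 1 s
rng = np.random.default_rng(0)
carrier_a, carrier_b = rng.standard_normal((2, t.size))
am3 = (1 + np.sin(2 * np.pi * 3 * t)) * carrier_a   # 3 Hz envelope
am3b = (1 + np.sin(2 * np.pi * 3 * t)) * carrier_b  # same envelope, new carrier
am5 = (1 + np.sin(2 * np.pi * 5 * t)) * carrier_b   # 5 Hz envelope
print(envelope_similarity(am3, am3b, fs))  # high: shared slow envelope
print(envelope_similarity(am3, am5, fs))   # near zero: distinct envelopes
```

Two sounds sharing a slow envelope score high regardless of their carriers, while sounds with different modulation rates score near zero, which is the sense in which noise that makes envelopes more alike would reduce discriminability.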
Affiliation(s)
- Samira Souffi
- Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
- Léo Varnet
- Laboratoire des systèmes perceptifs, UMR CNRS 8248, Département d'Etudes Cognitives, Ecole Normale Supérieure, Université Paris Sciences & Lettres, Paris, France
- Meryem Zaidi
- Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
- Brice Bathellier
- Institut de l'Audition, Institut Pasteur, Université de Paris, INSERM, Paris, France
- Chloé Huetz
- Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
- Jean-Marc Edeline
- Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
7. Chen J, Jennings SG. Temporal Envelope Coding of the Human Auditory Nerve Inferred from Electrocochleography: Comparison with Envelope Following Responses. J Assoc Res Otolaryngol 2022; 23:803-814. PMID: 35948693; PMCID: PMC9789235; DOI: 10.1007/s10162-022-00865-z. Received 03/29/2022; accepted 07/12/2022. Open access.
Abstract
Neural coding of the slow amplitude fluctuations of sound (i.e., the temporal envelope) is thought to be essential for speech understanding; however, such coding by the human auditory nerve is poorly understood. Here, neural coding of the temporal envelope by the human auditory nerve is inferred from measurements of the compound action potential in response to an amplitude-modulated carrier (CAPENV) for modulation frequencies ranging from 20 to 1000 Hz. The envelope following response (EFR) was measured simultaneously with CAPENV, from active electrodes placed on the high forehead and tympanic membrane, respectively. Results support the hypothesis that phase locking to higher modulation frequencies (>80 Hz) is stronger for CAPENV than for EFR, consistent with the higher upper-frequency limit of phase locking in auditory nerve fibers compared with auditory brainstem/cortex neurons. Future work is needed to determine the extent to which (1) CAPENV is a useful tool for studying how temporal processing of the auditory nerve is affected by aging, hearing loss, and noise-induced cochlear synaptopathy and (2) CAPENV reveals the relationship between auditory nerve temporal processing and perception of the temporal envelope.
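Envelope phase locking in responses such as the EFR is commonly quantified as the spectral amplitude at the modulation frequency. The sketch below shows that generic approach on a synthetic response; it is not the authors' exact analysis pipeline, and the signal parameters are invented.

```python
import numpy as np

def efr_amplitude(response, fs, fm_hz):
    """Spectral amplitude of a response at the modulation frequency --
    a standard way to quantify envelope phase locking (generic sketch)."""
    n = len(response)
    spec = 2.0 * np.abs(np.fft.rfft(response)) / n  # amplitude spectrum
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return spec[np.argmin(np.abs(freqs - fm_hz))]

# Synthetic check: a response phase-locked at 80 Hz plus background noise.
fs, fm = 8_000, 80.0
t = np.arange(fs) / fs  # 1 s -> 1 Hz bins, so 80 Hz falls on an exact bin
rng = np.random.default_rng(1)
resp = 0.5 * np.sin(2 * np.pi * fm * t) + 0.05 * rng.standard_normal(t.size)
print(efr_amplitude(resp, fs, fm))  # ~0.5, the phase-locked amplitude
```

Choosing a window length that places the modulation frequency on an exact FFT bin (here 1 s, 1 Hz bins) avoids spectral leakage when reading off the phase-locked amplitude.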
Affiliation(s)
- Jessica Chen
- Department of Communication Sciences and Disorders, The University of Utah, 390 South BEHS 1201, Salt Lake City, UT, USA
- Skyler G Jennings
- Department of Communication Sciences and Disorders, The University of Utah, 390 South BEHS 1201, Salt Lake City, UT, USA.
8. Hsu C, Liu T, Lee D, Yeh D, Chen Y, Liang W, Juan C. Amplitude modulating frequency overrides carrier frequency in tACS-induced phosphene percept. Hum Brain Mapp 2022; 44:914-926. PMID: 36250439; PMCID: PMC9875935; DOI: 10.1002/hbm.26111. Received 05/06/2022; revised 09/24/2022; accepted 10/03/2022. Open access.
Abstract
Amplitude-modulated (AM) neural oscillations are an essential feature of the neural dynamics that coordinate distant brain areas. AM transcranial alternating current stimulation (tACS) has recently been adopted to examine various cognitive functions, but its neural mechanism remains unclear. The current study utilized the phosphene phenomenon to investigate whether, in AM-tACS, the AM frequency could modulate or even override the carrier frequency in phosphene percept. We measured the phosphene threshold and the perceived flash rate/pattern from 12 human subjects (four females, aged 20-44 years) under tACS that paired carrier waves (10, 14, 18, 22 Hz) with different envelope conditions (0, 2, 4 Hz) over the mid-occipital and left facial areas. We also examined the phosphene source by adopting a high-density stimulation montage. Our results revealed that (1) the phosphene threshold was higher for AM-tACS than for sinusoidal tACS and showed different carrier-frequency dependence in the two stimulation montages; (2) AM-tACS slowed down the phosphene flashing and abolished the relation between carrier frequency and flash percept seen in sinusoidal tACS, an effect independent of the intensity change of the stimulation; (3) left facial stimulation elicited phosphene in the upper-left visual field, while occipital stimulation elicited equally distributed phosphene; and (4) the near-eye electrodermal activity (EDA) measured under threshold-level occipital tACS was greater than the lowest power sufficient to elicit retinal phosphene. Our results show that the AM frequency may override the carrier frequency and determine the perceived flashing frequency of AM-tACS-induced phosphene.
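An AM-tACS waveform of the kind described, a slow envelope imposed on a faster carrier, can be written down directly. The abstract does not give the waveform equation, so this full-depth construction is an assumption; amplitude and duration are likewise illustrative.

```python
import numpy as np

def am_tacs(fc_hz, fm_hz, amp_ma=1.0, dur_s=2.0, fs=10_000):
    """Zero-mean AM-tACS current: a sinusoidal carrier at fc_hz whose
    amplitude is modulated at fm_hz with full depth (assumed form)."""
    t = np.arange(int(dur_s * fs)) / fs
    env = 0.5 * (1.0 + np.sin(2 * np.pi * fm_hz * t))  # envelope in [0, 1]
    return amp_ma * env * np.sin(2 * np.pi * fc_hz * t)

i = am_tacs(fc_hz=10.0, fm_hz=2.0)  # 10 Hz carrier, 2 Hz envelope
# With fm_hz=0 the envelope is constant (0.5), i.e., conventional sinusoidal
# tACS at the carrier frequency with half the peak amplitude.
```

The current remains zero-mean, which matters for tACS safety, and the sidebands of the full-depth modulation sit at fc ± fm rather than at fm itself, which is part of why the envelope's perceptual dominance reported above is notable.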
Affiliation(s)
- Che-Yi Hsu
- Institute of Cognitive Neuroscience, College of Health Sciences and Technology, National Central University, Taoyuan, Taiwan
- Tzu-Ling Liu
- Institute of Cognitive Neuroscience, College of Health Sciences and Technology, National Central University, Taoyuan, Taiwan; Cognitive Intelligence and Precision Healthcare Research Center, National Central University, Taoyuan, Taiwan
- Dong-Han Lee
- Institute of Cognitive Neuroscience, College of Health Sciences and Technology, National Central University, Taoyuan, Taiwan; Cognitive Intelligence and Precision Healthcare Research Center, National Central University, Taoyuan, Taiwan
- Ding-Ruey Yeh
- Institute of Cognitive Neuroscience, College of Health Sciences and Technology, National Central University, Taoyuan, Taiwan
- Yan-Hsun Chen
- Institute of Cognitive Neuroscience, College of Health Sciences and Technology, National Central University, Taoyuan, Taiwan; Cognitive Intelligence and Precision Healthcare Research Center, National Central University, Taoyuan, Taiwan
- Wei-Kuang Liang
- Institute of Cognitive Neuroscience, College of Health Sciences and Technology, National Central University, Taoyuan, Taiwan; Cognitive Intelligence and Precision Healthcare Research Center, National Central University, Taoyuan, Taiwan
- Chi-Hung Juan
- Institute of Cognitive Neuroscience, College of Health Sciences and Technology, National Central University, Taoyuan, Taiwan; Cognitive Intelligence and Precision Healthcare Research Center, National Central University, Taoyuan, Taiwan; Department of Psychology, Kaohsiung Medical University, Kaohsiung, Taiwan
9. Liu XP, Wang X. Distinct neuronal types contribute to hybrid temporal encoding strategies in primate auditory cortex. PLoS Biol 2022; 20:e3001642. PMID: 35613218; PMCID: PMC9132345; DOI: 10.1371/journal.pbio.3001642. Received 11/22/2021; accepted 04/22/2022. Open access.
Abstract
Studies of the encoding of sensory stimuli by the brain often consider recorded neurons as a pool of identical units. Here, we report divergence in stimulus-encoding properties between subpopulations of cortical neurons that are classified based on spike timing and waveform features. Neurons in auditory cortex of the awake marmoset (Callithrix jacchus) encode temporal information with either stimulus-synchronized or nonsynchronized responses. When we classified single-unit recordings using either a criteria-based or an unsupervised classification method into regular-spiking, fast-spiking, and bursting units, a subset of intrinsically bursting neurons formed the most highly synchronized group, with strong phase-locking to sinusoidal amplitude modulation (SAM) that extended well above 20 Hz. In contrast with other unit types, these bursting neurons fired primarily on the rising phase of SAM or the onset of unmodulated stimuli, and preferred rapid stimulus onset rates. Such differentiating behavior has been previously reported in bursting neuron models and may reflect specializations for detection of acoustic edges. These units responded to natural stimuli (vocalizations) with brief and precise spiking at particular time points that could be decoded with high temporal stringency. Regular-spiking units better reflected the shape of slow modulations and responded more selectively to vocalizations with overall firing rate increases. Population decoding using time-binned neural activity found that decoding behavior differed substantially between regular-spiking and bursting units. A relatively small pool of bursting units was sufficient to identify the stimulus with high accuracy in a manner that relied on the temporal pattern of responses. These unit type differences may contribute to parallel and complementary neural codes.

Neurons in auditory cortex show highly diverse responses to sounds. This study suggests that neuronal type inferred from baseline firing properties accounts for much of this diversity, with a subpopulation of bursting units being specialized for precise temporal encoding.
Affiliation(s)
- Xiao-Ping Liu
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America
- Xiaoqin Wang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America
10. Degraded cortical temporal processing in the valproic acid-induced rat model of autism. Neuropharmacology 2022; 209:109000. PMID: 35182575; DOI: 10.1016/j.neuropharm.2022.109000. Received 10/11/2021; revised 01/12/2022; accepted 02/13/2022.
Abstract
Hearing disorders, such as abnormal speech perception, are frequently reported in individuals with autism. However, the mechanisms underlying these auditory-associated signature deficits in autism remain largely unknown. In this study, we documented significant behavioral impairments in the sound temporal rate discrimination task for rats prenatally exposed to valproic acid (VPA), a well-validated animal model for studying the pathology of autism. In parallel, there was a large-scale degradation in temporal information-processing in their primary auditory cortices (A1) at both levels of spiking outputs and synaptic inputs. Substantially increased spine density of excitatory neurons and decreased numbers of parvalbumin- and somatostatin-labeled inhibitory inter-neurons were also recorded in the A1 after VPA exposure. Given the fact that cortical temporal processing of sound is associated with speech perception in humans, these results in the animal model of VPA exposure provide insight into a possible neurological mechanism underlying auditory and language-related deficits in individuals with autism.
11. Dheerendra P, Baumann S, Joly O, Balezeau F, Petkov CI, Thiele A, Griffiths TD. The Representation of Time Windows in Primate Auditory Cortex. Cereb Cortex 2021; 32:3568-3580. PMID: 34875029; PMCID: PMC9376871; DOI: 10.1093/cercor/bhab434. Received 09/16/2020; revised 11/04/2021; accepted 11/05/2021. Open access.
Abstract
Whether human and nonhuman primates process the temporal dimension of sound similarly remains an open question. We examined the brain basis for the processing of acoustic time windows in rhesus macaques using stimuli simulating the spectrotemporal complexity of vocalizations. We conducted functional magnetic resonance imaging in awake macaques to identify the functional anatomy of response patterns to different time windows. We then contrasted it against the responses to identical stimuli used previously in humans. Despite a similar overall pattern, ranging from the processing of shorter time windows in core areas to longer time windows in lateral belt and parabelt areas, monkeys exhibited lower sensitivity to longer time windows than humans. This difference in neuronal sensitivity might be explained by a specialization of the human brain for processing longer time windows in speech.
Affiliation(s)
- Pradeep Dheerendra
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK; Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
- Simon Baumann
- National Institute of Mental Health, NIH, Bethesda, MD 20892-1148, USA; Department of Psychology, University of Turin, Torino 10124, Italy
- Olivier Joly
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Fabien Balezeau
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Alexander Thiele
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Timothy D Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
12. Fuglsang SA, Madsen KH, Puonti O, Hjortkjær J, Siebner HR. Mapping cortico-subcortical sensitivity to 4 Hz amplitude modulation depth in human auditory system with functional MRI. Neuroimage 2021; 246:118745. PMID: 34808364; DOI: 10.1016/j.neuroimage.2021.118745. Received 07/14/2021; revised 11/17/2021; accepted 11/18/2021. Open access.
Abstract
Temporal modulations in the envelope of acoustic waveforms at rates around 4 Hz constitute a strong acoustic cue in speech and other natural sounds. It is often assumed that the ascending auditory pathway is increasingly sensitive to slow amplitude modulation (AM), but sensitivity to AM is typically considered separately for individual stages of the auditory system. Here, we used blood oxygen level dependent (BOLD) fMRI in twenty human subjects (10 male) to measure sensitivity of regional neural activity in the auditory system to 4 Hz temporal modulations. Participants were exposed to AM noise stimuli varying parametrically in modulation depth to characterize modulation-depth effects on BOLD responses. A Bayesian hierarchical modeling approach was used to model potentially nonlinear relations between AM depth and group-level BOLD responses in auditory regions of interest (ROIs). Sound stimulation activated the auditory brainstem and cortex structures in single subjects. BOLD responses to noise exposure in core and belt auditory cortices scaled positively with modulation depth. This finding was corroborated by whole-brain cluster-level inference. Sensitivity to AM depth variations was particularly pronounced in the Heschl's gyrus but also found in higher-order auditory cortical regions. None of the sound-responsive subcortical auditory structures showed a BOLD response profile that reflected the parametric variation in AM depth. The results are compatible with the notion that early auditory cortical regions play a key role in processing low-rate modulation content of sounds in the human auditory system.
Affiliation(s)
- Søren A Fuglsang: Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark
- Kristoffer H Madsen: Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark; Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark
- Oula Puonti: Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark; Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
- Jens Hjortkjær: Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark; Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
- Hartwig R Siebner: Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark; Department of Neurology, Copenhagen University Hospital Bispebjerg and Frederiksberg, Copenhagen, Denmark; Department of Clinical Medicine, Faculty of Medical and Health Sciences, University of Copenhagen, Copenhagen, Denmark

13
Downer JD, Bigelow J, Runfeldt MJ, Malone BJ. Temporally precise population coding of dynamic sounds by auditory cortex. J Neurophysiol 2021; 126:148-169. [PMID: 34077273] [DOI: 10.1152/jn.00709.2020] Open
Abstract
Fluctuations in the amplitude envelope of complex sounds provide critical cues for hearing, particularly for speech and animal vocalizations. Responses to amplitude modulation (AM) in the ascending auditory pathway have chiefly been described for single neurons. How neural populations might collectively encode and represent information about AM remains poorly characterized, even in primary auditory cortex (A1). We modeled population responses to AM based on data recorded from A1 neurons in awake squirrel monkeys and evaluated how accurately single trial responses to modulation frequencies from 4 to 512 Hz could be decoded as functions of population size, composition, and correlation structure. We found that a population-based decoding model that simulated convergent, equally weighted inputs was highly accurate and remarkably robust to the inclusion of neurons that were individually poor decoders. By contrast, average rate codes based on convergence performed poorly; effective decoding using average rates was only possible when the responses of individual neurons were segregated, as in classical population decoding models using labeled lines. The relative effectiveness of dynamic rate coding in auditory cortex was explained by shared modulation phase preferences among cortical neurons, despite heterogeneity in rate-based modulation frequency tuning. Our results indicate significant population-based synchrony in primary auditory cortex and suggest that robust population coding of the sound envelope information present in animal vocalizations and speech can be reliably achieved even with indiscriminate pooling of cortical responses. These findings highlight the importance of firing rate dynamics in population-based sensory coding.NEW & NOTEWORTHY Fundamental questions remain about population coding in primary auditory cortex (A1). In particular, issues of spike timing in models of neural populations have been largely ignored. 
We find that spike-timing in response to sound envelope fluctuations is highly similar across neuron populations in A1. This property of shared envelope phase preference allows for a simple population model involving unweighted convergence of neuronal responses to classify amplitude modulation frequencies with high accuracy.
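The core idea above, that indiscriminately pooled spike trains still support accurate AM-rate classification when neurons share an envelope phase preference, can be sketched with synthetic data. Everything below (rates, modulation depth, the single-frequency Fourier decoder) is invented for illustration and is not the authors' decoding model:

```python
import numpy as np

rng = np.random.default_rng(0)

def population_response(fm, n_neurons=30, dur=1.0, dt=0.001, base_rate=20.0, depth=0.8):
    """Simulate phase-locked spike counts for a population sharing a phase preference."""
    t = np.arange(0, dur, dt)
    rate = base_rate * (1 + depth * np.cos(2 * np.pi * fm * t))  # shared envelope phase
    # Independent Poisson spiking per neuron, then indiscriminate pooling
    spikes = rng.poisson(rate * dt, size=(n_neurons, t.size))
    return spikes.sum(axis=0)  # pooled population PSTH

def decode_fm(psth, candidates, dt=0.001):
    """Pick the candidate AM rate whose Fourier component is largest in the pooled PSTH."""
    t = np.arange(psth.size) * dt
    power = [np.abs(np.sum(psth * np.exp(-2j * np.pi * f * t))) for f in candidates]
    return candidates[int(np.argmax(power))]

candidates = [4, 8, 16, 32, 64]
hits = sum(decode_fm(population_response(f), candidates) == f for f in candidates)
print(f"{hits}/{len(candidates)} AM rates decoded correctly from pooled timing")
```

Because the simulated neurons phase-lock with a common phase, their modulated components add constructively under pooling, so the timing-based readout survives convergence; an average-rate readout of the same pooled response would be blind to the modulation frequency.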
Affiliation(s)
- Joshua D Downer: Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- James Bigelow: Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- Melissa J Runfeldt: Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- Brian J Malone: Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California; Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, California

14
Yao JD, Sanes DH. Temporal Encoding is Required for Categorization, But Not Discrimination. Cereb Cortex 2021; 31:2886-2897. [PMID: 33429423] [DOI: 10.1093/cercor/bhaa396] Open
Abstract
Core auditory cortex (AC) neurons encode slow fluctuations of acoustic stimuli with temporally patterned activity. However, whether temporal encoding is necessary to explain auditory perceptual skills remains uncertain. Here, we recorded from gerbil AC neurons while they discriminated between a 4-Hz amplitude modulation (AM) broadband noise and AM rates >4 Hz. We found that a proportion of neurons possessed neural thresholds based on spike pattern or spike count that were better than the recorded session's behavioral threshold, suggesting that spike count could provide sufficient information for this perceptual task. A population decoder that relied on temporal information outperformed a decoder that relied on spike count alone, but the spike count decoder still remained sufficient to explain average behavioral performance. This leaves open the possibility that more demanding perceptual judgments require temporal information. Thus, we asked whether accurate classification of different AM rates between 4 and 12 Hz required the information contained in AC temporal discharge patterns. Indeed, accurate classification of these AM stimuli depended on the inclusion of temporal information rather than spike count alone. Overall, our results compare two different representations of time-varying acoustic features that can be accessed by downstream circuits required for perceptual judgments.
Affiliation(s)
- Justin D Yao: Center for Neural Science, New York University, New York, NY 10003, USA
- Dan H Sanes: Center for Neural Science, New York University, New York, NY 10003, USA; Department of Psychology, New York University, New York, NY 10003, USA; Department of Biology, New York University, New York, NY 10003, USA; Neuroscience Institute, NYU Langone Medical Center, New York University, New York, NY 10016, USA

15
Speech frequency-following response in human auditory cortex is more than a simple tracking. Neuroimage 2020; 226:117545. [PMID: 33186711] [DOI: 10.1016/j.neuroimage.2020.117545] Open
Abstract
The human auditory cortex has recently been found to contribute to the frequency following response (FFR), and the cortical component has been shown to be more relevant to speech perception. However, it is not clear how the cortical FFR may contribute to the processing of the speech fundamental frequency (F0) and dynamic pitch. Using intracranial EEG recordings, we observed a significant FFR at F0 for both speech and speech-like harmonic complex stimuli in the human auditory cortex, even in the missing-fundamental condition. Both the spectral amplitude and phase coherence of the cortical FFR showed a significant harmonic preference, and both attenuated from the primary auditory cortex to the surrounding associative auditory cortex. The phase coherence of the speech FFR was significantly higher than that of the harmonic complex stimuli, especially in the left hemisphere, showing the high timing fidelity of the cortical FFR in tracking dynamic F0 in speech. Spectrally, the frequency band of the cortical FFR largely overlapped with the range of the human vocal pitch. Taken together, our study parses the intrinsic properties of the cortical FFR and reveals a preference for speech-like sounds, supporting its potential role in processing speech intonation and lexical tones.
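The phase-coherence measure discussed above has a compact definition: the magnitude of the mean unit-length phasor across trials at a given frequency. A minimal sketch on synthetic "FFR-like" trials (the sampling rate, F0, noise level, and trial counts below are arbitrary choices, not the study's recording parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
fs, dur, f0 = 1000.0, 1.0, 120.0  # illustrative sampling rate and vocal-range F0
t = np.arange(0, dur, 1 / fs)

def phase_coherence(trials, freq):
    """Inter-trial phase coherence: magnitude of the mean unit phasor at `freq`."""
    phasors = []
    for x in trials:
        spec = np.sum(x * np.exp(-2j * np.pi * freq * t))  # single-frequency DFT
        phasors.append(spec / np.abs(spec))
    return np.abs(np.mean(phasors))

# FFR-like trials: a phase-locked F0 component buried in noise, vs. noise alone
locked = [np.cos(2 * np.pi * f0 * t) + rng.normal(0, 3, t.size) for _ in range(50)]
noise = [rng.normal(0, 3, t.size) for _ in range(50)]

pc_locked = phase_coherence(locked, f0)
pc_noise = phase_coherence(noise, f0)
print(round(pc_locked, 2), round(pc_noise, 2))
```

The phase-locked trials yield coherence near 1 while noise-only trials yield a value near 1/sqrt(n_trials), which is why phase coherence is a useful complement to raw spectral amplitude when the component of interest sits well below the noise floor on single trials.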
16
Johnson JS, Niwa M, O'Connor KN, Sutter ML. Amplitude modulation encoding in the auditory cortex: comparisons between the primary and middle lateral belt regions. J Neurophysiol 2020; 124:1706-1726. [PMID: 33026929] [DOI: 10.1152/jn.00171.2020] Open
Abstract
In macaques, the middle lateral auditory cortex (ML) is a belt region adjacent to the primary auditory cortex (A1) and believed to be at a hierarchically higher level. Although ML single-unit responses have been studied for several auditory stimuli, the ability of ML cells to encode amplitude modulation (AM), an ability that has been widely studied in A1, has not yet been characterized. Here, we compared the responses of A1 and ML neurons to amplitude-modulated noise in awake macaques. Although several of the basic properties of A1 and ML responses to AM noise were similar, we found several key differences. ML neurons were less likely to phase lock, did not phase lock as strongly, and were more likely to respond in a nonsynchronized fashion than A1 cells, consistent with a temporal-to-rate transformation as information ascends the auditory hierarchy. ML neurons tended to have lower temporally (phase-locking) based best modulation frequencies than A1 neurons. Neurons that decreased their firing rate in response to AM noise relative to their firing rate in response to unmodulated noise became more common at the level of ML than they were in A1. In both A1 and ML, we found a prevalent class of neurons that typically show enhanced rate responses (relative to unmodulated noise) at lower modulation frequencies and suppressed rate responses at middle modulation frequencies. NEW & NOTEWORTHY ML neurons synchronized less than A1 neurons, consistent with a hierarchical temporal-to-rate transformation. Both A1 and ML had a class of modulation transfer functions previously unreported in the cortex with a low-modulation-frequency (MF) peak, a middle-MF trough, and responses similar to unmodulated noise responses at high MFs. The results support a hierarchical shift toward a two-pool opponent code, where subtraction of neural activity between two populations of oppositely tuned neurons encodes AM.
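The "two-pool opponent code" conclusion, AM encoded by subtracting the activity of two oppositely tuned populations, can be sketched with hypothetical tuning curves. The sigmoids and their parameters below are invented, not fitted to the paper's data:

```python
import numpy as np

mf = np.logspace(0, 9, 50, base=2)  # modulation frequencies, 1-512 Hz
log_mf = np.log2(mf)

def sigmoid(x, center, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - center)))

# Hypothetical pools: one fires more at low MFs, one at high MFs
enhanced = 20 * sigmoid(log_mf, 4.0, -1.5) + 5    # spikes/s, low-MF preferring
suppressed = 20 * sigmoid(log_mf, 4.0, 1.5) + 5   # spikes/s, high-MF preferring

# Opponent readout: the difference is monotonic in log MF, so it uniquely
# identifies the modulation frequency even though neither pool alone is
# informative at the frequencies where its own curve saturates.
opponent = enhanced - suppressed
print(f"opponent signal spans {opponent.min():.1f} to {opponent.max():.1f} spikes/s")
```

The design point is the monotonicity of the difference signal: subtraction cancels the shared baseline and stimulus-driven gain changes, leaving a quantity that is invertible in modulation frequency.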
Affiliation(s)
- Jeffrey S Johnson: Center for Neuroscience, University of California, Davis, California
- Mamiko Niwa: Center for Neuroscience, University of California, Davis, California
- Kevin N O'Connor: Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Mitchell L Sutter: Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California

17
Wang H, Tang D, Wu Y, Zhou L, Sun S. The state of the art of sound therapy for subjective tinnitus in adults. Ther Adv Chronic Dis 2020; 11:2040622320956426. [PMID: 32973991] [PMCID: PMC7493236] [DOI: 10.1177/2040622320956426] Open
Abstract
Background: Sound therapy is a clinically common method of tinnitus management. Various forms of sound therapy have been developed, but there are controversies regarding the selection criteria and the efficacy of different forms of sound therapy in the clinic. Our goal was to review the types and forms of sound therapy and our understanding of how the different characteristics of tinnitus patients influence their curative effects, so as to provide a reference for the personalized choice of tinnitus sound therapy.
Method: Using an established methodological framework, a search of six databases including PubMed identified 43 records that met our inclusion criteria. The search strategy used the following key words: tinnitus AND (acoustic OR sound OR music) AND (treatment OR therapy OR management OR intervention OR measure).
Results: There are various forms of sound therapy, and most of them show positive therapeutic effects. The effect of customized sound therapy is generally better than that of non-customized sound therapy, and patients with more severe initial tinnitus respond better to sound therapy.
Conclusion: Sound therapy can effectively suppress tinnitus, at least in some patients. However, there is a lack of randomized controlled trials to identify effective management strategies. Further studies are needed to identify the most effective form of sound therapy for individualized therapy, and large, multicenter, long-term follow-up studies are still needed in order to develop more effective and targeted sound-therapy protocols. In addition, it is necessary to analyze the characteristics of individual tinnitus patients and to unify the assessment criteria of tinnitus.
Affiliation(s)
- Haiyan Wang: ENT Institute and Otorhinolaryngology Department of Eye and ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, Shanghai, China; NHC Key Laboratory of Hearing Medicine, Fudan University, Shanghai, China
- Dongmei Tang: ENT Institute and Otorhinolaryngology Department of Eye and ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, Shanghai, China; NHC Key Laboratory of Hearing Medicine, Fudan University, Shanghai, China
- Yongzhen Wu: ENT Institute and Otorhinolaryngology Department of Eye and ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, Shanghai, China; NHC Key Laboratory of Hearing Medicine, Fudan University, Shanghai, China
- Li Zhou: Shanghai High School, Shanghai, China
- Shan Sun: ENT Institute and Otorhinolaryngology Department of Eye and ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, 83 Fenyang Road, Shanghai 200031, China

18
Xiong C, Liu X, Kong L, Yan J. Thalamic gating contributes to forward suppression in the auditory cortex. PLoS One 2020; 15:e0236760. [PMID: 32726372] [PMCID: PMC7390390] [DOI: 10.1371/journal.pone.0236760] Open
Abstract
The neural mechanisms underlying forward suppression in the auditory cortex remain a puzzle. Little attention has been paid to the thalamic contribution, despite the fact that the thalamus gates the information streaming up to the auditory cortex. This study compared the time courses of forward suppression in the auditory thalamus, thalamocortical inputs and cortex using the two-tone stimulus paradigm. The preceding and succeeding tones were 20 ms long. Their frequency and amplitude were set at the characteristic frequency and 20 dB above the minimum threshold of the recorded neurons, respectively. In the ventral division of the medial geniculate body of the thalamus, we found that the duration of complete forward suppression was about 75 ms and the duration of partial suppression extended from 75 ms to about 300 ms after the onset of the preceding tone. We also found that during the partial suppression period, the responses to the succeeding tone were further suppressed in the primary auditory cortex. The forward suppression of thalamocortical field excitatory postsynaptic potentials was between those of thalamic and cortical neurons but much closer to that of thalamic ones. Our results indicate that early suppression in the cortex could result from complete suppression in the thalamus, whereas later suppression may involve thalamocortical and intracortical circuitry. This suggests that the complete suppression that occurs in the thalamus provides the cortex with a "silence" window that could potentially benefit cortical processing and/or perception of the information carried by the preceding sound.
Affiliation(s)
- Colin Xiong: Department of Physiology and Pharmacology, Hotchkiss Brain Institute, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada
- Xiuping Liu: Department of Physiology and Pharmacology, Hotchkiss Brain Institute, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada
- Lingzhi Kong: Department of Physiology and Pharmacology, Hotchkiss Brain Institute, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada
- Jun Yan: Department of Physiology and Pharmacology, Hotchkiss Brain Institute, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada

19
Erb J, Schmitt LM, Obleser J. Temporal selectivity declines in the aging human auditory cortex. eLife 2020; 9:e55300. [PMID: 32618270] [PMCID: PMC7410487] [DOI: 10.7554/elife.55300] Open
Abstract
Current models successfully describe the auditory cortical response to natural sounds with a set of spectro-temporal features. However, these models have hardly been linked to the ill-understood neurobiological changes that occur in the aging auditory cortex. Modelling the hemodynamic response to a rich natural sound mixture in N = 64 listeners of varying age, we here show that in older listeners' auditory cortex, the key feature of temporal rate is represented with a markedly broader tuning. This loss of temporal selectivity is most prominent in primary auditory cortex and planum temporale, with no such changes in adjacent auditory or other brain areas. Amongst older listeners, we observe a direct relationship between chronological age and temporal-rate tuning, unconfounded by auditory acuity or model goodness of fit. In line with senescent neural dedifferentiation more generally, our results highlight decreased selectivity to temporal information as a hallmark of the aging auditory cortex.

It can often be difficult for an older person to understand what someone is saying, particularly in noisy environments. Exactly how and why this age-related change occurs is not clear, but it is thought that older individuals may become less able to tune in to certain features of sound. Newer tools are making it easier to study age-related changes in hearing in the brain. For example, functional magnetic resonance imaging (fMRI) can allow scientists to 'see' and measure how certain parts of the brain react to different features of sound. Using fMRI data, researchers can compare how younger and older people process speech. They can also track how speech processing in the brain changes with age. Now, Erb et al. show that older individuals have a harder time tuning into the rhythm of speech. In the experiments, 64 people between the ages of 18 and 78 were asked to listen to speech in a noisy setting while they underwent fMRI. The researchers then tested a computer model using the data. In the older individuals, the brain's tuning to the timing or rhythm of speech was broader, while the younger participants were more able to finely tune into this feature of sound. The older a person was, the less able their brain was to distinguish rhythms in speech, likely making it harder to understand what had been said. This hearing change likely occurs because brain cells become less specialised over time, which can contribute to many kinds of age-related cognitive decline. This new information about why understanding speech becomes more difficult with age may help scientists develop better hearing aids that are individualised to a person's specific needs.
Affiliation(s)
- Julia Erb: Department of Psychology, University of Lübeck, Lübeck, Germany
- Jonas Obleser: Department of Psychology, University of Lübeck, Lübeck, Germany

20
Gao L, Wang X. Subthreshold Activity Underlying the Diversity and Selectivity of the Primary Auditory Cortex Studied by Intracellular Recordings in Awake Marmosets. Cereb Cortex 2020; 29:994-1005. [PMID: 29377991] [DOI: 10.1093/cercor/bhy006] Open
Abstract
Extracellular recording studies have revealed diverse and selective neural responses in the primary auditory cortex (A1) of awake animals. However, we have limited knowledge of the subthreshold events that give rise to these responses, especially in non-human primates, as intracellular recordings in awake animals pose substantial technical challenges. We developed a novel intracellular recording technique in awake marmosets to systematically study the subthreshold activity of A1 neurons that underlies their diverse and selective spiking responses. Our findings showed that, in contrast to the predominantly transient depolarization observed in A1 of anesthetized animals, both transient and sustained depolarization (during or beyond the stimulus period) were observed. Compared with spiking responses, subthreshold responses were often longer lasting in duration and more broadly tuned in frequency, and showed narrower intensity tuning in non-monotonic neurons and lower response thresholds in monotonic neurons. These observations demonstrate the enhancement of stimulus selectivity from subthreshold to spiking responses in individual A1 neurons. Furthermore, A1 neurons classified as regular- or fast-spiking subpopulations based on their spike shapes exhibited distinct response properties in the frequency and intensity domains. These findings provide valuable insights into cortical integration and transformation of auditory information at the cellular level in the auditory cortex of awake non-human primates.
Affiliation(s)
- Lixia Gao: Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Interdisciplinary Institute of Neuroscience and Technology, Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, People's Republic of China
- Xiaoqin Wang: Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA

21
Experience-Dependent Coding of Time-Dependent Frequency Trajectories by Off Responses in Secondary Auditory Cortex. J Neurosci 2020; 40:4469-4482. [PMID: 32327533] [PMCID: PMC7275866] [DOI: 10.1523/jneurosci.2665-19.2020] Open
Abstract
Time-dependent frequency trajectories are an inherent feature of many behaviorally relevant sounds, such as species-specific vocalizations. Dynamic frequency trajectories, even in short sounds, often convey meaningful information, which may be used to differentiate sound categories. However, it is not clear which neural responses in the auditory cortical pathway are critical for conveying information about behaviorally relevant frequency trajectories, where they arise, and how these responses change with experience. Here, we uncover tuning to subtle variations in frequency trajectories in the auditory cortex of female mice. We found that auditory cortical responses could be modulated by variations in a pure tone trajectory as small as 1/24th of an octave, comparable to what has been reported in primates. In particular, late spiking after the end of a sound stimulus was more often sensitive to the sound's subtle frequency variation compared with spiking during the sound. Such "Off" responses in the adult A2, but not those in core auditory cortex, were plastic in a way that may enhance the representation of a newly acquired, behaviorally relevant sound category. We illustrate this with the maternal mouse paradigm for natural vocalization learning. By using an ethologically inspired paradigm to drive auditory responses in higher-order neurons, our results demonstrate that mouse auditory cortex can track fine frequency changes, which allows A2 Off responses in particular to better respond to pitch trajectories that distinguish behaviorally relevant, natural sound categories. SIGNIFICANCE STATEMENT A whistle's pitch conveys meaning to its listener, as when dogs learn that distinct pitch trajectories whistled by their owner differentiate specific commands. Many species use pitch trajectories in their own vocalizations to distinguish sound categories, as in tonal languages such as Mandarin. How and where auditory neural activity encodes these pitch trajectories as their meaning is learned is not well understood, especially for short-duration sounds. We studied this in mice, where infants use ultrasonic whistles to communicate to adults. We found that late neural firing after a sound ends can be tuned to how the pitch changes in time, and that this response in a secondary auditory cortical field changes with experience to acquire a pitch change's meaning.
22
Zuk NJ, Teoh ES, Lalor EC. EEG-based classification of natural sounds reveals specialized responses to speech and music. Neuroimage 2020; 210:116558. [DOI: 10.1016/j.neuroimage.2020.116558] Open
23
Bigelow J, Malone B. Extracellular voltage thresholds for maximizing information extraction in primate auditory cortex: implications for a brain computer interface. J Neural Eng 2020; 18. [PMID: 32126540] [DOI: 10.1088/1741-2552/ab7c19]
Abstract
OBJECTIVE: Research by Oby et al (2016) demonstrated that the optimal threshold for extracting information from visual and motor cortices may differ from the optimal threshold for identifying single neurons via spike sorting methods. The optimal threshold for extracting information from auditory cortex has yet to be identified, nor has the optimal temporal scale for representing auditory cortical activity. Here, we describe a procedure to jointly optimize the extracellular threshold and bin size with respect to the decoding accuracy achieved by a linear classifier for a diverse set of auditory stimuli.
APPROACH: We used linear multichannel arrays to record extracellular neural activity from the auditory cortex of awake squirrel monkeys passively listening to both simple and complex sounds. We executed a grid search of the coordinate space defined by the voltage threshold (in units of standard deviation) and the bin size (in units of milliseconds), and computed decoding accuracy at each point.
MAIN RESULTS: The optimal threshold for information extraction was consistently near two standard deviations below the voltage trace mean, which falls significantly below the range of three to five standard deviations typically used as inputs to spike sorting algorithms in basic research and in brain-computer interface (BCI) applications. The optimal bin width was minimized at the optimal voltage threshold, particularly for acoustic stimuli dominated by temporally dynamic features, indicating that permissive thresholding enables readout of cortical responses with temporal precision on the order of a few milliseconds.
SIGNIFICANCE: The improvements in decoding accuracy we observed for optimal readout parameters suggest that standard thresholding methods substantially underestimate the information present in auditory cortical spiking patterns. The fact that optimal thresholds were relatively low indicates that local populations of cortical neurons exhibit high temporal coherence that could be leveraged in service of future auditory BCI applications.
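The joint optimization described above can be sketched end-to-end on synthetic data. The voltage model, stimulus set (two AM rates), and nearest-centroid classifier below are invented stand-ins, not the authors' recordings or decoder; the point is only the scaffold: extract threshold crossings at several SD multiples, bin them at several resolutions, and score each (threshold, bin size) pair by decoding accuracy.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
FS = 10_000  # Hz, hypothetical sampling rate

def trial(stim, dur=0.2):
    """Synthetic voltage trace: unsorted multiunit deflections riding on noise."""
    t = np.arange(0, dur, 1 / FS)
    rate = 200 * (1 + np.cos(2 * np.pi * (8 if stim else 16) * t))  # two AM "stimuli"
    v = rng.normal(0, 1, t.size)
    v[rng.random(t.size) < rate / FS] -= 2.5  # small negative spike deflections
    return v

def extract_counts(v, threshold_sd, bin_ms):
    """Count negative threshold crossings in bins of bin_ms milliseconds."""
    thr = v.mean() - threshold_sd * v.std()
    crossings = (v[1:] < thr) & (v[:-1] >= thr)
    width = int(bin_ms * FS / 1000)
    n_bins = crossings.size // width
    return crossings[: n_bins * width].reshape(n_bins, width).sum(axis=1)

def accuracy(threshold_sd, bin_ms, n_train=20, n_test=20):
    """Nearest-centroid decoding accuracy for this (threshold, bin size) pair."""
    feats = {s: [extract_counts(trial(s), threshold_sd, bin_ms)
                 for _ in range(n_train + n_test)] for s in (0, 1)}
    cent = {s: np.mean(feats[s][:n_train], axis=0) for s in (0, 1)}
    tests = [(s, x) for s in (0, 1) for x in feats[s][n_train:]]
    hits = sum(min((0, 1), key=lambda c: np.linalg.norm(x - cent[c])) == s
               for s, x in tests)
    return hits / len(tests)

grid = {(thr, b): accuracy(thr, b) for thr, b in product([2, 3, 4], [5, 20])}
best = max(grid, key=grid.get)
print("best (threshold SD, bin ms):", best, "accuracy:", grid[best])
```

In this toy setting a permissive threshold tends to win because the small deflections carry stimulus-locked timing that stricter thresholds discard, mirroring the qualitative result reported above.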
Affiliation(s)
- James Bigelow: OHNS, University of California System, San Francisco, California, United States
- Brian Malone: OHNS, University of California San Francisco, 675 Nelson Rising Lane (Room 535), San Francisco, California 94158, United States

24
Ewert SD, Paraouty N, Lorenzi C. A two-path model of auditory modulation detection using temporal fine structure and envelope cues. Eur J Neurosci 2020; 51:1265-1278. [DOI: 10.1111/ejn.13846]
Affiliation(s)
- Stephan D. Ewert: Medizinische Physik and Cluster of Excellence Hearing4All, Universität Oldenburg, 26111 Oldenburg, Germany
- Nihaad Paraouty: Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, CNRS, PSL Research University, Paris, France
- Christian Lorenzi: Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, CNRS, PSL Research University, Paris, France

25
Zulfiqar I, Moerel M, Formisano E. Spectro-Temporal Processing in a Two-Stream Computational Model of Auditory Cortex. Front Comput Neurosci 2020; 13:95. [PMID: 32038212] [PMCID: PMC6987265] [DOI: 10.3389/fncom.2019.00095] Open
Abstract
Neural processing of sounds in the dorsal and ventral streams of the (human) auditory cortex is optimized for analyzing fine-grained temporal and spectral information, respectively. Here we use a Wilson and Cowan firing-rate modeling framework to simulate spectro-temporal processing of sounds in these auditory streams and to investigate the link between neural population activity and behavioral results of psychoacoustic experiments. The proposed model consisted of two core areas (A1 and R, representing primary areas) and two belt areas (Slow and Fast, representing rostral and caudal processing, respectively), differing in their spectral and temporal response properties. First, we simulated the responses to amplitude-modulated (AM) noise and tones. In agreement with electrophysiological results, we observed an area-dependent transition from a temporal (synchronization) code to a rate code when moving from low to high modulation rates. Simulated neural responses in an amplitude modulation detection task suggested that thresholds derived from population responses in core areas closely resembled those of psychoacoustic experiments in human listeners. For tones, simulated modulation threshold functions were found to depend on the carrier frequency. Second, we simulated the responses to complex tones with missing fundamental stimuli and found that synchronization of responses in the Fast area accurately encoded pitch, with the strength of synchronization depending on the number and order of harmonic components. Finally, using speech stimuli, we showed that the spectral and temporal structure of the speech was reflected in parallel by the modeled areas. The analyses highlighted that the Slow stream encoded with high spectral precision those aspects of the speech signal characterized by slow temporal changes (e.g., prosody), while the Fast stream encoded primarily the faster changes (e.g., phonemes, consonants, temporal pitch). Interestingly, the pitch of a speaker was encoded both spatially (i.e., tonotopically) in the Slow area and temporally in the Fast area. Overall, the simulations showed that the model is valuable for generating hypotheses on how the different cortical areas/streams may contribute to behaviorally relevant aspects of auditory processing. The model can be used in combination with physiological models of neurovascular coupling to generate predictions for human functional MRI experiments.
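The firing-rate dynamics referenced in this abstract can be illustrated with a minimal Wilson-Cowan-style sketch. The parameters, connection weights, and the peak-to-trough "following" measure below are hypothetical choices for illustration, not those of the Zulfiqar et al. model: a single excitatory/inhibitory pair low-pass filters an AM envelope, so it follows low modulation rates far better than high ones, mirroring the synchronization-to-rate transition the abstract describes.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def wilson_cowan_response(mod_rate_hz, tau=0.02, dt=1e-4, dur=2.0):
    """Euler-integrate one excitatory (E) / inhibitory (I) pair driven by an
    AM envelope; returns the time axis and the E-rate trace."""
    t = np.arange(0.0, dur, dt)
    drive = 1.0 + np.cos(2.0 * np.pi * mod_rate_hz * t)  # AM envelope, 0..2
    E = I = 0.0
    e_trace = np.empty_like(t)
    for k, s in enumerate(drive):
        dE = (-E + sigmoid(1.5 * E - 1.0 * I + s - 2.0)) / tau
        dI = (-I + sigmoid(1.0 * E - 0.5 * I - 1.0)) / tau
        E += dt * dE
        I += dt * dI
        e_trace[k] = E
    return t, e_trace

def envelope_following(t, e_trace, skip=0.5):
    """Peak-to-trough depth of the steady-state E rate: a crude proxy for how
    strongly the population synchronizes to the modulation envelope."""
    steady = e_trace[t > skip]
    return float(steady.max() - steady.min())

# A unit with a 20 ms time constant follows a 4 Hz envelope much better
# than a 60 Hz one -- the temporal-to-rate transition described above.
slow = envelope_following(*wilson_cowan_response(4.0))
fast = envelope_following(*wilson_cowan_response(60.0))
print(slow > fast)  # True
```

Slower effective time constants shift this cutoff downward, which is the knob the two-stream model turns between its Slow and Fast areas.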
Affiliation(s)
- Isma Zulfiqar
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands
- Michelle Moerel
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center, Maastricht, Netherlands
- Elia Formisano
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center, Maastricht, Netherlands
26
Elie JE, Theunissen FE. Invariant neural responses for sensory categories revealed by the time-varying information for communication calls. PLoS Comput Biol 2019; 15:e1006698. [PMID: 31557151 PMCID: PMC6762074 DOI: 10.1371/journal.pcbi.1006698]
Abstract
Although information theoretic approaches have been used extensively in the analysis of the neural code, they have yet to be used to describe how information is accumulated in time while sensory systems are categorizing dynamic sensory stimuli such as speech sounds or visual objects. Here, we present a novel method to estimate the cumulative information for stimuli or categories. We further define a time-varying categorical information index that, by comparing the information obtained for stimuli versus categories of these same stimuli, quantifies invariant neural representations. We use these methods to investigate the dynamic properties of avian cortical auditory neurons recorded in zebra finches that were listening to a large set of call stimuli sampled from the complete vocal repertoire of this species. We found that the time-varying rates carry five times more information than the mean firing rates, even in the first 100 ms. We also found that cumulative information has slow time constants (100–600 ms) relative to the typical integration time of single neurons, reflecting the fact that the behaviorally informative features of auditory objects are time-varying sound patterns. When we correlated firing rates and information values, we found that average information correlates with average firing rate, but that the higher rates found at the onset response yielded information values similar to those of the lower rates found in the sustained response: the onset and sustained responses of avian cortical auditory neurons provide similar levels of independent information about call identity and call-type. Finally, our information measures allowed us to rigorously define categorical neurons; these categorical neurons show a high degree of invariance for vocalizations within a call-type. Peak invariance is found around 150 ms after stimulus onset. Surprisingly, call-type-invariant neurons were found in both primary and secondary avian auditory areas.
Just as the recognition of faces requires neural representations that are invariant to scale and rotation, the recognition of behaviorally relevant auditory objects, such as spoken words, requires neural representations that are invariant to the speaker uttering the word and to his or her location. Here, we used information theory to investigate the time course of the neural representation of bird communication calls and of behaviorally relevant categories of these same calls: the call-types of the bird's repertoire. We found that neurons in both the primary and secondary avian auditory cortex exhibit invariant responses to call renditions within a call-type, suggestive of a potential role for extracting the meaning of these communication calls. We also found that time plays an important role: first, neural responses carry significantly more information when represented by temporal patterns calculated at the small time scale of 10 ms than when measured as average rates and, second, this information accumulates in a non-redundant fashion up to long integration times of 600 ms. This rich temporal neural representation is matched to the temporal richness found in the communication calls of this species.
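The stimulus-versus-category information comparison described here can be illustrated with a toy discrete example. The stimuli, responses, and plug-in estimator below are hypothetical and far simpler than the paper's time-varying estimator: a cell whose response depends only on call-type carries as much information about the category as about the individual stimulus, which is the signature of categorical invariance.

```python
import math
from collections import Counter

def mutual_information_bits(pairs):
    """I(X;Y) in bits estimated from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# Four stimuli grouped into two call-type categories ("A", "B"); the cell's
# response depends only on the category, i.e., it is category-invariant.
stimuli = ["A1", "A2", "B1", "B2"] * 25
categories = [s[0] for s in stimuli]
responses = ["high" if c == "A" else "low" for c in categories]

info_stimulus = mutual_information_bits(list(zip(stimuli, responses)))
info_category = mutual_information_bits(list(zip(categories, responses)))
print(info_category, info_stimulus)  # 1.0 1.0 -> categorical index of 1
```

A non-invariant cell that distinguished all four stimuli would instead carry 2 bits about the stimulus but still only 1 bit about the category, driving the index below 1.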
Affiliation(s)
- Julie E. Elie
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, United States of America
- Department of Bioengineering, University of California Berkeley, Berkeley, California, United States of America
- Frédéric E. Theunissen
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, United States of America
- Department of Psychology, University of California Berkeley, Berkeley, California, United States of America
27
Liu X, Wei F, Cheng Y, Zhang Y, Jia G, Zhou J, Zhu M, Shan Y, Sun X, Yu L, Merzenich MM, Lurie DI, Zheng Q, Zhou X. Auditory Training Reverses Lead (Pb)-Toxicity-Induced Changes in Sound-Azimuth Selectivity of Cortical Neurons. Cereb Cortex 2019; 29:3294-3304. [PMID: 30137254 DOI: 10.1093/cercor/bhy199]
Abstract
Lead (Pb) causes significant adverse effects on the developing brain, resulting in cognitive and learning disabilities in children. The process by which lead produces these negative changes is largely unknown. The fact that children with these syndromes also show deficits in central auditory processing, however, suggests a speculative but disturbing link between lead exposure, impaired auditory processing, and behavioral dysfunction. Here we studied in rats the changes in cortical spatial tuning induced by early lead exposure and their potential restoration to normal by auditory training. We found that animals exposed to lead early in life displayed significant behavioral impairments compared with naïve controls when performing a sound-azimuth discrimination task. Lead exposure also degraded the sound-azimuth selectivity of neurons in the primary auditory cortex. Subsequent sound-azimuth discrimination training, however, restored the lead-degraded cortical azimuth selectivity to nearly normal. This reversal of cortical spatial fidelity was paralleled by changes in cortical expression of certain excitatory and inhibitory neurotransmitter receptor subunits. These results in a rodent model demonstrate the persistent neurotoxic effects of early lead exposure on behavioral and cortical neuronal processing of spatial information of sound. They also indicate that attention-demanding auditory training may remediate lead-induced cortical neurological deficits even after these deficits have occurred.
Affiliation(s)
- Xia Liu
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China
- Fanfan Wei
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China
- Yuan Cheng
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science, New York University-Shanghai, Shanghai, China
- Yifan Zhang
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science, New York University-Shanghai, Shanghai, China
- Guoqiang Jia
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science, New York University-Shanghai, Shanghai, China
- Jie Zhou
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science, New York University-Shanghai, Shanghai, China
- Min Zhu
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science, New York University-Shanghai, Shanghai, China
- Ye Shan
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China
- Xinde Sun
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China
- Liping Yu
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China
- Diana I Lurie
- Center for Structural and Functional Neuroscience, Center for Environmental Health Sciences, Department of Biomedical & Pharmaceutical Sciences, College of Health Professions and Biomedical Sciences, University of Montana, Missoula, MT, USA
- Qingyin Zheng
- Transformative Otology and Neuroscience Center, Binzhou Medical University, Yantai, China
- Xiaoming Zhou
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science, New York University-Shanghai, Shanghai, China
28
Koumura T, Terashima H, Furukawa S. Cascaded Tuning to Amplitude Modulation for Natural Sound Recognition. J Neurosci 2019; 39:5517-5533. [PMID: 31092586 PMCID: PMC6616280 DOI: 10.1523/jneurosci.2914-18.2019]
Abstract
The auditory system converts the physical properties of a sound waveform to neural activities and processes them for recognition. During the process, the tuning to amplitude modulation (AM) is successively transformed by a cascade of brain regions. To test the functional significance of the AM tuning, we conducted single-unit recording in a deep neural network (DNN) trained for natural sound recognition. We calculated the AM representation in the DNN and quantitatively compared it with those reported in previous neurophysiological studies. We found that an auditory-system-like AM tuning emerges in the optimized DNN. Better-recognizing models showed greater similarity to the auditory system. We isolated the factors forming the AM representation in the different brain regions. Because the model was not designed to reproduce any anatomical or physiological properties of the auditory system other than the cascading architecture, the observed similarity suggests that the AM tuning in the auditory system might also be an emergent property for natural sound recognition during evolution and development.

SIGNIFICANCE STATEMENT This study suggests that neural tuning to amplitude modulation may be a consequence of the auditory system evolving for natural sound recognition. We modeled the function of the entire auditory system; that is, recognizing sounds from raw waveforms with as few anatomical or physiological assumptions as possible. We analyzed the model using single-unit recording, which enabled a fair comparison with neurophysiological data with as few methodological biases as possible. Interestingly, our results imply that frequency decomposition in the inner ear might not be necessary for processing amplitude modulation. This implication could not have been obtained if we had used a model that assumes frequency decomposition.
Affiliation(s)
- Takuya Koumura
- NTT Communication Science Laboratories, Atsugi, Kanagawa, Japan 243-0198
- Hiroki Terashima
- NTT Communication Science Laboratories, Atsugi, Kanagawa, Japan 243-0198
- Shigeto Furukawa
- NTT Communication Science Laboratories, Atsugi, Kanagawa, Japan 243-0198
29
Distinct processing of tone offset in two primary auditory cortices. Sci Rep 2019; 9:9581. [PMID: 31270350 PMCID: PMC6610078 DOI: 10.1038/s41598-019-45952-z]
Abstract
In the rodent auditory system, the primary cortex is subdivided into two regions, both receiving direct inputs from the auditory thalamus: the primary auditory cortex (A1) and the anterior auditory field (AAF). Although neurons in the two regions display different response properties, such as response latency, firing threshold, or tuning bandwidth, it is still not clear whether they process sound in distinct ways. Using in vivo electrophysiological recordings in the mouse auditory cortex, we found that AAF neurons have significantly stronger responses to tone offset than A1 neurons. AAF neurons also display faster and more transient responses than A1 neurons. Additionally, offset responses in AAF, unlike in A1, increase with sound duration. Local field potential (LFP) and laminar analyses suggest that the differences in sound responses between these two primary cortices are of both subcortical and intracortical origin. These results emphasize the potentially critical role of AAF for temporal processing. Our study reveals distinct roles of the two primary auditory cortices in tone processing and highlights the complexity of sound encoding at the cortical level.
30
Convento S, Wegner-Clemens KA, Yau JM. Reciprocal Interactions Between Audition and Touch in Flutter Frequency Perception. Multisens Res 2019; 32:67-85. [PMID: 31059492 DOI: 10.1163/22134808-20181334]
Abstract
In both audition and touch, sensory cues comprising repeating events are perceived either as a continuous signal or as a stream of temporally discrete events (flutter), depending on the events' repetition rate. At high repetition rates (>100 Hz), auditory and tactile cues interact reciprocally in pitch processing. The frequency of a cue experienced in one modality systematically biases the perceived frequency of a cue experienced in the other modality. Here, we tested whether audition and touch also interact in the processing of low-frequency stimulation. We also tested whether multisensory interactions occurred if the stimulation in one modality comprised click trains and the stimulation in the other modality comprised amplitude-modulated signals. We found that auditory cues bias touch and tactile cues bias audition on a flutter discrimination task. Even though participants were instructed to attend to a single sensory modality and ignore the other cue, the flutter rate in the attended modality is perceived to be similar to that of the distractor modality. Moreover, we observed similar interaction patterns regardless of stimulus type and whether the same stimulus types were experienced by both senses. Combined with earlier studies, our results suggest that the nervous system extracts and combines temporal rate information from multisensory environmental signals, regardless of stimulus type, in both the low and high temporal frequency domains. This function likely reflects the importance of temporal frequency as a fundamental feature of our multisensory experience.
Affiliation(s)
- Silvia Convento
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
- Kira A Wegner-Clemens
- Department of Neurosurgery, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
- Jeffrey M Yau
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
31
Venezia JH, Thurman SM, Richards VM, Hickok G. Hierarchy of speech-driven spectrotemporal receptive fields in human auditory cortex. Neuroimage 2018; 186:647-666. [PMID: 30500424 DOI: 10.1016/j.neuroimage.2018.11.049]
Abstract
Existing data indicate that cortical speech processing is hierarchically organized. Numerous studies have shown that early auditory areas encode fine acoustic details while later areas encode abstracted speech patterns. However, it remains unclear precisely what speech information is encoded across these hierarchical levels. Estimation of speech-driven spectrotemporal receptive fields (STRFs) provides a means to explore cortical speech processing in terms of acoustic or linguistic information associated with characteristic spectrotemporal patterns. Here, we estimate STRFs from cortical responses to continuous speech in fMRI. Using a novel approach based on filtering randomly selected spectrotemporal modulations (STMs) from aurally presented sentences, STRFs were estimated for a group of listeners and categorized using a data-driven clustering algorithm. 'Behavioral STRFs' highlighting STMs crucial for speech recognition were derived from intelligibility judgments. Clustering revealed that STRFs in the supratemporal plane represented a broad range of STMs, while STRFs in the lateral temporal lobe represented circumscribed STM patterns important to intelligibility. Detailed analysis recovered a bilateral organization with posterior-lateral regions preferentially processing STMs associated with phonological information and anterior-lateral regions preferentially processing STMs associated with word- and phrase-level information. Regions in lateral Heschl's gyrus preferentially processed STMs associated with vocalic information (pitch).
Affiliation(s)
- Jonathan H Venezia
- VA Loma Linda Healthcare System, Loma Linda, CA, USA; Dept. of Otolaryngology, School of Medicine, Loma Linda University, Loma Linda, CA, USA
- Virginia M Richards
- Depts. of Cognitive Sciences and Language Science, University of California, Irvine, Irvine, CA, USA
- Gregory Hickok
- Depts. of Cognitive Sciences and Language Science, University of California, Irvine, Irvine, CA, USA
32
Erb J, Armendariz M, De Martino F, Goebel R, Vanduffel W, Formisano E. Homology and Specificity of Natural Sound-Encoding in Human and Monkey Auditory Cortex. Cereb Cortex 2018; 29:3636-3650. [DOI: 10.1093/cercor/bhy243]
Abstract
Understanding homologies and differences in auditory cortical processing in human and nonhuman primates is an essential step in elucidating the neurobiology of speech and language. Using fMRI responses to natural sounds, we investigated the representation of multiple acoustic features in auditory cortex of awake macaques and humans. Comparative analyses revealed homologous large-scale topographies not only for frequency but also for temporal and spectral modulations. In both species, posterior regions preferentially encoded relatively fast temporal and coarse spectral information, whereas anterior regions encoded slow temporal and fine spectral modulations. Conversely, we observed a striking interspecies difference in cortical sensitivity to temporal modulations: while decoding from macaque auditory cortex was most accurate at fast rates (>30 Hz), humans showed the highest sensitivity at ~3 Hz, a rate relevant for speech analysis. These findings suggest that characteristic tuning of human auditory cortex to slow temporal modulations is unique and may have emerged as a critical step in the evolution of speech and language.
Affiliation(s)
- Julia Erb
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), MD Maastricht, The Netherlands
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), MD Maastricht, The Netherlands
- Rainer Goebel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), MD Maastricht, The Netherlands
- Wim Vanduffel
- Laboratorium voor Neuro-en Psychofysiologie, KU Leuven, Leuven, Belgium
- MGH Martinos Center, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Leuven Brain Institute, Leuven, Belgium
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), MD Maastricht, The Netherlands
- Maastricht Center for Systems Biology (MaCSBio), MD Maastricht, The Netherlands
33
Morrison JA, Valdizón-Rodríguez R, Goldreich D, Faure PA. Tuning for rate and duration of frequency-modulated sweeps in the mammalian inferior colliculus. J Neurophysiol 2018; 120:985-997. [DOI: 10.1152/jn.00065.2018]
Abstract
Responses of auditory duration-tuned neurons (DTNs) are selective for stimulus duration. We used single-unit extracellular recording to investigate how the inferior colliculus (IC) encodes frequency-modulated (FM) sweeps in the big brown bat. It was unclear whether the responses of so-called “FM DTNs” encode signal duration, like classic pure-tone DTNs, or the FM sweep rate. Most FM cells had spiking responses selective for downward FM sweeps. We presented cells with linear FM sweeps whose center frequency (CEF) was set to the best excitatory frequency and whose bandwidth (BW) maximized the spike count. With these baseline parameters, we stimulated cells with linear FM sweeps randomly varied in duration to measure the range of excitatory FM durations and/or sweep rates. To separate FM rate and FM duration tuning, we doubled (and halved) the BW of the baseline FM stimulus while keeping the CEF constant and then recollected each cell’s FM duration tuning curve. If the cell was tuned to FM duration, then the best duration (or range of excitatory durations) should remain constant despite changes in signal BW; however, if the cell was tuned to the FM rate, then the best duration should covary with the same FM rate at each BW. A Bayesian model comparison revealed that the majority of neurons were tuned to the FM sweep rate, although a few cells showed tuning for FM duration. We conclude that the dominant parameter for temporal tuning of FM neurons in the IC is FM sweep rate and not FM duration.

NEW & NOTEWORTHY Reports of inferior colliculus neurons with response selectivity to the duration of frequency-modulated (FM) stimuli exist, yet it remains unclear whether such cells are tuned to the FM duration or the FM sweep rate. To disambiguate these hypotheses, we presented neurons with variable-duration FM signals that were systematically manipulated in bandwidth. A Bayesian model comparison revealed that most temporally selective midbrain cells were tuned to the FM sweep rate and not the FM duration.
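The rate-versus-duration disambiguation logic in this abstract lends itself to a small sketch. The numbers and the Gaussian likelihood below are illustrative assumptions, not the authors' Bayesian model: we score the two competing predictions, best duration constant across bandwidths (duration tuning) versus best duration proportional to bandwidth at a fixed sweep rate (rate tuning), against simulated best-duration measurements.

```python
import math

def gaussian_log_likelihood(observed, predicted, sigma=1.0):
    """Log-likelihood of observed best durations under a prediction,
    assuming independent Gaussian measurement noise."""
    return sum(
        -0.5 * ((o - p) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))
        for o, p in zip(observed, predicted)
    )

# Best FM durations (ms) "measured" at baseline, half, and double bandwidth
# for a simulated rate-tuned cell: duration grows in proportion to BW, so the
# preferred sweep rate (BW / duration) stays constant.
bandwidths = [1.0, 0.5, 2.0]   # relative to baseline BW
observed = [10.2, 4.9, 20.3]   # ms

base = observed[0]
rate_pred = [base * bw for bw in bandwidths]   # rate tuning: duration ~ BW
duration_pred = [base] * len(bandwidths)       # duration tuning: constant

ll_rate = gaussian_log_likelihood(observed, rate_pred)
ll_duration = gaussian_log_likelihood(observed, duration_pred)
print("rate-tuned" if ll_rate > ll_duration else "duration-tuned")  # rate-tuned
```

With equal-complexity models and flat priors, comparing these log-likelihoods is equivalent to comparing (unnormalized) posterior odds, which is the spirit of the comparison the abstract describes.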
Affiliation(s)
- James A. Morrison
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
- Daniel Goldreich
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
- Paul A. Faure
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
34
Yao JD, Sanes DH. Developmental deprivation-induced perceptual and cortical processing deficits in awake-behaving animals. eLife 2018; 7:33891. [PMID: 29873632 PMCID: PMC6005681 DOI: 10.7554/elife.33891]
Abstract
Sensory deprivation during development induces lifelong changes to central nervous system function that are associated with perceptual impairments. However, the relationship between neural and behavioral deficits is uncertain due to a lack of simultaneous measurements during task performance. Therefore, we telemetrically recorded from auditory cortex neurons in gerbils reared with developmental conductive hearing loss (HL) as they performed an auditory task requiring detection of rapid fluctuations in amplitude. These data were compared to a measure of auditory brainstem temporal processing from each animal. We found that developmental HL diminished behavioral performance but did not alter brainstem temporal processing. However, the simultaneous assessment of neural and behavioral processing revealed that perceptual deficits were associated with a degraded cortical population code that could be explained by greater trial-to-trial response variability. Our findings suggest that the perceptual limitations that attend early hearing loss are best explained by an encoding deficit in auditory cortex.
Affiliation(s)
- Justin D Yao
- Center for Neural Science, New York University, New York, United States
- Dan H Sanes
- Center for Neural Science, New York University, New York, United States; Department of Psychology, New York University, New York, United States; Department of Biology, New York University, New York, United States; Neuroscience Institute, NYU Langone Medical Center, New York, United States
35
Martin LM, García-Rosales F, Beetz MJ, Hechavarría JC. Processing of temporally patterned sounds in the auditory cortex of Seba's short-tailed bat, Carollia perspicillata. Eur J Neurosci 2018; 46:2365-2379. [PMID: 28921742 DOI: 10.1111/ejn.13702]
Abstract
This article presents a characterization of cortical responses to artificial and natural temporally patterned sounds in the bat species Carollia perspicillata, a species that produces vocalizations at rates above 50 Hz. Multi-unit activity was recorded in three different experiments. In the first experiment, amplitude-modulated (AM) pure tones were used as stimuli to drive auditory cortex (AC) units. AC units of both ketamine-anesthetized and awake bats could lock their spikes to every cycle of the stimulus modulation envelope, but only if the modulation frequency was below 22 Hz. In the second experiment, two identical communication syllables were presented at variable intervals. Suppressed responses to the lagging syllable were observed, unless the second syllable followed the first one with a delay of at least 80 ms (i.e., 12.5 Hz repetition rate). In the third experiment, natural distress vocalization sequences were used as stimuli to drive AC units. Distress sequences produced by C. perspicillata contain bouts of syllables repeated at intervals of ~60 ms (16 Hz). Within each bout, syllables are repeated at intervals as short as 14 ms (~71 Hz). Cortical units could follow the slow temporal modulation flow produced by the occurrence of multisyllabic bouts, but not the fast acoustic flow created by rapid syllable repetition within the bouts. Taken together, our results indicate that even in fast vocalizing animals, such as bats, cortical neurons can only track the temporal structure of acoustic streams modulated at frequencies lower than 22 Hz.
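Spike locking to a modulation envelope, as reported in this abstract, is commonly quantified by vector strength (the mean resultant length of spike phases). The sketch below uses made-up spike times, not data from this study, to show the measure separating a locked from an unlocked response.

```python
import math
import random

def vector_strength(spike_times, mod_rate_hz):
    """Mean resultant length of spike phases relative to the AM cycle
    (1 = perfect locking, ~0 = no locking)."""
    phases = [2.0 * math.pi * mod_rate_hz * t for t in spike_times]
    n = len(phases)
    c = sum(math.cos(p) for p in phases) / n
    s = sum(math.sin(p) for p in phases) / n
    return math.hypot(c, s)

mod_rate = 10.0                                   # Hz modulation
locked = [k / mod_rate for k in range(100)]       # one spike per cycle, fixed phase
rng = random.Random(0)
unlocked = [rng.uniform(0.0, 10.0) for _ in range(100)]  # spikes at random times

print(vector_strength(locked, mod_rate) > 0.9)    # True: strong envelope locking
print(vector_strength(unlocked, mod_rate) < 0.3)  # random spikes give near-zero VS
```

A cutoff like the 22 Hz limit reported here would appear as vector strength collapsing toward zero once the modulation rate exceeds what the cortical units can follow.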
Affiliation(s)
- Lisa M Martin
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Straße 13, 60438, Frankfurt/Main, Germany
- Francisco García-Rosales
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Straße 13, 60438, Frankfurt/Main, Germany
- M Jerome Beetz
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Straße 13, 60438, Frankfurt/Main, Germany
- Julio C Hechavarría
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Straße 13, 60438, Frankfurt/Main, Germany
Collapse
|
36
Malek S, Sperschneider K. Aftereffects of Spectrally Similar and Dissimilar Spectral Motion Adaptors in the Tritone Paradox. Front Psychol 2018; 9:677. [PMID: 29867653 PMCID: PMC5953344 DOI: 10.3389/fpsyg.2018.00677] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2017] [Accepted: 04/19/2018] [Indexed: 11/13/2022] Open
Affiliation(s)
- Stephanie Malek
- Psychology Department, Martin Luther University Halle-Wittenberg, Halle, Germany
- *Correspondence: Stephanie Malek
37
Felix RA, Gourévitch B, Portfors CV. Subcortical pathways: Towards a better understanding of auditory disorders. Hear Res 2018; 362:48-60. [PMID: 29395615 PMCID: PMC5911198 DOI: 10.1016/j.heares.2018.01.008] [Citation(s) in RCA: 37] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/28/2017] [Revised: 12/11/2017] [Accepted: 01/16/2018] [Indexed: 01/13/2023]
Abstract
Hearing loss is a significant problem that affects at least 15% of the population. This percentage, however, is likely significantly higher because of a variety of auditory disorders that are not identifiable through traditional tests of peripheral hearing ability. In these disorders, individuals have difficulty understanding speech, particularly in noisy environments, even though the sounds are loud enough to hear. The underlying mechanisms leading to such deficits are not well understood. To enable the development of suitable treatments to alleviate or prevent such disorders, the affected processing pathways must be identified. Historically, mechanisms underlying speech processing have been thought to be a property of the auditory cortex and thus the study of auditory disorders has largely focused on cortical impairments and/or cognitive processes. As we review here, however, there is strong evidence to suggest that, in fact, deficits in subcortical pathways play a significant role in auditory disorders. In this review, we highlight the role of the auditory brainstem and midbrain in processing complex sounds and discuss how deficits in these regions may contribute to auditory dysfunction. We discuss current research with animal models of human hearing and then consider human studies that implicate impairments in subcortical processing that may contribute to auditory disorders.
Affiliation(s)
- Richard A Felix
- School of Biological Sciences and Integrative Physiology and Neuroscience, Washington State University, Vancouver, WA, USA
- Boris Gourévitch
- Unité de Génétique et Physiologie de l'Audition, UMRS 1120 INSERM, Institut Pasteur, Université Pierre et Marie Curie, F-75015, Paris, France; CNRS, France
- Christine V Portfors
- School of Biological Sciences and Integrative Physiology and Neuroscience, Washington State University, Vancouver, WA, USA.
38
Hoglen NEG, Larimer P, Phillips EAK, Malone BJ, Hasenstaub AR. Amplitude modulation coding in awake mice and squirrel monkeys. J Neurophysiol 2018; 119:1753-1766. [PMID: 29364073 DOI: 10.1152/jn.00101.2017] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Both mice and primates are used to model the human auditory system. The primate order possesses unique cortical specializations that govern auditory processing. Given the power of molecular and genetic tools available in the mouse model, it is essential to understand the similarities and differences in auditory cortical processing between mice and primates. To address this issue, we directly compared temporal encoding properties of neurons in the auditory cortex of awake mice and awake squirrel monkeys (SQMs). Stimuli were drawn from a sinusoidal amplitude modulation (SAM) paradigm, which has been used previously both to characterize temporal precision and to model the envelopes of natural sounds. Neural responses were analyzed with linear template-based decoders. In both species, spike timing information supported better modulation frequency discrimination than rate information, and multiunit responses generally supported more accurate discrimination than single-unit responses from the same site. However, cortical responses in SQMs supported better discrimination overall, reflecting superior temporal precision and greater rate modulation relative to the spontaneous baseline and suggesting that spiking activity in mouse cortex was less strictly regimented by incoming acoustic information. The quantitative differences we observed between SQM and mouse cortex support the idea that SQMs offer advantages for modeling precise responses to fast envelope dynamics relevant to human auditory processing. Nevertheless, our results indicate that cortical temporal processing is qualitatively similar in mice and SQMs and thus recommend the mouse model for mechanistic questions, such as development and circuit function, where its substantial methodological advantages can be exploited. NEW & NOTEWORTHY To understand the advantages of different model organisms, it is necessary to directly compare sensory responses across species. Contrasting temporal processing in auditory cortex of awake squirrel monkeys and mice, with parametrically matched amplitude-modulated tone stimuli, reveals a similar role of timing information in stimulus encoding. However, disparities in response precision and strength suggest that anatomical and biophysical differences between squirrel monkeys and mice produce quantitative but not qualitative differences in processing strategy.
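As an illustrative aside, the sinusoidal amplitude modulation (SAM) stimuli described in this abstract are simple to synthesize: a slow sinusoidal envelope multiplies a fast tone carrier. The sketch below (Python/NumPy; the function name and parameter values are illustrative assumptions, not taken from the study) shows the construction:

```python
import numpy as np

def sam_tone(fc, fm, depth, dur, fs):
    """Sinusoidally amplitude-modulated (SAM) tone.

    fc: carrier frequency (Hz); fm: modulation frequency (Hz);
    depth: modulation depth in [0, 1]; dur: duration (s); fs: sample rate (Hz).
    """
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2.0 * np.pi * fm * t)  # slow envelope
    carrier = np.sin(2.0 * np.pi * fc * t)                 # fast carrier
    return envelope * carrier

# e.g., a 1 kHz carrier fully modulated at 16 Hz for 0.5 s
x = sam_tone(1000.0, 16.0, 1.0, 0.5, 44100)
```

Varying fm while holding the carrier fixed is the parametric manipulation that the template-based decoders were asked to discriminate.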
Affiliation(s)
- Nerissa E G Hoglen
- Center for Integrative Neuroscience; Department of Otolaryngology-Head and Neck Surgery; Coleman Memorial Laboratory; Kavli Institute for Fundamental Neuroscience; Department of Psychiatry; and Neuroscience Graduate Program, University of California, San Francisco, California
- Phillip Larimer
- Center for Integrative Neuroscience; Coleman Memorial Laboratory; and Department of Neurology, University of California, San Francisco, California
- Elizabeth A K Phillips
- Center for Integrative Neuroscience; Department of Otolaryngology-Head and Neck Surgery; Coleman Memorial Laboratory; and Neuroscience Graduate Program, University of California, San Francisco, California
- Brian J Malone
- Department of Otolaryngology-Head and Neck Surgery; Coleman Memorial Laboratory; and Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, California
- Andrea R Hasenstaub
- Center for Integrative Neuroscience; Department of Otolaryngology-Head and Neck Surgery; Coleman Memorial Laboratory; and Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, California
39
A Decline in Response Variability Improves Neural Signal Detection during Auditory Task Performance. J Neurosci 2017; 36:11097-11106. [PMID: 27798189 DOI: 10.1523/jneurosci.1302-16.2016] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2016] [Accepted: 09/02/2016] [Indexed: 01/06/2023] Open
Abstract
The detection of a sensory stimulus arises from a significant change in neural activity, but a sensory neuron's response is rarely identical to successive presentations of the same stimulus. Large trial-to-trial variability would limit the central nervous system's ability to reliably detect a stimulus, presumably affecting perceptual performance. However, if response variability were to decrease while firing rate remained constant, then neural sensitivity could improve. Here, we asked whether engagement in an auditory detection task can modulate response variability, thereby increasing neural sensitivity. We recorded telemetrically from the core auditory cortex of gerbils, both while they engaged in an amplitude-modulation detection task and while they sat quietly listening to the identical stimuli. Using a signal detection theory framework, we found that neural sensitivity was improved during task performance, and this improvement was closely associated with a decrease in response variability. Moreover, units with the greatest change in response variability had absolute neural thresholds most closely aligned with simultaneously measured perceptual thresholds. Our findings suggest that the limitations imposed by response variability diminish during task performance, thereby improving the sensitivity of neural encoding and potentially leading to better perceptual sensitivity. SIGNIFICANCE STATEMENT The detection of a sensory stimulus arises from a significant change in neural activity. However, trial-to-trial variability of the neural response may limit perceptual performance. If the neural response to a stimulus is quite variable, then the response on a given trial could be confused with the pattern of neural activity generated when the stimulus is absent. Therefore, a neural mechanism that served to reduce response variability would allow for better stimulus detection. By recording from the cortex of freely moving animals engaged in an auditory detection task, we found that variability of the neural response becomes smaller during task performance, thereby improving neural detection thresholds.
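The signal detection theory logic here is worth making explicit: with the driven and baseline spike-count distributions held at the same means, shrinking trial-to-trial variability alone raises the sensitivity index d'. A minimal sketch (all spike counts below are invented for illustration, not data from the study):

```python
import numpy as np

def d_prime(driven, baseline):
    """Sensitivity index: mean separation scaled by the pooled spike-count SD."""
    driven = np.asarray(driven, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    pooled_sd = np.sqrt(0.5 * (driven.var(ddof=1) + baseline.var(ddof=1)))
    return (driven.mean() - baseline.mean()) / pooled_sd

# Identical mean rates (10 vs. 8 spikes), but halved trial-to-trial spread:
d_variable = d_prime([6, 14, 6, 14], [4, 12, 4, 12])  # noisy responses
d_steady = d_prime([8, 12, 8, 12], [6, 10, 6, 10])    # same means, less variance
```

Halving the spread exactly doubles d' in this toy case, mirroring the paper's link between reduced response variability and improved neural detection thresholds at constant firing rates.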
40
Zuk N, Delgutte B. Neural coding of time-varying interaural time differences and time-varying amplitude in the inferior colliculus. J Neurophysiol 2017; 118:544-563. [PMID: 28381487 DOI: 10.1152/jn.00797.2016] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2016] [Revised: 03/29/2017] [Accepted: 03/31/2017] [Indexed: 11/22/2022] Open
Abstract
Binaural cues occurring in natural environments are frequently time varying, either from the motion of a sound source or through interactions between the cues produced by multiple sources. Yet, a broad understanding of how the auditory system processes dynamic binaural cues is still lacking. In the current study, we directly compared neural responses in the inferior colliculus (IC) of unanesthetized rabbits to broadband noise with time-varying interaural time differences (ITD) with responses to noise with sinusoidal amplitude modulation (SAM) over a wide range of modulation frequencies. On the basis of prior research, we hypothesized that the IC, one of the first stages to exhibit tuning of firing rate to modulation frequency, might use a common mechanism to encode time-varying information in general. Instead, we found weaker temporal coding for dynamic ITD compared with amplitude modulation and stronger effects of adaptation for amplitude modulation. The differences in temporal coding of dynamic ITD compared with SAM at the single-neuron level could be a neural correlate of "binaural sluggishness," the inability to perceive fluctuations in time-varying binaural cues at high modulation frequencies, for which a physiological explanation has so far remained elusive. At ITD-variation frequencies of 64 Hz and above, where a temporal code was less effective, noise with a dynamic ITD could still be distinguished from noise with a constant ITD through differences in average firing rate in many neurons, suggesting a frequency-dependent tradeoff between rate and temporal coding of time-varying binaural information. NEW & NOTEWORTHY Humans use time-varying binaural cues to parse auditory scenes comprising multiple sound sources and reverberation. However, the neural mechanisms for doing so are poorly understood. Our results demonstrate a potential neural correlate for the reduced detectability of fluctuations in time-varying binaural information at high speeds, as occurs in reverberation. The results also suggest that the neural mechanisms for processing time-varying binaural and monaural cues are largely distinct.
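To make the time-varying ITD stimulus concrete: broadband noise whose ITD varies sinusoidally can be synthesized by applying a time-varying fractional delay to one ear's copy of the noise. This is a sketch only; the function name, parameter values, and the linear-interpolation shortcut are our assumptions, not the authors' stimulus code:

```python
import numpy as np

def dynamic_itd_noise(fmod, itd_peak, dur, fs, seed=0):
    """Two-channel noise with ITD(t) = itd_peak * sin(2*pi*fmod*t).

    The right channel is the left channel delayed by a time-varying amount,
    implemented here via linear interpolation (edge samples are clamped).
    """
    rng = np.random.default_rng(seed)
    n = int(dur * fs)
    t = np.arange(n) / fs
    left = rng.standard_normal(n)
    itd = itd_peak * np.sin(2.0 * np.pi * fmod * t)
    right = np.interp(t - itd, t, left)  # fractional-sample delay
    return np.stack([left, right])

# e.g., ITD swinging +/- 500 microseconds at 4 Hz
y = dynamic_itd_noise(4.0, 500e-6, 0.1, 8000)
```

With itd_peak set to 0 the two channels are identical, recovering the constant-ITD (diotic) control against which the dynamic stimulus is compared.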
Affiliation(s)
- Nathaniel Zuk
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts; and Speech and Hearing Bioscience and Technology Program, Harvard-MIT Division of Health Sciences and Technology, Cambridge, Massachusetts
- Bertrand Delgutte
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts; Speech and Hearing Bioscience and Technology Program, Harvard-MIT Division of Health Sciences and Technology, Cambridge, Massachusetts; and Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts
41
Aging affects the balance of neural entrainment and top-down neural modulation in the listening brain. Nat Commun 2017; 8:15801. [PMID: 28654081 PMCID: PMC5490185 DOI: 10.1038/ncomms15801] [Citation(s) in RCA: 58] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2016] [Accepted: 05/04/2017] [Indexed: 12/02/2022] Open
Abstract
Healthy aging is accompanied by listening difficulties, including decreased speech comprehension, that stem from an ill-understood combination of sensory and cognitive changes. Here, we use electroencephalography to demonstrate that auditory neural oscillations of older adults entrain less firmly and less flexibly to speech-paced (∼3 Hz) rhythms than those of younger adults during attentive listening. These neural entrainment effects are distinct in magnitude and origin from the neural response to sound per se. Non-entrained parieto-occipital alpha (8–12 Hz) oscillations are enhanced in young adults, but suppressed in older participants, during attentive listening. Entrained neural phase and task-induced alpha amplitude exert opposite, complementary effects on listening performance: higher alpha amplitude is associated with reduced entrainment-driven behavioural performance modulation. Thus, alpha amplitude as a task-driven, neuro-modulatory signal can counteract the behavioural corollaries of neural entrainment. Balancing these two neural strategies may present new paths for intervention in age-related listening difficulties. The changes that accompany age-related decreases in speech comprehension are not yet understood. Here, the authors show that older adults are less able to entrain to speech-paced auditory rhythms and that the behavioural consequences can be counteracted by top-down neural modulation.
42
Downer JD, Niwa M, Sutter ML. Hierarchical differences in population coding within auditory cortex. J Neurophysiol 2017; 118:717-731. [PMID: 28446588 PMCID: PMC5539454 DOI: 10.1152/jn.00899.2016] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2016] [Revised: 04/21/2017] [Accepted: 04/21/2017] [Indexed: 01/04/2023] Open
Abstract
Most models of auditory cortical (AC) population coding have focused on primary auditory cortex (A1). Thus our understanding of how neural coding for sounds progresses along the cortical hierarchy remains obscure. To illuminate this, we recorded from two AC fields: A1 and middle lateral belt (ML) of rhesus macaques. We presented amplitude-modulated (AM) noise during both passive listening and while the animals performed an AM detection task ("active" condition). In both fields, neurons exhibit monotonic AM-depth tuning, with A1 neurons mostly exhibiting increasing rate-depth functions and ML neurons approximately evenly distributed between increasing and decreasing functions. We measured noise correlation (r_noise) between simultaneously recorded neurons and found that whereas engagement decreased average r_noise in A1, engagement increased average r_noise in ML. This finding surprised us, because attentive states are commonly reported to decrease average r_noise. We analyzed the effect of r_noise on AM coding in both A1 and ML and found that whereas engagement-related shifts in r_noise in A1 enhance AM coding, r_noise shifts in ML have little effect. These results imply that the effect of r_noise differs between sensory areas, based on the distribution of tuning properties among the neurons within each population. A possible explanation of this is that higher areas need to encode nonsensory variables (e.g., attention, choice, and motor preparation), which impart common noise, thus increasing r_noise. Therefore, the hierarchical emergence of r_noise-robust population coding (e.g., as we observed in ML) enhances the ability of sensory cortex to integrate cognitive and sensory information without a loss of sensory fidelity. NEW & NOTEWORTHY Prevailing models of population coding of sensory information are based on a limited subset of neural structures. An important and under-explored question in neuroscience is how distinct areas of sensory cortex differ in their population coding strategies. In this study, we compared population coding between primary and secondary auditory cortex. Our findings demonstrate striking differences between the two areas and highlight the importance of considering the diversity of neural structures as we develop models of population coding.
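For readers unfamiliar with the metric: noise correlation is the correlation of two neurons' trial-to-trial response fluctuations after the stimulus-driven component is removed. A minimal sketch (toy spike counts, not data from the study):

```python
import numpy as np

def noise_correlation(counts_a, counts_b):
    """Pearson correlation of residual spike counts (rows = trials,
    columns = stimulus conditions), after subtracting each condition's mean."""
    res_a = counts_a - counts_a.mean(axis=0)  # remove stimulus-driven component
    res_b = counts_b - counts_b.mean(axis=0)
    return np.corrcoef(res_a.ravel(), res_b.ravel())[0, 1]

# Two neurons whose trial-to-trial fluctuations are perfectly shared:
a = np.array([[1.0, 5.0], [2.0, 6.0], [3.0, 7.0]])
b = np.array([[2.0, 1.0], [4.0, 3.0], [6.0, 5.0]])
r = noise_correlation(a, b)
```

Because the per-condition means are subtracted first, this measure is blind to the neurons' tuning (signal correlation); the perfectly shared fluctuations above give a noise correlation of 1 even though the two neurons' mean responses differ.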
Affiliation(s)
- Joshua D Downer
- Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Mamiko Niwa
- Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Mitchell L Sutter
- Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
43
Boubenec Y, Lawlor J, Górska U, Shamma S, Englitz B. Detecting changes in dynamic and complex acoustic environments. eLife 2017; 6. [PMID: 28262095 PMCID: PMC5367897 DOI: 10.7554/elife.24910] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2017] [Accepted: 03/04/2017] [Indexed: 01/28/2023] Open
Abstract
Natural sounds, such as wind or rain, are characterized by the statistical occurrence of their constituents. Despite their complexity, listeners readily detect changes in these contexts. Here, we address the neural basis of statistical decision-making using a combination of psychophysics, EEG and modelling. In a texture-based, change-detection paradigm, human performance and reaction times improved with longer pre-change exposure, consistent with improved estimation of baseline statistics. Change-locked and decision-related EEG responses were found in a centro-parietal scalp location, whose slope depended on change size, consistent with sensory evidence accumulation. The potential's amplitude scaled with the duration of pre-change exposure, suggesting a time-dependent decision threshold. Auditory cortex-related potentials showed no response to the change. A dual-timescale, statistical estimation model accounted for subjects' performance. Furthermore, a decision-augmented auditory cortex model accounted for performance and reaction times, suggesting that the primary cortical representation requires little post-processing to enable change-detection in complex acoustic environments.
Affiliation(s)
- Yves Boubenec
- Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, Paris, France; and Département d'études cognitives, École normale supérieure, PSL Research University, Paris, France
- Jennifer Lawlor
- Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, Paris, France; and Département d'études cognitives, École normale supérieure, PSL Research University, Paris, France
- Urszula Górska
- Department of Neurophysiology, Donders Centre for Neuroscience, Radboud Universiteit, Nijmegen, Netherlands; Psychophysiology Laboratory, Institute of Psychology, Jagiellonian University, Krakow, Poland; and Smoluchowski Institute of Physics, Jagiellonian University, Krakow, Poland
- Shihab Shamma
- Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, Paris, France; Département d'études cognitives, École normale supérieure, PSL Research University, Paris, France; Department of Electrical and Computer Engineering, University of Maryland, College Park, United States; and Institute for Systems Research, University of Maryland, College Park, United States
- Bernhard Englitz
- Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, Paris, France; Département d'études cognitives, École normale supérieure, PSL Research University, Paris, France; and Department of Neurophysiology, Donders Centre for Neuroscience, Radboud Universiteit, Nijmegen, Netherlands
44
Gao L, Kostlan K, Wang Y, Wang X. Distinct Subthreshold Mechanisms Underlying Rate-Coding Principles in Primate Auditory Cortex. Neuron 2016; 91:905-919. [PMID: 27478016 PMCID: PMC5292152 DOI: 10.1016/j.neuron.2016.07.004] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2016] [Revised: 05/25/2016] [Accepted: 06/28/2016] [Indexed: 12/15/2022]
Abstract
A key computational principle for encoding time-varying signals in auditory and somatosensory cortices of monkeys is the opponent model of rate coding by two distinct populations of neurons. However, the subthreshold mechanisms that give rise to this computation have not been revealed. Because the rate-coding neurons are only observed in awake conditions, it is especially challenging to probe their underlying cellular mechanisms. Using a novel intracellular recording technique that we developed in awake marmosets, we found that the two types of rate-coding neurons in auditory cortex exhibited distinct subthreshold responses. While the positive-monotonic neurons (monotonically increasing firing rate with increasing stimulus repetition frequency) displayed sustained depolarization at high repetition frequency, the negative-monotonic neurons (opposite trend) instead exhibited hyperpolarization at high repetition frequency but sustained depolarization at low repetition frequency. The combination of excitatory and inhibitory subthreshold events allows the cortex to represent time-varying signals through these two opponent neuronal populations.
45
Overton JA, Recanzone GH. Effects of aging on the response of single neurons to amplitude-modulated noise in primary auditory cortex of rhesus macaque. J Neurophysiol 2016; 115:2911-23. [PMID: 26936987 DOI: 10.1152/jn.01098.2015] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2015] [Accepted: 03/02/2016] [Indexed: 12/13/2022] Open
Abstract
Temporal envelope processing is critical for speech comprehension, which is known to be affected by normal aging. Whereas the macaque is an excellent animal model for human cerebral cortical function, few studies have investigated neural processing in the auditory cortex of aged, nonhuman primates. Therefore, we investigated age-related changes in the spiking activity of neurons in primary auditory cortex (A1) of two aged macaque monkeys using amplitude-modulated (AM) noise and compared these responses with data from a similar study in young monkeys (Yin P, Johnson JS, O'Connor KN, Sutter ML. J Neurophysiol 105: 582-600, 2011). For each neuron, we calculated firing rate (rate code) and phase-locking using phase-projected vector strength (temporal code). Neurons in old monkeys differed from those in young monkeys in several key ways. Old monkeys had higher spontaneous and driven firing rates, fewer neurons that synchronized with the AM stimulus, and fewer neurons that had differential responses to AM stimuli with both a rate and temporal code. Finally, whereas rate and temporal tuning functions were positively correlated in young monkeys, this relationship was lost in older monkeys at both the population and single neuron levels. These results are consistent with considerable evidence from rodents and primates of an age-related decrease in inhibition throughout the auditory pathway. Furthermore, this dual coding in A1 is thought to underlie the capacity to encode multiple features of an acoustic stimulus. The apparent loss of ability to encode AM with both rate and temporal codes may have consequences for stream segregation and effective speech comprehension in complex listening environments.
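Phase-locking metrics of this kind summarize how tightly spikes cluster at one phase of the AM cycle. A sketch of the classic Goldberg-Brown vector strength follows (the study's phase-projected variant additionally, roughly speaking, penalizes trials whose mean phase disagrees with the overall mean phase); the spike times below are invented for illustration:

```python
import numpy as np

def vector_strength(spike_times, fm):
    """Goldberg-Brown vector strength: 1 = perfect phase locking to the
    modulation cycle of frequency fm (Hz), near 0 = spikes uniformly spread."""
    phases = 2.0 * np.pi * fm * np.asarray(spike_times, dtype=float)
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Spikes at the same phase of every 10 Hz cycle vs. spread across the cycle:
locked = vector_strength([0.05, 0.15, 0.25, 0.35], 10.0)
spread = vector_strength([0.05, 0.125, 0.20, 0.275], 10.0)
```

Each spike contributes a unit vector at its modulation phase; the metric is the length of the mean vector, so perfectly locked spikes give 1 and spikes at four evenly spaced phases cancel to 0.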
Affiliation(s)
- Gregg H Recanzone
- Center for Neuroscience, University of California, Davis, California; and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
46
Tang H, Crain S, Johnson BW. Dual temporal encoding mechanisms in human auditory cortex: Evidence from MEG and EEG. Neuroimage 2016; 128:32-43. [DOI: 10.1016/j.neuroimage.2015.12.053] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2015] [Revised: 12/01/2015] [Accepted: 12/30/2015] [Indexed: 11/25/2022] Open
47
Lee CM, Osman AF, Volgushev M, Escabí MA, Read HL. Neural spike-timing patterns vary with sound shape and periodicity in three auditory cortical fields. J Neurophysiol 2016; 115:1886-904. [PMID: 26843599 DOI: 10.1152/jn.00784.2015] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2015] [Accepted: 01/29/2016] [Indexed: 11/22/2022] Open
Abstract
Mammals perceive a wide range of temporal cues in natural sounds, and the auditory cortex is essential for their detection and discrimination. The rat primary (A1), ventral (VAF), and caudal suprarhinal (cSRAF) auditory cortical fields have separate thalamocortical pathways that may support unique temporal cue sensitivities. To explore this, we record responses of single neurons in the three fields to variations in envelope shape and modulation frequency of periodic noise sequences. Spike rate, relative synchrony, and first-spike latency metrics have previously been used to quantify neural sensitivities to temporal sound cues; however, such metrics do not measure absolute spike timing of sustained responses to sound shape. To address this, in this study we quantify two forms of spike-timing precision: jitter and reliability. In all three fields, we find that jitter decreases logarithmically with increase in the basis spline (B-spline) cutoff frequency used to shape the sound envelope. In contrast, reliability decreases logarithmically with increase in sound envelope modulation frequency. In A1, jitter and reliability vary independently, whereas in ventral cortical fields, jitter and reliability covary. Jitter time scales increase (A1 < VAF < cSRAF) and modulation frequency upper cutoffs decrease (A1 > VAF > cSRAF) with ventral progression from A1. These results suggest a transition from independent encoding of shape and periodicity sound cues on short time scales in A1 to a joint encoding of these same cues on longer time scales in ventral nonprimary cortices.
Affiliation(s)
- Christopher M Lee
- Department of Psychology, University of Connecticut, Storrs, Connecticut
- Ahmad F Osman
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut
- Maxim Volgushev
- Department of Psychology, University of Connecticut, Storrs, Connecticut
- Monty A Escabí
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut; and Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut
- Heather L Read
- Department of Psychology, University of Connecticut, Storrs, Connecticut; and Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut
48
Okamoto H, Kakigi R. Encoding of frequency-modulation (FM) rates in human auditory cortex. Sci Rep 2015; 5:18143. [PMID: 26656920 PMCID: PMC4677350 DOI: 10.1038/srep18143] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2015] [Accepted: 11/13/2015] [Indexed: 11/09/2022] Open
Abstract
Frequency-modulated sounds play an important role in our daily social life. However, it currently remains unclear whether frequency modulation rates affect neural activity in the human auditory cortex. In the present study, using magnetoencephalography, we investigated the auditory evoked N1m and sustained field responses elicited by temporally repeated and superimposed frequency-modulated sweeps that were matched in the spectral domain, but differed in frequency modulation rates (1, 4, 16, and 64 octaves per sec). The results demonstrated that higher-rate frequency-modulated sweeps elicited smaller N1m and larger sustained field responses. Frequency modulation rate thus had a significant impact on human brain responses, providing a key for disentangling a series of natural frequency-modulated sounds such as speech and music.
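A sweep rate specified in octaves per second implies an exponential instantaneous-frequency trajectory, f(t) = f0·2^(rt), so the phase is the closed-form integral of f(t). An illustrative synthesis sketch (function name and parameter values are our assumptions, not the study's stimulus code):

```python
import numpy as np

def octave_sweep(f0, octaves_per_s, dur, fs):
    """Tone whose instantaneous frequency is f(t) = f0 * 2**(octaves_per_s * t).

    The phase is the analytic integral of f(t), so the sweep has no
    discontinuities: phase(t) = 2*pi*f0*(exp(k*t) - 1)/k with k = r*ln(2).
    """
    t = np.arange(int(dur * fs)) / fs
    k = octaves_per_s * np.log(2.0)              # exponential rate constant
    phase = 2.0 * np.pi * f0 * (np.exp(k * t) - 1.0) / k
    return np.sin(phase)

# e.g., a 1 s sweep starting at 400 Hz and rising at 4 octaves/s (to 6.4 kHz)
x = octave_sweep(400.0, 4.0, 1.0, 44100)
```

Because frequency grows exponentially, the zero-crossing density in the final quarter of the sweep is far higher than in the first quarter, which is the temporal signature the superimposed-sweep stimuli manipulate.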
Affiliation(s)
- Hidehiko Okamoto
- Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, Japan; and Department of Physiological Sciences, School of Life Science, SOKENDAI (The Graduate University for Advanced Studies), Hayama, Japan
- Ryusuke Kakigi
- Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, Japan; and Department of Physiological Sciences, School of Life Science, SOKENDAI (The Graduate University for Advanced Studies), Hayama, Japan
49
Abstract
Frequency modulation is critical to human speech. Evidence from psychophysics, neurophysiology, and neuroimaging suggests that there are neuronal populations tuned to this property of speech. Consistent with this, extended exposure to frequency change produces direction specific aftereffects in frequency change detection. We show that this aftereffect occurs extremely rapidly, requiring only a single trial of just 100-ms duration. We demonstrate this using a long, randomized series of frequency sweeps (both upward and downward, by varying amounts) and analyzing intertrial adaptation effects. We show the point of constant frequency is shifted systematically towards the previous trial's sweep direction (i.e., a frequency sweep aftereffect). Furthermore, the perception of glide direction is also independently influenced by the glide presented two trials previously. The aftereffect is frequency tuned, as exposure to a frequency sweep from a set centered on 1,000 Hz does not influence a subsequent trial drawn from a set centered on 400 Hz. More generally, the rapidity of adaptation suggests the auditory system is constantly adapting and "tuning" itself to the most recent environmental conditions.
50
Abstract
Noise correlations (r_noise) between neurons can affect a neural population's discrimination capacity, even without changes in mean firing rates of neurons. r_noise, the degree to which the response variability of a pair of neurons is correlated, has been shown to change with attention, with most reports showing a reduction in r_noise. However, the effect of reducing r_noise on sensory discrimination depends on many factors, including the tuning similarity, or tuning correlation (r_tuning), between the pair. Theoretically, reducing r_noise should enhance sensory discrimination when the pair exhibits similar tuning, but should impair discrimination when tuning is dissimilar. We recorded from pairs of neurons in primary auditory cortex (A1) under two conditions: while rhesus macaque monkeys (Macaca mulatta) actively performed a threshold amplitude modulation (AM) detection task and while they sat passively awake. We report that, for pairs with similar AM tuning, average r_noise in A1 decreases when the animal performs the AM detection task compared with when sitting passively. For pairs with dissimilar tuning, the average r_noise did not significantly change between conditions. This suggests that attention-related modulation can target selective subcircuits to decorrelate noise. These results demonstrate that engagement in an auditory task enhances population coding in primary auditory cortex by selectively reducing deleterious r_noise and leaving beneficial r_noise intact.