1. Tamaoki Y, Pasapula V, Danaphongse TT, Reyes AR, Chandler CR, Borland MS, Riley JR, Carroll AM, Engineer CT. Pairing tones with vagus nerve stimulation improves brain stem responses to speech in the valproic acid model of autism. J Neurophysiol 2024;132:1426-1436. PMID: 39319784. DOI: 10.1152/jn.00325.2024.
Abstract
Receptive language deficits and aberrant auditory processing are often observed in individuals with autism spectrum disorders (ASD). Symptoms associated with ASD are observed in rodents prenatally exposed to valproic acid (VPA), including deficits in speech sound discrimination ability. These perceptual difficulties are accompanied by changes in neural activity patterns. At both cortical and subcortical levels of the auditory pathway, VPA-exposed rats have impaired responses to speech sounds. A method to reverse these neural deficits throughout the auditory pathway is therefore needed. The purpose of this study was to investigate the ability of vagus nerve stimulation (VNS) paired with sounds to restore degraded inferior colliculus (IC) responses in VPA-exposed rats. VNS paired with the speech sound "dad" was presented to one group of VPA-exposed rats 300 times per day for 20 days. Another group of VPA-exposed rats was presented with VNS paired with multiple tone frequencies for 20 days. IC responses were recorded from 19 saline-exposed control rats, 18 VPA-exposed rats that received no VNS, 8 VNS-speech-paired VPA-exposed rats, and 7 VNS-tone-paired VPA-exposed rats (female and male). Pairing VNS with tones increased the IC response strength to speech sounds by 44% compared with untreated VPA-exposed rats. In contrast, VNS-speech pairing significantly decreased the IC response to speech by 5% compared with untreated VPA-exposed rats. The present research indicates that pairing VNS with tones improved sound processing in rats exposed to VPA and suggests that auditory processing can be improved through targeted plasticity. NEW & NOTEWORTHY Pairing vagus nerve stimulation (VNS) with sounds has improved auditory processing in the auditory cortex of normal-hearing rats and of rat models of autism. This study tests the ability of VNS-sound pairing to restore auditory processing in the inferior colliculus (IC) of valproic acid (VPA)-exposed rats. Pairing VNS with tones significantly reversed the degraded sound processing in the IC of VPA-exposed rats. The findings provide evidence that auditory processing in rat models of autism can be improved through VNS.
Affiliation(s)
- Yuko Tamaoki, Varun Pasapula, Tanya T Danaphongse, Alfonso R Reyes, Collin R Chandler, Michael S Borland, Jonathan R Riley, Alan M Carroll, Crystal T Engineer: Texas Biomedical Device Center and School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States
2. Carroll AM, Riley JR, Borland MS, Danaphongse TT, Hays SA, Kilgard MP, Engineer CT. Bursts of vagus nerve stimulation paired with auditory rehabilitation fail to improve speech sound perception in rats with hearing loss. iScience 2024;27:109527. PMID: 38585658. PMCID: PMC10995867. DOI: 10.1016/j.isci.2024.109527.
Abstract
Hearing loss can lead to long-lasting effects on the central nervous system, and current therapies, such as auditory training and rehabilitation, show mixed success in improving perception and speech comprehension. Vagus nerve stimulation (VNS) is an adjunctive therapy that can be paired with rehabilitation to facilitate behavioral recovery after neural injury. However, VNS for auditory recovery has not been tested after severe hearing loss or significant damage to peripheral receptors. This study investigated the utility of pairing VNS with passive or active auditory rehabilitation in a rat model of noise-induced hearing loss. Although auditory rehabilitation helped rats improve their frequency discrimination, learn novel speech discrimination tasks, and achieve speech-in-noise performance similar to normal hearing controls, VNS did not enhance recovery of speech sound perception. These results highlight the limitations of VNS as an adjunctive therapy for hearing loss rehabilitation and suggest that optimal benefits from neuromodulation may require restored peripheral signaling.
Affiliation(s)
- Alan M. Carroll, Jonathan R. Riley: Texas Biomedical Device Center and Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, Richardson, TX 75080-3021, USA
- Michael S. Borland, Tanya T. Danaphongse: Texas Biomedical Device Center, The University of Texas at Dallas, 800 West Campbell Road, Richardson, TX 75080-3021, USA
- Seth A. Hays: Texas Biomedical Device Center and Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, 800 West Campbell Road, Richardson, TX 75080-3021, USA
- Michael P. Kilgard, Crystal T. Engineer: Texas Biomedical Device Center and Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, Richardson, TX 75080-3021, USA
3. Martin A, Souffi S, Huetz C, Edeline JM. Can Extensive Training Transform a Mouse into a Guinea Pig? An Evaluation Based on the Discriminative Abilities of Inferior Colliculus Neurons. Biology 2024;13:92. PMID: 38392310. PMCID: PMC10886615. DOI: 10.3390/biology13020092.
Abstract
Humans and animals maintain accurate discrimination between communication sounds in the presence of loud sources of background noise. In previous studies performed in anesthetized guinea pigs, we showed that, in the auditory pathway, the highest discriminative abilities between conspecific vocalizations were found in the inferior colliculus. Here, we trained CBA/J mice in a Go/No-Go task to discriminate between two similar guinea pig whistles, first in quiet conditions, then in two types of noise (a stationary noise and a chorus noise) at three SNRs. Control mice were passively exposed to the same number of whistles as trained mice. After three months of extensive training, inferior colliculus (IC) neurons were recorded under anesthesia and the responses were quantified as in our previous studies. In quiet, the mean values of the firing rate, the temporal reliability, and the mutual information obtained from trained mice were higher than those from the exposed mice and the guinea pigs. In stationary and chorus noise, there were only a few differences between the trained mice and the guinea pigs, and the lowest mean values of these parameters were found in the exposed mice. These results suggest that behavioral training can trigger plasticity in the IC that allows mouse neurons to reach guinea pig-like discrimination abilities.
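As an aside for readers unfamiliar with the metric, the mutual information used above to quantify neuronal discrimination can be estimated from trial-by-trial spike counts with a plug-in estimator on the joint stimulus-response histogram. The sketch below illustrates that computation on simulated data; the binning choices and the Poisson surrogate trials are assumptions made for illustration, not the authors' analysis pipeline.

```python
import numpy as np

def mutual_information(stim_labels, spike_counts, n_bins=8):
    """Plug-in estimate of I(stimulus; response) in bits from discrete
    stimulus labels and per-trial spike counts."""
    stim_labels = np.asarray(stim_labels)
    # Discretize spike counts into a small number of response bins.
    edges = np.histogram_bin_edges(spike_counts, bins=n_bins)
    resp_bins = np.clip(np.digitize(spike_counts, edges[:-1]) - 1, 0, n_bins - 1)

    stims = np.unique(stim_labels)
    joint = np.zeros((len(stims), n_bins))
    for i, s in enumerate(stims):
        for r in resp_bins[stim_labels == s]:
            joint[i, r] += 1
    joint /= joint.sum()

    p_s = joint.sum(axis=1, keepdims=True)
    p_r = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (p_s @ p_r)[nz])))

# Toy example: two whistles, 50 trials each, Poisson spike counts.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 50)
counts = np.concatenate([rng.poisson(4, 50), rng.poisson(9, 50)])
print(f"I(S;R) ~ {mutual_information(labels, counts):.2f} bits")
```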
Affiliation(s)
- Alexandra Martin, Samira Souffi, Chloé Huetz, Jean-Marc Edeline: Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS & Université Paris-Saclay, 91400 Saclay, France
4. Souffi S, Varnet L, Zaidi M, Bathellier B, Huetz C, Edeline JM. Reduction in sound discrimination in noise is related to envelope similarity and not to a decrease in envelope tracking abilities. J Physiol 2023;601:123-149. PMID: 36373184. DOI: 10.1113/jp283526.
Abstract
Humans and animals constantly face challenging acoustic environments, such as various background noises, that impair the detection, discrimination and identification of behaviourally relevant sounds. Here, we disentangled the role of temporal envelope tracking in the reduction in neuronal and behavioural discrimination between communication sounds in situations of acoustic degradations. By collecting neuronal activity from six different levels of the auditory system, from the auditory nerve up to the secondary auditory cortex, in anaesthetized guinea-pigs, we found that tracking of slow changes of the temporal envelope is a general functional property of auditory neurons for encoding communication sounds in quiet conditions and in adverse, challenging conditions. Results from a go/no-go sound discrimination task in mice support the idea that the loss of distinct slow envelope cues in noisy conditions impacted the discrimination performance. Together, these results suggest that envelope tracking is potentially a universal mechanism operating in the central auditory system, which allows the detection of any between-stimulus difference in the slow envelope and thus copes with degraded conditions. KEY POINTS: In quiet conditions, envelope tracking in the low amplitude modulation range (<20 Hz) is correlated with the neuronal discrimination between communication sounds as quantified by mutual information from the cochlear nucleus up to the auditory cortex. At each level of the auditory system, auditory neurons retain their abilities to track the communication sound envelopes in situations of acoustic degradation, such as vocoding and the addition of masking noises up to a signal-to-noise ratio of -10 dB. In noisy conditions, the increase in between-stimulus envelope similarity explains the reduction in both behavioural and neuronal discrimination in the auditory system. Envelope tracking can be viewed as a universal mechanism that allows neural and behavioural discrimination as long as the temporal envelope of communication sounds displays some differences.
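A minimal sketch of the kind of slow-envelope analysis referred to above: extract each sound's temporal envelope with the Hilbert transform, low-pass it below 20 Hz to keep only the slow amplitude modulations, and use the correlation between two envelopes as a between-stimulus similarity index. The signals, sampling rate, and cutoff below are placeholders rather than the study's actual vocalizations or parameters.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def slow_envelope(x, fs, cutoff_hz=20.0, order=4):
    """Hilbert envelope low-passed below `cutoff_hz` (slow amplitude modulations)."""
    env = np.abs(hilbert(x))
    sos = butter(order, cutoff_hz / (fs / 2), btype="low", output="sos")
    return sosfiltfilt(sos, env)

def envelope_similarity(x, y, fs):
    """Pearson correlation between the slow envelopes of two equal-length sounds."""
    ex, ey = slow_envelope(x, fs), slow_envelope(y, fs)
    return float(np.corrcoef(ex, ey)[0, 1])

# Toy example: two amplitude-modulated noises with 4 Hz vs 7 Hz envelopes.
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)
snd_a = (1 + np.sin(2 * np.pi * 4 * t)) * rng.standard_normal(t.size)
snd_b = (1 + np.sin(2 * np.pi * 7 * t)) * rng.standard_normal(t.size)
print(f"envelope similarity: {envelope_similarity(snd_a, snd_b, fs):.2f}")
```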
Affiliation(s)
- Samira Souffi: Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
- Léo Varnet: Laboratoire des systèmes perceptifs, UMR CNRS 8248, Département d'Etudes Cognitives, Ecole Normale Supérieure, Université Paris Sciences & Lettres, Paris, France
- Meryem Zaidi: Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
- Brice Bathellier: Institut de l'Audition, Institut Pasteur, Université de Paris, INSERM, Paris, France
- Chloé Huetz, Jean-Marc Edeline: Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
5. Riley JR, Borland MS, Tamaoki Y, Skipton SK, Engineer CT. Auditory Brainstem Responses Predict Behavioral Deficits in Rats with Varying Levels of Noise-Induced Hearing Loss. Neuroscience 2021;477:63-75. PMID: 34634426. DOI: 10.1016/j.neuroscience.2021.10.003.
Abstract
Intense noise exposure is a leading cause of hearing loss, which results in degraded speech sound discrimination ability, particularly in noisy environments. The development of an animal model of speech discrimination deficits due to noise induced hearing loss (NIHL) would enable testing of potential therapies to improve speech sound processing. Rats can accurately detect and discriminate human speech sounds in the presence of quiet and background noise. Further, it is known that profound hearing loss results in functional deafness in rats. In this study, we generated rats with a range of impairments which model the large range of hearing impairments observed in patients with NIHL. One month after noise exposure, we stratified rats into three distinct deficit groups based on their auditory brainstem response (ABR) thresholds. These groups exhibited markedly different behavioral outcomes across a range of tasks. Rats with moderate hearing loss (30 dB shifts in ABR threshold) were not impaired in speech sound detection or discrimination. Rats with severe hearing loss (55 dB shifts) were impaired at discriminating speech sounds in the presence of background noise. Rats with profound hearing loss (70 dB shifts) were unable to detect and discriminate speech sounds above chance level performance. Across groups, ABR threshold accurately predicted behavioral performance on all tasks. This model of long-term impaired speech discrimination in noise, demonstrated by the severe group, mimics the most common clinical presentation of NIHL and represents a useful tool for developing and improving interventions to target restoration of hearing.
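The stratification logic described above (moderate, severe, or profound loss inferred from the ABR threshold shift, which in turn predicts behavioral outcome) can be sketched roughly as follows. The group boundaries and the behavioral numbers are invented for illustration around the reported mean shifts of about 30, 55, and 70 dB; they are not the paper's criteria or data.

```python
import numpy as np

# Hypothetical group boundaries placed between the reported mean shifts
# (~30, ~55, and ~70 dB); the study's actual stratification may differ.
def deficit_group(abr_shift_db):
    """Assign a hearing-loss group from the ABR threshold shift (dB)."""
    if abr_shift_db < 45:
        return "moderate"    # speech detection/discrimination intact
    elif abr_shift_db < 65:
        return "severe"      # impaired discrimination in background noise
    else:
        return "profound"    # at-chance detection and discrimination

shifts = np.array([28, 33, 52, 57, 69, 74])
print([deficit_group(s) for s in shifts])

# Illustrative check that threshold shift predicts a behavioral score;
# the percent-correct values are toy numbers, not the paper's data.
percent_correct = np.array([92, 88, 64, 58, 51, 49])
slope, intercept = np.polyfit(shifts, percent_correct, 1)
print(f"fit: percent_correct ~ {slope:.1f} * shift + {intercept:.0f}")
```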
Affiliation(s)
- Jonathan R Riley, Michael S Borland, Yuko Tamaoki, Samantha K Skipton, Crystal T Engineer: Texas Biomedical Device Center and School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080, USA
6. Souffi S, Nodal FR, Bajo VM, Edeline JM. When and How Does the Auditory Cortex Influence Subcortical Auditory Structures? New Insights About the Roles of Descending Cortical Projections. Front Neurosci 2021;15:690223. PMID: 34413722. PMCID: PMC8369261. DOI: 10.3389/fnins.2021.690223.
Abstract
For decades, the corticofugal descending projections have been anatomically well described but their functional role remains a puzzling question. In this review, we will first describe the contributions of neuronal networks in representing communication sounds in various types of degraded acoustic conditions from the cochlear nucleus to the primary and secondary auditory cortex. In such situations, the discrimination abilities of collicular and thalamic neurons are clearly better than those of cortical neurons although the latter remain very little affected by degraded acoustic conditions. Second, we will report the functional effects resulting from activating or inactivating corticofugal projections on functional properties of subcortical neurons. In general, modest effects have been observed in anesthetized and in awake, passively listening, animals. In contrast, in behavioral tasks including challenging conditions, behavioral performance was severely reduced by removing or transiently silencing the corticofugal descending projections. This suggests that the discriminative abilities of subcortical neurons may be sufficient in many acoustic situations. It is only in particularly challenging situations, either due to the task difficulties and/or to the degraded acoustic conditions that the corticofugal descending connections bring additional abilities. Here, we propose that it is both the top-down influences from the prefrontal cortex, and those from the neuromodulatory systems, which allow the cortical descending projections to impact behavioral performance in reshaping the functional circuitry of subcortical structures. We aim at proposing potential scenarios to explain how, and under which circumstances, these projections impact on subcortical processing and on behavioral responses.
Affiliation(s)
- Samira Souffi: Department of Integrative and Computational Neurosciences, Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR CNRS 9197, Paris-Saclay University, Orsay, France
- Fernando R Nodal, Victoria M Bajo: Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, United Kingdom
- Jean-Marc Edeline: Department of Integrative and Computational Neurosciences, Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR CNRS 9197, Paris-Saclay University, Orsay, France
7. Homma NY, Hullett PW, Atencio CA, Schreiner CE. Auditory Cortical Plasticity Dependent on Environmental Noise Statistics. Cell Rep 2020;30:4445-4458.e5. PMID: 32234479. PMCID: PMC7326484. DOI: 10.1016/j.celrep.2020.03.014.
Abstract
During critical periods, neural circuits develop to form receptive fields that adapt to the sensory environment and enable optimal performance of relevant tasks. We hypothesized that early exposure to background noise can improve signal-in-noise processing, and the resulting receptive field plasticity in the primary auditory cortex can reveal functional principles guiding that important task. We raised rat pups in different spectro-temporal noise statistics during their auditory critical period. As adults, they showed enhanced behavioral performance in detecting vocalizations in noise. Concomitantly, encoding of vocalizations in noise in the primary auditory cortex improves with noise-rearing. Significantly, spectro-temporal modulation plasticity shifts cortical preferences away from the exposed noise statistics, thus reducing noise interference with the foreground sound representation. Auditory cortical plasticity shapes receptive field preferences to optimally extract foreground information in noisy environments during noise-rearing. Early noise exposure induces cortical circuits to implement efficient coding in the joint spectral and temporal modulation domain.
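The "spectro-temporal noise statistics" discussed here are commonly summarized with a modulation power spectrum: the 2D Fourier transform of a (log) spectrogram, whose axes are temporal modulation rate and spectral modulation density. A rough sketch of that computation follows; the window sizes and the white-noise input are placeholders, not the rearing stimuli used in the study.

```python
import numpy as np
from scipy.signal import spectrogram

def modulation_power_spectrum(x, fs, nperseg=256, noverlap=192):
    """2D modulation power spectrum of a sound: FFT of its log spectrogram.
    Returns (mps, temporal_mod_hz, spectral_mod_cyc_per_hz)."""
    f, t, sxx = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    log_s = np.log(sxx + 1e-12)
    log_s -= log_s.mean()                     # remove DC before the 2D FFT
    mps = np.abs(np.fft.fftshift(np.fft.fft2(log_s))) ** 2
    dt = t[1] - t[0]                          # spectrogram frame step (s)
    df = f[1] - f[0]                          # frequency bin width (Hz)
    temporal_mod = np.fft.fftshift(np.fft.fftfreq(len(t), d=dt))   # Hz
    spectral_mod = np.fft.fftshift(np.fft.fftfreq(len(f), d=df))   # cycles/Hz
    return mps, temporal_mod, spectral_mod

fs = 16000
noise = np.random.default_rng(2).standard_normal(fs * 2)   # 2 s of white noise
mps, tmod, smod = modulation_power_spectrum(noise, fs)
print(mps.shape, tmod.min(), tmod.max())
```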
Affiliation(s)
- Natsumi Y Homma, Patrick W Hullett, Craig A Atencio, Christoph E Schreiner: Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, and Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA 94143, USA
8. Pupillometry as a reliable metric of auditory detection and discrimination across diverse stimulus paradigms in animal models. Sci Rep 2021;11:3108. PMID: 33542266. PMCID: PMC7862232. DOI: 10.1038/s41598-021-82340-y.
Abstract
Estimates of detection and discrimination thresholds are often used to explore broad perceptual similarities between human subjects and animal models. Pupillometry shows great promise as a non-invasive, easily-deployable method of comparing human and animal thresholds. Using pupillometry, previous studies in animal models have obtained threshold estimates to simple stimuli such as pure tones, but have not explored whether similar pupil responses can be evoked by complex stimuli, what other stimulus contingencies might affect stimulus-evoked pupil responses, and if pupil responses can be modulated by experience or short-term training. In this study, we used an auditory oddball paradigm to estimate detection and discrimination thresholds across a wide range of stimuli in guinea pigs. We demonstrate that pupillometry yields reliable detection and discrimination thresholds across a range of simple (tones) and complex (conspecific vocalizations) stimuli; that pupil responses can be robustly evoked using different stimulus contingencies (low-level acoustic changes, or higher level categorical changes); and that pupil responses are modulated by short-term training. These results lay the foundation for using pupillometry as a reliable method of estimating thresholds in large experimental cohorts, and unveil the full potential of using pupillometry to explore broad similarities between humans and animal models.
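One common way to turn stimulus-evoked pupil responses into a detection threshold is to fit a sigmoid to the normalized response amplitude as a function of stimulus level and read off the level at the curve's midpoint. The sketch below does this with made-up response amplitudes; the logistic form, parameters, and numbers are assumptions, not the paper's fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(level_db, threshold, slope, floor, ceiling):
    """Logistic psychometric function over stimulus level (dB)."""
    return floor + (ceiling - floor) / (1 + np.exp(-slope * (level_db - threshold)))

# Toy data: mean normalized pupil dilation evoked by tones at several levels.
levels = np.array([10, 20, 30, 40, 50, 60, 70], dtype=float)   # dB SPL
pupil = np.array([0.02, 0.03, 0.10, 0.35, 0.62, 0.71, 0.74])   # arbitrary units

p0 = [40.0, 0.2, 0.0, 0.8]                                     # initial guess
params, _ = curve_fit(sigmoid, levels, pupil, p0=p0, maxfev=10000)
threshold, slope, floor, ceiling = params
print(f"estimated detection threshold ~ {threshold:.1f} dB SPL")
```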
9. Song F, Zhan Y, Ford JC, Cai DC, Fellows AM, Shan F, Song P, Chen G, Soli SD, Shi Y, Buckey JC. Increased Right Frontal Brain Activity During the Mandarin Hearing-in-Noise Test. Front Neurosci 2020;14:614012. PMID: 33390894. PMCID: PMC7773781. DOI: 10.3389/fnins.2020.614012.
Abstract
Purpose Previous studies have revealed increased frontal brain activation during speech comprehension in background noise. Few, however, used tonal languages. The normal pattern of brain activation during a challenging speech-in-noise task using a tonal language remains unclear. The Mandarin Hearing-in-Noise Test (HINT) is a well-established test for assessing the ability to interpret speech in background noise. The current study used functional magnetic resonance imaging (fMRI) to assess brain activation during presentation of Mandarin HINT (MHINT) sentences. Methods Thirty native Mandarin-speaking subjects with normal peripheral hearing were recruited. Functional MRI was performed while subjects were presented with either HINT “clear” sentences with low-level background noise [signal-to-noise ratio (SNR) = +3 dB] or “noisy” sentences with high-level background noise (SNR = −5 dB). Subjects were instructed to answer with a button press whether a visually presented target word was included in the sentence. Brain activation between noisy and clear sentences was compared. Activation in each condition was also compared with a resting condition with no sentence presentation. Results Noisy sentence comprehension showed increased activity in areas associated with tone processing and working memory, including the right superior and middle frontal gyri [Brodmann Areas (BAs) 46, 10]. Reduced activity with noisy sentences was seen in auditory, language, memory and somatosensory areas, including the bilateral superior and middle temporal gyri, left Heschl’s gyrus (BAs 21, 22), right temporal pole (BA 38), bilateral amygdala-hippocampus junction, and parahippocampal gyrus (BAs 28, 35), left inferior parietal lobule extending to left postcentral gyrus (BAs 2, 40), and left putamen. Conclusion Increased frontal activation in the right hemisphere occurred when comprehending noisy spoken sentences in Mandarin. Compared to studies using non-tonal languages, this activation was strongly right-sided and involved subregions not previously reported. These findings may reflect additional effort in lexical tone perception in this tonal language. Additionally, this continuous fMRI protocol may offer a time-efficient way to assess group differences in brain activation with a challenging speech-in-noise task.
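For orientation, the "clear" (+3 dB) and "noisy" (−5 dB) conditions differ only in how the masker is scaled relative to the sentence: the noise is rescaled so that the speech-to-noise power ratio equals the target SNR. A small sketch of that scaling with placeholder signals (not the MHINT materials) follows.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that 10*log10(P_speech / P_noise) equals `snr_db`,
    then return the mixture (arrays must be the same length)."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(3)
speech = rng.standard_normal(16000)      # stand-in for a MHINT sentence
babble = rng.standard_normal(16000)      # stand-in for the background noise
clear = mix_at_snr(speech, babble, +3)   # "clear" condition, SNR = +3 dB
noisy = mix_at_snr(speech, babble, -5)   # "noisy" condition, SNR = -5 dB
print(clear.shape, noisy.shape)
```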
Affiliation(s)
- Fengxiang Song, Yi Zhan: Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai, China
- James C Ford: Space Medicine Innovations Laboratory, Geisel School of Medicine at Dartmouth, Hanover, NH, United States; Department of Psychiatry, Dartmouth-Hitchcock, Lebanon, NH, United States
- Dan-Chao Cai: Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai, China
- Abigail M Fellows: Space Medicine Innovations Laboratory, Geisel School of Medicine at Dartmouth, Hanover, NH, United States
- Fei Shan, Pengrui Song, Guochao Chen, Yuxin Shi: Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai, China
- Jay C Buckey: Space Medicine Innovations Laboratory, Geisel School of Medicine at Dartmouth, Hanover, NH, United States
10. Monaghan JJM, Garcia-Lazaro JA, McAlpine D, Schaette R. Hidden Hearing Loss Impacts the Neural Representation of Speech in Background Noise. Curr Biol 2020;30:4710-4721.e4. PMID: 33035490. PMCID: PMC7728162. DOI: 10.1016/j.cub.2020.09.046.
Abstract
Many individuals with seemingly normal hearing abilities struggle to understand speech in noisy backgrounds. To understand why this might be the case, we investigated the neural representation of speech in the auditory midbrain of gerbils with "hidden hearing loss" through noise exposure that increased hearing thresholds only temporarily. In noise-exposed animals, we observed significantly increased neural responses to speech stimuli, with a more pronounced increase at moderate than at high sound intensities. Noise exposure reduced discriminability of neural responses to speech in background noise at high sound intensities, with impairment most severe for tokens with relatively greater spectral energy in the noise-exposure frequency range (2-4 kHz). At moderate sound intensities, discriminability was surprisingly improved, which was unrelated to spectral content. A model combining damage to high-threshold auditory nerve fibers with increased response gain of central auditory neurons reproduced these effects, demonstrating that a specific combination of peripheral damage and central compensation could explain listening difficulties despite normal hearing thresholds.
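The closing sentence describes a model that combines loss of high-threshold auditory nerve fibers with an increased central response gain. The toy rate-level sketch below shows why such a combination can inflate responses more at moderate than at high sound levels; every parameter value here is invented for illustration and is not taken from the paper's model.

```python
import numpy as np

def rate_level(level_db, threshold_db, max_rate=200.0, slope=0.3):
    """Sigmoidal rate-level function for one fiber population (spikes/s)."""
    return max_rate / (1 + np.exp(-slope * (level_db - threshold_db)))

def population_response(level_db, high_thr_survival=1.0, central_gain=1.0):
    """Summed low- and high-threshold fiber output, scaled by a central gain."""
    low = rate_level(level_db, threshold_db=20.0)                       # low-threshold fibers
    high = high_thr_survival * rate_level(level_db, threshold_db=60.0)  # high-threshold fibers
    return central_gain * (low + high)

levels = np.array([40.0, 80.0])                  # moderate vs high sound level
normal = population_response(levels)
# "Hidden" loss: half the high-threshold fibers gone, central gain up ~40%.
exposed = population_response(levels, high_thr_survival=0.5, central_gain=1.4)
for lev, n, e in zip(levels, normal, exposed):
    print(f"{lev:.0f} dB: normal {n:.0f} sp/s, noise-exposed {e:.0f} sp/s")
```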
Affiliation(s)
- Jessica J M Monaghan: National Acoustic Laboratories, Australian Hearing Hub, Macquarie University, Sydney, NSW 2109, Australia; Macquarie University Hearing & Department of Linguistics, Australian Hearing Hub, Macquarie University, Sydney, NSW 2109, Australia
- Jose A Garcia-Lazaro: Ear Institute, University College London, 332 Grays Inn Road, London WC1X 8EE, UK
- David McAlpine: Macquarie University Hearing & Department of Linguistics, Australian Hearing Hub, Macquarie University, Sydney, NSW 2109, Australia; Ear Institute, University College London, 332 Grays Inn Road, London WC1X 8EE, UK
- Roland Schaette: Ear Institute, University College London, 332 Grays Inn Road, London WC1X 8EE, UK
11. Harun R, Jun E, Park HH, Ganupuru P, Goldring AB, Hanks TD. Timescales of Evidence Evaluation for Decision Making and Associated Confidence Judgments Are Adapted to Task Demands. Front Neurosci 2020;14:826. PMID: 32903672. PMCID: PMC7438826. DOI: 10.3389/fnins.2020.00826.
Abstract
Decision making often involves choosing actions based on relevant evidence. This can benefit from focussing evidence evaluation on the timescale of greatest relevance based on the situation. Here, we use an auditory change detection task to determine how people adjust their timescale of evidence evaluation depending on task demands for detecting changes in their environment and assessing their internal confidence in those decisions. We confirm previous results that people adopt shorter timescales of evidence evaluation for detecting changes in contexts with shorter signal durations, while bolstering those results with model-free analyses not previously used and extending the results to the auditory domain. We also extend these results to show that in contexts with shorter signal durations, people also adopt correspondingly shorter timescales of evidence evaluation for assessing confidence in their decision about detecting a change. These results provide important insights into adaptability and flexible control of evidence evaluation for decision making.
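The "timescale of evidence evaluation" can be made concrete with a leaky accumulator: incoming noisy evidence is exponentially weighted with a time constant tau, and a change is reported when the filtered evidence crosses a bound, so a shorter tau weights only recent samples. The sketch below is a generic illustration of that idea, not the authors' model or task parameters.

```python
import numpy as np

def leaky_change_detector(evidence, dt, tau, bound):
    """Exponentially weighted (leaky) average of the evidence with time
    constant `tau` (s); returns the time at which it first crosses `bound`."""
    x = 0.0
    for i, e in enumerate(evidence):
        x += (dt / tau) * (e - x)    # shorter tau -> only recent evidence counts
        if x >= bound:
            return i * dt
    return None

rng = np.random.default_rng(4)
dt, n = 0.01, 400                    # 4 s of samples at 100 Hz
signal = np.zeros(n)
signal[200:] = 1.0                   # the change to be detected starts at t = 2 s
evidence = signal + 0.4 * rng.standard_normal(n)

for tau in (0.1, 0.5):               # short vs long evaluation timescale
    t_hit = leaky_change_detector(evidence, dt, tau, bound=0.5)
    label = "no detection" if t_hit is None else f"{t_hit:.2f} s"
    print(f"tau = {tau:.1f} s -> bound first crossed at {label}")
```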
Affiliation(s)
- Rashed Harun, Elizabeth Jun, Heui Hye Park, Preetham Ganupuru, Adam B Goldring, Timothy D Hanks: Department of Neurology and Center for Neuroscience, University of California, Davis, Davis, CA, United States
12. Adcock KS, Chandler C, Buell EP, Solorzano BR, Loerwald KW, Borland MS, Engineer CT. Vagus nerve stimulation paired with tones restores auditory processing in a rat model of Rett syndrome. Brain Stimul 2020;13:1494-1503. PMID: 32800964. DOI: 10.1016/j.brs.2020.08.006.
Abstract
BACKGROUND Rett syndrome is a rare neurological disorder associated with a mutation in the X-linked gene MECP2. This disorder mainly affects females, who typically have seemingly normal early development followed by a regression of acquired skills. The rodent Mecp2 model exhibits many of the classic neural abnormalities and behavioral deficits observed in individuals with Rett syndrome. Similar to individuals with Rett syndrome, both auditory discrimination ability and auditory cortical responses are impaired in heterozygous Mecp2 rats. The development of therapies that can enhance plasticity in auditory networks and improve auditory processing has the potential to impact the lives of individuals with Rett syndrome. Evidence suggests that precisely timed vagus nerve stimulation (VNS) paired with sound presentation can drive robust neuroplasticity in auditory networks and enhance the benefits of auditory therapy. OBJECTIVE The aim of this study was to investigate the ability of VNS paired with tones to restore auditory processing in Mecp2 transgenic rats. METHODS Seventeen female heterozygous Mecp2 rats and 8 female wild-type (WT) littermates were used in this study. The rats were exposed to multiple tone frequencies paired with VNS 300 times per day for 20 days. Auditory cortex responses were then examined following VNS-tone pairing therapy or no therapy. RESULTS Our results indicate that Mecp2 mutation alters auditory cortex responses to sounds compared to WT controls. VNS-tone pairing in Mecp2 rats improves the cortical response strength to both tones and speech sounds compared to untreated Mecp2 rats. Additionally, VNS-tone pairing increased the information contained in the neural response that can be used to discriminate between different consonant sounds. CONCLUSION These results demonstrate that VNS-sound pairing may represent a strategy to enhance auditory function in individuals with Rett syndrome.
Affiliation(s)
- Katherine S Adcock: Texas Biomedical Device Center and School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080, USA
- Collin Chandler: Texas Biomedical Device Center and Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080, USA
- Elizabeth P Buell: Texas Biomedical Device Center and School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080, USA
- Bleyda R Solorzano, Kristofer W Loerwald: Texas Biomedical Device Center, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080, USA
- Michael S Borland: Texas Biomedical Device Center and School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080, USA
- Crystal T Engineer: Texas Biomedical Device Center, School of Behavioral and Brain Sciences, and Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080, USA
13. Chiang CH, Lee J, Wang C, Williams AJ, Lucas TH, Cohen YE, Viventi J. A modular high-density μECoG system on macaque vlPFC for auditory cognitive decoding. J Neural Eng 2020;17:046008. PMID: 32498058. DOI: 10.1088/1741-2552/ab9986.
Abstract
OBJECTIVE A fundamental goal of the auditory system is to parse the auditory environment into distinct perceptual representations. Auditory perception is mediated by the ventral auditory pathway, which includes the ventrolateral prefrontal cortex (vlPFC). Because large-scale recordings of auditory signals are quite rare, the spatiotemporal resolution of the neuronal code that underlies vlPFC's contribution to auditory perception has not been fully elucidated. Therefore, we developed a modular, chronic, high-resolution, multi-electrode array system with long-term viability in order to identify the information that could be decoded from μECoG vlPFC signals. APPROACH We molded three separate μECoG arrays into one and implanted this system in a non-human primate. A custom 3D-printed titanium chamber was mounted on the left hemisphere. The molded 294-contact μECoG array was implanted subdurally over the vlPFC. μECoG activity was recorded while the monkey participated in a 'hearing-in-noise' task in which they reported hearing a 'target' vocalization from a background 'chorus' of vocalizations. We titrated task difficulty by varying the sound level of the target vocalization, relative to the chorus (target-to-chorus ratio, TCr). MAIN RESULTS We decoded the TCr and the monkey's behavioral choices from the μECoG signal. We analyzed decoding accuracy as a function of number of electrodes, spatial resolution, and time from implantation. Over a one-year period, we found significant decoding with individual electrodes that increased significantly as we decoded simultaneously more electrodes. Further, we found that the decoding for behavioral choice was better than the decoding of TCr. Finally, because the decoding accuracy of individual electrodes varied on a day-by-day basis, electrode arrays with high channel counts ensure robust decoding in the long term. SIGNIFICANCE Our results demonstrate the utility of high-resolution and high-channel-count, chronic µECoG recording. We developed a surface electrode array that can be scaled to cover larger cortical areas without increasing the chamber footprint.
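The decoding analysis summarized above (accuracy as a function of the number of electrodes) follows a standard recipe: fit a cross-validated linear classifier to feature vectors built from growing subsets of channels. The sketch below uses simulated single-trial features in place of the μECoG recordings and assumes scikit-learn is available; it is not the authors' decoder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_channels = 200, 64
labels = rng.integers(0, 2, n_trials)            # e.g., the animal's binary choice

# Simulated single-trial features: weak, channel-specific signal plus noise.
signal_per_channel = rng.uniform(0.0, 0.5, n_channels)
features = rng.standard_normal((n_trials, n_channels)) \
    + np.outer(labels - 0.5, signal_per_channel)

# Cross-validated decoding accuracy as a function of electrode count.
for k in (1, 4, 16, 64):
    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, features[:, :k], labels, cv=5).mean()
    print(f"{k:2d} electrodes: decoding accuracy = {acc:.2f}")
```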
Affiliation(s)
- Chia-Han Chiang: Department of Biomedical Engineering, Duke University, Durham, NC, United States of America (these authors contributed equally to this work)
14. O’Sullivan C, Weible AP, Wehr M. Disruption of Early or Late Epochs of Auditory Cortical Activity Impairs Speech Discrimination in Mice. Front Neurosci 2020;13:1394. PMID: 31998064. PMCID: PMC6965026. DOI: 10.3389/fnins.2019.01394.
Abstract
Speech evokes robust activity in auditory cortex, which contains information over a wide range of spatial and temporal scales. It remains unclear which components of these neural representations are causally involved in the perception and processing of speech sounds. Here we compared the relative importance of early and late speech-evoked activity for consonant discrimination. We trained mice to discriminate the initial consonants in spoken words, and then tested the effect of optogenetically suppressing different temporal windows of speech-evoked activity in auditory cortex. We found that both early and late suppression disrupted performance equivalently. These results suggest that mice are impaired at recognizing either type of disrupted representation because it differs from those learned in training.
Affiliation(s)
- Conor O’Sullivan: Institute of Neuroscience, University of Oregon, Eugene, OR, United States; Department of Biology, University of Oregon, Eugene, OR, United States
- Aldis P. Weible: Institute of Neuroscience, University of Oregon, Eugene, OR, United States
- Michael Wehr: Institute of Neuroscience, University of Oregon, Eugene, OR, United States; Department of Psychology, University of Oregon, Eugene, OR, United States
15. Occelli F, Hasselmann F, Bourien J, Eybalin M, Puel J, Desvignes N, Wiszniowski B, Edeline JM, Gourévitch B. Age-related Changes in Auditory Cortex Without Detectable Peripheral Alterations: A Multi-level Study in Sprague–Dawley Rats. Neuroscience 2019;404:184-204. DOI: 10.1016/j.neuroscience.2019.02.002.
16. Koerner TK, Zhang Y. Differential effects of hearing impairment and age on electrophysiological and behavioral measures of speech in noise. Hear Res 2018;370:130-142. DOI: 10.1016/j.heares.2018.10.009.
17. Steadman MA, Sumner CJ. Changes in Neuronal Representations of Consonants in the Ascending Auditory System and Their Role in Speech Recognition. Front Neurosci 2018;12:671. PMID: 30369863. PMCID: PMC6194309. DOI: 10.3389/fnins.2018.00671.
Abstract
A fundamental task of the ascending auditory system is to produce representations that facilitate the recognition of complex sounds. This is particularly challenging in the context of acoustic variability, such as that between different talkers producing the same phoneme. These representations are transformed as information is propagated throughout the ascending auditory system from the inner ear to the auditory cortex (AI). Investigating these transformations and their role in speech recognition is key to understanding hearing impairment and the development of future clinical interventions. Here, we obtained neural responses to an extensive set of natural vowel-consonant-vowel phoneme sequences, each produced by multiple talkers, in three stages of the auditory processing pathway. Auditory nerve (AN) representations were simulated using a model of the peripheral auditory system and extracellular neuronal activity was recorded in the inferior colliculus (IC) and primary auditory cortex (AI) of anaesthetized guinea pigs. A classifier was developed to examine the efficacy of these representations for recognizing the speech sounds. Individual neurons convey progressively less information from AN to AI. Nonetheless, at the population level, representations are sufficiently rich to facilitate recognition of consonants with a high degree of accuracy at all stages indicating a progression from a dense, redundant representation to a sparse, distributed one. We examined the timescale of the neural code for consonant recognition and found that optimal timescales increase throughout the ascending auditory system from a few milliseconds in the periphery to several tens of milliseconds in the cortex. Despite these longer timescales, we found little evidence to suggest that representations up to the level of AI become increasingly invariant to across-talker differences. Instead, our results support the idea that the role of the subcortical auditory system is one of dimensionality expansion, which could provide a basis for flexible classification of arbitrary speech sounds.
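The "timescale of the neural code" analysis amounts to binning spike trains at different temporal resolutions and asking at which bin width a classifier best identifies the consonant. The sketch below applies a simple leave-one-out nearest-centroid classifier to Poisson surrogate spike trains; both the classifier and the data are simplifications, not the authors' methods.

```python
import numpy as np

def bin_spikes(spike_times, duration, bin_ms):
    """Histogram spike times (s) into bins of width `bin_ms`."""
    edges = np.arange(0.0, duration + bin_ms / 1000.0, bin_ms / 1000.0)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts

def classify_leave_one_out(trials, labels, duration, bin_ms):
    """Nearest-centroid (template) classification of binned responses."""
    binned = np.array([bin_spikes(t, duration, bin_ms) for t in trials])
    correct = 0
    for i in range(len(trials)):
        keep = np.arange(len(trials)) != i
        centroids = {c: binned[keep & (labels == c)].mean(axis=0)
                     for c in np.unique(labels)}
        pred = min(centroids, key=lambda c: np.linalg.norm(binned[i] - centroids[c]))
        correct += pred == labels[i]
    return correct / len(trials)

# Surrogate data: two "consonants" that differ only in response latency.
rng = np.random.default_rng(6)
duration, n_rep = 0.2, 30
trials, labels = [], []
for c, onset in enumerate((0.02, 0.05)):
    for _ in range(n_rep):
        trials.append(onset + rng.exponential(0.02, size=rng.poisson(8)))
        labels.append(c)
labels = np.array(labels)

for bin_ms in (1, 10, 50, 200):
    acc = classify_leave_one_out(trials, labels, duration, bin_ms)
    print(f"bin = {bin_ms:3d} ms: accuracy = {acc:.2f}")
```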
Affiliation(s)
- Mark A. Steadman: MRC Institute of Hearing Research, School of Medicine, The University of Nottingham, Nottingham, United Kingdom; Department of Bioengineering, Imperial College London, London, United Kingdom
- Christian J. Sumner: MRC Institute of Hearing Research, School of Medicine, The University of Nottingham, Nottingham, United Kingdom
18. Attarha M, Bigelow J, Merzenich MM. Unintended Consequences of White Noise Therapy for Tinnitus—Otolaryngology's Cobra Effect. JAMA Otolaryngol Head Neck Surg 2018;144:938-943. DOI: 10.1001/jamaoto.2018.1856.
Affiliation(s)
- Mouna Attarha: Posit Science Corporation, San Francisco, California
- James Bigelow: Coleman Memorial Laboratory, Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco
- Michael M. Merzenich: Posit Science Corporation, San Francisco, California; Coleman Memorial Laboratory, Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco
19. Engineer CT, Rahebi KC, Borland MS, Buell EP, Im KW, Wilson LG, Sharma P, Vanneste S, Harony-Nicolas H, Buxbaum JD, Kilgard MP. Shank3-deficient rats exhibit degraded cortical responses to sound. Autism Res 2017;11:59-68. PMID: 29052348. DOI: 10.1002/aur.1883.
Abstract
Individuals with SHANK3 mutations have severely impaired receptive and expressive language abilities. While brain responses are known to be abnormal in these individuals, the auditory cortex response to sound has remained largely understudied. In this study, we document the auditory cortex response to speech and non-speech sounds in the novel Shank3-deficient rat model. We predicted that the auditory cortex response to sounds would be impaired in Shank3-deficient rats. We found that auditory cortex responses were weaker in Shank3 heterozygous rats compared to wild-type rats. Additionally, Shank3 heterozygous rats had less spontaneous auditory cortex firing and were unable to respond well to rapid trains of noise bursts. The rat model of the auditory impairments in SHANK3 mutation could be used to test potential rehabilitation or drug therapies to improve the communication impairments observed in individuals with Phelan-McDermid syndrome. LAY SUMMARY Individuals with SHANK3 mutations have severely impaired language abilities, yet the auditory cortex response to sound has remained largely understudied. In this study, we found that auditory cortex responses were weaker and were unable to respond well to rapid sounds in Shank3-deficient rats compared to control rats. The rat model of the auditory impairments in SHANK3 mutation could be used to test potential rehabilitation or drug therapies to improve the communication impairments observed in individuals with Phelan-McDermid syndrome.
Affiliation(s)
- Crystal T Engineer, Kimiya C Rahebi, Michael S Borland, Elizabeth P Buell: School of Behavioral and Brain Sciences and Texas Biomedical Device Center, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080
- Kwok W Im, Linda G Wilson, Pryanka Sharma, Sven Vanneste: School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080
- Hala Harony-Nicolas: Seaver Autism Center for Research and Treatment and Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY
- Joseph D Buxbaum: Seaver Autism Center for Research and Treatment, Department of Psychiatry, Friedman Brain Institute, Fishberg Department of Neuroscience, Department of Genetics and Genomic Sciences, and The Mindich Child Health and Development Institute, Icahn School of Medicine at Mount Sinai, New York, NY
- Michael P Kilgard: School of Behavioral and Brain Sciences and Texas Biomedical Device Center, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080
20. Christison-Lagay KL, Bennur S, Cohen YE. Contribution of spiking activity in the primary auditory cortex to detection in noise. J Neurophysiol 2017;118:3118-3131. PMID: 28855294. DOI: 10.1152/jn.00521.2017.
Abstract
A fundamental problem in hearing is detecting a "target" stimulus (e.g., a friend's voice) that is presented with a noisy background (e.g., the din of a crowded restaurant). Despite its importance to hearing, a relationship between spiking activity and behavioral performance during such a "detection-in-noise" task has yet to be fully elucidated. In this study, we recorded spiking activity in primary auditory cortex (A1) while rhesus monkeys detected a target stimulus that was presented with a noise background. Although some neurons were modulated, the response of the typical A1 neuron was not modulated by the stimulus- and task-related parameters of our task. In contrast, we found more robust representations of these parameters in population-level activity: small populations of neurons matched the monkeys' behavioral sensitivity. Overall, these findings are consistent with the hypothesis that the sensory evidence, which is needed to solve such detection-in-noise tasks, is represented in population-level A1 activity and may be available to be read out by downstream neurons that are involved in mediating this task. NEW & NOTEWORTHY This study examines the contribution of A1 to detecting a sound that is presented with a noisy background. We found that population-level A1 activity, but not single neurons, could provide the evidence needed to make this perceptual decision.
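A standard way to test whether population activity "matches behavioral sensitivity" is to compute a neurometric measure, for example d-prime from pooled spike counts on target-present versus noise-only trials, and compare it with the psychometric d-prime. The sketch below does this on simulated A1-like spike counts; the pooling rule and all numbers are illustrative assumptions, not the study's analysis.

```python
import numpy as np

def dprime(signal_values, noise_values):
    """d' between two response distributions (pooled-variance estimate)."""
    mu_s, mu_n = signal_values.mean(), noise_values.mean()
    var = 0.5 * (signal_values.var(ddof=1) + noise_values.var(ddof=1))
    return (mu_s - mu_n) / np.sqrt(var)

rng = np.random.default_rng(7)
n_trials, n_neurons = 200, 50
# Weakly modulated single neurons: tiny rate change on target-present trials.
base = rng.poisson(5.0, size=(n_trials, n_neurons))
target = rng.poisson(5.4, size=(n_trials, n_neurons))

print(f"typical single neuron d' ~ "
      f"{np.mean([dprime(target[:, i], base[:, i]) for i in range(n_neurons)]):.2f}")

for pool in (1, 10, 50):
    # Sum spike counts over a small population before computing d'.
    d = dprime(target[:, :pool].sum(axis=1), base[:, :pool].sum(axis=1))
    print(f"population of {pool:2d} neurons: neurometric d' = {d:.2f}")
```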
Affiliation(s)
- Sharath Bennur: Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania
- Yale E Cohen: Department of Otorhinolaryngology, Department of Neuroscience, and Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania
21
Knockdown of Dyslexia-Gene Dcdc2 Interferes with Speech Sound Discrimination in Continuous Streams. J Neurosci 2016; 36:4895-4906. [PMID: 27122044 DOI: 10.1523/jneurosci.4202-15.2016]
Abstract
Dyslexia is the most common developmental language disorder and is marked by deficits in reading and phonological awareness. One theory of dyslexia suggests that the phonological awareness deficit is due to abnormal auditory processing of speech sounds. Variants in DCDC2 and several other neural migration genes are associated with dyslexia and may contribute to auditory processing deficits. In the current study, we tested the hypothesis that RNAi suppression of Dcdc2 in rats causes abnormal cortical responses to sound and impaired speech sound discrimination. Rats were subjected in utero to RNA interference targeting the gene Dcdc2 or a scrambled sequence. Primary auditory cortex (A1) responses were acquired from 11 rats (5 with Dcdc2 RNAi; DC-) before any behavioral training. A separate group of 8 rats (3 DC-) were trained on a variety of speech sound discrimination tasks, and auditory cortex responses were acquired following training. Dcdc2 RNAi nearly eliminated the ability of rats to identify specific speech sounds from a continuous train of speech sounds but did not impair performance during discrimination of isolated speech sounds. The neural responses to speech sounds in A1 were not degraded as a function of presentation rate before training. These results suggest that A1 is not directly involved in the impaired speech discrimination caused by Dcdc2 RNAi. This result contrasts with earlier results using Kiaa0319 RNAi and suggests that different dyslexia genes may cause different deficits in the speech processing circuitry, which may explain differential responses to therapy. SIGNIFICANCE STATEMENT Although dyslexia is diagnosed through reading difficulty, there is a great deal of variation in the phenotypes of these individuals. The underlying neural and genetic mechanisms causing these differences are still widely debated. In the current study, we demonstrate that suppression of a candidate-dyslexia gene causes deficits on tasks of rapid stimulus processing. These animals also exhibited abnormal neural plasticity after training, which may be a mechanism for why some children with dyslexia do not respond to intervention. These results are in stark contrast to our previous work with a different candidate gene, which caused a different set of deficits. Our results shed some light on possible neural and genetic mechanisms causing heterogeneity in the dyslexic population.
22
Engineer CT, Shetake JA, Engineer ND, Vrana WA, Wolf JT, Kilgard MP. Temporal plasticity in auditory cortex improves neural discrimination of speech sounds. Brain Stimul 2017; 10:543-552. [PMID: 28131520 DOI: 10.1016/j.brs.2017.01.007]
Abstract
BACKGROUND Many individuals with language learning impairments exhibit temporal processing deficits and degraded neural responses to speech sounds. Auditory training can improve both the neural and behavioral deficits, though significant deficits remain. Recent evidence suggests that vagus nerve stimulation (VNS) paired with rehabilitative therapies enhances both cortical plasticity and recovery of normal function. OBJECTIVE/HYPOTHESIS We predicted that pairing VNS with rapid tone trains would enhance the primary auditory cortex (A1) response to unpaired novel speech sounds. METHODS VNS was paired with tone trains 300 times per day for 20 days in adult rats. Responses to isolated speech sounds, compressed speech sounds, word sequences, and compressed word sequences were recorded in A1 following the completion of VNS-tone train pairing. RESULTS Pairing VNS with rapid tone trains resulted in stronger, faster, and more discriminable A1 responses to speech sounds presented at conversational rates. CONCLUSION This study extends previous findings by documenting that VNS paired with rapid tone trains altered the neural response to novel unpaired speech sounds. Future studies are necessary to determine whether pairing VNS with appropriate auditory stimuli could potentially be used to improve both neural responses to speech sounds and speech perception in individuals with receptive language disorders.
Affiliation(s)
- Crystal T Engineer, Michael P Kilgard: School of Behavioral and Brain Sciences and Texas Biomedical Device Center, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080, United States
- Jai A Shetake, Will A Vrana, Jordan T Wolf: School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080, United States
- Navzer D Engineer: School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080, United States; MicroTransponder Inc., 2802 Flintrock Trace Suite 225, Austin, TX 78738, United States
23
Abstract
The neural mechanisms that support the robust processing of acoustic signals in the presence of background noise in the auditory system remain largely unresolved. Psychophysical experiments have shown that signal detection is influenced by the signal-to-noise ratio (SNR) and the overall stimulus level, but this relationship has not been fully characterized. We evaluated the neural representation of frequency in rat primary auditory cortex by constructing tonal frequency response areas (FRAs) in primary auditory cortex for different SNRs, tone levels, and noise levels. We show that response strength and selectivity for frequency and sound level depend on interactions between SNRs and tone levels. At low SNRs, jointly increasing the tone and noise levels reduced firing rates and narrowed FRA bandwidths; at higher SNRs, however, increasing the tone and noise levels increased firing rates and expanded bandwidths, as is usually seen for FRAs obtained without background noise. These changes in frequency and intensity tuning decreased tone level and tone frequency discriminability at low SNRs. By contrast, neither response onset latencies nor noise-driven steady-state firing rates meaningfully interacted with SNRs or overall sound levels. Speech detection performance in humans was also shown to depend on the interaction between overall sound level and SNR. Together, these results indicate that signal processing difficulties imposed by high noise levels are quite general and suggest that the neurophysiological changes we see for simple sounds generalize to more complex stimuli. SIGNIFICANCE STATEMENT Effective processing of sounds in background noise is an important feature of the mammalian auditory system and a necessary feature for successful hearing in many listening conditions. Even mild hearing loss strongly affects this ability in humans, seriously degrading the ability to communicate. The mechanisms involved in achieving high performance in background noise are not well understood. We investigated the effects of SNR and overall stimulus level on the frequency tuning of neurons in rat primary auditory cortex. We found that the effects of noise on frequency selectivity are not determined solely by the SNR but depend also on the levels of the foreground tones and background noise. These observations can lead to improvement in therapeutic approaches for hearing-impaired patients.
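The following sketch shows one simple way an intensity threshold and a frequency bandwidth could be read off a tonal frequency response area of the kind compared across SNRs above. The toy FRA, the 20%-of-peak response criterion, and the bandwidth convention are assumptions for illustration only.

```python
# Minimal sketch (illustrative, not the study's analysis): threshold and bandwidth from an FRA.
import numpy as np

def fra_threshold_and_bw(fra, freqs_octaves, levels_db, criterion=0.2):
    """fra[i, j] = mean driven rate at level i and frequency j (spontaneous rate subtracted)."""
    driven = fra >= criterion * fra.max()                 # simple 20%-of-peak response criterion
    level_has_response = driven.any(axis=1)
    if not level_has_response.any():
        return None, 0.0
    threshold = levels_db[np.argmax(level_has_response)]  # lowest level with a criterion response
    top = driven[-1]                                      # bandwidth at the highest tested level
    bandwidth = freqs_octaves[top].max() - freqs_octaves[top].min() if top.any() else 0.0
    return threshold, bandwidth

# Toy V-shaped FRA centered on the characteristic frequency (0 octaves).
levels = np.arange(0, 80, 10.0)                           # dB SPL
freqs = np.linspace(-2, 2, 41)                            # octaves re: CF
fra = np.maximum(0.0, (levels[:, None] - 30) / 10 - 3 * np.abs(freqs[None, :]))

print(fra_threshold_and_bw(fra, freqs, levels))           # threshold in dB, bandwidth in octaves
```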
24
Abstract
OBJECTIVES Noise, as an unwanted sound, has become one of modern society's environmental conundrums, and many children are exposed to higher noise levels than previously assumed. However, the effects of background noise on central auditory processing of toddlers, who are still acquiring language skills, have so far not been determined. The authors evaluated the effects of background noise on toddlers' speech-sound processing by recording event-related brain potentials. The hypothesis was that background noise modulates neural speech-sound encoding and degrades speech-sound discrimination. DESIGN Obligatory P1 and N2 responses for standard syllables and the mismatch negativity (MMN) response for five different syllable deviants presented in a linguistic multifeature paradigm were recorded in silent and background noise conditions. The participants were 18 typically developing 22- to 26-month-old monolingual children with healthy ears. RESULTS The results showed that the P1 amplitude was smaller and the N2 amplitude larger in the noisy conditions compared with the silent conditions. In the noisy condition, the MMN was absent for the intensity and vowel changes and diminished for the consonant, frequency, and vowel duration changes embedded in speech syllables. Furthermore, the frontal MMN component was attenuated in the noisy condition. However, noise had no effect on P1, N2, or MMN latencies. CONCLUSIONS The results from this study suggest multiple effects of background noise on the central auditory processing of toddlers. It modulates the early stages of sound encoding and dampens neural discrimination vital for accurate speech perception. These results imply that speech processing of toddlers, who may spend long periods of daytime in noisy conditions, is vulnerable to background noise. In noisy conditions, toddlers' neural representations of some speech sounds might be weakened. Thus, special attention should be paid to acoustic conditions and background noise levels in children's daily environments, like day-care centers, to ensure a propitious setting for linguistic development. In addition, the evaluation and improvement of daily listening conditions should be an ordinary part of clinical intervention of children with linguistic problems.
25
Koerner TK, Zhang Y, Nelson PB, Wang B, Zou H. Neural indices of phonemic discrimination and sentence-level speech intelligibility in quiet and noise: A mismatch negativity study. Hear Res 2016; 339:40-49. [PMID: 27267705 DOI: 10.1016/j.heares.2016.06.001]
Abstract
Successful speech communication requires the extraction of important acoustic cues from irrelevant background noise. In order to better understand this process, this study examined the effects of background noise on mismatch negativity (MMN) latency, amplitude, and spectral power measures as well as behavioral speech intelligibility tasks. Auditory event-related potentials (AERPs) were obtained from 15 normal-hearing participants to determine whether pre-attentive MMN measures recorded in response to a consonant change (from /ba/ to /da/) and a vowel change (from /ba/ to /bu/) in a double-oddball paradigm can predict sentence-level speech perception. The results showed that background noise increased MMN latencies and decreased MMN amplitudes with a reduction in the theta frequency band power. Differential noise-induced effects were observed for the pre-attentive processing of consonant and vowel changes due to different degrees of signal degradation by noise. Linear mixed-effects models further revealed significant correlations between the MMN measures and speech intelligibility scores across conditions and stimuli. These results confirm the utility of MMN as an objective neural marker for understanding noise-induced variations as well as individual differences in speech perception, which has important implications for potential clinical applications.
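For readers unfamiliar with these measures, the sketch below derives an MMN difference wave, its peak amplitude and latency, and theta-band power from simulated standard and deviant ERPs. The waveforms, sampling rate, and analysis windows are hypothetical and are not the study's pipeline.

```python
# Minimal sketch (simulated data): MMN difference wave, peak amplitude/latency, theta power.
import numpy as np
from scipy.signal import welch

fs = 500                                              # Hz, assumed sampling rate
t = np.arange(0, 0.6, 1 / fs)                         # 0-600 ms epoch
rng = np.random.default_rng(1)

def erp(mmn_amp_uv):
    """Toy averaged ERP: a negativity around 200 ms plus residual noise from 100 trials."""
    wave = mmn_amp_uv * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
    return wave + rng.normal(0, 2.0, (100, t.size)).mean(axis=0)

standard = erp(-1.0)
deviant = erp(-3.5)
mmn = deviant - standard                              # difference wave

win = (t >= 0.1) & (t <= 0.3)                         # typical MMN search window
peak_amp = mmn[win].min()                             # MMN is a negativity
peak_lat_ms = 1000 * t[win][np.argmin(mmn[win])]

f, pxx = welch(mmn, fs=fs, nperseg=128)
theta_power = pxx[(f >= 4) & (f <= 8)].mean()

print(f"MMN peak {peak_amp:.2f} uV at {peak_lat_ms:.0f} ms; theta power {theta_power:.3f}")
```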
Affiliation(s)
- Tess K Koerner: Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN 55455, USA
- Yang Zhang: Department of Speech-Language-Hearing Sciences, Center for Neurobehavioral Development, and Center for Applied Translational Sensory Science, University of Minnesota, Minneapolis, MN 55455, USA
- Peggy B Nelson: Department of Speech-Language-Hearing Sciences and Center for Applied Translational Sensory Science, University of Minnesota, Minneapolis, MN 55455, USA
- Boxiang Wang, Hui Zou: School of Statistics, University of Minnesota, Minneapolis, MN 55455, USA
26
Engineer CT, Rahebi KC, Borland MS, Buell EP, Centanni TM, Fink MK, Im KW, Wilson LG, Kilgard MP. Degraded neural and behavioral processing of speech sounds in a rat model of Rett syndrome. Neurobiol Dis 2015; 83:26-34. [PMID: 26321676 DOI: 10.1016/j.nbd.2015.08.019]
Abstract
Individuals with Rett syndrome have greatly impaired speech and language abilities. Auditory brainstem responses to sounds are normal, but cortical responses are highly abnormal. In this study, we used the novel rat Mecp2 knockout model of Rett syndrome to document the neural and behavioral processing of speech sounds. We hypothesized that both speech discrimination ability and the neural response to speech sounds would be impaired in Mecp2 rats. We expected that extensive speech training would improve speech discrimination ability and the cortical response to speech sounds. Our results reveal that speech responses across all four auditory cortex fields of Mecp2 rats were hyperexcitable, responded slower, and were less able to follow rapidly presented sounds. While Mecp2 rats could accurately perform consonant and vowel discrimination tasks in quiet, they were significantly impaired at speech sound discrimination in background noise. Extensive speech training improved discrimination ability. Training shifted cortical responses in both Mecp2 and control rats to favor the onset of speech sounds. While training increased the response to low frequency sounds in control rats, the opposite occurred in Mecp2 rats. Although neural coding and plasticity are abnormal in the rat model of Rett syndrome, extensive therapy appears to be effective. These findings may help to explain some aspects of communication deficits in Rett syndrome and suggest that extensive rehabilitation therapy might prove beneficial.
Affiliation(s)
- Crystal T Engineer, Kimiya C Rahebi, Michael S Borland, Elizabeth P Buell, Tracy M Centanni, Melyssa K Fink, Kwok W Im, Linda G Wilson, Michael P Kilgard: School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, United States
27
Behavioral and neural discrimination of speech sounds after moderate or intense noise exposure in rats. Ear Hear 2015; 35:e248-e261. [PMID: 25072238 DOI: 10.1097/aud.0000000000000062]
Abstract
OBJECTIVES Hearing loss is a commonly experienced disability in a variety of populations including veterans and the elderly and can often cause significant impairment in the ability to understand spoken language. In this study, we tested the hypothesis that neural and behavioral responses to speech will be differentially impaired in an animal model after two forms of hearing loss. DESIGN Sixteen female Sprague-Dawley rats were exposed to one of two types of broadband noise which was either moderate or intense. In nine of these rats, auditory cortex recordings were taken 4 weeks after noise exposure (NE). The other seven were pretrained on a speech sound discrimination task prior to NE and were then tested on the same task after hearing loss. RESULTS Following intense NE, rats had few neural responses to speech stimuli. These rats were able to detect speech sounds but were no longer able to discriminate between speech sounds. Following moderate NE, rats had reorganized cortical maps and altered neural responses to speech stimuli but were still able to accurately discriminate between similar speech sounds during behavioral testing. CONCLUSIONS These results suggest that rats are able to adjust to the neural changes after moderate NE and discriminate speech sounds, but they are not able to recover behavioral abilities after intense NE. Animal models could help clarify the adaptive and pathological neural changes that contribute to speech processing in hearing-impaired populations and could be used to test potential behavioral and pharmacological therapies.
28
Engineer CT, Rahebi KC, Buell EP, Fink MK, Kilgard MP. Speech training alters consonant and vowel responses in multiple auditory cortex fields. Behav Brain Res 2015; 287:256-264. [PMID: 25827927 DOI: 10.1016/j.bbr.2015.03.044]
Abstract
Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination.
Affiliation(s)
- Crystal T Engineer, Kimiya C Rahebi, Elizabeth P Buell, Melyssa K Fink, Michael P Kilgard: School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States
29
Liang F, Bai L, Tao HW, Zhang LI, Xiao Z. Thresholding of auditory cortical representation by background noise. Front Neural Circuits 2014; 8:133. [PMID: 25426029 PMCID: PMC4226155 DOI: 10.3389/fncir.2014.00133]
Abstract
It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity.
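The level-dependent threshold shift described above can be summarized by a simple piecewise-linear rule; the sketch below uses illustrative parameter values (quiet threshold, critical noise level, unity slope) that are not taken from the paper.

```python
# Minimal sketch of the relationship described above (illustrative parameter values):
# above a critical noise level, the tone-evoked threshold rises roughly linearly with noise level,
# translating the tonal receptive field upward along the intensity axis.
def effective_threshold(noise_db, quiet_threshold_db=20.0, critical_noise_db=30.0, slope=1.0):
    """Tone threshold (dB SPL) in continuous wideband noise of a given level."""
    return quiet_threshold_db + slope * max(0.0, noise_db - critical_noise_db)

for noise in (0, 30, 40, 50, 60):
    print(noise, effective_threshold(noise))
```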
Affiliation(s)
- Feixue Liang, Lin Bai: Department of Physiology, School of Basic Medicine, Southern Medical University, Guangzhou, Guangdong, China; Zilkha Neurogenetic Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Huizhong W Tao, Li I Zhang: Zilkha Neurogenetic Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Zhongju Xiao: Department of Physiology, School of Basic Medicine, Southern Medical University, Guangzhou, Guangdong, China
30
Engineer CT, Perez CA, Carraway RS, Chang KQ, Roland JL, Kilgard MP. Speech training alters tone frequency tuning in rat primary auditory cortex. Behav Brain Res 2014; 258:166-178. [PMID: 24344364 DOI: 10.1016/j.bbr.2013.10.021]
Abstract
Previous studies in both humans and animals have documented improved performance following discrimination training. This enhanced performance is often associated with cortical response changes. In this study, we tested the hypothesis that long-term speech training on multiple tasks can improve primary auditory cortex (A1) responses compared to rats trained on a single speech discrimination task or experimentally naïve rats. Specifically, we compared the percent of A1 responding to trained sounds, the responses to both trained and untrained sounds, receptive field properties of A1 neurons, and the neural discrimination of pairs of speech sounds in speech trained and naïve rats. Speech training led to accurate discrimination of consonant and vowel sounds, but did not enhance A1 response strength or the neural discrimination of these sounds. Speech training altered tone responses in rats trained on six speech discrimination tasks but not in rats trained on a single speech discrimination task. Extensive speech training resulted in broader frequency tuning, shorter onset latencies, a decreased driven response to tones, and caused a shift in the frequency map to favor tones in the range where speech sounds are the loudest. Both the number of trained tasks and the number of days of training strongly predict the percent of A1 responding to a low frequency tone. Rats trained on a single speech discrimination task performed less accurately than rats trained on multiple tasks and did not exhibit A1 response changes. Our results indicate that extensive speech training can reorganize the A1 frequency map, which may have downstream consequences on speech sound processing.
31
Environmental acoustic enrichment promotes recovery from developmentally degraded auditory cortical processing. J Neurosci 2014; 34:5406-5415. [PMID: 24741032 DOI: 10.1523/jneurosci.5310-13.2014]
Abstract
It has previously been shown that environmental enrichment can enhance structural plasticity in the brain and thereby improve cognitive and behavioral function. In this study, we reared developmentally noise-exposed rats in an acoustic-enriched environment for ∼4 weeks to investigate whether or not enrichment could restore developmentally degraded behavioral and neuronal processing of sound frequency. We found that noise-exposed rats had significantly elevated sound frequency discrimination thresholds compared with age-matched naive rats. Environmental acoustic enrichment nearly restored to normal the behavioral deficit resulting from early disrupted acoustic inputs. Signs of both degraded frequency selectivity of neurons as measured by the bandwidth of frequency tuning curves and decreased long-term potentiation of field potentials recorded in the primary auditory cortex of these noise-exposed rats also were reversed partially. The observed behavioral and physiological effects induced by enrichment were accompanied by recovery of cortical expressions of certain NMDA and GABAA receptor subunits and brain-derived neurotrophic factor. These studies in a rodent model show that environmental acoustic enrichment promotes recovery from early noise-induced auditory cortical dysfunction and indicate a therapeutic potential of this noninvasive approach for normalizing neurological function from pathologies that cause hearing and associated language impairments in older children and adults.
32
Centanni TM, Chen F, Booker AM, Engineer CT, Sloan AM, Rennaker RL, LoTurco JJ, Kilgard MP. Speech sound processing deficits and training-induced neural plasticity in rats with dyslexia gene knockdown. PLoS One 2014; 9:e98439. [PMID: 24871331 PMCID: PMC4037188 DOI: 10.1371/journal.pone.0098439]
Abstract
In utero RNAi of the dyslexia-associated gene Kiaa0319 in rats (KIA-) degrades cortical responses to speech sounds and increases trial-by-trial variability in onset latency. We tested the hypothesis that KIA- rats would be impaired at speech sound discrimination. KIA- rats needed twice as much training in quiet conditions to perform at control levels and remained impaired at several speech tasks. Focused training using truncated speech sounds was able to normalize speech discrimination in quiet and background noise conditions. Training also normalized trial-by-trial neural variability and temporal phase locking. Cortical activity from speech trained KIA- rats was sufficient to accurately discriminate between similar consonant sounds. These results provide the first direct evidence that assumed reduced expression of the dyslexia-associated gene KIAA0319 can cause phoneme processing impairments similar to those seen in dyslexia and that intensive behavioral therapy can eliminate these impairments.
Affiliation(s)
- Tracy M. Centanni, Crystal T. Engineer, Andrew M. Sloan, Robert L. Rennaker, Michael P. Kilgard: School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, Texas, United States of America
- Fuyi Chen, Anne M. Booker, Joseph J. LoTurco: Physiology and Neurobiology, University of Connecticut, Storrs, Connecticut, United States of America
33
Degraded speech sound processing in a rat model of fragile X syndrome. Brain Res 2014; 1564:72-84. [PMID: 24713347 DOI: 10.1016/j.brainres.2014.03.049]
Abstract
Fragile X syndrome is the most common inherited form of intellectual disability and the leading genetic cause of autism. Impaired phonological processing in fragile X syndrome interferes with the development of language skills. Although auditory cortex responses are known to be abnormal in fragile X syndrome, it is not clear how these differences impact speech sound processing. This study provides the first evidence that the cortical representation of speech sounds is impaired in Fmr1 knockout rats, despite normal speech discrimination behavior. Evoked potentials and spiking activity in response to speech sounds, noise burst trains, and tones were significantly degraded in primary auditory cortex, anterior auditory field and the ventral auditory field. Neurometric analysis of speech evoked activity using a pattern classifier confirmed that activity in these fields contains significantly less information about speech sound identity in Fmr1 knockout rats compared to control rats. Responses were normal in the posterior auditory field, which is associated with sound localization. The greatest impairment was observed in the ventral auditory field, which is related to emotional regulation. Dysfunction in the ventral auditory field may contribute to poor emotional regulation in fragile X syndrome and may help explain the observation that later auditory evoked responses are more disturbed in fragile X syndrome compared to earlier responses. Rodent models of fragile X syndrome are likely to prove useful for understanding the biological basis of fragile X syndrome and for testing candidate therapies.
34
Engineer CT, Centanni TM, Im KW, Borland MS, Moreno NA, Carraway RS, Wilson LG, Kilgard MP. Degraded auditory processing in a rat model of autism limits the speech representation in non-primary auditory cortex. Dev Neurobiol 2014; 74:972-986. [PMID: 24639033 DOI: 10.1002/dneu.22175]
Abstract
Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments which may contribute to the deficits in communication observed in individuals with autism. In this study, we document auditory cortex responses in rats prenatally exposed to VPA. We recorded local field potentials and multiunit responses to speech sounds in primary auditory cortex, anterior auditory field, ventral auditory field, and posterior auditory field in VPA-exposed and control rats. Prenatal VPA exposure severely degrades the precise spatiotemporal patterns evoked by speech sounds in secondary, but not primary auditory cortex. This result parallels findings in humans and suggests that secondary auditory fields may be more sensitive to environmental disturbances and may provide insight into possible mechanisms related to auditory deficits in individuals with autism.
Affiliation(s)
- C T Engineer: School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas 75080
35
Centanni TM, Sloan AM, Reed AC, Engineer CT, Rennaker RL, Kilgard MP. Detection and identification of speech sounds using cortical activity patterns. Neuroscience 2014; 258:292-306. [PMID: 24286757 PMCID: PMC3898816 DOI: 10.1016/j.neuroscience.2013.11.030]
Abstract
We have developed a classifier capable of locating and identifying speech sounds using activity from rat auditory cortex with an accuracy equivalent to behavioral performance and without the need to specify the onset time of the speech sounds. This classifier can identify speech sounds from a large speech set within 40 ms of stimulus presentation. To compare the temporal limits of the classifier to behavior, we developed a novel task that requires rats to identify individual consonant sounds from a stream of distracter consonants. The classifier successfully predicted the ability of rats to accurately identify speech sounds for syllable presentation rates up to 10 syllables per second (up to 17.9 ± 1.5 bits/s), which is comparable to human performance. Our results demonstrate that the spatiotemporal patterns generated in primary auditory cortex can be used to quickly and accurately identify consonant sounds from a continuous speech stream without prior knowledge of the stimulus onset times. Improved understanding of the neural mechanisms that support robust speech processing in difficult listening conditions could improve the identification and treatment of a variety of speech-processing disorders.
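A nearest-template classifier of the general kind described here can be sketched in a few lines; the version below identifies simulated multi-site response patterns and converts accuracy and syllable rate into a bits-per-second figure using a standard information-transfer formula. All sizes, rates, and the Poisson response model are illustrative assumptions, not the published classifier.

```python
# Minimal sketch (not the published classifier): nearest-template identification of sounds
# from simulated activity patterns, plus an information-rate estimate in bits/s.
import numpy as np

rng = np.random.default_rng(2)
n_sounds, n_sites, n_bins, n_trials = 8, 20, 40, 30       # e.g. 40 one-ms bins; values illustrative

proto = rng.poisson(3.0, (n_sounds, n_sites, n_bins)).astype(float)  # "true" mean response patterns
trials = rng.poisson(proto[:, None], size=(n_sounds, n_trials, n_sites, n_bins)).astype(float)

templates = trials[:, :20].mean(axis=1)                   # train templates on 20 trials per sound
test = trials[:, 20:]                                     # hold out the remaining trials

def classify(single_trial):
    dist = ((templates - single_trial) ** 2).sum(axis=(1, 2))   # distance to each sound's template
    return int(np.argmin(dist))

n_test = test.shape[1]
correct = sum(classify(test[s, k]) == s for s in range(n_sounds) for k in range(n_test))
p = correct / (n_sounds * n_test)

def bits_per_stimulus(p, n):
    """Information transferred per presentation for n equally likely sounds at accuracy p."""
    if p >= 1.0:
        return np.log2(n)
    return np.log2(n) + p * np.log2(p) + (1 - p) * np.log2((1 - p) / (n - 1))

rate_hz = 10.0                                            # assumed syllable presentation rate
print(f"accuracy {p:.2f}, about {rate_hz * bits_per_stimulus(p, n_sounds):.1f} bits/s")
```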
Affiliation(s)
- A M Sloan, A C Reed, M P Kilgard: University of Texas at Dallas, United States
36
McCullagh J, Shinn JB. Auditory cortical processing in noise in younger and older adults. Hearing, Balance and Communication 2013. [DOI: 10.3109/21695717.2013.855374]
37
Rabinowitz NC, Willmore BDB, King AJ, Schnupp JWH. Constructing noise-invariant representations of sound in the auditory pathway. PLoS Biol 2013; 11:e1001710. [PMID: 24265596 PMCID: PMC3825667 DOI: 10.1371/journal.pbio.1001710]
Abstract
Along the auditory pathway from auditory nerve to midbrain to cortex, individual neurons adapt progressively to sound statistics, enabling the discernment of foreground sounds, such as speech, over background noise. Identifying behaviorally relevant sounds in the presence of background noise is one of the most important and poorly understood challenges faced by the auditory system. An elegant solution to this problem would be for the auditory system to represent sounds in a noise-invariant fashion. Since a major effect of background noise is to alter the statistics of the sounds reaching the ear, noise-invariant representations could be promoted by neurons adapting to stimulus statistics. Here we investigated the extent of neuronal adaptation to the mean and contrast of auditory stimulation as one ascends the auditory pathway. We measured these forms of adaptation by presenting complex synthetic and natural sounds, recording neuronal responses in the inferior colliculus and primary fields of the auditory cortex of anaesthetized ferrets, and comparing these responses with a sophisticated model of the auditory nerve. We find that the strength of both forms of adaptation increases as one ascends the auditory pathway. To investigate whether this adaptation to stimulus statistics contributes to the construction of noise-invariant sound representations, we also presented complex, natural sounds embedded in stationary noise, and used a decoding approach to assess the noise tolerance of the neuronal population code. We find that the code for complex sounds in the periphery is affected more by the addition of noise than the cortical code. We also find that noise tolerance is correlated with adaptation to stimulus statistics, so that populations that show the strongest adaptation to stimulus statistics are also the most noise-tolerant. This suggests that the increase in adaptation to sound statistics from auditory nerve to midbrain to cortex is an important stage in the construction of noise-invariant sound representations in the higher auditory brain. We rarely hear sounds (such as someone talking) in isolation, but rather against a background of noise. When mixtures of sounds and background noise reach the ears, peripheral auditory neurons represent the whole sound mixture. Previous evidence suggests, however, that the higher auditory brain represents just the sounds of interest, and is less affected by the presence of background noise. The neural mechanisms underlying this transformation are poorly understood. Here, we investigate these mechanisms by studying the representation of sound by populations of neurons at three stages along the auditory pathway; we simulate the auditory nerve and record from neurons in the midbrain and primary auditory cortex of anesthetized ferrets. We find that the transformation from noise-sensitive representations of sound to noise-tolerant processing takes place gradually along the pathway from auditory nerve to midbrain to cortex. Our results suggest that this results from neurons adapting to the statistics of heard sounds.
Affiliation(s)
- Neil C. Rabinowitz: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom; Center for Neural Science, New York University, New York, New York, United States of America
- Ben D. B. Willmore, Andrew J. King, Jan W. H. Schnupp: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
38
Engineer CT, Perez CA, Carraway RS, Chang KQ, Roland JL, Sloan AM, Kilgard MP. Similarity of cortical activity patterns predicts generalization behavior. PLoS One 2013; 8:e78607. [PMID: 24147140 PMCID: PMC3797841 DOI: 10.1371/journal.pone.0078607]
Abstract
Humans and animals readily generalize previously learned knowledge to new situations. Determining similarity is critical for assigning category membership to a novel stimulus. We tested the hypothesis that category membership is initially encoded by the similarity of the activity pattern evoked by a novel stimulus to the patterns from known categories. We provide behavioral and neurophysiological evidence that activity patterns in primary auditory cortex contain sufficient information to explain behavioral categorization of novel speech sounds by rats. Our results suggest that category membership might be encoded by the similarity of the activity pattern evoked by a novel speech sound to the patterns evoked by known sounds. Categorization based on featureless pattern matching may represent a general neural mechanism for ensuring accurate generalization across sensory and cognitive systems.
Affiliation(s)
- Crystal T. Engineer, Claudia A. Perez, Ryan S. Carraway, Kevin Q. Chang, Jarod L. Roland, Andrew M. Sloan, Michael P. Kilgard: School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States of America
39
Cortical inhibition reduces information redundancy at presentation of communication sounds in the primary auditory cortex. J Neurosci 2013; 33:10713-10728. [PMID: 23804094 DOI: 10.1523/jneurosci.0079-13.2013]
Abstract
In all sensory modalities, intracortical inhibition shapes the functional properties of cortical neurons but also influences the responses to natural stimuli. Studies performed in various species have revealed that auditory cortex neurons respond to conspecific vocalizations by temporal spike patterns displaying a high trial-to-trial reliability, which might result from precise timing between excitation and inhibition. Studying the guinea pig auditory cortex, we show that partial blockage of GABAA receptors by gabazine (GBZ) application (10 μm, a concentration that promotes expansion of cortical receptive fields) increased the evoked firing rate and the spike-timing reliability during presentation of communication sounds (conspecific and heterospecific vocalizations), whereas GABAB receptor antagonists [10 μm saclofen; 10-50 μm CGP55845 (p-3-aminopropyl-p-diethoxymethyl phosphoric acid)] had nonsignificant effects. Computing mutual information (MI) from the responses to vocalizations using either the evoked firing rate or the temporal spike patterns revealed that GBZ application increased the MI derived from the activity of single cortical site but did not change the MI derived from population activity. In addition, quantification of information redundancy showed that GBZ significantly increased redundancy at the population level. This result suggests that a potential role of intracortical inhibition is to reduce information redundancy during the processing of natural stimuli.
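The sketch below illustrates the quantities discussed above on simulated data: mutual information (MI) between stimulus identity and each site's binned response, and pairwise redundancy defined as I(S;R1) + I(S;R2) - I(S;R1,R2). It uses a plug-in estimator without the bias corrections a real analysis would need, and the response model is hypothetical.

```python
# Minimal sketch (simulated data, plug-in MI estimator): stimulus information and redundancy.
import numpy as np
from sklearn.metrics import mutual_info_score            # returns MI in nats

def mi_bits(x, y):
    return mutual_info_score(x, y) / np.log(2)

rng = np.random.default_rng(3)
n_stim, n_trials = 4, 400
stim = rng.integers(0, n_stim, n_trials)

# Two hypothetical cortical sites whose binned rates depend on the stimulus plus shared noise;
# the shared noise is what tends to make their stimulus information redundant.
shared = rng.normal(0, 1.0, n_trials)
r1 = np.digitize(2.0 * stim + shared + rng.normal(0, 1.0, n_trials), bins=np.arange(0, 8))
r2 = np.digitize(2.0 * stim + shared + rng.normal(0, 1.0, n_trials), bins=np.arange(0, 8))

joint = r1 * 10 + r2                                      # one discrete symbol per joint response
redundancy = mi_bits(stim, r1) + mi_bits(stim, r2) - mi_bits(stim, joint)
print(f"I(S;R1) = {mi_bits(stim, r1):.2f} bits, I(S;R2) = {mi_bits(stim, r2):.2f} bits, "
      f"redundancy = {redundancy:.2f} bits")
```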
40
Ranasinghe KG, Vrana WA, Matney CJ, Kilgard MP. Increasing diversity of neural responses to speech sounds across the central auditory pathway. Neuroscience 2013; 252:80-97. [PMID: 23954862 DOI: 10.1016/j.neuroscience.2013.08.005]
Abstract
Neurons at higher stations of each sensory system are responsive to feature combinations not present at lower levels. As a result, the activity of these neurons becomes less redundant than lower levels. We recorded responses to speech sounds from the inferior colliculus and the primary auditory cortex neurons of rats, and tested the hypothesis that primary auditory cortex neurons are more sensitive to combinations of multiple acoustic parameters compared to inferior colliculus neurons. We independently eliminated periodicity information, spectral information and temporal information in each consonant and vowel sound using a noise vocoder. This technique made it possible to test several key hypotheses about speech sound processing. Our results demonstrate that inferior colliculus responses are spatially arranged and primarily determined by the spectral energy and the fundamental frequency of speech, whereas primary auditory cortex neurons generate widely distributed responses to multiple acoustic parameters, and are not strongly influenced by the fundamental frequency of speech. We found no evidence that inferior colliculus or primary auditory cortex was specialized for speech features such as voice onset time or formants. The greater diversity of responses in primary auditory cortex compared to inferior colliculus may help explain how the auditory system can identify a wide range of speech sounds across a wide range of conditions without relying on any single acoustic cue.
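A basic noise vocoder of the general kind used for such manipulations can be sketched as a filter bank that keeps each band's envelope and discards its fine structure; the band edges, filter orders, and toy input below are illustrative assumptions, not the study's implementation.

```python
# Minimal sketch of a noise vocoder: band-pass analysis, envelope extraction, noise carrier.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(signal, fs, edges_hz):
    """Replace the fine structure in each analysis band with band-limited noise."""
    rng = np.random.default_rng(4)
    b_env, a_env = butter(2, 32 / (fs / 2))                      # ~32 Hz envelope smoothing
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges_hz[:-1], edges_hz[1:]):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, signal)
        envelope = filtfilt(b_env, a_env, np.abs(hilbert(band)))  # slow amplitude envelope
        carrier = filtfilt(b, a, rng.standard_normal(signal.size))  # band-limited noise carrier
        out += envelope * carrier
    return out / (np.max(np.abs(out)) + 1e-12)

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
toy_input = np.sin(2 * np.pi * 200 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(toy_input, fs, edges_hz=[100, 400, 1000, 2500, 6000])
print(vocoded.shape)
```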
Affiliation(s)
- K G Ranasinghe: The University of Texas at Dallas, School of Behavioral and Brain Sciences, 800 West Campbell Road, GR41, Richardson, TX 75080-3021, United States
41
Pre-attentive, context-specific representation of fear memory in the auditory cortex of rat. PLoS One 2013; 8:e63655. [PMID: 23671691 PMCID: PMC3646040 DOI: 10.1371/journal.pone.0063655]
Abstract
Neural representation in the auditory cortex is rapidly modulated by both top-down attention and bottom-up stimulus properties, in order to improve perception in a given context. Learning-induced, pre-attentive map plasticity has also been studied in the anesthetized cortex; however, little attention has been paid to rapid, context-dependent modulation. We hypothesize that context-specific learning leads to pre-attentively modulated, multiplex representation in the auditory cortex. Here, we investigate map plasticity in the auditory cortices of anesthetized rats conditioned in a context-dependent manner, such that a conditioned stimulus (CS) of a 20-kHz tone and an unconditioned stimulus (US) of a mild electrical shock were associated only under a noisy auditory context, but not in silence. After the conditioning, although no distinct plasticity was found in the tonotopic map, tone-evoked responses were more noise-resistive than pre-conditioning. Yet, the conditioned group showed a reduced spread of activation to each tone with noise, but not with silence, associated with a sharpening of frequency tuning. The encoding accuracy index of neurons showed that conditioning deteriorated the accuracy of tone-frequency representations in the noisy condition at off-CS regions, but not at CS regions, suggesting that arbitrary tones around the frequency of the CS were more likely perceived as the CS in a specific context, where the CS was associated with the US. These results together demonstrate that learning-induced plasticity in the auditory cortex occurs in a context-dependent manner.
42
Centanni TM, Engineer CT, Kilgard MP. Cortical speech-evoked response patterns in multiple auditory fields are correlated with behavioral discrimination ability. J Neurophysiol 2013; 110:177-189. [PMID: 23596332 DOI: 10.1152/jn.00092.2013]
Abstract
Different speech sounds evoke unique patterns of activity in primary auditory cortex (A1). Behavioral discrimination by rats is well correlated with the distinctness of the A1 patterns evoked by individual consonants, but only when precise spike timing is preserved. In this study we recorded the speech-evoked responses in the primary, anterior, ventral, and posterior auditory fields of the rat and evaluated whether activity in these fields is better correlated with speech discrimination ability when spike timing information is included or eliminated. Spike timing information improved consonant discrimination in all four of the auditory fields examined. Behavioral discrimination was significantly correlated with neural discrimination in all four auditory fields. The diversity of speech responses across recording sites was greater in posterior and ventral auditory fields compared with A1 and anterior auditory fields. These results suggest that, while the various auditory fields of the rat process speech sounds differently, neural activity in each field could be used to distinguish between consonant sounds with accuracy that closely parallels behavioral discrimination. Earlier observations in the visual and somatosensory systems that cortical neurons do not rely on spike timing should be reevaluated with more complex natural stimuli to determine whether spike timing contributes to sensory encoding.
Affiliation(s)
- T M Centanni: School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, Texas 75080, USA
43
Gaucher Q, Huetz C, Gourévitch B, Laudanski J, Occelli F, Edeline JM. How do auditory cortex neurons represent communication sounds? Hear Res 2013; 305:102-112. [PMID: 23603138 DOI: 10.1016/j.heares.2013.03.011]
Abstract
A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalizations envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristic of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
Affiliation(s)
- Quentin Gaucher: Centre de Neurosciences Paris-Sud (CNPS), CNRS UMR 8195, Université Paris-Sud, Bâtiment 446, 91405 Orsay cedex, France
44
Perez CA, Engineer CT, Jakkamsetti V, Carraway RS, Perry MS, Kilgard MP. Different timescales for the neural coding of consonant and vowel sounds. Cereb Cortex 2013; 23:670-683. [PMID: 22426334 PMCID: PMC3563339 DOI: 10.1093/cercor/bhs045]
Abstract
Psychophysical, clinical, and imaging evidence suggests that consonant and vowel sounds have distinct neural representations. This study tests the hypothesis that consonant and vowel sounds are represented on different timescales within the same population of neurons by comparing behavioral discrimination with neural discrimination based on activity recorded in rat inferior colliculus and primary auditory cortex. Performance on 9 vowel discrimination tasks was highly correlated with neural discrimination based on spike count and was not correlated when spike timing was preserved. In contrast, performance on 11 consonant discrimination tasks was highly correlated with neural discrimination when spike timing was preserved and not when spike timing was eliminated. These results suggest that in the early stages of auditory processing, spike count encodes vowel sounds and spike timing encodes consonant sounds. These distinct coding strategies likely contribute to the robust nature of speech sound representations and may help explain some aspects of developmental and acquired speech processing disorders.
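The timing-versus-count contrast drawn here can be made concrete with a small simulation: the sketch below classifies two stimuli whose responses carry the same mean spike count but different latencies, once with the full PSTH and once with the total count only. The response model and classifier are illustrative, not the study's analysis.

```python
# Minimal sketch (simulated responses): neural discrimination with spike timing preserved
# (full peri-stimulus time histogram) versus eliminated (total spike count only).
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_bins = 200, 40

# Two stimuli with identical mean spike counts but different response latencies.
lam_a = np.where(np.arange(n_bins) < 10, 2.0, 0.2)
lam_b = np.roll(lam_a, 8)
trials = np.stack([rng.poisson(lam, (n_trials, n_bins)) for lam in (lam_a, lam_b)])

def nearest_template_accuracy(features):
    """Hold-out nearest-template classification over the two stimuli."""
    train, test = features[:, :100], features[:, 100:]
    templates = train.mean(axis=1)                               # one template per stimulus
    hits = 0
    for s in range(2):
        dist = ((test[s][:, None] - templates[None]) ** 2).sum(axis=-1)
        hits += (np.argmin(dist, axis=1) == s).sum()
    return hits / (2 * test.shape[1])

with_timing = nearest_template_accuracy(trials.astype(float))                       # PSTH preserved
count_only = nearest_template_accuracy(trials.sum(axis=-1, keepdims=True).astype(float))
print(f"timing preserved: {with_timing:.2f}   spike count only: {count_only:.2f}")
```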
Affiliation(s)
- Claudia A Perez
- Cognition and Neuroscience Program, School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX 75080, USA
|
45
|
Centanni TM, Booker AB, Sloan AM, Chen F, Maher BJ, Carraway RS, Khodaparast N, Rennaker R, LoTurco JJ, Kilgard MP. Knockdown of the dyslexia-associated gene Kiaa0319 impairs temporal responses to speech stimuli in rat primary auditory cortex. Cereb Cortex 2013; 24:1753-66. [PMID: 23395846 DOI: 10.1093/cercor/bht028] [Citation(s) in RCA: 86] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023] Open
Abstract
One in 15 school-age children has dyslexia, which is characterized by phoneme-processing problems and difficulty learning to read. Dyslexia is associated with mutations in the gene KIAA0319. It is not known whether reduced expression of KIAA0319 can degrade the brain's ability to process phonemes. In the current study, we used RNA interference (RNAi) to reduce expression of Kiaa0319 (the rat homolog of the human gene KIAA0319) and evaluate the effect in a rat model of phoneme discrimination. Speech discrimination thresholds in normal rats are nearly identical to human thresholds. We recorded multiunit neural responses to isolated speech sounds in primary auditory cortex (A1) of rats that received in utero RNAi of Kiaa0319. Reduced expression of Kiaa0319 increased the trial-by-trial variability of speech responses and reduced the ability of A1 neurons to discriminate speech sounds. Intracellular recordings from affected neurons revealed that reduced expression of Kiaa0319 increased neural excitability and input resistance. These results provide the first evidence that decreased expression of the dyslexia-associated gene Kiaa0319 can alter cortical responses and impair phoneme processing in auditory cortex.
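The abstract does not state which variability metric was used, so the sketch below simply shows one common way to express trial-by-trial variability: the Fano factor of spike counts across repeated presentations of the same speech sound (variance divided by mean). The numbers are invented; the point is only that two neurons with similar mean rates can differ sharply in reliability.

```python
# Minimal sketch (hypothetical numbers): Fano factor as a trial-by-trial variability index.
import numpy as np

def fano_factor(spike_counts):
    """Variance / mean of spike counts across repeated presentations of one sound."""
    counts = np.asarray(spike_counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

control_counts   = [12, 11, 13, 12, 12, 11, 13, 12]  # reliable responses
knockdown_counts = [4, 19, 9, 15, 2, 22, 7, 16]      # similar mean rate, far less reliable

print(round(fano_factor(control_counts), 2))    # near 0: highly reliable
print(round(fano_factor(knockdown_counts), 2))  # well above 1: highly variable
```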
Affiliation(s)
- T M Centanni
- School of Behavioral and Brain Sciences, University of Texas at Dallas
- A M Sloan
- School of Behavioral and Brain Sciences, University of Texas at Dallas
- F Chen
- University of Connecticut
- R S Carraway
- School of Behavioral and Brain Sciences, University of Texas at Dallas
- N Khodaparast
- School of Behavioral and Brain Sciences, University of Texas at Dallas
- R Rennaker
- School of Behavioral and Brain Sciences, University of Texas at Dallas
- M P Kilgard
- School of Behavioral and Brain Sciences, University of Texas at Dallas
|
46
|
Ma H, Qin L, Dong C, Zhong R, Sato Y. Comparison of neural responses to cat meows and human vowels in the anterior and posterior auditory field of awake cats. PLoS One 2013; 8:e52942. [PMID: 23301004 PMCID: PMC3534661 DOI: 10.1371/journal.pone.0052942] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2012] [Accepted: 11/23/2012] [Indexed: 11/19/2022] Open
Abstract
For humans and animals, the ability to discriminate speech and conspecific vocalizations is an important function of the auditory system. To reveal the underlying neural mechanism, many electrophysiological studies have investigated the responses of auditory cortex neurons to conspecific vocalizations in monkeys. The data suggest that vocalizations may be hierarchically processed along an anterior/ventral stream from the primary auditory cortex (A1) to the ventral prefrontal cortex. To date, the organization of vocalization processing has not been well investigated in the auditory cortex of other mammals. In this study, we examined the spiking activity of single neurons in two early auditory cortical regions with different anteroposterior locations, the anterior auditory field (AAF) and the posterior auditory field (PAF), in awake cats passively listening to forward and backward conspecific calls (meows) and human vowels. We found that neural response patterns in PAF were more complex and had longer latencies than those in AAF. Selectivity for different vocalizations based on mean firing rate was low in both AAF and PAF and did not differ significantly between them; however, more information about the vocalizations was transmitted when temporal response profiles were taken into account, and the maximum information transmitted by PAF neurons was higher than that transmitted by AAF neurons. Discrimination accuracy based on the activity of an ensemble of PAF neurons was also better than that based on AAF neurons. Our results suggest that AAF and PAF are similar with regard to which vocalizations they represent but differ in the way they represent these vocalizations, and that there may be a complex processing stream between them.
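The "transmitted information" comparison can be illustrated with a small worked example. The sketch below is an assumed, generic approach rather than the authors' code: decode the stimulus from single-trial responses, summarize the outcome as a confusion matrix, and compute the mutual information in bits between the presented and the decoded call. The two confusion matrices are invented to contrast a rate-based and a timing-based readout.

```python
# Minimal sketch: transmitted information from a decoder's confusion matrix.
import numpy as np

def mutual_information_bits(confusion):
    """Mutual information (bits) between presented (rows) and decoded (columns) stimuli."""
    p = np.asarray(confusion, dtype=float)
    p = p / p.sum()
    px = p.sum(axis=1, keepdims=True)  # presented-stimulus marginal
    py = p.sum(axis=0, keepdims=True)  # decoded-stimulus marginal
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

# Rows: presented call; columns: decoded call (counts of trials, made up for illustration)
rate_based_confusion = [[ 8,  4,  4,  4],
                        [ 4,  8,  4,  4],
                        [ 4,  4,  8,  4],
                        [ 4,  4,  4,  8]]
timing_based_confusion = [[16,  2,  1,  1],
                          [ 2, 15,  2,  1],
                          [ 1,  2, 16,  1],
                          [ 1,  1,  1, 17]]

print(round(mutual_information_bits(rate_based_confusion), 2))    # little information
print(round(mutual_information_bits(timing_based_confusion), 2))  # much more information
```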
Affiliation(s)
- Hanlu Ma
- Department of Physiology, Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Chuo, Yamanashi, Japan
- Ling Qin
- Department of Physiology, Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Chuo, Yamanashi, Japan
- Department of Physiology, China Medical University, Shenyang, People’s Republic of China
- Chao Dong
- Department of Physiology, Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Chuo, Yamanashi, Japan
- Renjia Zhong
- Department of Physiology, Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Chuo, Yamanashi, Japan
- Department of Physiology, China Medical University, Shenyang, People’s Republic of China
- Yu Sato
- Department of Physiology, Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Chuo, Yamanashi, Japan
|
47
|
Ranasinghe KG, Carraway RS, Borland MS, Moreno NA, Hanacik EA, Miller RS, Kilgard MP. Speech discrimination after early exposure to pulsed-noise or speech. Hear Res 2012; 289:1-12. [PMID: 22575207 DOI: 10.1016/j.heares.2012.04.020] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/12/2012] [Revised: 04/17/2012] [Accepted: 04/24/2012] [Indexed: 10/28/2022]
Abstract
Early experience of structured inputs and complex sound features generates lasting changes in the tonotopy and receptive field properties of primary auditory cortex (A1). In this study, we tested whether these changes are severe enough to alter neural representations and behavioral discrimination of speech. We exposed two groups of rat pups to pulsed noise or speech during the critical period of auditory development. Both groups of rats were trained to discriminate speech sounds as young adults, and neural responses were then recorded from A1 under anesthesia. The representation of speech in A1 and the behavioral discrimination of speech remained robust despite the altered spectral and temporal characteristics of A1 neurons caused by pulsed-noise exposure. Passive exposure to speech during early development provided no added advantage in speech sound processing. Speech training increased A1 neuronal firing rates for speech stimuli in naïve rats, but did not increase responses in rats that had experienced early exposure to pulsed noise or speech. Our results suggest that speech sound processing is resistant to changes in simple neural response properties caused by manipulating the early acoustic environment.
Affiliation(s)
- Kamalini G Ranasinghe
- School of Behavioral and Brain Sciences, GR41 The University of Texas at Dallas, 800 West Campbell Road, Richardson, TX 75080-3021, USA.
|
48
|
Neural mechanisms supporting robust discrimination of spectrally and temporally degraded speech. J Assoc Res Otolaryngol 2012; 13:527-42. [PMID: 22549175 DOI: 10.1007/s10162-012-0328-1] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2011] [Accepted: 03/26/2012] [Indexed: 10/28/2022] Open
Abstract
Cochlear implants provide good speech discrimination ability despite the highly limited amount of information they transmit compared with the normal cochlea. Noise-vocoded speech, which simulates cochlear implant processing in normal-hearing listeners, has demonstrated that spectrally and temporally degraded speech contains sufficient cues for accurate speech discrimination. We hypothesized that the neural activity patterns generated in primary auditory cortex by spectrally and temporally degraded speech sounds would account for the robust behavioral discrimination of speech. We examined the behavioral discrimination of noise-vocoded consonants and vowels by rats and recorded neural activity patterns from rat primary auditory cortex (A1) for the same sounds. We report the first evidence of behavioral discrimination of degraded speech sounds by an animal model. Our results show that rats are able to accurately discriminate both consonant and vowel sounds even after significant spectral and temporal degradation. The degree of degradation that rats can tolerate is comparable to that tolerated by human listeners. We observed that neural discrimination based on the spatiotemporal patterns (spike timing) of A1 neurons is highly correlated with behavioral discrimination of consonants, and that neural discrimination based on spatial activity patterns (spike count) of A1 neurons is highly correlated with behavioral discrimination of vowels. The results of the current study indicate that speech discrimination is resistant to degradation as long as the degraded sounds generate distinct patterns of neural activity.
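For readers unfamiliar with noise vocoding, the sketch below implements a simplified, generic noise vocoder of the kind used to simulate cochlear implant processing; it is an assumption-laden illustration, not the processing chain used in the study. Channel count, band edges, and envelope cutoff are arbitrary illustrative choices.

```python
# Minimal noise-vocoder sketch: band-split, envelope-extract, modulate band-limited noise.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=7000.0, env_cutoff=160.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    env_sos = butter(4, env_cutoff, btype="lowpass", fs=fs, output="sos")
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)
        envelope = sosfiltfilt(env_sos, np.abs(hilbert(band)))          # smoothed envelope
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(signal)))  # band-limited noise
        out += np.clip(envelope, 0, None) * carrier
    return out

# Usage with a synthetic vowel-like test tone (a real speech waveform would be loaded instead)
fs = 16000
t = np.arange(fs) / fs
test = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
vocoded = noise_vocode(test, fs, n_channels=4)
print(vocoded.shape)
```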
|
49
|
Engineer ND, Engineer CT, Reed AC, Pandya PK, Jakkamsetti V, Moucha R, Kilgard MP. Inverted-U function relating cortical plasticity and task difficulty. Neuroscience 2012; 205:81-90. [PMID: 22249158 DOI: 10.1016/j.neuroscience.2011.12.056] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2011] [Revised: 12/23/2011] [Accepted: 12/28/2011] [Indexed: 11/29/2022]
Abstract
Many psychological and physiological studies with simple stimuli have suggested that perceptual learning specifically enhances the response of primary sensory cortex to task-relevant stimuli. The aim of this study was to determine whether auditory discrimination training on complex tasks enhances primary auditory cortex responses to a target sequence relative to non-target and novel sequences. We collected responses from more than 2000 sites in 31 rats trained on one of six discrimination tasks that differed primarily in the similarity of the target and distractor sequences. Unlike training with simple stimuli, long-term training with complex stimuli did not generate target-specific enhancement in any of the groups. Instead, cortical receptive field size decreased, latency decreased, and paired pulse depression decreased in rats trained on the tasks of intermediate difficulty, whereas tasks that were too easy or too difficult either did not alter or degraded cortical responses. These results suggest an inverted-U function relating neural plasticity and task difficulty.
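The "inverted-U" relation can be made explicit with a toy quadratic fit. The numbers below are invented solely to illustrate the shape of the relation reported above; they are not data from the study.

```python
# Minimal sketch: fit an inverted-U (quadratic) relation between task difficulty and plasticity.
import numpy as np

difficulty = np.array([1, 2, 3, 4, 5, 6], dtype=float)  # easy -> hard (arbitrary units)
plasticity = np.array([0.1, 0.6, 1.1, 1.0, 0.5, 0.0])   # hypothetical plasticity index

a, b, c = np.polyfit(difficulty, plasticity, 2)          # fit y = a*x^2 + b*x + c
peak_difficulty = -b / (2 * a)                           # vertex of the parabola

print(f"quadratic coefficient a = {a:.2f} (negative -> inverted U)")
print(f"plasticity peaks near difficulty {peak_difficulty:.1f}")
```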
Affiliation(s)
- N D Engineer
- School of Behavioral and Brain Sciences, University of Texas at Dallas, 800 W. Campbell Road, Richardson, TX 75080, USA.
|