1
Centanni TM, Gunderson LPK, Parra M. Use of a predictor cue during a speech sound discrimination task in a Cntnap2 knockout rat model of autism. bioRxiv 2024:2024.12.04.626861. PMID: 39677787; PMCID: PMC11643114; DOI: 10.1101/2024.12.04.626861.
Abstract
Autism is a common neurodevelopmental disorder that, despite its complex etiology, is marked by deficits in prediction that manifest in a variety of domains including social interactions, communication, and movement. The tendency of individuals with autism to focus on predictable schedules and interests that contain patterns and rules highlights the likely involvement of the cerebellum in this disorder. One candidate autism gene is contactin-associated protein 2 (CNTNAP2), and variants in this gene are associated with sensory deficits and anatomical differences. It is unknown, however, whether this gene directly impacts the brain's ability to make and evaluate predictions about future events. The current study was designed to answer this question by training a genetic knockout rat on a rapid speech sound discrimination task. Rats with Cntnap2 knockout (KO) and their littermate wildtype controls (WT) were trained on a validated rapid speech sound discrimination task that contained unpredictable and predictable targets. We found that although both genotype groups learned the task in both unpredictable and predictable conditions, the KO rats responded more often to distractors during training as well as to the target sound during the predictable testing conditions compared to the WT group. There were only minor effects of sex on performance and only in the unpredictable condition. The current results provide preliminary evidence that removal of this candidate autism gene may interfere with the learning of unpredictable scenarios and enhance reliance on predictability. Future research is needed to probe the neural anatomy and function that drives this effect.
Affiliation(s)
- Tracy M. Centanni
- Department of Psychology, Texas Christian University, Fort Worth, TX 76129
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, FL 32610
- Monica Parra
- Department of Psychology, Texas Christian University, Fort Worth, TX 76129
2
Gunderson LPK, Brice K, Parra M, Engelhart AS, Centanni TM. A novel paradigm for measuring prediction abilities in a rat model using a speech-sound discrimination task. Behav Brain Res 2024;472:115143. PMID: 38986956; DOI: 10.1016/j.bbr.2024.115143.
Abstract
The ability to predict and respond to upcoming stimuli is a critical skill for all animals, including humans. Prediction operates largely below conscious awareness to allow an individual to recall previously encountered stimuli and prepare an appropriate response, especially in language. The ability to predict upcoming words within typical speech patterns aids fluent comprehension, as conversational speech occurs quickly. Individuals with certain neurodevelopmental disorders such as autism and dyslexia have deficits in their ability to generate and use predictions. Rodent models are often used to investigate specific aspects of these disorders, but there is no existing behavioral paradigm that can assess prediction capabilities with complex stimuli like speech sounds. Thus, the present study modified an existing rapid speech sound discrimination paradigm to assess whether rats can form predictions of upcoming speech sound stimuli and utilize them to improve task performance. We replicated prior work showing that rats can discriminate between speech sounds presented at rapid rates. We also saw that rats responded exclusively to the target at slow speeds but began responding to the predictive cue in anticipation of the target as the speed increased, suggesting that they learned the predictive value of the cue and adjusted their behavior accordingly. This prediction task will be useful in assessing prediction deficits in rat models of various neurodevelopmental disorders through the manipulation of both genetic and environmental factors.
Affiliation(s)
- Logun P K Gunderson
- Department of Psychology, Texas Christian University, Fort Worth, TX 76129, United States
- Kelly Brice
- Department of Psychology, Texas Christian University, Fort Worth, TX 76129, United States
- Monica Parra
- Department of Psychology, Texas Christian University, Fort Worth, TX 76129, United States
- Abby S Engelhart
- Department of Psychology, Texas Christian University, Fort Worth, TX 76129, United States
- Tracy M Centanni
- Department of Psychology, Texas Christian University, Fort Worth, TX 76129, United States; Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, FL 32610, United States.
3
Mohn JL, Baese-Berk MM, Jaramillo S. Selectivity to acoustic features of human speech in the auditory cortex of the mouse. Hear Res 2024;441:108920. PMID: 38029503; PMCID: PMC10787375; DOI: 10.1016/j.heares.2023.108920.
Abstract
A better understanding of the neural mechanisms of speech processing can have a major impact in the development of strategies for language learning and in addressing disorders that affect speech comprehension. Technical limitations in research with human subjects hinder a comprehensive exploration of these processes, making animal models essential for advancing the characterization of how neural circuits make speech perception possible. Here, we investigated the mouse as a model organism for studying speech processing and explored whether distinct regions of the mouse auditory cortex are sensitive to specific acoustic features of speech. We found that mice can learn to categorize frequency-shifted human speech sounds based on differences in formant transitions (FT) and voice onset time (VOT). Moreover, neurons across various auditory cortical regions were selective to these speech features, with a higher proportion of speech-selective neurons in the dorso-posterior region. Last, many of these neurons displayed mixed-selectivity for both features, an attribute that was most common in dorsal regions of the auditory cortex. Our results demonstrate that the mouse serves as a valuable model for studying the detailed mechanisms of speech feature encoding and neural plasticity during speech-sound learning.
Affiliation(s)
- Jennifer L Mohn
- Institute of Neuroscience, University of Oregon, Eugene, OR 97403, United States of America
- Melissa M Baese-Berk
- Department of Linguistics, University of Oregon, Eugene, OR 97403, United States of America; Department of Linguistics, University of Chicago, Chicago, IL 60637, United States of America
- Santiago Jaramillo
- Institute of Neuroscience, University of Oregon, Eugene, OR 97403, United States of America.
4
Mohn JL, Baese-Berk MM, Jaramillo S. Selectivity to acoustic features of human speech in the auditory cortex of the mouse. bioRxiv 2023:2023.09.20.558699. PMID: 37790479; PMCID: PMC10542132; DOI: 10.1101/2023.09.20.558699.
Abstract
A better understanding of the neural mechanisms of speech processing can have a major impact in the development of strategies for language learning and in addressing disorders that affect speech comprehension. Technical limitations in research with human subjects hinder a comprehensive exploration of these processes, making animal models essential for advancing the characterization of how neural circuits make speech perception possible. Here, we investigated the mouse as a model organism for studying speech processing and explored whether distinct regions of the mouse auditory cortex are sensitive to specific acoustic features of speech. We found that mice can learn to categorize frequency-shifted human speech sounds based on differences in formant transitions (FT) and voice onset time (VOT). Moreover, neurons across various auditory cortical regions were selective to these speech features, with a higher proportion of speech-selective neurons in the dorso-posterior region. Last, many of these neurons displayed mixed-selectivity for both features, an attribute that was most common in dorsal regions of the auditory cortex. Our results demonstrate that the mouse serves as a valuable model for studying the detailed mechanisms of speech feature encoding and neural plasticity during speech-sound learning.
Affiliation(s)
- Jennifer L. Mohn
- Institute of Neuroscience, University of Oregon, Eugene, OR 97403
5
Vivaldo CA, Lee J, Shorkey M, Keerthy A, Rothschild G. Auditory cortex ensembles jointly encode sound and locomotion speed to support sound perception during movement. PLoS Biol 2023;21:e3002277. PMID: 37651461; PMCID: PMC10499203; DOI: 10.1371/journal.pbio.3002277.
Abstract
The ability to process and act upon incoming sounds during locomotion is critical for survival and adaptive behavior. Despite the established role that the auditory cortex (AC) plays in behavior- and context-dependent sound processing, previous studies have found that auditory cortical activity is on average suppressed during locomotion as compared to immobility. While suppression of auditory cortical responses to self-generated sounds results from corollary discharge, which weakens responses to predictable sounds, the functional role of weaker responses to unpredictable external sounds during locomotion remains unclear. In particular, whether suppression of external sound-evoked responses during locomotion reflects reduced involvement of the AC in sound processing or whether it results from masking by an alternative neural computation in this state remains unresolved. Here, we tested the hypothesis that rather than simple inhibition, reduced sound-evoked responses during locomotion reflect a tradeoff with the emergence of explicit and reliable coding of locomotion velocity. To test this hypothesis, we first used neural inactivation in behaving mice and found that the AC plays a critical role in sound-guided behavior during locomotion. To investigate the nature of this processing, we used two-photon calcium imaging of local excitatory auditory cortical neural populations in awake mice. We found that locomotion had diverse influences on activity of different neurons, with a net suppression of baseline-subtracted sound-evoked responses and neural stimulus detection, consistent with previous studies. Importantly, we found that the net inhibitory effect of locomotion on baseline-subtracted sound-evoked responses was strongly shaped by elevated ongoing activity that compressed the response dynamic range, and that rather than reflecting enhanced "noise," this ongoing activity reliably encoded the animal's locomotion speed. Decoding analyses revealed that locomotion speed and sound are robustly co-encoded by auditory cortical ensemble activity. Finally, we found consistent patterns of joint coding of sound and locomotion speed in electrophysiologically recorded activity in freely moving rats. Together, our data suggest that rather than being suppressed by locomotion, auditory cortical ensembles explicitly encode it alongside sound information to support sound perception during locomotion.
Affiliation(s)
- Carlos Arturo Vivaldo
- Department of Psychology, University of Michigan, Ann Arbor, Michigan, United States of America
- Joonyeup Lee
- Department of Psychology, University of Michigan, Ann Arbor, Michigan, United States of America
- MaryClaire Shorkey
- Department of Psychology, University of Michigan, Ann Arbor, Michigan, United States of America
- Ajay Keerthy
- Department of Psychology, University of Michigan, Ann Arbor, Michigan, United States of America
- Gideon Rothschild
- Department of Psychology, University of Michigan, Ann Arbor, Michigan, United States of America
- Kresge Hearing Research Institute and Department of Otolaryngology—Head and Neck Surgery, University of Michigan, Ann Arbor, Michigan, United States of America
6
Mahon E, Lachman ME. Voice biomarkers as indicators of cognitive changes in middle and later adulthood. Neurobiol Aging 2022;119:22-35. PMID: 35964541; PMCID: PMC9487188; DOI: 10.1016/j.neurobiolaging.2022.06.010.
Abstract
Voice prosody measures have been linked with Alzheimer's disease (AD), but it is unclear whether they are associated with normal cognitive aging. We assessed relationships between voice measures and 10-year cognitive changes in the MIDUS national sample of middle-aged and older adults ages 42-92, with a mean age of 64.09 (standard deviation = 11.23) at the second wave. Seven cognitive tests were assessed in 2003-2004 (Wave 2) and 2013-2014 (Wave 3). Voice measures were collected at Wave 3 (N = 2585) from audio recordings of the cognitive interviews. Analyses controlled for age, education, depressive symptoms, and health. As predicted, higher jitter was associated with greater declines in episodic memory, verbal fluency, and attention switching. Lower pulse was related to greater decline in episodic memory, and fewer voice breaks were related to greater declines in episodic memory and verbal fluency, although the direction of these effects was contrary to hypotheses. Findings suggest that voice biomarkers may offer a promising approach for early detection of risk factors for cognitive impairment or AD.
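For readers unfamiliar with the prosody measures named above, local jitter is conventionally defined as the mean absolute difference between consecutive glottal-cycle periods divided by the mean period. The sketch below illustrates only that textbook definition; the exact extraction pipeline used by Mahon and Lachman is not specified here, and the period values are hypothetical.

```python
import numpy as np

def local_jitter(periods):
    """Textbook local jitter: mean absolute difference between consecutive
    glottal-cycle periods, divided by the mean period."""
    periods = np.asarray(periods, dtype=float)
    return np.abs(np.diff(periods)).mean() / periods.mean()

# Hypothetical pitch periods (in seconds) extracted from a voice recording
example_periods = [0.0102, 0.0099, 0.0101, 0.0103, 0.0098]
print(f"local jitter: {local_jitter(example_periods):.2%}")  # roughly 3% here
```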
Affiliation(s)
- Elizabeth Mahon
- Brandeis University, Department of Psychology, Waltham, MA, USA.
7
Slonina ZA, Poole KC, Bizley JK. What can we learn from inactivation studies? Lessons from auditory cortex. Trends Neurosci 2021;45:64-77. PMID: 34799134; PMCID: PMC8897194; DOI: 10.1016/j.tins.2021.10.005.
Abstract
Inactivation experiments in auditory cortex (AC) produce widely varying results that complicate interpretations regarding the precise role of AC in auditory perception and ensuing behaviour. The advent of optogenetic methods in neuroscience offers previously unachievable insight into the mechanisms transforming brain activity into behaviour. With a view to aiding the design and interpretation of future studies in and outside AC, here we discuss the methodological challenges faced in manipulating neural activity. While considering AC’s role in auditory behaviour through the prism of inactivation experiments, we consider the factors that confound the interpretation of the effects of inactivation on behaviour, including the species, the type of inactivation, the behavioural task employed, and the exact location of the inactivation. Wide variation in the outcome of auditory cortex inactivation has been an impediment to clear conclusions regarding the roles of the auditory cortex in behaviour. Inactivation methods differ in their efficacy and specificity. The likelihood of observing a behavioural deficit is additionally influenced by factors such as the species being used, task design and reward. A synthesis of previous results suggests that auditory cortex involvement is critical for tasks that require integrating across multiple stimulus features, and less likely to be critical for simple feature discriminations. New methods of neural silencing provide opportunities for spatially and temporally precise manipulation of activity, allowing perturbation of individual subfields and specific circuits.
8
Yao JD, Sanes DH. Temporal Encoding is Required for Categorization, But Not Discrimination. Cereb Cortex 2021;31:2886-2897. PMID: 33429423; DOI: 10.1093/cercor/bhaa396.
Abstract
Core auditory cortex (AC) neurons encode slow fluctuations of acoustic stimuli with temporally patterned activity. However, whether temporal encoding is necessary to explain auditory perceptual skills remains uncertain. Here, we recorded from gerbil AC neurons while the animals discriminated between a 4-Hz amplitude-modulated (AM) broadband noise and AM rates >4 Hz. We found that a proportion of neurons possessed neural thresholds based on spike pattern or spike count that were better than the recorded session's behavioral threshold, suggesting that spike count could provide sufficient information for this perceptual task. A population decoder that relied on temporal information outperformed a decoder that relied on spike count alone, but the spike count decoder still remained sufficient to explain average behavioral performance. This leaves open the possibility that more demanding perceptual judgments require temporal information. Thus, we asked whether accurate classification of different AM rates between 4 and 12 Hz required the information contained in AC temporal discharge patterns. Indeed, accurate classification of these AM stimuli depended on the inclusion of temporal information rather than spike count alone. Overall, our results compare two different representations of time-varying acoustic features that can be accessed by downstream circuits required for perceptual judgments.
Affiliation(s)
- Justin D Yao
- Center for Neural Science, New York University, New York, NY 10003, USA
- Dan H Sanes
- Center for Neural Science, New York University, New York, NY 10003, USA; Department of Psychology, New York University, New York, NY 10003, USA; Department of Biology, New York University, New York, NY 10003, USA; Neuroscience Institute, NYU Langone Medical Center, New York University, New York, NY 10016, USA
9
O’Sullivan C, Weible AP, Wehr M. Disruption of Early or Late Epochs of Auditory Cortical Activity Impairs Speech Discrimination in Mice. Front Neurosci 2020;13:1394. PMID: 31998064; PMCID: PMC6965026; DOI: 10.3389/fnins.2019.01394.
Abstract
Speech evokes robust activity in auditory cortex, which contains information over a wide range of spatial and temporal scales. It remains unclear which components of these neural representations are causally involved in the perception and processing of speech sounds. Here we compared the relative importance of early and late speech-evoked activity for consonant discrimination. We trained mice to discriminate the initial consonants in spoken words, and then tested the effect of optogenetically suppressing different temporal windows of speech-evoked activity in auditory cortex. We found that both early and late suppression disrupted performance equivalently. These results suggest that mice are impaired at recognizing either type of disrupted representation because it differs from those learned in training.
Affiliation(s)
- Conor O’Sullivan
- Institute of Neuroscience, University of Oregon, Eugene, OR, United States
- Department of Biology, University of Oregon, Eugene, OR, United States
- Aldis P. Weible
- Institute of Neuroscience, University of Oregon, Eugene, OR, United States
- Michael Wehr
- Institute of Neuroscience, University of Oregon, Eugene, OR, United States
- Department of Psychology, University of Oregon, Eugene, OR, United States
- Correspondence: Michael Wehr
10
Auditory Cortex Contributes to Discrimination of Pure Tones. eNeuro 2019;6:ENEURO.0340-19.2019. PMID: 31591138; PMCID: PMC6795560; DOI: 10.1523/eneuro.0340-19.2019.
Abstract
The auditory cortex is topographically organized for sound frequency and contains highly selective frequency-tuned neurons, yet the role of auditory cortex in the perception of sound frequency remains unclear. Lesion studies have shown that auditory cortex is not essential for frequency discrimination of pure tones. However, transient pharmacological inactivation has been reported to impair frequency discrimination. This suggests the possibility that successful tone discrimination after recovery from lesion surgery could arise from long-term reorganization or plasticity of compensatory pathways. Here, we compared the effects of lesions and optogenetic suppression of auditory cortex on frequency discrimination in mice. We found that transient bilateral optogenetic suppression partially but significantly impaired discrimination performance. In contrast, bilateral electrolytic lesions of auditory cortex had no effect on performance of the identical task, even when tested only 4 h after lesion. This suggests that when auditory cortex is destroyed, an alternative pathway is almost immediately adequate for mediating frequency discrimination. Yet this alternative pathway is insufficient for task performance when auditory cortex is intact but has its activity suppressed. These results indicate a fundamental difference between the effects of brain lesions and optogenetic suppression, and suggest the existence of a rapid compensatory process possibly induced by injury.
11
de Hoz L, Gierej D, Lioudyno V, Jaworski J, Blazejczyk M, Cruces-Solís H, Beroun A, Lebitko T, Nikolaev T, Knapska E, Nelken I, Kaczmarek L. Blocking c-Fos Expression Reveals the Role of Auditory Cortex Plasticity in Sound Frequency Discrimination Learning. Cereb Cortex 2019;28:1645-1655. PMID: 28334281; DOI: 10.1093/cercor/bhx060.
Abstract
The behavioral changes that comprise operant learning are associated with plasticity in early sensory cortices as well as with modulation of gene expression, but the connection between the behavioral, electrophysiological, and molecular changes is only partially understood. We specifically manipulated c-Fos expression, a hallmark of learning-induced synaptic plasticity, in auditory cortex of adult mice using a novel approach based on RNA interference. Locally blocking c-Fos expression caused a specific behavioral deficit in a sound discrimination task, in parallel with decreased cortical experience-dependent plasticity, without affecting baseline excitability or basic auditory processing. Thus, c-Fos-dependent experience-dependent cortical plasticity is necessary for frequency discrimination in an operant behavioral task. Our results connect behavioral, molecular and physiological changes and demonstrate a role of c-Fos in experience-dependent plasticity and learning.
Affiliation(s)
- Livia de Hoz
- Department of Neurogenetics, Max Planck Institute of Experimental Medicine, 37075 Göttingen, Germany
- Dorota Gierej
- Department of Molecular and Cellular Neurobiology, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 02-093 Warsaw, Poland; Department of Neurophysiology, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 02-093 Warsaw, Poland
- Victoria Lioudyno
- Department of Molecular and Cellular Neurobiology, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 02-093 Warsaw, Poland
- Jacek Jaworski
- Laboratory of Molecular and Cellular Neurobiology, International Institute of Molecular and Cell Biology, 02-109 Warsaw, Poland
- Magda Blazejczyk
- Laboratory of Molecular and Cellular Neurobiology, International Institute of Molecular and Cell Biology, 02-109 Warsaw, Poland
- Hugo Cruces-Solís
- Department of Neurogenetics, Max Planck Institute of Experimental Medicine, 37075 Göttingen, Germany; International Max Planck Research School for Neurosciences, Göttingen Graduate School for Neurosciences and Molecular Biosciences, 37077 Göttingen, Germany
- Anna Beroun
- Department of Molecular and Cellular Neurobiology, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 02-093 Warsaw, Poland
- Tomasz Lebitko
- Department of Molecular and Cellular Neurobiology, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 02-093 Warsaw, Poland; Department of Neurophysiology, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 02-093 Warsaw, Poland
- Tomasz Nikolaev
- Department of Neurophysiology, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 02-093 Warsaw, Poland
- Ewelina Knapska
- Department of Neurophysiology, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 02-093 Warsaw, Poland
- Israel Nelken
- Edmond and Lily Safra Center for Brain Sciences and the Department of Neurobiology, Hebrew University, 9190401 Jerusalem, Israel
- Leszek Kaczmarek
- Department of Molecular and Cellular Neurobiology, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 02-093 Warsaw, Poland
13
Knockdown of Dyslexia-Gene Dcdc2 Interferes with Speech Sound Discrimination in Continuous Streams. J Neurosci 2017;36:4895-906. PMID: 27122044; DOI: 10.1523/jneurosci.4202-15.2016.
Abstract
Dyslexia is the most common developmental language disorder and is marked by deficits in reading and phonological awareness. One theory of dyslexia suggests that the phonological awareness deficit is due to abnormal auditory processing of speech sounds. Variants in DCDC2 and several other neural migration genes are associated with dyslexia and may contribute to auditory processing deficits. In the current study, we tested the hypothesis that RNAi suppression of Dcdc2 in rats causes abnormal cortical responses to sound and impaired speech sound discrimination. Rats were subjected in utero to RNA interference targeting of the gene Dcdc2 or a scrambled sequence. Primary auditory cortex (A1) responses were acquired from 11 rats (5 with Dcdc2 RNAi; DC-) before any behavioral training. A separate group of 8 rats (3 DC-) were trained on a variety of speech sound discrimination tasks, and auditory cortex responses were acquired following training. Dcdc2 RNAi nearly eliminated the ability of rats to identify specific speech sounds from a continuous train of speech sounds but did not impair performance during discrimination of isolated speech sounds. The neural responses to speech sounds in A1 were not degraded as a function of presentation rate before training. These results suggest that A1 is not directly involved in the impaired speech discrimination caused by Dcdc2 RNAi. This result contrasts with earlier results using Kiaa0319 RNAi and suggests that different dyslexia genes may cause different deficits in the speech processing circuitry, which may explain differential responses to therapy. SIGNIFICANCE STATEMENT: Although dyslexia is diagnosed through reading difficulty, there is a great deal of variation in the phenotypes of these individuals. The underlying neural and genetic mechanisms causing these differences are still widely debated. In the current study, we demonstrate that suppression of a candidate-dyslexia gene causes deficits on tasks of rapid stimulus processing. These animals also exhibited abnormal neural plasticity after training, which may be a mechanism for why some children with dyslexia do not respond to intervention. These results are in stark contrast to our previous work with a different candidate gene, which caused a different set of deficits. Our results shed some light on possible neural and genetic mechanisms causing heterogeneity in the dyslexic population.
14
Abstract
A fundamental adaptive mechanism of auditory function is inhibitory gating (IG), which refers to the attenuation of neural responses to repeated sound stimuli. IG is drastically impaired in individuals with emotional and cognitive impairments (e.g., posttraumatic stress disorder). The objective of this study was to test whether chronic stress impairs the IG of the auditory cortex (AC). We used the standard two-tone stimulus paradigm and examined the parametric qualities of IG in the AC of rats by simultaneously recording single-unit and local field potential (LFP) signals. The main results of this study were that most of the AC neurons showed a weaker response to the second tone than to the first tone, reflecting an IG of the repeated input. A fast negative wave of the LFP showed consistent IG across the sampled AC sites, whereas a slow positive wave of the LFP had less of an IG effect. IG was diminished following chronic restraint stress at both the single-unit and LFP levels, due to an increase in the response to the second tone. This study provided new evidence that chronic stress disrupts the physiological function of the AC. Lay Summary: The effects of chronic stress on IG were investigated by recording both single-unit spike and LFP activities in the AC of rats. In normal rats, most of the single-unit and N25 LFP activities in the AC showed an IG effect. IG was diminished following chronic restraint stress at both the single-unit and LFP levels.
Affiliation(s)
- Lanlan Ma
- Department of Physiology, College of Basic Medical Science, China Medical University, Shenyang, Liaoning Province, P.R. China
- Wai Li
- Department of Physiology, College of Basic Medical Science, China Medical University, Shenyang, Liaoning Province, P.R. China
- Sibin Li
- Department of Physiology, College of Basic Medical Science, China Medical University, Shenyang, Liaoning Province, P.R. China
- Xuejiao Wang
- Department of Physiology, College of Basic Medical Science, China Medical University, Shenyang, Liaoning Province, P.R. China
- Ling Qin
- Department of Physiology, College of Basic Medical Science, China Medical University, Shenyang, Liaoning Province, P.R. China
15
Bidirectional Regulation of Innate and Learned Behaviors That Rely on Frequency Discrimination by Cortical Inhibitory Neurons. PLoS Biol 2015;13:e1002308. PMID: 26629746; PMCID: PMC4668086; DOI: 10.1371/journal.pbio.1002308.
Abstract
The ability to discriminate tones of different frequencies is fundamentally important for everyday hearing. While neurons in the primary auditory cortex (AC) respond differentially to tones of different frequencies, whether and how AC regulates auditory behaviors that rely on frequency discrimination remains poorly understood. Here, we find that the level of activity of inhibitory neurons in AC controls frequency specificity in innate and learned auditory behaviors that rely on frequency discrimination. Photoactivation of parvalbumin-positive interneurons (PVs) improved the ability of the mouse to detect a shift in tone frequency, whereas photosuppression of PVs impaired the performance. Furthermore, photosuppression of PVs during discriminative auditory fear conditioning increased generalization of conditioned response across tone frequencies, whereas PV photoactivation preserved normal specificity of learning. The observed changes in behavioral performance were correlated with bidirectional changes in the magnitude of tone-evoked responses, consistent with predictions of a model of a coupled excitatory-inhibitory cortical network. Direct photoactivation of excitatory neurons, which did not change tone-evoked response magnitude, did not affect behavioral performance in either task. Our results identify a new function for inhibition in the auditory cortex, demonstrating that it can improve or impair acuity of innate and learned auditory behaviors that rely on frequency discrimination. Hearing perception relies on our ability to tell apart the spectral content of different sounds, and to learn to use this difference to distinguish behaviorally relevant (such as dangerous and safe) sounds. Recently, we demonstrated that the auditory cortex regulates frequency discrimination acuity following associative learning. However, the neuronal circuits that underlie this modulation remain unknown. In the auditory cortex, excitatory neurons serve the dominant function in transmitting information about the sensory world within and across brain areas, whereas inhibitory interneurons carry a range of modulatory functions, shaping the way information is represented and processed. Our study elucidates the function of a specific inhibitory neuronal population in sound encoding and perception. We find that interneurons in the auditory cortex, belonging to a specific class (parvalbumin-positive), modulate frequency selectivity of excitatory neurons, and regulate frequency discrimination acuity and specificity of discriminative auditory associative learning. These results expand our understanding of how specific cortical circuits contribute to innate and learned auditory behavior. Modulating the activity of a specific type of cortical neuron can either improve or impair the ability to discriminate between tones of different frequencies and to associate danger with specific sounds.
16
Jain S, Dwarkanath VM. Effect of tinnitus location on the psychoacoustic measures of hearing. Hearing, Balance and Communication 2015. DOI: 10.3109/21695717.2016.1099885.
17
Gimenez TL, Lorenc M, Jaramillo S. Adaptive categorization of sound frequency does not require the auditory cortex in rats. J Neurophysiol 2015;114:1137-45. PMID: 26156379; DOI: 10.1152/jn.00124.2015.
Abstract
A defining feature of adaptive behavior is our ability to change the way we interpret sensory stimuli depending on context. Rapid adaptation in behavior has been attributed to frontal cortical circuits, but it is not clear if sensory cortexes also play an essential role in such tasks. In this study we tested whether the auditory cortex was necessary for rapid adaptation in the interpretation of sounds. We used a two-alternative choice sound-categorization task for rats in which the boundary that separated two acoustic categories changed several times within a behavioral session. These shifts in the boundary resulted in changes in the rewarded action for a subset of stimuli. We found that extensive lesions of the auditory cortex did not impair the ability of rats to switch between categorization contingencies and sound discrimination performance was minimally impaired. Similar results were obtained after reversible inactivation of the auditory cortex with muscimol. In contrast, lesions of the auditory thalamus largely impaired discrimination performance and, as a result, the ability to modify behavior across contingencies. Thalamic lesions did not impair performance of a visual discrimination task, indicating that the effects were specific to audition and not to motor preparation or execution. These results suggest that subcortical outputs of the auditory thalamus can mediate rapid adaptation in the interpretation of sounds.
Affiliation(s)
- Tyler L Gimenez
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, Oregon
- Maja Lorenc
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York
- Santiago Jaramillo
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, Oregon
18
Behavioral and neural discrimination of speech sounds after moderate or intense noise exposure in rats. Ear Hear 2015;35:e248-61. PMID: 25072238; DOI: 10.1097/aud.0000000000000062.
Abstract
OBJECTIVES: Hearing loss is a commonly experienced disability in a variety of populations including veterans and the elderly and can often cause significant impairment in the ability to understand spoken language. In this study, we tested the hypothesis that neural and behavioral responses to speech will be differentially impaired in an animal model after two forms of hearing loss. DESIGN: Sixteen female Sprague-Dawley rats were exposed to one of two types of broadband noise which was either moderate or intense. In nine of these rats, auditory cortex recordings were taken 4 weeks after noise exposure (NE). The other seven were pretrained on a speech sound discrimination task prior to NE and were then tested on the same task after hearing loss. RESULTS: Following intense NE, rats had few neural responses to speech stimuli. These rats were able to detect speech sounds but were no longer able to discriminate between speech sounds. Following moderate NE, rats had reorganized cortical maps and altered neural responses to speech stimuli but were still able to accurately discriminate between similar speech sounds during behavioral testing. CONCLUSIONS: These results suggest that rats are able to adjust to the neural changes after moderate NE and discriminate speech sounds, but they are not able to recover behavioral abilities after intense NE. Animal models could help clarify the adaptive and pathological neural changes that contribute to speech processing in hearing-impaired populations and could be used to test potential behavioral and pharmacological therapies.
19
Abstract
Sensory function is mediated by interactions between external stimuli and intrinsic cortical dynamics that are evident in the modulation of evoked responses by cortical state. A number of recent studies across different modalities have demonstrated that the patterns of activity in neuronal populations can vary strongly between synchronized and desynchronized cortical states, i.e., in the presence or absence of intrinsically generated up and down states. Here we investigated the impact of cortical state on the population coding of tones and speech in the primary auditory cortex (A1) of gerbils, and found that responses were qualitatively different in synchronized and desynchronized cortical states. Activity in synchronized A1 was only weakly modulated by sensory input, and the spike patterns evoked by tones and speech were unreliable and constrained to a small range of patterns. In contrast, responses to tones and speech in desynchronized A1 were temporally precise and reliable across trials, and different speech tokens evoked diverse spike patterns with extremely weak noise correlations, allowing responses to be decoded with nearly perfect accuracy. Restricting the analysis of synchronized A1 to activity within up states yielded similar results, suggesting that up states are not equivalent to brief periods of desynchronization. These findings demonstrate that the representational capacity of A1 depends strongly on cortical state, and suggest that cortical state should be considered as an explicit variable in all studies of sensory processing.
20
Engineer CT, Rahebi KC, Buell EP, Fink MK, Kilgard MP. Speech training alters consonant and vowel responses in multiple auditory cortex fields. Behav Brain Res 2015;287:256-64. PMID: 25827927; DOI: 10.1016/j.bbr.2015.03.044.
Abstract
Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination.
Affiliation(s)
- Crystal T Engineer
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States.
- Kimiya C Rahebi
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States
- Elizabeth P Buell
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States
- Melyssa K Fink
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States
- Michael P Kilgard
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States
21
Engineer CT, Engineer ND, Riley JR, Seale JD, Kilgard MP. Pairing Speech Sounds With Vagus Nerve Stimulation Drives Stimulus-specific Cortical Plasticity. Brain Stimul 2015;8:637-44. PMID: 25732785; DOI: 10.1016/j.brs.2015.01.408.
Abstract
BACKGROUND: Individuals with communication disorders, such as aphasia, exhibit weak auditory cortex responses to speech sounds and language impairments. Previous studies have demonstrated that pairing vagus nerve stimulation (VNS) with tones or tone trains can enhance both the spectral and temporal processing of sounds in auditory cortex, and can be used to reverse pathological primary auditory cortex (A1) plasticity in a rodent model of chronic tinnitus. OBJECTIVE/HYPOTHESIS: We predicted that pairing VNS with speech sounds would strengthen the A1 response to the paired speech sounds. METHODS: The speech sounds 'rad' and 'lad' were paired with VNS three hundred times per day for twenty days. A1 responses to both paired and novel speech sounds were recorded 24 h after the last VNS pairing session in anesthetized rats. Response strength, latency and neurometric decoding were compared between VNS speech paired and control rats. RESULTS: Our results show that VNS paired with speech sounds strengthened the auditory cortex response to the paired sounds, but did not strengthen the amplitude of the response to novel speech sounds. Responses to the paired sounds were faster and less variable in VNS speech paired rats compared to control rats. Neural plasticity that was specific to the frequency, intensity, and temporal characteristics of the paired speech sounds resulted in enhanced neural detection. CONCLUSION: VNS speech sound pairing provides a novel method to enhance speech sound processing in the central auditory system. Delivery of VNS during speech therapy could improve outcomes in individuals with receptive language deficits.
Affiliation(s)
- Crystal T Engineer
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, USA.
- Navzer D Engineer
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, USA; MicroTransponder Inc., 2802 Flintrock Trace Suite 225, Austin, TX 78738, USA
- Jonathan R Riley
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, USA
- Jonathan D Seale
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, USA
- Michael P Kilgard
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, USA; Texas Biomedical Device Center, The University of Texas at Dallas, 800 West Campbell Road EC39, Richardson, TX 75080, USA
22
Engineer CT, Perez CA, Carraway RS, Chang KQ, Roland JL, Kilgard MP. Speech training alters tone frequency tuning in rat primary auditory cortex. Behav Brain Res 2014;258:166-78. PMID: 24344364; DOI: 10.1016/j.bbr.2013.10.021.
Abstract
Previous studies in both humans and animals have documented improved performance following discrimination training. This enhanced performance is often associated with cortical response changes. In this study, we tested the hypothesis that long-term speech training on multiple tasks can improve primary auditory cortex (A1) responses compared to rats trained on a single speech discrimination task or experimentally naïve rats. Specifically, we compared the percent of A1 responding to trained sounds, the responses to both trained and untrained sounds, receptive field properties of A1 neurons, and the neural discrimination of pairs of speech sounds in speech trained and naïve rats. Speech training led to accurate discrimination of consonant and vowel sounds, but did not enhance A1 response strength or the neural discrimination of these sounds. Speech training altered tone responses in rats trained on six speech discrimination tasks but not in rats trained on a single speech discrimination task. Extensive speech training resulted in broader frequency tuning, shorter onset latencies, a decreased driven response to tones, and caused a shift in the frequency map to favor tones in the range where speech sounds are the loudest. Both the number of trained tasks and the number of days of training strongly predict the percent of A1 responding to a low frequency tone. Rats trained on a single speech discrimination task performed less accurately than rats trained on multiple tasks and did not exhibit A1 response changes. Our results indicate that extensive speech training can reorganize the A1 frequency map, which may have downstream consequences on speech sound processing.
23
Centanni TM, Chen F, Booker AM, Engineer CT, Sloan AM, Rennaker RL, LoTurco JJ, Kilgard MP. Speech sound processing deficits and training-induced neural plasticity in rats with dyslexia gene knockdown. PLoS One 2014;9:e98439. PMID: 24871331; PMCID: PMC4037188; DOI: 10.1371/journal.pone.0098439.
Abstract
In utero RNAi of the dyslexia-associated gene Kiaa0319 in rats (KIA-) degrades cortical responses to speech sounds and increases trial-by-trial variability in onset latency. We tested the hypothesis that KIA- rats would be impaired at speech sound discrimination. KIA- rats needed twice as much training in quiet conditions to perform at control levels and remained impaired at several speech tasks. Focused training using truncated speech sounds was able to normalize speech discrimination in quiet and background noise conditions. Training also normalized trial-by-trial neural variability and temporal phase locking. Cortical activity from speech trained KIA- rats was sufficient to accurately discriminate between similar consonant sounds. These results provide the first direct evidence that assumed reduced expression of the dyslexia-associated gene KIAA0319 can cause phoneme processing impairments similar to those seen in dyslexia and that intensive behavioral therapy can eliminate these impairments.
Affiliation(s)
- Tracy M. Centanni
- School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, Texas, United States of America
- Fuyi Chen
- Physiology and Neurobiology, University of Connecticut, Storrs, Connecticut, United States of America
- Anne M. Booker
- Physiology and Neurobiology, University of Connecticut, Storrs, Connecticut, United States of America
- Crystal T. Engineer
- School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, Texas, United States of America
- Andrew M. Sloan
- School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, Texas, United States of America
- Robert L. Rennaker
- School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, Texas, United States of America
- Joseph J. LoTurco
- Physiology and Neurobiology, University of Connecticut, Storrs, Connecticut, United States of America
- Michael P. Kilgard
- School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, Texas, United States of America
24
Degraded speech sound processing in a rat model of fragile X syndrome. Brain Res 2014;1564:72-84. PMID: 24713347; DOI: 10.1016/j.brainres.2014.03.049.
Abstract
Fragile X syndrome is the most common inherited form of intellectual disability and the leading genetic cause of autism. Impaired phonological processing in fragile X syndrome interferes with the development of language skills. Although auditory cortex responses are known to be abnormal in fragile X syndrome, it is not clear how these differences impact speech sound processing. This study provides the first evidence that the cortical representation of speech sounds is impaired in Fmr1 knockout rats, despite normal speech discrimination behavior. Evoked potentials and spiking activity in response to speech sounds, noise burst trains, and tones were significantly degraded in primary auditory cortex, anterior auditory field and the ventral auditory field. Neurometric analysis of speech evoked activity using a pattern classifier confirmed that activity in these fields contains significantly less information about speech sound identity in Fmr1 knockout rats compared to control rats. Responses were normal in the posterior auditory field, which is associated with sound localization. The greatest impairment was observed in the ventral auditory field, which is related to emotional regulation. Dysfunction in the ventral auditory field may contribute to poor emotional regulation in fragile X syndrome and may help explain the observation that later auditory evoked responses are more disturbed in fragile X syndrome compared to earlier responses. Rodent models of fragile X syndrome are likely to prove useful for understanding the biological basis of fragile X syndrome and for testing candidate therapies.
25
Engineer CT, Centanni TM, Im KW, Borland MS, Moreno NA, Carraway RS, Wilson LG, Kilgard MP. Degraded auditory processing in a rat model of autism limits the speech representation in non-primary auditory cortex. Dev Neurobiol 2014;74:972-86. PMID: 24639033; DOI: 10.1002/dneu.22175.
Abstract
Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments which may contribute to the deficits in communication observed in individuals with autism. In this study, we document auditory cortex responses in rats prenatally exposed to VPA. We recorded local field potentials and multiunit responses to speech sounds in primary auditory cortex, anterior auditory field, ventral auditory field, and posterior auditory field in VPA exposed and control rats. Prenatal VPA exposure severely degrades the precise spatiotemporal patterns evoked by speech sounds in secondary, but not primary auditory cortex. This result parallels findings in humans and suggests that secondary auditory fields may be more sensitive to environmental disturbances and may provide insight into possible mechanisms related to auditory deficits in individuals with autism.
Affiliation(s)
- C T Engineer, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas 75080
|
26
|
Centanni TM, Sloan AM, Reed AC, Engineer CT, Rennaker RL, Kilgard MP. Detection and identification of speech sounds using cortical activity patterns. Neuroscience 2014; 258:292-306. [PMID: 24286757 PMCID: PMC3898816 DOI: 10.1016/j.neuroscience.2013.11.030] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2013] [Revised: 11/14/2013] [Accepted: 11/15/2013] [Indexed: 10/26/2022]
Abstract
We have developed a classifier that can locate and identify speech sounds using activity from rat auditory cortex, with accuracy equivalent to behavioral performance and without the need to specify the onset time of the speech sounds. This classifier can identify speech sounds from a large speech set within 40 ms of stimulus presentation. To compare the temporal limits of the classifier to behavior, we developed a novel task that requires rats to identify individual consonant sounds from a stream of distracter consonants. The classifier successfully predicted the ability of rats to accurately identify speech sounds at syllable presentation rates of up to 10 syllables per second (up to 17.9 ± 1.5 bits/s), which is comparable to human performance. Our results demonstrate that the spatiotemporal patterns generated in primary auditory cortex can be used to quickly and accurately identify consonant sounds from a continuous speech stream without prior knowledge of the stimulus onset times. A better understanding of the neural mechanisms that support robust speech processing in difficult listening conditions could improve the identification and treatment of a variety of speech-processing disorders.
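As context for the bits-per-second figure above, the sketch below shows one standard way to convert a confusion matrix of identifications at a fixed syllable rate into an information transfer rate. The confusion matrix and presentation rate here are hypothetical, and this is an assumed formulation rather than the authors' exact analysis.

# Minimal sketch (assumed formulation): information transfer rate in bits/s
# from a confusion matrix of consonant identifications at a given syllable rate.
import numpy as np

def mutual_information_bits(confusion):
    """Mutual information (bits) between presented and reported categories."""
    joint = confusion / confusion.sum()
    px = joint.sum(axis=1, keepdims=True)      # P(presented)
    py = joint.sum(axis=0, keepdims=True)      # P(reported)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Hypothetical 6-consonant confusion matrix (rows = presented, cols = reported).
confusion = np.full((6, 6), 2.0) + np.eye(6) * 30.0
syllables_per_second = 10.0
rate = mutual_information_bits(confusion) * syllables_per_second
print(f"{rate:.1f} bits/s at {syllables_per_second:g} syllables/s")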
Affiliation(s)
- A M Sloan, University of Texas at Dallas, United States
- A C Reed, University of Texas at Dallas, United States
- M P Kilgard, University of Texas at Dallas, United States
|
27
|
Engineer CT, Perez CA, Carraway RS, Chang KQ, Roland JL, Sloan AM, Kilgard MP. Similarity of cortical activity patterns predicts generalization behavior. PLoS One 2013; 8:e78607. [PMID: 24147140 PMCID: PMC3797841 DOI: 10.1371/journal.pone.0078607] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2013] [Accepted: 09/20/2013] [Indexed: 11/23/2022] Open
Abstract
Humans and animals readily generalize previously learned knowledge to new situations. Determining similarity is critical for assigning category membership to a novel stimulus. We tested the hypothesis that category membership is initially encoded by the similarity of the activity pattern evoked by a novel stimulus to the patterns from known categories. We provide behavioral and neurophysiological evidence that activity patterns in primary auditory cortex contain sufficient information to explain behavioral categorization of novel speech sounds by rats. Our results suggest that category membership might be encoded by the similarity of the activity pattern evoked by a novel speech sound to the patterns evoked by known sounds. Categorization based on featureless pattern matching may represent a general neural mechanism for ensuring accurate generalization across sensory and cognitive systems.
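A minimal sketch of the similarity idea described above: predict responding to a novel sound by how strongly its evoked activity pattern correlates with the patterns evoked by trained sounds. The site count, activity values, and softmax linking rule are all assumptions for illustration, not the study's analysis.

# Minimal sketch (assumed data): predicting generalization from the similarity of a
# novel sound's evoked activity pattern to patterns from trained sound categories.
import numpy as np

rng = np.random.default_rng(1)
n_sites = 100                                   # hypothetical number of recording sites
pattern_target = rng.normal(1.0, 0.5, n_sites)  # mean activity evoked by the trained target
pattern_nontarget = rng.normal(0.2, 0.5, n_sites)
# The novel sound evokes a pattern that is a noisy mixture of the two known patterns.
novel = 0.7 * pattern_target + 0.3 * pattern_nontarget + rng.normal(0, 0.2, n_sites)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

sim_t, sim_n = corr(novel, pattern_target), corr(novel, pattern_nontarget)
# One simple linking hypothesis: choice probability follows relative similarity.
p_respond = np.exp(sim_t) / (np.exp(sim_t) + np.exp(sim_n))
print(f"similarity to target {sim_t:.2f}, to nontarget {sim_n:.2f}, "
      f"predicted P(respond) {p_respond:.2f}")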
Affiliation(s)
- Crystal T. Engineer, Claudia A. Perez, Ryan S. Carraway, Kevin Q. Chang, Jarod L. Roland, Andrew M. Sloan, Michael P. Kilgard
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States of America
|
28
|
Ranasinghe KG, Vrana WA, Matney CJ, Kilgard MP. Increasing diversity of neural responses to speech sounds across the central auditory pathway. Neuroscience 2013; 252:80-97. [PMID: 23954862 DOI: 10.1016/j.neuroscience.2013.08.005] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2013] [Revised: 07/24/2013] [Accepted: 08/03/2013] [Indexed: 10/26/2022]
Abstract
Neurons at higher stations of each sensory system are responsive to feature combinations not present at lower levels. As a result, the activity of these neurons becomes less redundant than activity at lower levels. We recorded responses to speech sounds from inferior colliculus and primary auditory cortex neurons of rats and tested the hypothesis that primary auditory cortex neurons are more sensitive to combinations of multiple acoustic parameters than inferior colliculus neurons. We independently eliminated periodicity information, spectral information, and temporal information in each consonant and vowel sound using a noise vocoder. This technique made it possible to test several key hypotheses about speech sound processing. Our results demonstrate that inferior colliculus responses are spatially arranged and primarily determined by the spectral energy and the fundamental frequency of speech, whereas primary auditory cortex neurons generate widely distributed responses to multiple acoustic parameters and are not strongly influenced by the fundamental frequency of speech. We found no evidence that the inferior colliculus or primary auditory cortex is specialized for speech features such as voice onset time or formants. The greater diversity of responses in primary auditory cortex compared with inferior colliculus may help explain how the auditory system can identify a wide range of speech sounds across a wide range of conditions without relying on any single acoustic cue.
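The noise-vocoding manipulation mentioned above can be sketched in a few lines: split the signal into frequency bands, extract each band's amplitude envelope, and use it to modulate band-limited noise. The band edges, filter order, and test signal below are assumptions for illustration; this is not the study's exact vocoder.

# Minimal noise-vocoder sketch (assumed parameters): replace the fine structure in
# each band with band-limited noise while preserving the band envelopes.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(x, fs, edges=(200, 700, 1500, 3000, 6000)):
    rng = np.random.default_rng(0)
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo, hi], btype="band", fs=fs)
        band = filtfilt(b, a, x)
        envelope = np.abs(hilbert(band))          # amplitude envelope of the band
        carrier = filtfilt(b, a, rng.standard_normal(len(x)))
        out += envelope * carrier                 # band-limited noise carries the envelope
    return out / (np.max(np.abs(out)) + 1e-12)

fs = 16000
t = np.arange(int(0.5 * fs)) / fs
# Hypothetical speech-like test signal: a 150 Hz tone with a 4 Hz amplitude modulation.
speechlike = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(speechlike, fs)
print(vocoded.shape, float(vocoded.std()))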
Affiliation(s)
- K G Ranasinghe, The University of Texas at Dallas, School of Behavioral and Brain Sciences, 800 West Campbell Road, GR41, Richardson, TX 75080-3021, United States
|
29
|
Centanni TM, Engineer CT, Kilgard MP. Cortical speech-evoked response patterns in multiple auditory fields are correlated with behavioral discrimination ability. J Neurophysiol 2013; 110:177-89. [PMID: 23596332 DOI: 10.1152/jn.00092.2013] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Different speech sounds evoke unique patterns of activity in primary auditory cortex (A1). Behavioral discrimination by rats is well correlated with the distinctness of the A1 patterns evoked by individual consonants, but only when precise spike timing is preserved. In this study we recorded speech-evoked responses in the primary, anterior, ventral, and posterior auditory fields of the rat and evaluated whether activity in these fields is better correlated with speech discrimination ability when spike timing information is included or eliminated. Spike timing information improved consonant discrimination in all four of the auditory fields examined. Behavioral discrimination was significantly correlated with neural discrimination in all four auditory fields. The diversity of speech responses across recording sites was greater in the posterior and ventral auditory fields than in A1 and the anterior auditory field. These results suggest that, while the various auditory fields of the rat process speech sounds differently, neural activity in each field could be used to distinguish between consonant sounds with accuracy that closely parallels behavioral discrimination. Earlier observations in the visual and somatosensory systems that cortical neurons do not rely on spike timing should be reevaluated with more complex natural stimuli to determine whether spike timing contributes to sensory encoding.
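One simple way to express the neurometric-psychometric comparison described above is a correlation between neural classifier performance and behavioral performance across the same consonant pairs, computed per auditory field. All numbers below are hypothetical placeholders, not data from the study.

# Minimal sketch (hypothetical numbers): correlating neural and behavioral
# discrimination across consonant pairs, separately for each auditory field.
import numpy as np

# Hypothetical percent-correct values for seven consonant pairs.
neural_pc = {
    "A1":  [92, 85, 78, 88, 70, 95, 81],
    "AAF": [90, 83, 75, 86, 72, 93, 80],
    "VAF": [88, 80, 74, 84, 69, 91, 79],
    "PAF": [85, 79, 73, 83, 68, 90, 77],
}
behavior_pc = [94, 86, 75, 90, 71, 97, 82]

for field, pc in neural_pc.items():
    r = np.corrcoef(pc, behavior_pc)[0, 1]
    print(f"{field}: neural-behavioral correlation r = {r:.2f}")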
Affiliation(s)
- T M Centanni, School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, Texas 75080, USA
|
30
|
Perez CA, Engineer CT, Jakkamsetti V, Carraway RS, Perry MS, Kilgard MP. Different timescales for the neural coding of consonant and vowel sounds. Cereb Cortex 2013; 23:670-83. [PMID: 22426334 PMCID: PMC3563339 DOI: 10.1093/cercor/bhs045] [Citation(s) in RCA: 55] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Psychophysical, clinical, and imaging evidence suggests that consonant and vowel sounds have distinct neural representations. This study tests the hypothesis that consonant and vowel sounds are represented on different timescales within the same population of neurons by comparing behavioral discrimination with neural discrimination based on activity recorded in rat inferior colliculus and primary auditory cortex. Performance on 9 vowel discrimination tasks was highly correlated with neural discrimination based on spike count and was not correlated when spike timing was preserved. In contrast, performance on 11 consonant discrimination tasks was highly correlated with neural discrimination when spike timing was preserved and not when spike timing was eliminated. These results suggest that in the early stages of auditory processing, spike count encodes vowel sounds and spike timing encodes consonant sounds. These distinct coding strategies likely contribute to the robust nature of speech sound representations and may help explain some aspects of developmental and acquired speech processing disorders.
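The contrast between spike-count and spike-timing codes described above can be illustrated with simulated data: two responses with the same mean rate but different temporal profiles are easy to separate when timing is kept in fine bins and hard to separate from total counts alone. The response profiles and trial counts are invented for this sketch.

# Minimal sketch (simulated data): neural discrimination of two sounds with spike
# timing preserved (binned response) versus eliminated (total spike count only).
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_bins = 100, 40
# Two sounds with the same mean rate but different temporal response profiles.
profile_a = np.where(np.arange(n_bins) < 10, 3.0, 0.5)
profile_b = np.where(np.arange(n_bins) >= 30, 3.0, 0.5)
trials_a = rng.poisson(profile_a, size=(n_trials, n_bins))
trials_b = rng.poisson(profile_b, size=(n_trials, n_bins))

def nn_accuracy(x_a, x_b):
    """Leave-one-out nearest-mean classification accuracy for two classes."""
    data = [x_a, x_b]
    correct = 0
    for label, x in enumerate(data):
        for i in range(len(x)):
            means = [np.delete(x, i, axis=0).mean(axis=0) if k == label
                     else data[k].mean(axis=0) for k in range(2)]
            d = [np.linalg.norm(x[i] - m) for m in means]
            correct += int(np.argmin(d) == label)
    return correct / (2 * n_trials)

timing = nn_accuracy(trials_a, trials_b)                                  # full binned response
count = nn_accuracy(trials_a.sum(1, keepdims=True), trials_b.sum(1, keepdims=True))
print(f"with spike timing: {timing:.2f}, spike count only: {count:.2f}")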
Affiliation(s)
- Claudia A Perez, Cognition and Neuroscience Program, School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX 75080, USA
|
31
|
Centanni TM, Booker AB, Sloan AM, Chen F, Maher BJ, Carraway RS, Khodaparast N, Rennaker R, LoTurco JJ, Kilgard MP. Knockdown of the dyslexia-associated gene Kiaa0319 impairs temporal responses to speech stimuli in rat primary auditory cortex. Cereb Cortex 2013; 24:1753-66. [PMID: 23395846 DOI: 10.1093/cercor/bht028] [Citation(s) in RCA: 80] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023] Open
Abstract
One in 15 school-age children has dyslexia, which is characterized by phoneme-processing problems and difficulty learning to read. Dyslexia is associated with mutations in the gene KIAA0319. It is not known whether reduced expression of KIAA0319 can degrade the brain's ability to process phonemes. In the current study, we used RNA interference (RNAi) to reduce expression of Kiaa0319 (the rat homolog of the human gene KIAA0319) and evaluated the effect on phoneme discrimination in a rat model. Speech discrimination thresholds in normal rats are nearly identical to human thresholds. We recorded multiunit neural responses to isolated speech sounds in primary auditory cortex (A1) of rats that received in utero RNAi of Kiaa0319. Reduced expression of Kiaa0319 increased the trial-by-trial variability of speech responses and reduced the neural discriminability of speech sounds. Intracellular recordings from affected neurons revealed that reduced expression of Kiaa0319 increased neural excitability and input resistance. These results provide the first evidence that decreased expression of the dyslexia-associated gene Kiaa0319 can alter cortical responses and impair phoneme processing in auditory cortex.
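Trial-by-trial variability of the kind reported above is often summarized as the mean correlation between single-trial responses to the same sound. The sketch below applies that measure to simulated responses with low and high added jitter; the response template and noise levels are assumptions, not the study's data or its exact metric.

# Minimal sketch (simulated data): trial-by-trial response consistency as the mean
# pairwise correlation between single-trial binned responses to the same sound.
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_bins = 50, 40
template = np.exp(-0.5 * ((np.arange(n_bins) - 12) / 3.0) ** 2) * 4 + 0.5

def trial_to_trial_correlation(noise_sd):
    lam = np.clip(template + rng.normal(0, noise_sd, (n_trials, n_bins)), 0.01, None)
    trials = rng.poisson(lam)
    c = np.corrcoef(trials)                      # trial-by-trial correlation matrix
    return float(c[np.triu_indices(n_trials, k=1)].mean())

print("control-like consistency:  ", round(trial_to_trial_correlation(0.3), 2))
print("knockdown-like consistency:", round(trial_to_trial_correlation(1.5), 2))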
Affiliation(s)
- T M Centanni, School of Behavioral and Brain Sciences, University of Texas at Dallas
- A M Sloan, School of Behavioral and Brain Sciences, University of Texas at Dallas
- F Chen, University of Connecticut
- R S Carraway, School of Behavioral and Brain Sciences, University of Texas at Dallas
- N Khodaparast, School of Behavioral and Brain Sciences, University of Texas at Dallas
- R Rennaker, School of Behavioral and Brain Sciences, University of Texas at Dallas
- M P Kilgard, School of Behavioral and Brain Sciences, University of Texas at Dallas
|
32
|
Zhang X, Yang P, Dong C, Sato Y, Qin L. Correlation between neural discharges in cat primary auditory cortex and tone-detection behaviors. Behav Brain Res 2012; 232:114-23. [PMID: 22808521 DOI: 10.1016/j.bbr.2012.03.025] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Abstract
Understanding the physiological role of the auditory cortex (AC) in acoustic perception is an essential issue in auditory neuroscience. By comparing sound discrimination behaviors in animals before and after AC lesions, many studies have demonstrated that AC is necessary for the perceptual processing of human vowels and animal vocalizations, but is not necessary for discriminating simple acoustic parameters such as sound onset, intensity, and duration. Because lesion studies cannot fully reveal the function of AC under normal conditions, in this study we combined electrophysiological recording and psychophysical experiments in the same animals to investigate whether AC is involved in a simple auditory task. We recorded the neural activity of the primary auditory cortex (A1) with implanted electrodes while freely moving cats performed a tone-detection task in which they were required to lick a metal tube to obtain a food reward after hearing a tone pip. The cats' behavioral performance improved as tone intensity increased, and A1 activity covaried with behavioral performance. In addition, whether a wideband noise interfered with tone-detection behavior depended on whether the tone-evoked neural response was masked by the noise-evoked response. Our results do not support the idea that A1 neurons are directly associated with the cats' behavioral decisions; instead, A1 may mainly generate a neural representation of stimulus amplitude that is further processed to determine whether a tone occurred.
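A neurometric-versus-psychometric comparison like the one described above can be sketched by turning tone-evoked spike counts into a detection probability at each intensity and placing it next to a behavioral hit rate. The spike-count model, decision criterion, and behavioral values below are hypothetical, not the study's data.

# Minimal sketch (simulated data): a neurometric detection curve from spike counts
# alongside a hypothetical behavioral (psychometric) detection curve across levels.
import numpy as np

rng = np.random.default_rng(4)
intensities = np.array([10, 20, 30, 40, 50, 60])        # hypothetical dB levels
n_trials = 200
spont = rng.poisson(2.0, size=n_trials)                 # spike counts with no tone
criterion = np.quantile(spont, 0.95)                    # simple ideal-observer criterion

for db, behav_hit in zip(intensities, [0.10, 0.25, 0.55, 0.80, 0.92, 0.97]):
    driven = rng.poisson(2.0 + 0.12 * db, size=n_trials)
    neuro_hit = float((driven > criterion).mean())
    print(f"{db:2d} dB: neurometric {neuro_hit:.2f}, behavioral {behav_hit:.2f}")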
Affiliation(s)
- Xinan Zhang, Department of Physiology, China Medical University, Shenyang 110001, People's Republic of China
|
33
|
Miyashita T, Feldman DE. Behavioral detection of passive whisker stimuli requires somatosensory cortex. Cereb Cortex 2012; 23:1655-62. [PMID: 22661403 DOI: 10.1093/cercor/bhs155] [Citation(s) in RCA: 53] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Rodent whisker sensation occurs both actively, as whiskers move rhythmically across objects, and in a passive mode in which externally applied deflections are sensed by static, non-moving whiskers. Passive whisker stimuli are robustly encoded in the somatosensory (S1) cortex, and provide a potentially powerful means of studying cortical processing. However, whether S1 contributes to passive sensation is debated. We developed 2 new behavioral tasks to assay passive whisker sensation in freely moving rats: Detection of unilateral whisker deflections and discrimination of right versus left whisker deflections. Stimuli were simple, simultaneous multi-whisker deflections. Local muscimol inactivation of S1 reversibly and robustly abolished sensory performance on these tasks. Thus, S1 is required for the detection and discrimination of simple stimuli by passive whiskers, in addition to its known role in active whisker sensation.
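Performance on detection and left-versus-right discrimination tasks like these is commonly summarized as d-prime from hit and false-alarm rates. The sketch below shows that calculation with invented pre- and post-inactivation session numbers; it is not data from the study.

# Minimal sketch: d-prime from hit and false-alarm rates (hypothetical sessions).
from scipy.stats import norm

def dprime(hit_rate, fa_rate, n_trials):
    # Clamp rates away from 0 and 1 so the z-transform stays finite.
    lo, hi = 0.5 / n_trials, 1 - 0.5 / n_trials
    h = min(max(hit_rate, lo), hi)
    f = min(max(fa_rate, lo), hi)
    return norm.ppf(h) - norm.ppf(f)

print("pre-muscimol d' :", round(dprime(0.85, 0.12, 200), 2))
print("post-muscimol d':", round(dprime(0.35, 0.30, 200), 2))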
Affiliation(s)
- Toshio Miyashita, Department of Molecular and Cellular Biology, Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, CA 94720-3200, USA
|