1. Mori F, Sugino M, Kabashima K, Nara T, Jimbo Y, Kotani K. Limiting parameter range for cortical-spherical mapping improves activated domain estimation for attention modulated auditory response. J Neurosci Methods 2024; 402:110032. [PMID: 38043853] [DOI: 10.1016/j.jneumeth.2023.110032]
Abstract
BACKGROUND: Attention is one of the factors involved in selecting input information for the brain. We applied a method for estimating domains with clear boundaries using magnetoencephalography (the domain estimation method) to auditory-evoked responses (N100m) to evaluate the effects of attention on a millisecond timescale. However, because the cortical surface around the auditory cortex is folded in a complicated manner, it was unknown whether activity in the auditory cortex could be estimated.
NEW METHOD: The parameter range used to express current sources was set to include the auditory cortex. The search region was expressed as a direct product of the parameter ranges used in the adaptive diagonal curves method.
RESULTS: Without a limitation of the range, activity was estimated in regions other than the auditory cortex in all cases. With the limitation of the range, activity was estimated in the primary or higher auditory cortex. Further analysis with the limited range showed that, for participants whose N100m amplitudes were higher during attention, the domains activated during attention included the regions activated during no attention.
COMPARISON WITH EXISTING METHODS: We propose a method for effectively limiting the search region to evaluate the extent of the activated domain in regions with complex folded structures.
CONCLUSION: To evaluate the extent of activated domains in regions with complex folded structures, it is necessary to limit the parameter search range. The area of the activated domains in the auditory cortex may be increased by attention on a millisecond timescale.
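To make the search-region idea concrete, here is a minimal sketch of bounded global optimization over a direct product of parameter intervals. The objective function, the parameter names and ranges, and the use of SciPy's DIRECT optimizer (a rectangle-partitioning relative of the adaptive-diagonal-curves method, not the authors' implementation) are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a dipole-parameter search
# restricted to a direct product of intervals chosen to cover the
# auditory cortex. scipy.optimize.direct is a rectangle-partitioning
# global optimizer used here in place of adaptive diagonal curves.
import numpy as np
from scipy.optimize import direct

def misfit(params):
    """Hypothetical data misfit for a current source parameterized on a
    cortical-spherical map (all names and values are assumptions)."""
    theta, phi, depth = params
    return (theta - 1.2) ** 2 + (phi + 0.4) ** 2 + (100 * (depth - 0.03)) ** 2

# The search region is the direct product of these parameter ranges,
# assumed to enclose the auditory cortex only.
bounds = [(0.8, 1.6),    # theta (rad)
          (-0.9, 0.1),   # phi (rad)
          (0.01, 0.05)]  # source depth (m)

result = direct(misfit, bounds)
print(result.x, result.fun)
```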
Affiliation(s)
- Fumina Mori: School of Engineering, The University of Tokyo, Tokyo, Japan
- Masato Sugino: School of Engineering, The University of Tokyo, Tokyo, Japan
- Kenta Kabashima: Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Takaaki Nara: Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Yasuhiko Jimbo: School of Engineering, The University of Tokyo, Tokyo, Japan
- Kiyoshi Kotani: Graduate School of Frontier Sciences, The University of Tokyo, Chiba, Japan
2. Yu L, Huang D, Wang S, Zhang Y. Reduced Neural Specialization for Word-level Linguistic Prosody in Children with Autism. J Autism Dev Disord 2023; 53:4351-4367. [PMID: 36038793] [DOI: 10.1007/s10803-022-05720-x]
Abstract
Children with autism often show atypical brain lateralization for speech and language processing; however, it is unclear which linguistic component contributes to this phenomenon. Here we measured event-related potential (ERP) responses in 21 school-age autistic children and 25 age-matched neurotypical (NT) peers while they listened to word-level prosodic stimuli. We found that both groups displayed larger late negative response (LNR) amplitudes to native prosody than to nonnative prosody; however, unlike the NT group, which exhibited a left-lateralized LNR distinction of prosodic phonology, the autism group showed no evidence of LNR lateralization. Moreover, in both groups, the LNR effects were present only for prosodic phonology and not for phoneme-free prosodic acoustics. These results extend the findings of inadequate neural specialization for language in autism to sub-lexical prosodic structures.
Affiliation(s)
- Luodi Yu: Center for Autism Research, School of Education, Guangzhou University, Wenyi Bldg, Guangzhou, China; Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- Dan Huang: Guangzhou Rehabilitation & Research Center for Children with ASD, Guangzhou Cana School, Guangzhou, China
- Suiping Wang: Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- Yang Zhang: Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, USA
3. Beach SD, Ozernov-Palchik O, May SC, Centanni TM, Gabrieli JDE, Pantazis D. Neural Decoding Reveals Concurrent Phonemic and Subphonemic Representations of Speech Across Tasks. Neurobiol Lang (Camb) 2021; 2:254-279. [PMID: 34396148] [PMCID: PMC8360503] [DOI: 10.1162/nol_a_00034]
Abstract
Robust and efficient speech perception relies on the interpretation of acoustically variable phoneme realizations, yet prior neuroimaging studies are inconclusive regarding the degree to which subphonemic detail is maintained over time as categorical representations arise. It is also unknown whether this depends on the demands of the listening task. We addressed these questions by using neural decoding to quantify the (dis)similarity of brain response patterns evoked during two different tasks. We recorded magnetoencephalography (MEG) as adult participants heard isolated, randomized tokens from a /ba/-/da/ speech continuum. In the passive task, their attention was diverted. In the active task, they categorized each token as ba or da. We found that linear classifiers successfully decoded ba vs. da perception from the MEG data. Data from the left hemisphere were sufficient to decode the percept early in the trial, while the right hemisphere was necessary but not sufficient for decoding at later time points. We also decoded stimulus representations and found that they were maintained longer in the active task than in the passive task; however, these representations did not pattern more like discrete phonemes when an active categorical response was required. Instead, in both tasks, early phonemic patterns gave way to a representation of stimulus ambiguity that coincided in time with reliable percept decoding. Our results suggest that the categorization process does not require the loss of subphonemic detail, and that the neural representation of isolated speech sounds includes concurrent phonemic and subphonemic information.
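A minimal sketch of time-resolved decoding of this kind is shown below, assuming synthetic data; the array shapes, the choice of logistic regression, and the cross-validation settings are illustrative, not the authors' pipeline.

```python
# Minimal sketch (assumed, not the authors' code): time-resolved decoding
# of /ba/ vs. /da/ percepts from MEG sensor data with a linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 306, 120))  # trials x MEG channels x time points
y = rng.integers(0, 2, 200)               # 0 = "ba" percept, 1 = "da" percept

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Fit and score a separate classifier at each time point; above-chance
# accuracy at time t indicates the percept is linearly decodable then.
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(X.shape[2])
])
print(accuracy.max(), accuracy.argmax())
```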
Affiliation(s)
- Sara D. Beach: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA, USA
- Ola Ozernov-Palchik: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Sidney C. May: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Lynch School of Education and Human Development, Boston College, Chestnut Hill, MA, USA
- Tracy M. Centanni: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Psychology, Texas Christian University, Fort Worth, TX, USA
- John D. E. Gabrieli: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Dimitrios Pantazis: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
4. Miller SE, Graham J, Schafer E. Auditory Sensory Gating of Speech and Nonspeech Stimuli. J Speech Lang Hear Res 2021; 64:1404-1412. [PMID: 33755510] [DOI: 10.1044/2020_jslhr-20-00535]
Abstract
Purpose: Auditory sensory gating is a neural measure of inhibition that is typically measured with a click or tonal stimulus. This electrophysiological study examined whether stimulus characteristics and the use of speech stimuli affected auditory sensory gating indices.
Method: Auditory event-related potentials were elicited using natural speech, synthetic speech, and nonspeech stimuli in a traditional auditory gating paradigm in 15 adult listeners with normal hearing. Cortical responses were recorded at 64 electrode sites, and peak amplitudes and latencies to the different stimuli were extracted. Individual data were analyzed using repeated-measures analysis of variance.
Results: Significant gating of P1-N1-P2 peaks was observed for all stimulus types. N1-P2 cortical responses were affected by stimulus type, with significantly less neural inhibition of the P2 response observed for natural speech compared to nonspeech and synthetic speech.
Conclusions: Auditory sensory gating responses can be measured using speech and nonspeech stimuli in listeners with normal hearing. The results indicate that the amount of gating and neural inhibition observed is affected by the spectrotemporal characteristics of the stimuli used to evoke the neural responses.
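A minimal sketch of how such a gating index is commonly quantified (an S2/S1 peak-amplitude ratio in a paired-stimulus paradigm) follows; the toy waveforms, the P2 window, and the ratio convention are assumptions, not details from the study.

```python
# Minimal sketch (assumed, not the authors' analysis): sensory gating
# quantified as the S2/S1 peak-amplitude ratio; a ratio below 1 indicates
# suppression of the response to the repeated stimulus.
import numpy as np

def peak_amplitude(erp, times, window):
    """Largest absolute deflection of an ERP within a latency window (s)."""
    mask = (times >= window[0]) & (times <= window[1])
    segment = erp[mask]
    return segment[np.argmax(np.abs(segment))]

times = np.linspace(-0.1, 0.5, 601)            # seconds, 1 kHz sampling
erp_s1 = np.sin(2 * np.pi * 5 * times) * 4e-6  # toy ERP to first stimulus
erp_s2 = np.sin(2 * np.pi * 5 * times) * 2e-6  # toy ERP to second stimulus

# P2 window ~150-250 ms (an assumed convention, not taken from the paper).
p2_s1 = peak_amplitude(erp_s1, times, (0.15, 0.25))
p2_s2 = peak_amplitude(erp_s2, times, (0.15, 0.25))
print("P2 gating ratio (S2/S1):", p2_s2 / p2_s1)
```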
Affiliation(s)
- Sharon E Miller: Department of Audiology and Speech-Language Pathology, University of North Texas, Denton, TX, USA
- Jessica Graham: Division of Audiology, St. Louis Children's Hospital, St. Louis, MO, USA
- Erin Schafer: Department of Audiology and Speech-Language Pathology, University of North Texas, Denton, TX, USA
5. Chen F, Zhang H, Ding H, Wang S, Peng G, Zhang Y. Neural coding of formant-exaggerated speech and nonspeech in children with and without autism spectrum disorders. Autism Res 2021; 14:1357-1374. [PMID: 33792205] [DOI: 10.1002/aur.2509]
Abstract
The presence of vowel exaggeration in infant-directed speech (IDS) may adapt to the age-appropriate demands of speech and language acquisition. Previous studies have provided behavioral evidence of atypical auditory processing of IDS in children with autism spectrum disorders (ASD), while the underlying neurophysiological mechanisms remain unknown. This event-related potential (ERP) study investigated the neural coding of formant-exaggerated speech and nonspeech in 24 4- to 11-year-old children with ASD and 24 typically developing (TD) peers. The EEG data were recorded using an alternating block design in which each stimulus type (exaggerated/non-exaggerated sound) was presented with equal probability. ERP waveform analysis revealed an enhanced P1 for vowel formant exaggeration in the TD group but not in the ASD group. This speech-specific atypical processing in ASD was not found for the nonspeech stimuli, which showed similar P1 enhancement in both the ASD and TD groups. Moreover, time-frequency analysis indicated that children with ASD showed differences in neural synchronization in the delta-theta bands when processing acoustic formant changes embedded in nonspeech. Collectively, the results add substantiating neurophysiological evidence (i.e., a lack of a neural enhancement effect of vowel exaggeration) for atypical auditory processing of IDS in children with ASD, which may exert a negative effect on phonetic encoding and language learning.
LAY SUMMARY: Atypical responses to motherese might act as a potential early marker of risk for children with ASD. This study investigated the neural responses to such socially relevant stimuli in the ASD brain, and the results suggested a lack of neural enhancement in response to motherese, even in individuals without intellectual disability.
Affiliation(s)
- Fei Chen: School of Foreign Languages, Hunan University, Changsha, China; Research Centre for Language, Cognition, and Neuroscience & Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong SAR, China; Department of Speech-Language-Hearing Sciences & Center for Neurobehavioral Development, University of Minnesota, Twin Cities, Minnesota, USA
- Hao Zhang: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai, China
- Hongwei Ding: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai, China
- Suiping Wang: School of Psychology, South China Normal University, Guangzhou, China
- Gang Peng: Research Centre for Language, Cognition, and Neuroscience & Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Yang Zhang: Department of Speech-Language-Hearing Sciences & Center for Neurobehavioral Development, University of Minnesota, Twin Cities, Minnesota, USA
6. Shorter P1m Response in Children with Autism Spectrum Disorder without Intellectual Disabilities. Int J Mol Sci 2021; 22:2611. [PMID: 33807635] [PMCID: PMC7961676] [DOI: 10.3390/ijms22052611]
Abstract
Background: Atypical auditory perception has been reported in individuals with autism spectrum disorder (ASD). Altered auditory evoked brain responses are also associated with childhood ASD and are likely related to atypical brain maturation.
Methods: This study examined children aged 5–8 years: 29 with ASD but no intellectual disability and 46 age-matched typically developing (TD) control participants. Using magnetoencephalography (MEG) data obtained while participants listened passively to sinusoidal pure tones, the bilateral auditory cortical response (P1m) was examined.
Results: Significantly shorter P1m latency in the left hemisphere was found for children with ASD without intellectual disabilities than for TD children. A significant correlation between P1m latency and language conceptual ability was found in children with ASD, but not in TD children.
Conclusions: These findings demonstrate atypical brain maturation in the auditory processing area in children with ASD without intellectual disability. They also suggest that ASD has a common neural basis for pure-tone sound processing and language development, and that the development of brain networks involved in language concepts in early childhood might differ between children with ASD and TD children.
7. Meaning before grammar: A review of ERP experiments on the neurodevelopmental origins of semantic processing. Psychon Bull Rev 2020; 27:441-464. [PMID: 31950458] [DOI: 10.3758/s13423-019-01677-8]
Abstract
According to traditional linguistic theories, the construction of complex meanings relies firmly on syntactic structure-building operations. Recently, however, new models have been proposed in which semantics is viewed as being partly autonomous from syntax. In this paper, we discuss some of the developmental implications of syntax-based and autonomous models of semantics. We review event-related brain potential (ERP) studies on semantic processing in infants and toddlers, focusing on experiments reporting modulations of N400 amplitudes using visual or auditory stimuli and different temporal structures of trials. Our review suggests that infants can relate or integrate semantic information from temporally overlapping stimuli across modalities by 6 months of age. The ability to relate or integrate semantic information over time, within and across modalities, emerges by 9 months. The capacity to relate or integrate information from spoken words in sequences and sentences appears by 18 months. We also review behavioral and ERP studies showing that grammatical and syntactic processing skills develop only later, between 18 and 32 months. These results provide preliminary evidence for the availability of some semantic processes prior to the full developmental emergence of syntax: non-syntactic meaning-building operations are available to infants, albeit in restricted ways, months before the abstract machinery of grammar is in place. We discuss this hypothesis in light of research on early language acquisition and human brain development.
8. Yoshimura Y, Hasegawa C, Ikeda T, Saito DN, Hiraishi H, Takahashi T, Kumazaki H, Kikuchi M. The maturation of the P1m component in response to voice from infancy to 3 years of age: A longitudinal study in young children. Brain Behav 2020; 10:e01706. [PMID: 32573987] [PMCID: PMC7428512] [DOI: 10.1002/brb3.1706]
Abstract
INTRODUCTION: Remarkable changes in cortical auditory processing have been reported in the early development of human infants and toddlers. Knowing the maturational trajectory of auditory cortex responses to the human voice in typically developing young children is crucial for identifying voice-processing abnormalities in children at risk of neurodevelopmental disorders and language impairment. An early prominent positive component of the cerebral auditory response in newborns has been reported in previous electroencephalography and magnetoencephalography (MEG) studies. However, it is not clear whether this prominent component in infants less than 1 year of age corresponds to the auditory P1m component reported in young children over 2 years of age.
METHODS: To test the hypothesis that the early prominent positive component in infants aged 0 years is an immature manifestation of the P1m that we previously reported in children over 2 years of age, we performed a longitudinal MEG study focused on this early component and examined its maturational changes over three years starting from age 0. Five infants participated in this 3-year longitudinal study.
RESULTS: The early prominent component in infants aged 3 months corresponded to the auditory P1m component in young children over 2 years old, which we had previously reported to be related to language development and/or autism spectrum disorders.
CONCLUSION: Our data reveal the development of the auditory evoked field in the left and right hemispheres in children from 0 to 3 years of age. These results contribute to the elucidation of the development of brain functions in infants.
Affiliation(s)
- Yuko Yoshimura: Institute of Human and Social Sciences, Kanazawa University, Kanazawa, Japan; Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Chiaki Hasegawa: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Takashi Ikeda: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Daisuke N Saito: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Hirotoshi Hiraishi: Institute for Medical Photonics Research, Hamamatsu University School of Medicine, Hamamatsu, Japan
- Hirokazu Kumazaki: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Mitsuru Kikuchi: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan; Institute of Medical, Pharmaceutical and Health Sciences, Kanazawa University, Ishikawa, Japan
9. In Spoken Word Recognition, the Future Predicts the Past. J Neurosci 2018; 38:7585-7599. [PMID: 30012695] [DOI: 10.1523/jneurosci.0065-18.2018]
Abstract
Speech is an inherently noisy and ambiguous signal. To fluently derive meaning, a listener must integrate contextual information to guide interpretations of the sensory input. Although many studies have demonstrated the influence of prior context on speech perception, the neural mechanisms supporting the integration of subsequent context remain unknown. Using MEG to record from human auditory cortex, we analyzed responses to spoken words with a varyingly ambiguous onset phoneme, the identity of which is later disambiguated at the lexical uniqueness point. Fifty participants (both male and female) were recruited across two MEG experiments. Our findings suggest that primary auditory cortex is sensitive to phonological ambiguity very early during processing, at just 50 ms after onset. Subphonemic detail is preserved in auditory cortex over long timescales and re-evoked at subsequent phoneme positions. Commitments to phonological categories occur in parallel, resolving on the shorter timescale of ∼450 ms. These findings provide evidence that future input determines the perception of earlier speech sounds by maintaining sensory features until they can be integrated with top-down lexical information.
SIGNIFICANCE STATEMENT: The perception of a speech sound is determined by its surrounding context in the form of words, sentences, and other speech sounds. Often, such contextual information becomes available later than the sensory input. The present study is the first to unveil how the brain uses this subsequent information to aid speech comprehension. Concretely, we found that the auditory system actively maintains the acoustic signal in auditory cortex while concurrently making guesses about the identity of the words being said. Such a processing strategy allows the content of the message to be accessed quickly while also permitting reanalysis of the acoustic signal to minimize parsing mistakes.
10. Spatiotemporal differentiation in auditory and motor regions during auditory phoneme discrimination. Acta Neurol Belg 2017; 117:477-491. [PMID: 28214927] [DOI: 10.1007/s13760-017-0761-3]
Abstract
Auditory phoneme discrimination (APD) is supported by both auditory and motor regions through a sensorimotor interface embedded in a fronto-temporo-parietal cortical network. However, the specific spatiotemporal organization of this network during APD with respect to different types of phonemic contrasts is still unclear. Here, we use source reconstruction, applied to event-related potentials from 47 participants, to uncover a potential spatiotemporal differentiation in these brain regions during passive and active APD tasks with respect to place of articulation (PoA), voicing, and manner of articulation (MoA). Results demonstrate that in an early stage (50-110 ms), auditory, motor, and sensorimotor regions elicit more activation during the passive and active APD tasks with MoA and the active APD task with voicing compared to PoA. In a later stage (130-175 ms), the same auditory and motor regions elicit more activation during the APD task with PoA compared to MoA and voicing, yet only in the active condition, implying important timing differences. Degree of attention influences a frontal network during the APD task with PoA, whereas auditory regions are more affected during the APD tasks with MoA and voicing. Based on these findings, it can be tentatively suggested that APD is supported by the integration of early activation of auditory-acoustic properties in superior temporal regions, more pronounced for MoA and voicing, and later auditory-to-motor integration in sensorimotor areas, more pronounced for PoA.
11. Schuerman WL, Meyer AS, McQueen JM. Mapping the Speech Code: Cortical Responses Linking the Perception and Production of Vowels. Front Hum Neurosci 2017; 11:161. [PMID: 28439232] [PMCID: PMC5383703] [DOI: 10.3389/fnhum.2017.00161]
Abstract
The acoustic realization of speech is constrained by the physical mechanisms by which it is produced. Yet for speech perception, the degree to which listeners utilize experience derived from speech production has long been debated. In the present study, we examined how sensorimotor adaptation during production may affect perception, and how this relationship may be reflected in early vs. late electrophysiological responses. Participants first performed a baseline speech production task, followed by a vowel categorization task during which EEG responses were recorded. In a subsequent speech production task, half the participants received shifted auditory feedback, leading most to alter their articulations. This was followed by a second, post-training vowel categorization task. We compared changes in vowel production to both behavioral and electrophysiological changes in vowel perception. No differences in phonetic categorization were observed between groups receiving altered or unaltered feedback. However, exploratory analyses revealed correlations between vocal motor behavior and phonetic categorization. EEG analyses revealed correlations between vocal motor behavior and cortical responses in both early and late time windows. These results suggest that participants' recent production behavior influenced subsequent vowel perception. We suggest that the change in perception can be best characterized as a mapping of acoustics onto articulation.
Affiliation(s)
- William L. Schuerman: Psychology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Antje S. Meyer: Psychology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- James M. McQueen: Psychology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
12. Silva DMR, Melges DB, Rothe-Neves R. N1 response attenuation and the mismatch negativity (MMN) to within- and across-category phonetic contrasts. Psychophysiology 2017; 54:591-600. [PMID: 28169421] [DOI: 10.1111/psyp.12824]
Abstract
According to the neural adaptation model of the mismatch negativity (MMN), the sensitivity of this event-related response to both acoustic and categorical information in speech sounds can be accounted for by assuming that (a) the degree of overlap between the neural representations of two sounds depends on both the acoustic difference between them and whether they belong to distinct phonetic categories, and (b) a release from stimulus-specific adaptation causes an enhanced N1 obligatory response to infrequent deviant stimuli. On the basis of this view, we tested in Experiment 1 whether the N1 response to the second sound of a pair (S2) would be more attenuated in pairs of identical vowels compared with pairs of different vowels, and in pairs of exemplars of the same vowel category compared with pairs of exemplars of different categories. The psychoacoustic distance between S1 and S2 was the same for all within-category and across-category pairs. While N1 amplitudes decreased markedly from S1 to S2, responses to S2 were quite similar across pair types, indicating that the attenuation effect in such conditions is not stimulus specific. In Experiment 2, a pronounced MMN was elicited by a deviant vowel sound in an across-category oddball sequence, but not when the exact same deviant vowel was presented in a within-category oddball sequence. This adds evidence that the MMN reflects categorical phonetic processing. Taken together, the results suggest that different neural processes underlie the attenuation of the N1 response to S2 and the MMN to vowels.
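For reference, a minimal sketch of the standard MMN computation the abstract builds on (deviant-minus-standard difference wave in an oddball sequence) is given below; the epoch counts, sampling, and measurement window are illustrative.

```python
# Minimal sketch (assumed, not the authors' analysis): the MMN as the
# deviant-minus-standard difference wave from an oddball sequence.
import numpy as np

rng = np.random.default_rng(1)
times = np.linspace(-0.1, 0.4, 501)                # seconds
standard = rng.standard_normal((400, 501)) * 1e-6  # trials x time, toy data
deviant = rng.standard_normal((60, 501)) * 1e-6

# Average deviant ERP minus average standard ERP; the MMN is typically
# most negative around 100-250 ms at fronto-central sites.
mmn = deviant.mean(axis=0) - standard.mean(axis=0)
win = (times >= 0.10) & (times <= 0.25)
print("MMN mean amplitude (V):", mmn[win].mean())
```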
Affiliation(s)
- Daniel M R Silva: Graduate Program in Neuroscience, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Danilo B Melges: Graduate Program in Electrical Engineering, Department of Electrical Engineering, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Rui Rothe-Neves: Phonetics Lab, Faculty of Letters, Federal University of Minas Gerais, Belo Horizonte, Brazil
13. Kikuchi M, Yoshimura Y, Mutou K, Minabe Y. Magnetoencephalography in the study of children with autism spectrum disorder. Psychiatry Clin Neurosci 2016; 70:74-88. [PMID: 26256564] [DOI: 10.1111/pcn.12338]
Abstract
Magnetoencephalography (MEG) is a non-invasive neuroimaging technique that provides a measure of cortical neural activity on a millisecond timescale with high spatial resolution. MEG has been clinically applied to various neurological diseases, including epilepsy and cognitive dysfunction. In the past decade, MEG has also emerged as an important investigatory tool in neurodevelopmental studies. It is therefore an opportune time to review how MEG is able to contribute to the study of atypical brain development. We limit this review to autism spectrum disorder (ASD). The relevant published work on children was accessed using PubMed on 5 January 2015. Case reports, case series, and papers on epilepsy were excluded. Owing to MEG's clear separation of brain activity in the right and left hemispheres and its more accurate source localization, MEG studies have added new information on auditory-evoked brain responses to findings from previous electroencephalography studies of children with ASD. In addition, evidence of atypical brain connectivity in children with ASD has accumulated over the past decade. MEG is well suited to the study of neural activity with high temporal resolution, even in young children. Although further studies are still necessary, the detailed findings provided by neuroimaging methods may aid clinical diagnosis and even contribute to the refinement of diagnostic categories for neurodevelopmental disorders in the future.
Affiliation(s)
- Mitsuru Kikuchi: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan; Department of Psychiatry and Neurobiology, Graduate School of Medical Science, Kanazawa University, Kanazawa, Japan
- Yuko Yoshimura: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Kouhei Mutou: Department of Psychiatry and Neurobiology, Graduate School of Medical Science, Kanazawa University, Kanazawa, Japan
- Yoshio Minabe: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan; Department of Psychiatry and Neurobiology, Graduate School of Medical Science, Kanazawa University, Kanazawa, Japan
14. Aerts A, van Mierlo P, Hartsuiker RJ, Santens P, De Letter M. Sex Differences in Neurophysiological Activation Patterns During Phonological Input Processing: An Influencing Factor for Normative Data. Arch Sex Behav 2015; 44:2207-2218. [PMID: 26014826] [DOI: 10.1007/s10508-015-0560-y]
Abstract
In the context of neurophysiological normative data, it has been established that aging has a significant impact on neurophysiological correlates of auditory phonological input processes, such as phoneme discrimination (PD) and word recognition (WR). Besides age, sex is another demographic factor that influences several language processes. We aimed to disentangle whether sex has a similar effect on PD and WR. Event-related potentials (ERPs) were recorded in 20 men and 24 women. During PD, three phonemic contrasts (place of articulation, manner of articulation, and voicing) were compared using the attentive P300 and the pre-attentive Mismatch Negativity (MMN). To investigate WR, real words were contrasted with pseudowords in a pre-attentive oddball task. Women demonstrated a greater sensitivity to spectrotemporal differences than men, as evidenced by larger P300 and MMN responses in place of articulation (PoA)-based PD; men did not display such sensitivity. Attention played an important role, considering that women needed more attentional resources to differentiate between PoA and the other phonemic contrasts. During WR, pseudowords evoked larger amplitudes as early as 100 ms post-stimulus, independent of sex. However, women had shorter P200 latencies but longer N400 latencies in response to pseudowords, whereas men showed longer N400 latencies than women in response to real words. The current results demonstrate significant sex-related influences on phonological input processes. Therefore, existing neurophysiological normative data for age should be complemented for the factor sex.
Affiliation(s)
- Annelies Aerts: Department of Internal Medicine, Ghent University Hospital, De Pintelaan 185 (1K12-IA), 9000 Ghent, Belgium; Department of Neurology, Ghent University Hospital, Ghent, Belgium
- Pieter van Mierlo: Department of Electronics and Information Systems, Medical Image and Signal Processing Group, Ghent University, Ghent, Belgium
- Patrick Santens: Department of Internal Medicine, Ghent University Hospital, De Pintelaan 185 (1K12-IA), 9000 Ghent, Belgium; Department of Neurology, Ghent University Hospital, Ghent, Belgium
- Miet De Letter: Department of Neurology, Ghent University Hospital, Ghent, Belgium; Department of Speech, Language and Hearing Sciences, Ghent University, Ghent, Belgium
15.
16. Yoshimura Y, Kikuchi M, Ueno S, Shitamichi K, Remijn GB, Hiraishi H, Hasegawa C, Furutani N, Oi M, Munesue T, Tsubokawa T, Higashida H, Minabe Y. A longitudinal study of auditory evoked field and language development in young children. Neuroimage 2014; 101:440-447. [PMID: 25067819] [DOI: 10.1016/j.neuroimage.2014.07.034]
Abstract
The relationship between language development in early childhood and the maturation of brain functions related to the human voice remains unclear. Because the development of the auditory system likely correlates with language development in young children, we investigated the relationship between the auditory evoked field (AEF) and language development using non-invasive, child-customized magnetoencephalography (MEG) in a longitudinal design. Twenty typically developing children were recruited (aged 36-75 months at the first measurement) and re-investigated 11-25 months later. The AEF component P1m was examined to investigate the developmental changes in each participant's neural response to vocal stimuli. In addition, we examined the relationships between brain responses and language performance. The P1m peak amplitude in response to vocal stimuli significantly increased in both hemispheres from the first to the second measurement. However, no differences were observed in P1m latency. Notably, children with greater increases in P1m amplitude in the left hemisphere performed better on linguistic tests. Thus, our results indicate that the P1m evoked by vocal stimuli is a neurophysiological marker of language development in young children, and that MEG is a technique that can be used to investigate the maturation of the auditory cortex based on auditory evoked fields in young children. This study is the first to demonstrate a significant relationship between the development of the auditory processing system and the development of language abilities in young children.
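A minimal sketch of the kind of brain-behavior correlation reported here (per-child P1m amplitude change against language scores) follows; the toy values and the use of a Pearson correlation are assumptions, not the authors' data or exact statistics.

```python
# Minimal sketch (assumed): relating longitudinal change in left-hemisphere
# P1m amplitude to language performance with a Pearson correlation.
import numpy as np
from scipy.stats import pearsonr

# Per-child increase in left P1m amplitude between measurements and a
# language test score at follow-up; all values are illustrative.
p1m_increase = np.array([12.0, 5.5, 20.1, 8.3, 15.7, 3.2, 18.9, 10.4])
language_score = np.array([98, 85, 112, 90, 105, 80, 110, 95])

r, p = pearsonr(p1m_increase, language_score)
print(f"r = {r:.2f}, p = {p:.3f}")
```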
Affiliation(s)
- Yuko Yoshimura: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Mitsuru Kikuchi: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Sanae Ueno: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Kiyomi Shitamichi: Department of Psychiatry and Neurobiology, Graduate School of Medical Science, Kanazawa University, Kanazawa, Japan
- Gerard B Remijn: International Education Center, Kyushu University, Fukuoka, Japan
- Hirotoshi Hiraishi: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Chiaki Hasegawa: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Naoki Furutani: Department of Psychiatry and Neurobiology, Graduate School of Medical Science, Kanazawa University, Kanazawa, Japan
- Manabu Oi: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Toshio Munesue: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Tsunehisa Tsubokawa: Department of Anesthesiology, Graduate School of Medical Science, Kanazawa University, Kanazawa, Japan
- Haruhiro Higashida: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Yoshio Minabe: Department of Psychiatry and Neurobiology, Graduate School of Medical Science, Kanazawa University, Kanazawa, Japan
17.
Abstract
The earliest stages of cortical processing of speech sounds take place in the auditory cortex. Transcranial magnetic stimulation (TMS) studies have provided evidence that the human articulatory motor cortex also contributes to speech processing. For example, stimulation of the motor lip representation specifically influences the discrimination of lip-articulated speech sounds. However, the timing of the neural mechanisms underlying these articulator-specific motor contributions to speech processing is unknown. Furthermore, it is unclear whether they depend on attention. Here, we used magnetoencephalography and TMS to investigate the effect of attention on the specificity and timing of interactions between the auditory and motor cortex during the processing of speech sounds. We found that TMS-induced disruption of the motor lip representation specifically modulated the early auditory-cortex responses to lip-articulated speech sounds when they were attended. These articulator-specific modulations were left-lateralized and remarkably early, occurring 60-100 ms after sound onset. When speech sounds were ignored, the effect of this motor disruption on auditory-cortex responses was nonspecific and bilateral, and it started later, 170 ms after sound onset. The findings indicate that the articulatory motor cortex can contribute to the auditory processing of speech sounds even in the absence of behavioral tasks and when the sounds are not in the focus of attention. Importantly, the findings also show that attention can selectively facilitate the interaction of the auditory cortex with specific articulator representations during speech processing.
18. Steinschneider M, Nourski KV, Fishman YI. Representation of speech in human auditory cortex: is it special? Hear Res 2013; 305:57-73. [PMID: 23792076] [PMCID: PMC3818517] [DOI: 10.1016/j.heares.2013.05.013]
Abstract
Successful categorization of phonemes in speech requires that the brain analyze the acoustic signal along both spectral and temporal dimensions. Neural encoding of the stimulus amplitude envelope is critical for parsing the speech stream into syllabic units. Encoding of voice onset time (VOT) and place of articulation (POA), cues necessary for determining phonemic identity, occurs within shorter time frames. An unresolved question is whether the neural representation of speech is based on processing mechanisms that are unique to humans and shaped by learning and experience, or is based on rules governing general auditory processing that are also present in non-human animals. This question was examined by comparing the neural activity elicited by speech and other complex vocalizations in primary auditory cortex of macaques, which are limited vocal learners, with that in Heschl's gyrus, the putative location of primary auditory cortex in humans. Entrainment to the amplitude envelope is neither specific to humans nor to human speech. VOT is represented by responses time-locked to consonant release and voicing onset in both humans and monkeys. Temporal representation of VOT is observed both for isolated syllables and for syllables embedded in the more naturalistic context of running speech. The fundamental frequency of male speakers is represented by more rapid neural activity phase-locked to the glottal pulsation rate in both humans and monkeys. In both species, the differential representation of stop consonants varying in their POA can be predicted by the relationship between the frequency selectivity of neurons and the onset spectra of the speech sounds. These findings indicate that the neurophysiology of primary auditory cortex is similar in monkeys and humans despite their vastly different experience with human speech, and that Heschl's gyrus is engaged in general auditory, and not language-specific, processing. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
Affiliation(s)
- Mitchell Steinschneider: Department of Neurology and Department of Neuroscience, Rose F. Kennedy Center, Albert Einstein College of Medicine, 1300 Morris Park Avenue, Bronx, NY 10461, USA
- Kirill V. Nourski: Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Yonatan I. Fishman: Department of Neurology, Rose F. Kennedy Center, Albert Einstein College of Medicine, 1300 Morris Park Avenue, Bronx, NY 10461, USA
19. Aerts A, van Mierlo P, Hartsuiker RJ, Hallez H, Santens P, De Letter M. Neurophysiological investigation of phonological input: aging effects and development of normative data. Brain Lang 2013; 125:253-263. [PMID: 23542728] [DOI: 10.1016/j.bandl.2013.02.010]
Abstract
The current study investigated attended and unattended auditory phoneme discrimination using the P300 and Mismatch Negativity event-related potentials (ERPs). Three phonemic contrasts present in the Dutch language were compared. Additionally, auditory word recognition was investigated by presenting rare pseudowords among frequent words. There were two main goals: (1) to obtain normative data for ERP latencies (ms) and amplitudes (μV) and (2) to examine the influence of aging. Seventy-one healthy subjects (21-83 years) were included. During phoneme discrimination, aging was associated with increased latencies and decreased amplitudes, although a discrepancy between attended and unattended processing, as well as between phonemic contrasts, was found. During word recognition, aging had an impact only on ERPs elicited by real words, indicating that mainly semantic processes were altered, leaving lexical processes intact. Early sensory-perceptual processes, reflected by the N100 and P50, were free from aging influences. In the future, neurophysiological normative data can be applied to the evaluation of acquired language disorders.
Affiliation(s)
- Annelies Aerts: Faculty of Medicine and Health Sciences, Ghent University, De Pintelaan 185, B-9000 Ghent, Belgium
20. Herrmann B, Henry MJ, Obleser J. Frequency-specific adaptation in human auditory cortex depends on the spectral variance in the acoustic stimulation. J Neurophysiol 2013; 109:2086-2096. [DOI: 10.1152/jn.00907.2012]
Abstract
In auditory cortex, activation and subsequent adaptation are strongest for regions responding best to a stimulated tone frequency and weaker for regions responding best to other frequencies. Previous attempts to characterize the spread of neural adaptation in humans investigated the auditory cortex N1 component of the event-related potentials. Importantly, however, more recent studies in animals show that neural response properties are not independent of the stimulation context. To link these findings in animals to human scalp potentials, we investigated whether contextual factors of the acoustic stimulation, namely spectral variance, affect the spread of neural adaptation. Electroencephalograms were recorded while human participants listened to random tone sequences varying in spectral variance (narrow vs. wide). The spread of adaptation was investigated by modeling single-trial neural adaptation and subsequent recovery based on the spectro-temporal stimulation history. Frequency-specific neural responses were largest on the N1 component, and the modeled neural adaptation indices were strongly predictive of trial-by-trial amplitude variations. Yet the spread of adaptation varied with the spectral variance of the stimulation, such that adaptation spread was broadened for tone sequences with wide spectral variance. Thus, the present findings reveal context-dependent auditory cortex adaptation and point toward a flexibly adjusting auditory system that changes its response properties with the spectral requirements of the acoustic environment.
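A minimal sketch of such history-based adaptation modeling appears below; the exponential recovery, the Gaussian spectral spread, and all parameter values are illustrative assumptions, not the authors' model specification.

```python
# Minimal sketch (assumed, not the authors' model): a frequency-specific
# adaptation index derived from the spectro-temporal stimulation history.
# Each tone leaves adaptation that recovers exponentially in time (tau)
# and spreads across frequency with a Gaussian profile (sigma).
import numpy as np

def adaptation_index(freqs, onsets, sigma=0.5, tau=2.0):
    """For each tone, sum the residual adaptation left by all earlier tones.

    freqs: tone frequencies in octaves; onsets: onset times in seconds.
    """
    idx = np.zeros(len(freqs))
    for i in range(1, len(freqs)):
        dt = onsets[i] - onsets[:i]   # time elapsed since earlier tones
        df = freqs[i] - freqs[:i]     # spectral distance to earlier tones
        idx[i] = np.sum(np.exp(-dt / tau) * np.exp(-df**2 / (2 * sigma**2)))
    return idx

rng = np.random.default_rng(2)
onsets = np.cumsum(rng.uniform(0.4, 0.8, 50))  # toy random tone sequence
narrow = rng.normal(0.0, 0.5, 50)              # narrow spectral variance
wide = rng.normal(0.0, 2.0, 50)                # wide spectral variance

# Higher indices mean more accumulated adaptation; the two sequences mimic
# the narrow- vs. wide-variance contrast in the study.
print(adaptation_index(narrow, onsets).mean(),
      adaptation_index(wide, onsets).mean())
```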
Affiliation(s)
- Björn Herrmann: Max Planck Research Group “Auditory Cognition,” Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Molly J. Henry: Max Planck Research Group “Auditory Cognition,” Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jonas Obleser: Max Planck Research Group “Auditory Cognition,” Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
21. Hertrich I, Ackermann H. Neurophonetics. Wiley Interdiscip Rev Cogn Sci 2013; 4:191-200. [PMID: 26304195] [DOI: 10.1002/wcs.1211]
Abstract
Neurophonetics aims at the elucidation of the brain mechanisms underlying speech communication in our species. Clinical observations in patients with speech impairments following cerebral disorders provided the initial vantage point of this research area and indicated distinct functional-neuroanatomic systems supporting human speaking and listening. Subsequent approaches, which considered speech production a motor skill, investigated the vocal tract movements associated with spoken language by means of kinematic and electromyographic techniques, allowing, among other things, for the evaluation of computational models that posit elementary phonological gestures or a mental syllabary as basic units of speech motor control. As concerns speech perception, the working characteristics of auditory processing were first investigated with psychoacoustic techniques such as dichotic listening and categorical perception designs. More recently, functional hemodynamic neuroimaging and electrophysiological methods opened the door to the delineation of multiple stages of central auditory processing related to signal detection, classification, sensory memory processes, and, finally, lexical access. Beyond the control mechanisms in a stricter sense, both speech articulation and auditory processing represent examples of "grounded cognition": neither domain can be reduced to text-to-speech translation processes, as both are intimately interwoven with neuropsychological aspects of speech prosody, including the vocal expression of affect and the actual performance of speech acts, which transform propositional messages into "real" utterances. Furthermore, during language acquisition, the periphery of language (i.e., hearing and speaking behavior) plays a dominant role in the construction of a language-specific mental lexicon as well as language-specific action plans for the production of a speech message.
Affiliation(s)
- Ingo Hertrich: Department of General Neurology, Center of Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Hermann Ackermann: Department of General Neurology, Center of Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
22. Bien H, Zwitserlood P. Processing Nasals with and without Consecutive Context Phonemes: Evidence from Explicit Categorization and the N100. Front Psychol 2013; 4:21. [PMID: 23372561] [PMCID: PMC3557416] [DOI: 10.3389/fpsyg.2013.00021]
Abstract
With neurophysiological (N100) and explicit behavioral measures (two-alternative forced-choice categorization), we investigated how the processing of nasal segments of German is affected by following context phonemes and their place of articulation. We investigated pre-lexical processing, with speech stimuli excised from naturally spoken utterances. Participants heard nasals (/n/, /m/, and place-assimilated /n′/), both with and without a subsequent context phoneme. Context phonemes were voiced or voiceless, and either shared or did not share their place of articulation with the nasals. Explicit forced-choice categorization of the isolated nasals placed /n′/ in between the clear categorizations of /n/ and /m/. In early, implicit processing, /m/ elicited a significantly higher N100 amplitude than both /n/ and /n′/, with, most importantly, no difference between the latter two. When the nasals were presented in context (e.g., /nb/, /mt/), explicit categorizations were affected by both the nasal and the context phoneme: a consecutive labial led to more M-categorizations, a following alveolar to more N-categorizations. The early processing of the nasal+context stimuli in the N100 showed strong effects of context, modulated by the type of preceding nasal. Crucially, the context effects on the assimilated nasal /n′/ were clearly different from the effects on /m/, and indistinguishable from the effects on /n/. The grouping of the isolated nasals in the N100 replicates previous findings obtained with magnetoencephalography and a different set of stimuli. Importantly, the same grouping was observed for the nasal+context stimuli. Most models that deal with assimilation are challenged by the mere existence of phonemic context effects and/or use mechanisms that rely on lexical information. Our results support the existence, and early activation, of pre-lexical categories for phonemic segments. We suggest that, through experience with assimilation, specific speech-sound categories are flexible enough to accept (or even ignore) inappropriate place cues, particularly when the appropriate place information is still present in the signal.
Affiliation(s)
- Heidrun Bien: Institute for Psychology, Otto-Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
23. Schepers IM, Schneider TR, Hipp JF, Engel AK, Senkowski D. Noise alters beta-band activity in superior temporal cortex during audiovisual speech processing. Neuroimage 2012; 70:101-112. [PMID: 23274182] [DOI: 10.1016/j.neuroimage.2012.11.066]
Abstract
Speech recognition is improved when complementary visual information is available, especially under noisy acoustic conditions. Functional neuroimaging studies have suggested that the superior temporal sulcus (STS) plays an important role in this improvement. The spectrotemporal dynamics underlying audiovisual speech processing in the STS, and how these dynamics are affected by auditory noise, are not well understood. Using electroencephalography, we investigated how auditory noise affects audiovisual speech processing in event-related potentials (ERPs) and oscillatory activity. Spoken syllables were presented in audiovisual (AV) and auditory-only (A) trials at three auditory noise levels (no, low, and high). Responses to A stimuli were subtracted from responses to AV stimuli, separately for each noise level, and these difference responses were subjected to statistical analysis. Central ERPs differed between the no-noise and the two noise conditions from 130 to 150 ms and from 170 to 210 ms after auditory stimulus onset. Source localization using the local autoregressive average procedure revealed an involvement of the lateral temporal lobe, encompassing the superior and middle temporal gyri. Neuronal activity in the beta band (16 to 32 Hz) was suppressed at central channels around 100 to 400 ms after auditory stimulus onset in the AV minus A signal averaged over the three noise levels. This suppression was smaller in the high-noise condition than in the no-noise and low-noise conditions, possibly reflecting disturbed recognition or altered processing of multisensory speech stimuli. Source analysis of the beta-band effect using linear beamforming demonstrated an involvement of the STS. Our study shows that auditory noise alters audiovisual speech processing in ERPs localized to the lateral temporal lobe and provides evidence that beta-band activity in the STS plays a role in audiovisual speech processing under both regular and noisy acoustic conditions.
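A minimal sketch of the AV-minus-A contrast that precedes the ERP and beta-band analyses is given below; the array shapes and noise-level labels are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' pipeline): subtracting
# auditory-only (A) from audiovisual (AV) responses per noise level.
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_channels, n_times = 120, 64, 600
levels = ["no", "low", "high"]

av = {lvl: rng.standard_normal((n_trials, n_channels, n_times)) for lvl in levels}
a = {lvl: rng.standard_normal((n_trials, n_channels, n_times)) for lvl in levels}

# AV minus A difference of trial-averaged responses, one per noise level;
# these difference waves would then enter the statistical analysis.
av_minus_a = {lvl: av[lvl].mean(axis=0) - a[lvl].mean(axis=0) for lvl in levels}
print({lvl: d.shape for lvl, d in av_minus_a.items()})
```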
Affiliation(s)
- Inga M Schepers
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany.
24
Yoshimura Y, Kikuchi M, Shitamichi K, Ueno S, Remijn GB, Haruta Y, Oi M, Munesue T, Tsubokawa T, Higashida H, Minabe Y. Language performance and auditory evoked fields in 2- to 5-year-old children. Eur J Neurosci 2012; 35:644-50. [PMID: 22321133 DOI: 10.1111/j.1460-9568.2012.07998.x] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
Language development progresses at a dramatic rate in preschool children. As rapid temporal processing of speech signals is important in everyday colloquial environments, we performed magnetoencephalography (MEG) to investigate the link between speech-evoked responses during rapid-rate stimulus presentation (interstimulus interval < 1 s) and language performance in 2- to 5-year-old children (n = 59). Our results indicated that syllables at this short stimulus interval evoked a detectable P50m, but not N100m, in most participants, indicating a marked influence of the longer neuronal refractory period at this stimulation rate. Equivalent dipole estimation showed that the intensity of the P50m component in the left hemisphere was positively correlated with language performance (conceptual inference ability). These positive correlations may reflect the maturation of synaptic organisation, or axonal maturation and myelination, underlying the acquisition of linguistic abilities. The present study is among the first to use MEG to study brain maturation pertaining to language abilities in preschool children.
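As a sketch of the reported brain-behavior link, the toy example below correlates hypothetical left-hemisphere P50m dipole intensities with language scores across children; all numbers are synthetic placeholders, and only the analysis shape (a Pearson correlation over participants) reflects the abstract.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_children = 59
p50m_left = rng.gamma(2.0, 5.0, n_children)  # dipole moments (nAm), placeholder
language_score = 0.4 * p50m_left + rng.standard_normal(n_children)  # toy scores

r, p = pearsonr(p50m_left, language_score)
print(f"r = {r:.2f}, p = {p:.3g}")  # a positive r would mirror the reported link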
Affiliation(s)
- Yuko Yoshimura
- Department of Psychiatry and Neurobiology, Graduate School of Medical Science, Kanazawa University, Kanazawa, Japan
25
Schild U, Röder B, Friedrich CK. Neuronal spoken word recognition: The time course of processing variation in the speech signal. Lang Cogn Process 2012. [DOI: 10.1080/01690965.2010.503532] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
26
Sjerps MJ, Mitterer H, McQueen JM. Listening to different speakers: On the time-course of perceptual compensation for vocal-tract characteristics. Neuropsychologia 2011; 49:3831-46. [DOI: 10.1016/j.neuropsychologia.2011.09.044] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2011] [Revised: 09/22/2011] [Accepted: 09/27/2011] [Indexed: 11/26/2022]
27
Seol J, Oh M, Kim JS, Jin SH, Kim SI, Chung CK. Discrimination of timbre in early auditory responses of the human brain. PLoS One 2011; 6:e24959. [PMID: 21949807 PMCID: PMC3174256 DOI: 10.1371/journal.pone.0024959] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2011] [Accepted: 08/25/2011] [Indexed: 12/03/2022] Open
Abstract
Background The issue of how differences in timbre are represented in the neural response has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employed phasing and clipping of tones to produce auditory stimuli differing in timbre, reflecting its multidimensional nature. We investigated the auditory response and sensory gating using magnetoencephalography (MEG). Methodology/Principal Findings Thirty-five healthy subjects without hearing deficits participated in the experiments. Pairs of tones with the same or different timbre were presented in a conditioning (S1)-testing (S2) paradigm with an interstimulus interval of 500 ms. The magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres, supporting the view that timbre, at least as manipulated by phasing and clipping, is discriminated during early auditory processing. An effect of S1 on the response to S2 appeared in the M100 of the left hemisphere, whereas in the right hemisphere both the M50 and M100 responses to S2 reflected whether the two stimuli in a pair were identical. Both M50 and M100 magnitudes differed with presentation order (S1 vs. S2) for both the same and different conditions in both hemispheres. Conclusions/Significance Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, auditory sensory gating is determined not by the stimulus that directly evokes the response, but by whether or not the two stimuli are identical in timbre.
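The conditioning-testing design lends itself to a standard sensory-gating index, the S2/S1 amplitude ratio; the sketch below illustrates that computation on synthetic M50 amplitudes. The ratio is a common convention in paired-stimulus paradigms and is shown here only as a didactic stand-in, not necessarily the exact measure used in this study.

import numpy as np

rng = np.random.default_rng(3)
n_subjects = 35
# Placeholder M50 amplitudes (fT) for the first (S1) and second (S2) tones.
m50_s1 = rng.uniform(40, 80, n_subjects)
m50_s2 = m50_s1 * rng.uniform(0.4, 0.9, n_subjects)  # attenuated S2 responses

gating_ratio = m50_s2 / m50_s1   # values < 1 indicate suppression of S2
print(gating_ratio.mean())

# Comparing these ratios between same-timbre and different-timbre pairs would
# test whether gating depends on stimulus identity, as the abstract reports.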
Affiliation(s)
- Jaeho Seol
- Interdisciplinary Program in Cognitive Science, Seoul National University College of Humanities, Seoul, Korea
- MEG Center, Department of Neurosurgery, Seoul National University Hospital, Seoul, Korea
- MiAe Oh
- Department of Statistics, Seoul National University College of Natural Sciences, Seoul, Korea
- June Sic Kim
- MEG Center, Department of Neurosurgery, Seoul National University Hospital, Seoul, Korea
- Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Korea
- Seung-Hyun Jin
- MEG Center, Department of Neurosurgery, Seoul National University Hospital, Seoul, Korea
- Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Korea
- Sun Il Kim
- Department of Biomedical Engineering, Hanyang University, Seoul, Korea
- Chun Kee Chung
- Interdisciplinary Program in Cognitive Science, Seoul National University College of Humanities, Seoul, Korea
- MEG Center, Department of Neurosurgery, Seoul National University Hospital, Seoul, Korea
- Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Korea
28
Steinschneider M, Nourski KV, Kawasaki H, Oya H, Brugge JF, Howard MA. Intracranial study of speech-elicited activity on the human posterolateral superior temporal gyrus. Cereb Cortex 2011; 21:2332-47. [PMID: 21368087 DOI: 10.1093/cercor/bhr014] [Citation(s) in RCA: 71] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
To clarify speech-elicited response patterns within auditory-responsive cortex of the posterolateral superior temporal (PLST) gyrus, time-frequency analyses of event-related band power in the high-gamma frequency range (75-175 Hz) were performed on electrocorticograms recorded from high-density subdural grid electrodes in 8 patients undergoing evaluation for medically intractable epilepsy. Stimuli were 6 stop consonant-vowel (CV) syllables that varied in their consonant place of articulation (POA) and voice onset time (VOT). Initial augmentation was maximal over several centimeters of PLST, lasted about 400 ms, and was often followed by suppression and a local outward expansion of activation. Maximal gamma power overlapped either the Nα or the Pβ deflection of the average evoked potential (AEP). Correlations were observed between the relative magnitudes of gamma-band responses elicited by unvoiced stop CV syllables (/pa/, /ka/, /ta/) and their corresponding voiced stop CV syllables (/ba/, /ga/, /da/), as well as with the VOT of the stimuli. VOT was also represented in the temporal patterns of the AEP. These findings, obtained in the passive awake state, indicate that PLST discriminates acoustic features associated with POA and VOT, and they serve as a benchmark against which task-related speech activity can be compared.
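A common recipe for event-related high-gamma band power of this kind is to band-pass filter, take the Hilbert envelope, and normalize to a pre-stimulus baseline; the sketch below applies that recipe to synthetic single-channel "ECoG" epochs. All parameters are assumptions, and the study's own time-frequency method may differ.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                         # Hz, assumed ECoG sampling rate
b, a = butter(4, [75, 175], btype="band", fs=fs)  # high-gamma filter

def high_gamma_erbp(epochs, baseline):
    """Event-related band power: trial-averaged high-gamma envelope,
    expressed relative to a pre-stimulus baseline window (sample slice)."""
    env = np.abs(hilbert(filtfilt(b, a, epochs, axis=-1))) ** 2
    power = env.mean(axis=0)                      # average over trials
    return power / power[baseline].mean()         # normalize to baseline

rng = np.random.default_rng(4)
epochs = rng.standard_normal((50, 1200))          # 50 trials, 1.2-s epochs
erbp = high_gamma_erbp(epochs, baseline=slice(0, 200))  # first 200 ms
print(erbp.max())   # augmentation would appear as values well above 1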
29
Scharinger M, Merickel J, Riley J, Idsardi WJ. Neuromagnetic evidence for a featural distinction of English consonants: sensor- and source-space data. BRAIN AND LANGUAGE 2011; 116:71-82. [PMID: 21185073 PMCID: PMC3031676 DOI: 10.1016/j.bandl.2010.11.002] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/22/2009] [Revised: 10/31/2010] [Accepted: 11/15/2010] [Indexed: 05/30/2023]
Abstract
Speech sounds can be classified on the basis of their underlying articulators or on the basis of the acoustic characteristics resulting from particular articulatory positions. Research in speech perception suggests that distinctive features are based on both articulatory and acoustic information. In recent years, neuroelectric and neuromagnetic investigations have provided evidence for the brain's early sensitivity to distinctive features and their acoustic consequences, particularly for place-of-articulation distinctions. Here, we compare English consonants in a mismatch field design across two broad and distinct places of articulation - labial and coronal - and provide further evidence that early evoked auditory responses are sensitive to these features. We also add to the findings of asymmetric consonant processing, although we do not find support for coronal underspecification. Labial glides (Experiment 1) and fricatives (Experiment 2) elicited larger mismatch responses than their coronal counterparts. Interestingly, their M100 dipoles differed along the anterior/posterior dimension of auditory cortex, which has previously been found to spatially reflect place-of-articulation differences. Our results are discussed with respect to the acoustic and articulatory bases of featural speech-sound classifications, and with respect to a model that maps distinctive phonetic features onto long-term representations of speech sounds.
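As a reminder of the basic computation underlying such mismatch designs, the sketch below forms the deviant-minus-standard difference wave from synthetic standard and deviant epochs; the window, rates, and trial counts are illustrative assumptions, not the study's parameters.

import numpy as np

fs = 600                                        # Hz, assumed
t = np.arange(-0.1, 0.5, 1 / fs)
rng = np.random.default_rng(5)

standards = rng.standard_normal((400, t.size))  # frequent-stimulus epochs
deviants = rng.standard_normal((80, t.size))    # rare-stimulus epochs

mmf = deviants.mean(axis=0) - standards.mean(axis=0)  # mismatch difference wave
win = (t >= 0.15) & (t <= 0.25)                       # typical mismatch window
print(mmf[win].mean())  # larger magnitudes for labials would parallel the text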
Affiliation(s)
- Mathias Scharinger
- Department of Linguistics, University of Maryland, College Park, MD 20742-7505, USA.
30
Obleser J, Eisner F. Pre-lexical abstraction of speech in the auditory cortex. Trends Cogn Sci 2009; 13:14-9. [PMID: 19070534 DOI: 10.1016/j.tics.2008.09.005] [Citation(s) in RCA: 99] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2008] [Revised: 09/10/2008] [Accepted: 09/11/2008] [Indexed: 10/21/2022]
31
Engineer CT, Perez CA, Chen YH, Carraway RS, Reed AC, Shetake JA, Jakkamsetti V, Chang KQ, Kilgard MP. Cortical activity patterns predict speech discrimination ability. Nat Neurosci 2008; 11:603-8. [PMID: 18425123 DOI: 10.1038/nn.2109] [Citation(s) in RCA: 157] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2008] [Accepted: 03/17/2008] [Indexed: 11/09/2022]
Abstract
Neural activity in the cerebral cortex can explain many aspects of sensory perception. Extensive psychophysical and neurophysiological studies of visual motion and vibrotactile processing show that the firing rate of cortical neurons averaged across 50-500 ms is well correlated with discrimination ability. In this study, we tested the hypothesis that primary auditory cortex (A1) neurons use temporal precision on the order of 1-10 ms to represent speech sounds shifted into the rat hearing range. Neural discrimination was highly correlated with behavioral performance on 11 consonant-discrimination tasks when spike timing was preserved and was not correlated when spike timing was eliminated. This result suggests that spike timing contributes to the auditory cortex representation of consonant sounds.
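The spike-timing point can be illustrated with a toy template-matching classifier: two synthetic "sounds" with near-identical average firing rates but different response latencies are separable with fine (10-ms) bins and near chance with a single rate bin. This is only a didactic stand-in under assumed parameters, not the study's neural-discrimination analysis.

import numpy as np

rng = np.random.default_rng(6)

def trial(latency_ms, n_ms=400, base_rate=0.05):
    """1-ms binary spike train with an evoked burst at a sound-specific latency."""
    x = (rng.random(n_ms) < base_rate).astype(float)
    x[latency_ms:latency_ms + 10] = 1.0   # 10-ms evoked burst
    return x

# Two "sounds" with near-identical spike counts but different burst latencies.
sound_a = np.array([trial(50) for _ in range(100)])
sound_b = np.array([trial(120) for _ in range(100)])

def accuracy(bin_ms):
    """Nearest-template classification after binning at the given resolution."""
    def binned(x):
        return x.reshape(x.shape[0], -1, bin_ms).sum(axis=-1)
    xa, xb = binned(sound_a), binned(sound_b)
    ta, tb = xa.mean(axis=0), xb.mean(axis=0)   # mean-response templates
    hits = (np.linalg.norm(xa - ta, axis=1) < np.linalg.norm(xa - tb, axis=1)).sum()
    hits += (np.linalg.norm(xb - tb, axis=1) < np.linalg.norm(xb - ta, axis=1)).sum()
    return hits / 200

print("10-ms bins:", accuracy(10))    # timing preserved: near-perfect
print("400-ms bin:", accuracy(400))   # rate only: near chance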
Affiliation(s)
- Crystal T Engineer
- School of Behavioral and Brain Sciences, University of Texas at Dallas, 800 W. Campbell Road, Richardson, Texas 75080, USA
32
Eulitz C, Obleser J. Perception of acoustically complex phonological features in vowels is reflected in the induced brain-magnetic activity. Behav Brain Funct 2007; 3:26. [PMID: 17543108 PMCID: PMC1892031 DOI: 10.1186/1744-9081-3-26] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2006] [Accepted: 06/01/2007] [Indexed: 11/17/2022] Open
Abstract
A central issue in speech recognition is which basic units of speech are extracted by the auditory system and used for lexical access. One suggestion is that complex acoustic-phonetic information is mapped onto abstract phonological representations of speech and that a finite set of phonological features is used to guide speech perception. Previous studies analyzing the N1m component of the auditory evoked field have shown that this holds for the acoustically simple feature place of articulation. Brain-magnetic correlates indexing the extraction of acoustically more complex features, such as lip rounding (ROUND) in vowels, had not yet been identified. The present study uses magnetoencephalography (MEG) to describe the spatio-temporal neural dynamics underlying the extraction of phonological features. We examined the induced electromagnetic brain response to German vowels and found that the event-related desynchronization in the upper beta band was prolonged for those vowels that exhibit the lip-rounding feature (ROUND). It was the presence of this feature, rather than any circumscribed single acoustic parameter such as formant frequency, that explained the differences between the experimental conditions. We conclude that the prolonged event-related desynchronization in the upper beta band correlates with the computational effort required to extract acoustically complex phonological features from the speech signal. The results provide an additional biomagnetic parameter for studying the mechanisms of speech perception.
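Event-related desynchronization (ERD) is conventionally expressed as the percentage change of band power relative to a pre-stimulus baseline; the sketch below computes that measure on synthetic epochs in an assumed upper-beta band (20-30 Hz), which may not match the band limits used in this study.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500                                         # Hz, assumed
b, a = butter(4, [20, 30], btype="band", fs=fs)  # upper beta band, assumed

def erd_percent(epochs, baseline):
    """ERD(t) = (power(t) - baseline power) / baseline power * 100."""
    env = np.abs(hilbert(filtfilt(b, a, epochs, axis=-1))) ** 2
    power = env.mean(axis=0)                  # induced power over trials
    ref = power[baseline].mean()
    return (power - ref) / ref * 100.0        # negative values indicate ERD

rng = np.random.default_rng(7)
epochs = rng.standard_normal((60, 600))       # 60 trials, 1.2 s at 500 Hz
print(erd_percent(epochs, slice(0, 100)).min())
# A more prolonged negative deflection for [+ROUND] vowels would parallel
# the finding described above.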
Affiliation(s)
- Carsten Eulitz
- Department of Linguistics, University of Konstanz, Germany
- Jonas Obleser
- Department of Linguistics, University of Konstanz, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Germany