1. Mueller JL, Weyers I, Friederici AD, Männel C. Individual differences in auditory perception predict learning of non-adjacent tone sequences in 3-year-olds. Front Hum Neurosci 2024;18:1358380. PMID: 38638804; PMCID: PMC11024384; DOI: 10.3389/fnhum.2024.1358380
Abstract
Auditory processing of speech and non-speech stimuli often involves the analysis and acquisition of non-adjacent sound patterns. Previous studies using speech material have demonstrated (i) children's early-emerging ability to extract non-adjacent dependencies (NADs) and (ii) a relation between basic auditory perception and this ability. Yet, it is currently unclear whether children show similar sensitivities and similar perceptual influences for NADs in the non-linguistic domain. We conducted an event-related potential study with 3-year-old children using a sine-tone-based oddball task, which simultaneously tested for NAD learning and auditory perception by means of varying sound intensity. Standard stimuli were AxB sine-tone sequences, in which specific A elements predicted specific B elements across variable x elements. NAD deviants violated the dependency between A and B, and intensity deviants were reduced in amplitude. Both elicited similar frontally distributed positivities, suggesting successful deviant detection. Crucially, the amplitude of the sound-intensity discrimination effect predicted the amplitude of the NAD learning effect. These results are taken as evidence that NAD learning in the non-linguistic domain is functional in 3-year-olds and that basic auditory processes are related to the learning of higher-order auditory regularities outside the linguistic domain as well.
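To make the AxB oddball design concrete, here is a minimal Python sketch of how such a stream could be generated. The tone frequencies, pair mapping, and deviant rate below are illustrative assumptions, not the study's actual parameters.

```python
import random

# Hypothetical sine-tone inventory (Hz); the study's real frequencies are not given here.
A_TONES = {"A1": 300.0, "A2": 400.0}
B_TONES = {"B1": 800.0, "B2": 900.0}
X_TONES = [500.0, 550.0, 600.0, 650.0]   # variable middle elements

# The non-adjacent dependency: each A element predicts a specific B element.
PAIRS = {"A1": "B1", "A2": "B2"}

def make_stream(n_trials, p_deviant=0.15, seed=0):
    """Build AxB triples; NAD deviants pair A with the wrong B element."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        a = rng.choice(sorted(PAIRS))
        x = rng.choice(X_TONES)
        if rng.random() < p_deviant:
            # NAD deviant: violate the A -> B dependency
            b = next(v for k, v in PAIRS.items() if k != a)
            kind = "NAD_deviant"
        else:
            b = PAIRS[a]                   # standard: obey A -> B
            kind = "standard"
        trials.append((A_TONES[a], x, B_TONES[b], kind))
    return trials
```

Because the x element varies freely, a learner can only detect deviants by tracking the non-adjacent A-B relation, which is what makes the design a test of NAD learning rather than of local transitions.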
Affiliation(s)
- Jutta L. Mueller
- Department of Linguistics, University of Vienna, Vienna, Austria
- Vienna Cognitive Science Research HUB, Vienna, Austria
- Ivonne Weyers
- Department of Linguistics, University of Vienna, Vienna, Austria
- Angela D. Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Claudia Männel
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Audiology and Phoniatrics, Charité – Universitätsmedizin Berlin, Berlin, Germany
2. ERP Indicators of Phonological Awareness Development in Children: A Systematic Review. Brain Sci 2023;13:290. PMID: 36831833; PMCID: PMC9954228; DOI: 10.3390/brainsci13020290
Abstract
Abstract
Phonological awareness is the ability to correctly recognize and manipulate phonological structures. Its role in reading development has become evident in behavioral research showing that it is closely tied to measures of phonological processing and reading ability, and in ERP research examining how phonological processing training can benefit reading skills. However, few attempts have been made to systematically review how phonological awareness itself develops neurocognitively. In the present review, we screened 224 papers and systematically reviewed 40 that explored phonological awareness and phonological processing using ERP methodology in both typically developing children and children with reading problems. This review highlights ERP components that can be used as neurocognitive predictors of early developmental dyslexia and reading disorders in young children. Additionally, we present how phonological processing develops neurocognitively throughout childhood, as well as which phonological tasks can predict the development of phonological awareness before reading skills emerge. Neurocognitive measures of early phonological processing can serve as supplementary diagnostic sources alongside behavioral measures of reading ability, because they reflect aspects of phonological sensitivity different from those captured by behavioral measures.
3. Yang Y, Li Q, Xiao Y, Liu Y, Sun K, Li B, Zheng Q. Auditory Discrimination Elicited by Nonspeech and Speech Stimuli in Children With Congenital Hearing Loss. J Speech Lang Hear Res 2022;65:3981-3995. PMID: 36095326; PMCID: PMC9927627; DOI: 10.1044/2022_JSLHR-22-00008
Abstract
PURPOSE Congenital deafness not only delays auditory development but also hampers the ability to perceive nonspeech and speech signals. This study used auditory event-related potentials to explore the mismatch negativity (MMN), P3a, negative wave (Nc), and late discriminative negativity (LDN) components in children with and without hearing loss. METHOD Nineteen children with normal hearing (CNH) and 17 children with hearing loss (CHL) participated in this study. Two sets of pure tones (1 kHz vs. 1.1 kHz) and lexical tones (/ba2/ vs. /ba4/) were used to examine the auditory discrimination process. RESULTS MMN was elicited by both the pure tones and the lexical tones in both groups. MMN latencies for nonspeech and speech stimuli were later in CHL than in CNH. Additionally, in CNH the MMN latency for speech was later in the left than in the right hemisphere, and the speech-elicited MMN amplitude revealed a discrimination deficit in CHL relative to CNH. Although the P3a latency and amplitude elicited by nonspeech did not differ significantly between CHL and CNH, the speech-elicited Nc amplitude was much lower in CHL than in CNH. Furthermore, the LDN latency elicited by nonspeech was later in CHL than in CNH, and the speech-elicited LDN amplitude was larger over the right hemisphere in both groups. CONCLUSION By incorporating nonspeech and speech auditory conditions, we propose using MMN, Nc, and LDN as potential indices for investigating auditory perception, memory, and discrimination.
Affiliation(s)
- Ying Yang
- Department of Hearing and Speech Rehabilitation, Binzhou Medical University, Yantai, China
- Qiong Li
- Department of Hearing and Speech Rehabilitation, Binzhou Medical University, Yantai, China
- Yanan Xiao
- Department of Hearing and Speech Rehabilitation, Binzhou Medical University, Yantai, China
- Yulu Liu
- Department of Hearing and Speech Rehabilitation, Binzhou Medical University, Yantai, China
- Kangning Sun
- Department of Hearing and Speech Rehabilitation, Binzhou Medical University, Yantai, China
- Bo Li
- Department of Hearing and Speech Rehabilitation, Binzhou Medical University, Yantai, China
- Qingyin Zheng
- Department of Otolaryngology, Case Western Reserve University, Cleveland, OH
4. Deocampo JA, Smith GNL, Kronenberger WG, Pisoni DB, Conway CM. The Role of Statistical Learning in Understanding and Treating Spoken Language Outcomes in Deaf Children With Cochlear Implants. Lang Speech Hear Serv Sch 2019;49:723-739. PMID: 30120449; DOI: 10.1044/2018_LSHSS-STLT1-17-0138
Abstract
Purpose Statistical learning, the ability to learn patterns in environmental input, is increasingly recognized as a foundational mechanism necessary for the successful acquisition of spoken language. Spoken language is a complex, serially presented signal that contains embedded statistical relations among linguistic units, such as phonemes, morphemes, and words, which represent the phonotactic and syntactic rules of language. In this review article, we first review recent work demonstrating that, in typical language development, individuals who display better nonlinguistic statistical learning abilities also perform better on different measures of language. We next review research findings suggesting that children who are deaf and use cochlear implants may have difficulty learning sequential input patterns, possibly due to auditory and/or linguistic deprivation early in development, and that the children who show better sequence learning abilities also display better spoken language outcomes. Finally, we present recent findings suggesting that it may be possible to improve core statistical learning abilities with specialized training and interventions, and that such improvements can facilitate the acquisition and processing of spoken language. Method We conducted a literature search through online databases including PsycINFO and PubMed, supplemented by relevant articles gleaned from the reference sections of the review articles consulted. Search terms included various combinations of the following: sequential learning, sequence learning, statistical learning, sequence processing, procedural learning, procedural memory, implicit learning, language, computerized training, working memory training, statistical learning training, deaf, deafness, hearing impairment, hearing impaired, DHH, hard of hearing, cochlear implant(s), hearing aid(s), and auditory deprivation.
To keep this review concise and clear, we limited inclusion to the foundational and most recent (2005-2018) relevant studies that explicitly included research or theoretical perspectives on statistical or sequential learning. We summarize and synthesize this literature to inform the understanding and treatment of language delays in children using cochlear implants through the lens of statistical learning. Conclusions We suggest that understanding how statistical learning contributes to spoken language development is important for understanding some of the difficulties that children who are deaf and use cochlear implants might face, and argue that it may be beneficial to develop novel language interventions focused specifically on improving core foundational statistical learning skills.
Affiliation(s)
- Gretchen N L Smith
- Department of Otolaryngology-Head and Neck Surgery, Indiana University School of Medicine, Indianapolis
- William G Kronenberger
- Department of Otolaryngology-Head and Neck Surgery, Indiana University School of Medicine, Indianapolis; Department of Psychiatry, Indiana University School of Medicine, Indianapolis
- David B Pisoni
- Department of Otolaryngology-Head and Neck Surgery, Indiana University School of Medicine, Indianapolis; Department of Psychological and Brain Sciences, Indiana University, Bloomington
- Christopher M Conway
- Department of Psychology, Georgia State University, Atlanta; The Neuroscience Institute, Georgia State University, Atlanta
5. Giustolisi B, Emmorey K. Visual Statistical Learning With Stimuli Presented Sequentially Across Space and Time in Deaf and Hearing Adults. Cogn Sci 2018;42:3177-3190. PMID: 30320454; DOI: 10.1111/cogs.12691
Abstract
This study investigated visual statistical learning (VSL) in 24 deaf signers and 24 hearing non-signers. Previous research with hearing individuals suggests that SL mechanisms support literacy. Our first goal was to assess whether VSL was associated with reading ability in deaf individuals, and whether this relation was supported by a link between VSL and sign language skill. Our second goal was to test the Auditory Scaffolding Hypothesis, which predicts that deaf people should be impaired on sequential processing tasks. For the VSL task, we adopted a modified version of the triplet learning paradigm, with stimuli presented sequentially across space and time. Results revealed that measures of sign language skill (sentence comprehension/repetition) did not correlate with VSL scores, possibly due to the sequential nature of our VSL task. Reading comprehension scores (PIAT-R) were a significant predictor of VSL accuracy in hearing but not deaf participants. This finding might reflect the sequential nature of the VSL task and a less salient role of sequential orthography-to-phonology mapping in deaf readers compared to hearing readers. The two groups did not differ in VSL scores; however, when reading ability was taken into account, VSL scores were higher for the deaf group than the hearing group. Overall, this evidence is inconsistent with the Auditory Scaffolding Hypothesis and suggests that humans can develop efficient sequencing abilities even in the absence of sound.
Affiliation(s)
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University
6. Daikoku T. Neurophysiological Markers of Statistical Learning in Music and Language: Hierarchy, Entropy, and Uncertainty. Brain Sci 2018;8:E114. PMID: 29921829; PMCID: PMC6025354; DOI: 10.3390/brainsci8060114
Abstract
Statistical learning (SL) is learning based on the transitional probabilities embedded in sequential phenomena such as music and language. It has been considered an implicit and domain-general mechanism that is innate in the human brain and that functions independently of the intention to learn and of awareness of what has been learned. SL is an interdisciplinary notion that incorporates information technology, artificial intelligence, musicology, and linguistics, as well as psychology and neuroscience. A body of recent work suggests that SL can be reflected in neurophysiological responses within the framework of information theory. This paper reviews a range of work on SL in adults and children that suggests overlapping and independent neural correlates in music and language, and that indicates impairments of SL. Furthermore, this article discusses the relationships between the order of transitional probabilities (TPs) (i.e., the hierarchy of local statistics) and entropy (i.e., global statistics) with regard to SL strategies in the human brain; argues for the importance of information-theoretic approaches to understanding domain-general, higher-order, and global SL covering both real-world music and language; and proposes promising approaches for applications in therapy and pedagogy from the perspectives of psychology, neuroscience, computational studies, musicology, and linguistics.
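The two statistics this abstract contrasts, first-order transitional probabilities (local) and entropy (global), can be sketched in a few lines of Python; this is a generic textbook formulation, not code from the reviewed work.

```python
from collections import Counter
import math

def transitional_probabilities(seq):
    """First-order TPs: P(next | current), estimated from bigram counts."""
    bigrams = Counter(zip(seq, seq[1:]))
    firsts = Counter(seq[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in bigrams.items()}

def shannon_entropy(seq):
    """Shannon entropy (bits) of the element distribution: a global statistic."""
    counts = Counter(seq)
    total = len(seq)
    return -sum(c / total * math.log2(c / total) for c in counts.values())
```

On a fully predictable alternation such as "ABABAB", every TP equals 1.0 (locally deterministic), while the entropy is 1 bit (two equiprobable elements globally), which illustrates why the two measures capture different aspects of a sequence.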
Affiliation(s)
- Tatsuya Daikoku
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany.
7. Torkildsen JVK, Arciuli J, Haukedal CL, Wie OB. Does a lack of auditory experience affect sequential learning? Cognition 2018;170:123-129. PMID: 28988151; DOI: 10.1016/j.cognition.2017.09.017
8. Speech-evoked auditory brainstem responses in children with hearing loss. Int J Pediatr Otorhinolaryngol 2017;99:24-29. PMID: 28688560; DOI: 10.1016/j.ijporl.2017.05.010
Abstract
OBJECTIVE The main objective of the present study was to investigate subcortical auditory processing in children with sensorineural hearing loss. METHODS Auditory brainstem responses (ABRs) were recorded using click and speech (/da/) stimuli. Twenty-five children aged 6-14 years participated in the study: 13 with normal hearing acuity and 12 with sensorineural hearing loss. RESULTS No significant differences were observed for the click-evoked ABRs between the normal hearing and hearing-impaired groups. For the speech-evoked ABRs, no significant group differences were found for the latencies of the onset (V and A), transition (C), one of the steady-state waves (F), and offset (O) responses. However, the latencies of the steady-state waves D and E were significantly longer in the hearing-impaired group than in the normal hearing group. Furthermore, the amplitude of the offset wave O and of the envelope frequency response (EFR) of the speech-evoked ABRs was significantly larger in the hearing-impaired group. CONCLUSIONS Results obtained from the speech-evoked ABRs suggest that children with mild to moderately severe sensorineural hearing loss show a specific pattern of subcortical auditory processing. Our results show differences in the speech-evoked ABRs of normal hearing versus hearing-impaired children and add to the literature on how children with hearing loss process speech at the brainstem level.
9. Fu M, Wang L, Zhang M, Yang Y, Sun X. A mismatch negativity study in Mandarin-speaking children with sensorineural hearing loss. Int J Pediatr Otorhinolaryngol 2016;91:128-140. PMID: 27863627; DOI: 10.1016/j.ijporl.2016.10.020
Abstract
OBJECTIVE (a) To examine the effects of sensorineural hearing loss on the discriminability of linguistic and non-linguistic stimuli at the cortical level, and (b) to examine whether the cortical responses differ with the chronological age at intervention, the degree of hearing loss, or the acoustic stimulation mode in children with severe and profound hearing loss. METHODS Mismatch negativity (MMN) responses were collected from 43 children with severe to profound bilateral sensorineural hearing loss and 20 children with normal hearing (age: 3-6 years). In the non-verbal condition, pure tones of 1 kHz and 1.1 kHz served as the standard and the deviant, respectively. In the verbal condition, the Mandarin Chinese tokens /ba2/ and /ba4/ served as the standard and the deviant, respectively. The latency and amplitude of the MMN responses were collected and analyzed. RESULTS Overall, children with hearing loss showed longer latencies and lower amplitudes of the MMN responses to both non-verbal and verbal stimulation. The latency for the verbal /ba2/-/ba4/ pair was longer than that for the non-verbal 1 kHz-1.1 kHz pair in both groups. CONCLUSIONS Children with hearing loss, especially those who received intervention after 2 years of age, showed substantially weakened neural responses to lexical tones and pure tones. Thus, the chronological age at which children receive hearing intervention may affect how effectively they discriminate verbal and non-verbal signals.
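The MMN latency and amplitude measures analyzed here come from a deviant-minus-standard difference wave. A minimal sketch of that computation on synthetic single-electrode data follows; the 100-250 ms search window is a conventional assumption, not this study's analysis window.

```python
import numpy as np

def mmn_peak(standard_epochs, deviant_epochs, times, window=(0.100, 0.250)):
    """Deviant-minus-standard difference wave and its MMN peak.

    standard_epochs, deviant_epochs: (n_trials, n_samples) arrays from one
    electrode (e.g., Fz); times: (n_samples,) seconds from stimulus onset.
    Returns the difference wave plus the latency and amplitude of the most
    negative deflection inside the search window.
    """
    diff = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    i = np.argmin(diff[mask])                 # MMN is a negativity
    return diff, float(times[mask][i]), float(diff[mask][i])
```

On this definition, "longer latency" means the negative peak shifts later in the window and "lower amplitude" means the peak is less negative, which is how the group differences reported above would appear in the difference wave.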
Affiliation(s)
- Mingfu Fu
- Department of Orthopaedics, Yantaishan Hospital, Yantai, Shandong 264000, China
- Liyan Wang
- Department of Auditory Center, China Rehabilitation Research Center for Deaf Children, Beijing 100029, China
- Mengchao Zhang
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA 15260, United States
- Ying Yang
- Department of Special Education, Binzhou Medical University, Yantai, Shandong 264003, China
- Xibin Sun
- Department of Auditory Center, China Rehabilitation Research Center for Deaf Children, Beijing 100029, China