1
Li Z, Zhang D. How does the human brain process noisy speech in real life? Insights from the second-person neuroscience perspective. Cogn Neurodyn 2024; 18:371-382. [PMID: 38699619] [PMCID: PMC11061069] [DOI: 10.1007/s11571-022-09924-w]
Abstract
Comprehending speech in the presence of background noise is of great importance for human life. Over the past decades, a large body of psychological, cognitive, and neuroscientific research has explored the neurocognitive mechanisms of speech-in-noise comprehension. However, limited by the low ecological validity of the speech stimuli and experimental paradigms, as well as inadequate attention to higher-order linguistic and extralinguistic processes, much remains unknown about how the brain processes noisy speech in real-life scenarios. A recently emerging approach, the second-person neuroscience approach, provides a novel conceptual framework. It measures both the speaker's and the listener's neural activities and estimates speaker-listener neural coupling, taking the speaker's production-related neural activity as a standardized reference. The second-person approach not only promotes the use of naturalistic speech but also allows free communication between speaker and listener in a close-to-life context. In this review, we first briefly review previous discoveries about how the brain processes speech in noise; we then introduce the principles and advantages of the second-person neuroscience approach and discuss its implications for unraveling the linguistic and extralinguistic processes underlying speech-in-noise comprehension; finally, we conclude by raising some critical issues and calling for more research interest in the second-person approach, which would further extend present knowledge about how people comprehend speech in noise.
Affiliation(s)
- Zhuoran Li
- Department of Psychology, School of Social Sciences, Tsinghua University, Room 334, Mingzhai Building, Beijing, 100084 China
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, 100084 China
- Dan Zhang
- Department of Psychology, School of Social Sciences, Tsinghua University, Room 334, Mingzhai Building, Beijing, 100084 China
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, 100084 China
2
Levin M, Zaltz Y. Voice Discrimination in Quiet and in Background Noise by Simulated and Real Cochlear Implant Users. J Speech Lang Hear Res 2023; 66:5169-5186. [PMID: 37992412] [DOI: 10.1044/2023_jslhr-23-00019]
Abstract
PURPOSE Cochlear implant (CI) users demonstrate poor voice discrimination (VD) in quiet conditions based on the speaker's fundamental frequency (fo) and formant frequencies (i.e., vocal-tract length [VTL]). Our purpose was to examine the effect of background noise at levels that allow good speech recognition thresholds (SRTs) on VD via acoustic CI simulations and CI hearing. METHOD Forty-eight normal-hearing (NH) listeners who listened via noise-excited (n = 20) or sinewave (n = 28) vocoders and 10 prelingually deaf CI users (i.e., whose hearing loss began before language acquisition) participated in the study. First, the signal-to-noise ratio (SNR) that yields 70.7% correct SRT was assessed using an adaptive sentence-in-noise test. Next, the CI simulation listeners performed 12 adaptive VDs: six in quiet conditions, two with each cue (fo, VTL, fo + VTL), and six amid speech-shaped noise. The CI participants performed six VDs: one with each cue, in quiet and amid noise. SNR at VD testing was 5 dB higher than the individual's SRT in noise (SRTn +5 dB). RESULTS Results showed the following: (a) Better VD was achieved via the noise-excited than the sinewave vocoder, with the noise-excited vocoder better mimicking CI VD; (b) background noise had a limited negative effect on VD, only for the CI simulation listeners; and (c) there was a significant association between SNR at testing and VTL VD only for the CI simulation listeners. CONCLUSIONS For NH listeners who listen to CI simulations, noise that allows good SRT can nevertheless impede VD, probably because VD depends more on bottom-up sensory processing. Conversely, for prelingually deaf CI users, noise that allows good SRT hardly affects VD, suggesting that they rely strongly on bottom-up processing for both VD and speech recognition.
Affiliation(s)
- Michal Levin
- Department of Communication Disorders, The Stanley Steyer School of Health Professions, Faculty of Medicine, Tel Aviv University, Israel
- Yael Zaltz
- Department of Communication Disorders, The Stanley Steyer School of Health Professions, Faculty of Medicine, Tel Aviv University, Israel
- Sagol School of Neuroscience, Tel Aviv University, Israel
3
Natarajan N, Batts S, Stankovic KM. Noise-Induced Hearing Loss. J Clin Med 2023; 12:2347. [PMID: 36983347] [PMCID: PMC10059082] [DOI: 10.3390/jcm12062347]
Abstract
Noise-induced hearing loss (NIHL) is the second most common cause of sensorineural hearing loss, after age-related hearing loss, and affects approximately 5% of the world's population. NIHL is associated with substantial physical, mental, social, and economic impacts at the patient and societal levels. Stress and social isolation in patients' workplace and personal lives contribute to quality-of-life decrements which may often go undetected. The pathophysiology of NIHL is multifactorial and complex, encompassing genetic and environmental factors with substantial occupational contributions. The diagnosis and screening of NIHL are conducted by reviewing a patient's history of noise exposure, audiograms, speech-in-noise test results, and measurements of distortion product otoacoustic emissions and auditory brainstem response. Essential aspects of decreasing the burden of NIHL are prevention and early detection, such as implementation of educational and screening programs in routine primary care and specialty clinics. Additionally, current research on the pharmacological treatment of NIHL includes anti-inflammatory, antioxidant, anti-excitatory, and anti-apoptotic agents. Although there have been substantial advances in understanding the pathophysiology of NIHL, there remain low levels of evidence for effective pharmacotherapeutic interventions. Future directions should include personalized prevention and targeted treatment strategies based on a holistic view of an individual's occupation, genetics, and pathology.
Affiliation(s)
- Nirvikalpa Natarajan
- Department of Otolaryngology-Head and Neck Surgery, Stanford University School of Medicine, Palo Alto, CA 94304, USA
- Shelley Batts
- Department of Otolaryngology-Head and Neck Surgery, Stanford University School of Medicine, Palo Alto, CA 94304, USA
- Konstantina M. Stankovic
- Department of Otolaryngology-Head and Neck Surgery, Stanford University School of Medicine, Palo Alto, CA 94304, USA
- Department of Neurosurgery, Stanford University School of Medicine, Palo Alto, CA 94304, USA
- Wu Tsai Neuroscience Institute, Stanford University, Stanford, CA 94305, USA
4
Intensive Training of Spatial Hearing Promotes Auditory Abilities of Bilateral Cochlear Implant Adults: A Pilot Study. Ear Hear 2023; 44:61-76. [PMID: 35943235] [DOI: 10.1097/aud.0000000000001256]
Abstract
OBJECTIVE The aim of this study was to evaluate the feasibility of a virtual reality-based spatial hearing training protocol in bilateral cochlear implant (CI) users and to provide pilot data on the impact of this training on different qualities of hearing. DESIGN Twelve bilateral CI adults aged 19 to 69 years followed an intensive 10-week rehabilitation program comprising eight virtual reality training sessions (two per week) interspersed with several evaluation sessions (2 weeks before training started, after four and eight training sessions, and 1 month after the end of training). During each 45-minute training session, participants localized a sound source whose position varied in azimuth and/or elevation. At the start of each trial, CI users received no information about sound location, but after each response, feedback was given to enable error correction. Participants were divided into two groups: a multisensory feedback group (audiovisual spatial cue) and a unisensory group (visual spatial cue) that received feedback only in a wholly intact sensory modality. Training benefits were measured at each evaluation point using three tests: 3D sound localization in virtual reality, the French Matrix test, and the Speech, Spatial and Other Qualities of Hearing questionnaire. RESULTS The training was well accepted, and all participants attended the whole rehabilitation program. Four training sessions spread across 2 weeks were insufficient to induce significant performance changes, whereas performance on all three tests improved after eight training sessions. Front-back confusions decreased from 32% to 14.1% (p = 0.017); the speech recognition threshold improved from 1.5 dB to -0.7 dB signal-to-noise ratio (p = 0.029), and eight CI users achieved a negative signal-to-noise ratio. One month after the end of structured training, these performance improvements were still present, and quality of life was significantly improved for both self-reports of sound localization (from 5.3 to 6.7, p = 0.015) and speech understanding (from 5.2 to 5.9, p = 0.048). CONCLUSIONS This pilot study shows the feasibility and potential clinical relevance of this type of intervention involving an immersive sensory environment and could pave the way for more systematic rehabilitation programs after cochlear implantation.
5
Santurtún M, García Tárrago MJ, Fdez-Arroyabe P, Zarrabeitia MT. Noise Disturbance and Well-Being in the North of Spain. Int J Environ Res Public Health 2022; 19:16457. [PMID: 36554336] [PMCID: PMC9778707] [DOI: 10.3390/ijerph192416457]
Abstract
Environmental noise is considered one of the main risks to physical and mental health and well-being, with a significant associated burden of disease in Europe. This work aims to explore the main sources of noise exposure at home and their effects on well-being in northern Spain. A cross-sectional opinion study was performed using a closed questionnaire with three parts: sociodemographic data, noise disturbance, and the 5-item World Health Organization Well-Being Index (WHO-5). A binary logistic regression model was used to analyze the relationship between noise exposure and well-being. Overall, 16.6% of the participants consider the noise insulation of their homes bad or very bad. Noise generated by neighbors (airborne and impact noise) is considered the most disturbing indoor noise source, while street works are the most disturbing outdoor noise source in urban areas and road traffic is the most disturbing in rural areas. People who indicate that noise interferes with their life at home have worse scores on the WHO-5 (decreased perception of well-being). Exposure to outdoor noise (specifically noise coming from the street and trains), impact noise produced by neighbors, and, in general, noise that wakes respondents up are all related to worse WHO-5 scores (p < 0.05). Administrative bodies must ensure that laws regulating at-home noise levels, which are continually being updated with stricter restrictions, are enforced.
Affiliation(s)
- Maite Santurtún
- Centro Hospitalario Padre Menni, 39012 Santander, Spain
- Nursing Department, University of Cantabria, 39005 Santander, Spain
- Pablo Fdez-Arroyabe
- Department of Geography, Urban Planning and Territorial Planning, University of Cantabria, 39005 Santander, Spain
- María T. Zarrabeitia
- Unit of Legal Medicine, Department of Physiology and Pharmacology, University of Cantabria—IDIVAL, 39005 Santander, Spain
6
Deshpande P, Brandt C, Debener S, Neher T. Comparing Clinically Applicable Behavioral and Electrophysiological Measures of Speech Detection, Discrimination, and Comprehension. Trends Hear 2022; 26:23312165221139733. [PMID: 36423251] [PMCID: PMC9703531] [DOI: 10.1177/23312165221139733]
Abstract
Effective communication requires good speech perception abilities. Speech perception can be assessed with behavioral and electrophysiological methods. Relating these two types of measures to each other can provide a basis for new clinical tests. In audiological practice, speech detection and discrimination are routinely assessed, whereas comprehension-related aspects are ignored. The current study compared behavioral and electrophysiological measures of speech detection, discrimination, and comprehension. Thirty young normal-hearing native Danish speakers participated. All measurements were carried out with digits and stationary speech-shaped noise as the stimuli. The behavioral measures included speech detection thresholds (SDTs), speech recognition thresholds (SRTs), and speech comprehension scores (i.e., response times). For the electrophysiological measures, multichannel electroencephalography (EEG) recordings were performed. N100 and P300 responses were evoked using an active auditory oddball paradigm. N400 and Late Positive Complex (LPC) responses were evoked using a paradigm based on congruent and incongruent digit triplets, with the digits presented either all acoustically or first visually (digits 1-2) and then acoustically (digit 3). While no correlations between the SDTs and SRTs and the N100 and P300 responses were found, the response times were correlated with the EEG responses to the congruent and incongruent triplets. Furthermore, significant differences between the response times (but not EEG responses) obtained with auditory and visual-then-auditory stimulus presentation were observed. This pattern of results could reflect a faster recall mechanism when the first two digits are presented visually rather than acoustically. The visual-then-auditory condition may facilitate the assessment of comprehension-related processes in hard-of-hearing individuals.
Affiliation(s)
- Pushkar Deshpande
- Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
- Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
- Christian Brandt
- Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
- Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
- Stefan Debener
- Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Tobias Neher
- Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
- Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
7
Hussain RO, Kumar P, Singh NK. Subcortical and Cortical Electrophysiological Measures in Children With Speech-in-Noise Deficits Associated With Auditory Processing Disorders. J Speech Lang Hear Res 2022; 65:4454-4468. [PMID: 36279585] [DOI: 10.1044/2022_jslhr-22-00094]
Abstract
PURPOSE The aim of this study was to analyze the subcortical and cortical auditory evoked potentials for speech stimuli in children with speech-in-noise (SIN) deficits associated with auditory processing disorder (APD) without any reading or language deficits. METHOD The study included 20 children in the age range of 9-13 years. Ten children were recruited to the APD group; they had below-normal scores on the speech-perception-in-noise test and were diagnosed as having APD. The remaining 10 were typically developing (TD) children and were recruited to the TD group. Speech-evoked subcortical (brainstem) and cortical (auditory late latency) responses were recorded and compared across both groups. RESULTS The results showed a statistically significant reduction in the amplitudes of the subcortical potentials (both for stimulus in quiet and in noise) and the magnitudes of the spectral components (fundamental frequency and the second formant) in children with SIN deficits in the APD group compared to the TD group. In addition, the APD group displayed enhanced amplitudes of the cortical potentials compared to the TD group. CONCLUSION Children with SIN deficits associated with APD exhibited impaired coding/processing of the auditory information at the level of the brainstem and the auditory cortex. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21357735.
Affiliation(s)
- Prawin Kumar
- Department of Audiology, All India Institute of Speech and Hearing, Mysore
- Niraj Kumar Singh
- Department of Audiology, All India Institute of Speech and Hearing, Mysore
8
Wei Y, Gan L, Huang X. A Review of Research on the Neurocognition for Timbre Perception. Front Psychol 2022; 13:869475. [PMID: 35422736] [PMCID: PMC9001888] [DOI: 10.3389/fpsyg.2022.869475]
Abstract
As one of the basic elements of acoustic events, timbre influences the brain together with other factors such as pitch and loudness. Research on timbre perception involves interdisciplinary fields, including physical acoustics, auditory psychology, neurocognitive science, and music theory. From the perspectives of psychology and physiology, this article summarizes the features and functions of timbre perception as well as their correlations, with a focus on multidimensional scaling methods for modeling timbre; it outlines the neurocognition and perception of timbre (including sensitivity, adaptability, and memory capability) and summarizes related experimental findings (using EEG/ERP, fMRI, etc.) on the deeper neurocognitive levels of timbre perception. Potential problems in experiments on timbre perception, as well as future possibilities, are also discussed. By sorting out the existing research contents, methods, and findings on timbre perception, this article aims to provide heuristic guidance for researchers in the related fields of timbre perception psychology, physiology, and neural mechanisms. The study of timbre perception is expected to become essential in various fields, including neuroaesthetics, psychological intervention, artistic creation, and rehabilitation.
Affiliation(s)
- Yuyan Wei
- Department of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Lin Gan
- Department of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, China
- Xiangdong Huang
- Department of Electrical and Information Engineering, Tianjin University, Tianjin, China
9
Schelinski S, Tabas A, von Kriegstein K. Altered processing of communication signals in the subcortical auditory sensory pathway in autism. Hum Brain Mapp 2022; 43:1955-1972. [PMID: 35037743] [PMCID: PMC8933247] [DOI: 10.1002/hbm.25766]
Abstract
Autism spectrum disorder (ASD) is characterised by social communication difficulties. These difficulties have been mainly explained by cognitive, motivational, and emotional alterations in ASD. The communication difficulties could, however, also be associated with altered sensory processing of communication signals. Here, we assessed the functional integrity of auditory sensory pathway nuclei in ASD in three independent functional magnetic resonance imaging experiments. We focused on two aspects of auditory communication that are impaired in ASD: voice identity perception, and recognising speech-in-noise. We found reduced processing in adults with ASD as compared to typically developed control groups (pairwise matched on sex, age, and full-scale IQ) in the central midbrain structure of the auditory pathway (inferior colliculus [IC]). The right IC responded less in the ASD as compared to the control group for voice identity, in contrast to speech recognition. The right IC also responded less in the ASD as compared to the control group when passively listening to vocal in contrast to non-vocal sounds. Within the control group, the left and right IC responded more when recognising speech-in-noise as compared to when recognising speech without additional noise. In the ASD group, this was only the case in the left, but not the right IC. The results show that communication signal processing in ASD is associated with reduced subcortical sensory functioning in the midbrain. The results highlight the importance of considering sensory processing alterations in explaining communication difficulties, which are at the core of ASD.
Affiliation(s)
- Stefanie Schelinski
- Faculty of Psychology, Chair of Cognitive and Clinical Neuroscience, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Alejandro Tabas
- Faculty of Psychology, Chair of Cognitive and Clinical Neuroscience, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein
- Faculty of Psychology, Chair of Cognitive and Clinical Neuroscience, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
10
Neal K, McMahon CM, Hughes SE, Boisvert I. Listening-Based Communication Ability in Adults With Hearing Loss: A Scoping Review of Existing Measures. Front Psychol 2022; 13:786347. [PMID: 35360643] [PMCID: PMC8960922] [DOI: 10.3389/fpsyg.2022.786347]
Abstract
Introduction Hearing loss in adults has a pervasive impact on health and well-being. Its effects on everyday listening and communication can directly influence participation across multiple spheres of life. These impacts, however, remain poorly assessed within clinical settings. Whilst various tests and questionnaires that measure listening and communication abilities are available, there is a lack of consensus about which measures assess the factors most relevant to optimising auditory rehabilitation. This study aimed to map the measures used in published studies to evaluate the listening skills needed for oral communication in adults with hearing loss. Methods A scoping review was conducted using systematic searches in Medline, EMBASE, Web of Science and Google Scholar to retrieve peer-reviewed articles that used one or more linguistic-based measures necessary for oral communication in adults with hearing loss. The range of measures identified and their frequency were charted in relation to auditory hierarchies, linguistic domains, health status domains, and associated neuropsychological and cognitive domains. Results 9121 articles were identified, and 2579 articles reporting on 6714 discrete measures were included for further analysis. The predominant linguistic-based measure reported was word or sentence identification in quiet (65.9%). In contrast, discourse-based measures were used in 2.7% of the included articles. Of the included studies, 36.6% used a self-reported instrument purporting to measure listening for communication. Consistent with previous studies, a large number of self-reported measures were identified (n = 139), but 60.4% of these were used in only one study and 80.7% were cited five times or fewer. Discussion Current measures used in published studies to assess listening abilities relevant to oral communication target a narrow set of domains. Concepts of communicative interaction have limited representation in current measurement. The lack of measurement consensus and the heterogeneity among assessments limit comparisons across studies. Furthermore, the extracted measures rarely consider the broader linguistic, cognitive and interactive elements of communication. Consequently, existing measures may have limited clinical application for assessing the listening-related skills required for communication in daily life, as experienced by adults with hearing loss.
Affiliation(s)
- Katie Neal
- Department of Linguistics, Macquarie University, Sydney, NSW, Australia
- Catherine M. McMahon
- Department of Linguistics, Macquarie University, Sydney, NSW, Australia
- Hearing, Macquarie University, Sydney, NSW, Australia
- Sarah E. Hughes
- Centre for Patient Reported Outcome Research, Institute of Applied Health Research, University of Birmingham, Birmingham, United Kingdom
- National Institute for Health Research (NIHR) Applied Research Collaboration (ARC) West Midlands, United Kingdom
- Faculty of Medicine, Health and Life Science, Swansea University, Swansea, United Kingdom
- Isabelle Boisvert
- Hearing, Macquarie University, Sydney, NSW, Australia
- Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia
11
Bsharat-Maalouf D, Karawani H. Bilinguals' speech perception in noise: Perceptual and neural associations. PLoS One 2022; 17:e0264282. [PMID: 35196339] [PMCID: PMC8865662] [DOI: 10.1371/journal.pone.0264282]
Abstract
The current study characterized subcortical speech sound processing among monolinguals and bilinguals in quiet and challenging listening conditions and examined the relation between subcortical neural processing and perceptual performance. A total of 59 normal-hearing adults, ages 19–35 years, participated in the study: 29 native Hebrew-speaking monolinguals and 30 Arabic-Hebrew-speaking bilinguals. Auditory brainstem responses to speech sounds were collected in a quiet condition and with background noise. The perception of words and sentences in quiet and background noise conditions was also examined to assess perceptual performance and to evaluate the perceptual-physiological relationship. Perceptual performance was tested among bilinguals in both languages (first language (L1-Arabic) and second language (L2-Hebrew)). The outcomes were similar between monolingual and bilingual groups in quiet. Noise, as expected, resulted in deterioration in perceptual and neural responses, which was reflected in lower accuracy in perceptual tasks compared to quiet, and in more prolonged latencies and diminished neural responses. However, a mixed picture was observed among bilinguals in perceptual and physiological outcomes in noise. In the perceptual measures, bilinguals were significantly less accurate than their monolingual counterparts. However, in neural responses, bilinguals demonstrated earlier peak latencies compared to monolinguals. Our results also showed that perceptual performance in noise was related to subcortical resilience to the disruption caused by background noise. Specifically, in noise, increased brainstem resistance (i.e., fewer changes in the fundamental frequency (F0) representations or fewer shifts in the neural timing) was related to better speech perception among bilinguals. Better perception in L1 in noise was correlated with fewer changes in F0 representations, and more accurate perception in L2 was related to minor shifts in auditory neural timing. 
This study delves into the importance of using neural brainstem responses to speech sounds to differentiate individuals with different language histories and to explain inter-subject variability in bilinguals’ perceptual abilities in daily life situations.
Affiliation(s)
- Dana Bsharat-Maalouf
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Hanin Karawani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
12
Walia A, Shew MA, Kallogjeri D, Wick CC, Durakovic N, Lefler SM, Ortmann AJ, Herzog JA, Buchman CA. Electrocochleography and cognition are important predictors of speech perception outcomes in noise for cochlear implant recipients. Sci Rep 2022; 12:3083. [PMID: 35197556] [PMCID: PMC8866505] [DOI: 10.1038/s41598-022-07175-7]
Abstract
Although significant progress has been made in understanding outcomes following cochlear implantation, predicting performance remains a challenge. Duration of hearing loss, age at implantation, and electrode positioning within the cochlea together explain ~ 25% of the variability in speech-perception scores in quiet using the cochlear implant (CI). Electrocochleography (ECochG) responses, prior to implantation, account for 47% of the variance in the same speech-perception measures. No study to date has explored CI performance in noise, a more realistic measure of natural listening. This study aimed to (1) validate ECochG total response (ECochG-TR) as a predictor of performance in quiet and (2) evaluate whether ECochG-TR explained variability in noise performance. Thirty-five adult CI recipients were enrolled with outcomes assessed at 3-months post-implantation. The results confirm previous studies showing a strong correlation of ECochG-TR with speech-perception in quiet (r = 0.77). ECochG-TR independently explained 34% of the variability in noise performance. Multivariate modeling using ECochG-TR and Montreal Cognitive Assessment (MoCA) scores explained 60% of the variability in speech-perception in noise. Thus, ECochG-TR, a measure of the cochlear substrate prior to implantation, is necessary but not sufficient for explaining performance in noise. Rather, a cognitive measure is also needed to improve prediction of noise performance.
Affiliation(s)
- Amit Walia, Matthew A Shew, Dorina Kallogjeri, Cameron C Wick, Nedim Durakovic, Shannon M Lefler, Amanda J Ortmann, Jacques A Herzog, Craig A Buchman
- All authors: Department of Otolaryngology-Head and Neck Surgery, Washington University School of Medicine in St. Louis, 660 S. Euclid Ave, Campus Box 8115, St. Louis, MO, 63110, USA
13
Niemczak CE, Lichtenstein JD, Magohe A, Amato JT, Fellows AM, Gui J, Huang M, Rieke CC, Massawe ER, Boivin MJ, Moshi N, Buckey JC. The Relationship Between Central Auditory Tests and Neurocognitive Domains in Adults Living With HIV. Front Neurosci 2021; 15:696513. [PMID: 34658754 PMCID: PMC8517794 DOI: 10.3389/fnins.2021.696513] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2021] [Accepted: 09/07/2021] [Indexed: 12/21/2022] Open
Abstract
Objective: Tests requiring central auditory processing, such as speech perception in noise, are simple, time efficient, and correlate with cognitive processing, and may therefore be useful for tracking brain function. Doing this effectively requires knowing which tests correlate with overall cognitive function and with specific cognitive domains. This study evaluated the relationship between selected central auditory tests and cognitive domains in a cohort of normal-hearing adults living with HIV and HIV-negative controls. The long-term aim is to determine the relationships between auditory processing and neurocognitive domains and to apply them to the longitudinal analysis of cognitive function in HIV and other neurocognitive disorders. Method: Subjects were recruited from an ongoing study in Dar es Salaam, Tanzania. Central auditory measures included the Gap Detection Test (Gap), Hearing in Noise Test (HINT), and Triple Digit Test (TDT). Cognitive measures included variables from the Test of Variables of Attention (TOVA), the Cogstate neurocognitive battery, and the Kiswahili Montreal Cognitive Assessment (MoCA). The measures represented three cognitive domains: processing speed, learning, and working memory. Bootstrap resampling was used to calculate the mean and standard deviation of the proportion of variance explained by each central auditory test for each cognitive measure. The association of cognitive measures with central auditory variables, taking HIV status and age into account, was determined using regression models. Results: HINT and TDT were significantly associated with Cogstate learning and working-memory tests. Gap was not significantly associated with any cognitive measure once age was in the model. TDT explained the largest mean proportion of variance and had the strongest relationship to the MoCA and Cogstate tasks. With age in the model, HIV status did not affect the relationship between central auditory tests and cognitive measures. Age was strongly associated with multiple cognitive tests. Conclusion: Central auditory tests were associated with measures of learning and working memory. Of the central auditory tests, TDT was most strongly related to cognitive function. These findings expand on the association between auditory processing and cognitive domains seen in other studies and support evaluating these tests for tracking brain health in HIV and other neurocognitive disorders.
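The bootstrap procedure described above can be sketched in a few lines: resample subjects with replacement, recompute the proportion of variance (r squared) a central auditory test explains in a cognitive measure, and summarize across resamples. Data and effect size below are simulated, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
# Hypothetical paired scores: a central auditory test (e.g., TDT)
# and a cognitive measure (e.g., a Cogstate task).
auditory = rng.normal(0, 1, n)
cognitive = 0.6 * auditory + rng.normal(0, 0.8, n)

def bootstrap_r2(x, y, n_boot=2000, rng=rng):
    """Mean and SD of the bootstrapped proportion of variance explained."""
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(x), len(x))  # resample subjects with replacement
        r = np.corrcoef(x[idx], y[idx])[0, 1]
        stats[b] = r ** 2
    return stats.mean(), stats.std()

mean_r2, sd_r2 = bootstrap_r2(auditory, cognitive)
print(f"bootstrap r^2: mean = {mean_r2:.2f}, SD = {sd_r2:.2f}")
```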
Affiliation(s)
- Christopher E. Niemczak
  - Space Medicine Innovations Laboratory, Geisel School of Medicine, Dartmouth College, Hanover, NH, United States
- Jonathan D. Lichtenstein
  - Dartmouth-Hitchcock Medical Center, Lebanon, NH, United States
  - Department of Psychiatry, Geisel School of Medicine, Dartmouth College, Hanover, NH, United States
- Albert Magohe
  - Department of Otorhinolaryngology, Muhimbili University of Health and Allied Sciences, Dar es Salaam, Tanzania
- Jennifer T. Amato
  - Dartmouth-Hitchcock Medical Center, Lebanon, NH, United States
  - Department of Psychiatry, Geisel School of Medicine, Dartmouth College, Hanover, NH, United States
- Abigail M. Fellows
  - Space Medicine Innovations Laboratory, Geisel School of Medicine, Dartmouth College, Hanover, NH, United States
- Jiang Gui
  - Department of Data Science, Geisel School of Medicine, Dartmouth College, Hanover, NH, United States
- Michael Huang
  - Space Medicine Innovations Laboratory, Geisel School of Medicine, Dartmouth College, Hanover, NH, United States
- Catherine C. Rieke
  - Space Medicine Innovations Laboratory, Geisel School of Medicine, Dartmouth College, Hanover, NH, United States
  - Dartmouth-Hitchcock Medical Center, Lebanon, NH, United States
- Enica R. Massawe
  - Department of Otorhinolaryngology, Muhimbili University of Health and Allied Sciences, Dar es Salaam, Tanzania
- Michael J. Boivin
  - Department of Psychiatry, Michigan State University, East Lansing, MI, United States
- Ndeserua Moshi
  - Department of Otorhinolaryngology, Muhimbili University of Health and Allied Sciences, Dar es Salaam, Tanzania
- Jay C. Buckey
  - Space Medicine Innovations Laboratory, Geisel School of Medicine, Dartmouth College, Hanover, NH, United States
  - Dartmouth-Hitchcock Medical Center, Lebanon, NH, United States
14
Mihai PG, Tschentscher N, von Kriegstein K. Modulation of the Primary Auditory Thalamus When Recognizing Speech with Background Noise. J Neurosci 2021; 41:7136-7147. [PMID: 34244362 PMCID: PMC8372015 DOI: 10.1523/jneurosci.2902-20.2021] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Revised: 05/18/2021] [Accepted: 05/20/2021] [Indexed: 11/21/2022] Open
Abstract
Recognizing speech in background noise is a strenuous daily activity, yet most humans can master it. An explanation of how the human brain deals with such sensory uncertainty during speech recognition is missing to date. Previous work has shown that recognition of speech without background noise involves modulation of the auditory thalamus (medial geniculate body; MGB): left MGB responses are higher for speech-recognition tasks that require tracking of fast-varying stimulus properties than for tasks involving relatively constant stimulus properties (e.g., speaker-identity tasks), despite identical stimulus input. Here, we tested the hypotheses that (1) this task-dependent modulation for speech recognition increases in parallel with the sensory uncertainty in the speech signal, i.e., the amount of background noise; and (2) this increase is present in the ventral MGB, which corresponds to the primary sensory part of the auditory thalamus. In accordance with our hypotheses, we show, using ultra-high-resolution functional magnetic resonance imaging (fMRI) in male and female human participants, that the task-dependent modulation of the left ventral MGB (vMGB) for speech is particularly strong when recognizing speech in noisy listening conditions, in contrast to situations where the speech signal is clear. The results imply that speech-in-noise recognition is supported by modifications at the level of the subcortical sensory pathway providing driving input to the auditory cortex. SIGNIFICANCE STATEMENT Speech recognition in noisy environments is a challenging everyday task. One reason humans can master it is the recruitment of additional cognitive resources, reflected in the recruitment of non-language cerebral cortex areas. Here, we show that modulation of the primary sensory pathway is also specifically involved in speech-in-noise recognition. We found that the left primary sensory thalamus (ventral medial geniculate body; vMGB) is more involved when recognizing speech signals, as opposed to a control task (speaker-identity recognition), when heard in background noise than when noise is absent. This finding implies that the brain optimizes sensory processing in subcortical sensory pathway structures in a task-specific manner to deal with speech recognition in noisy environments.
Affiliation(s)
- Paul Glad Mihai
  - Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden 01187, Germany
  - Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Nadja Tschentscher
  - Research Unit Biological Psychology, Department of Psychology, Ludwig-Maximilians-University Munich, Munich 80802, Germany
- Katharina von Kriegstein
  - Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden 01187, Germany
  - Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
15
Auditory processing in normally hearing individuals with and without tinnitus: assessment with four psychoacoustic tests. Eur Arch Otorhinolaryngol 2021; 279:275-283. [PMID: 34363504 PMCID: PMC8739298 DOI: 10.1007/s00405-021-07023-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Accepted: 07/27/2021] [Indexed: 11/06/2022]
Abstract
Purpose In most cases, tinnitus co-exists with hearing loss, suggesting that poorer speech understanding is simply due to a lack of acoustic information reaching the central nervous system (CNS). However, patients with tinnitus and normal hearing also report problems with speech understanding, raising the possibility that tinnitus itself impairs the perceptual processing of auditory information. The purpose of the study was to evaluate the auditory processing abilities of normally hearing subjects with and without tinnitus. Methods The study comprised 97 adults: 54 with normal hearing and chronic tinnitus (the study group) and 43 with normal hearing and no tinnitus (the control group). The audiological assessment comprised pure-tone audiometry and high-frequency pure-tone audiometry, impedance audiometry, and distortion-product otoacoustic emission assessment. To evaluate possible auditory processing deficits, the Frequency Pattern Test (FPT), Duration Pattern Test (DPT), Dichotic Listening Test (DLT), and Gap Detection Threshold (GDT) tests were performed. Results The tinnitus subjects had significantly lower scores than the controls in the gap detection test (p < 0.01) and in the dichotic listening test (p < 0.001), but only for the right ear. The results for both groups were similar in the temporal-ordering tests (FPT and DPT). A right-ear advantage (REA) was found for the controls, but not for the tinnitus subjects. Conclusion In normally hearing patients, the presence of tinnitus may be accompanied by auditory processing difficulties.
16
Segal O, Kligler N, Kishon-Rabin L. Infants' Preference for Child-Directed Speech Over Time-Reversed Speech in On-Channel and Off-Channel Masking. J Speech Lang Hear Res 2021; 64:2897-2908. [PMID: 34157233 DOI: 10.1044/2021_jslhr-20-00279] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Purpose This study examines the development of auditory selective attention to speech in noise by testing infants' ability to prefer child-directed speech (CDS) over time-reversed speech (TRS) presented in "on-channel" and "off-channel" noise. Method A total of 32 infants participated in the study. Sixteen typically developing infants were tested at 7 and 11 months of age using the central fixation procedure with CDS and TRS in two types of noise at a +10 dB signal-to-noise ratio. One type of noise was an "on-channel" masker with a spectrum overlapping that of the CDS (energetic masking); the second was an "off-channel" masker with frequencies outside the spectrum of the CDS (distractive masking). An additional group of sixteen 11-month-old infants was tested in quiet and served as controls for the "off-channel" masker condition. Results Infants preferred CDS over TRS in both age groups, but this preference was more pronounced with the "off-channel" masker regardless of age. Older infants also demonstrated longer looking times for the target stimuli with the "off-channel" masker than with the "on-channel" masker. Looking time in quiet was similar to looking time in the "off-channel" condition, and looking time for CDS was longer in quiet than in the "on-channel" condition. Conclusions These findings support the notion that (a) infants as young as 7 months of age already show a preference for speech in noise, regardless of the type of masker; and (b) by 11 months of age, listening in the "off-channel" condition did not differ from listening in quiet, suggesting that infants' cognitive-attentional abilities are more developed by this age.
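The +10 dB signal-to-noise ratio used in such designs is set by scaling the masker's power relative to the target's. A minimal sketch with toy signals (the 220 Hz tone and Gaussian masker are illustrative stand-ins, not the study's stimuli):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so the speech-to-noise power ratio equals snr_db, then mix."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10 ** (snr_db / 10))  # desired masker power
    noise_scaled = noise * np.sqrt(target_p_noise / p_noise)
    return speech + noise_scaled, noise_scaled

rng = np.random.default_rng(2)
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 220 * t)   # toy stand-in for a speech target
noise = rng.normal(0, 1, fs)           # broadband masker

mix, noise_scaled = mix_at_snr(speech, noise, snr_db=10.0)
achieved = 10 * np.log10(np.mean(speech ** 2) / np.mean(noise_scaled ** 2))
print(f"achieved SNR: {achieved:.1f} dB")
```

An "off-channel" masker would additionally be band-limited away from the target's spectrum before this scaling step.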
Affiliation(s)
- Osnat Segal, Nitzan Kligler, Liat Kishon-Rabin
- All authors: Department of Communication Disorders, The Stanley Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Israel
17
Qian M, Wang Q, Yang L, Wang Z, Hu D, Li B, Li Y, Wu H, Huang Z. The effects of aging on peripheral and central auditory function in adults with normal hearing. Am J Transl Res 2021; 13:549-564. [PMID: 33594309 PMCID: PMC7868840] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Accepted: 12/15/2020] [Indexed: 06/12/2023]
Abstract
This study was designed to investigate the effects of the aging process on peripheral and central auditory functions in adults with normal hearing. In this study, 149 participants with normal hearing were divided into four age groups for statistical purposes: 20-29, 30-39, 40-49, and 50-59 years. Electrocochleography (ECochG), transient evoked otoacoustic emissions (TEOAE), the Mandarin Hearing in Noise Test (MHINT), and the Gap Detection Test (GDT) were used. Our study found: (1) MHINT performance is significantly associated with aging (left ear R2=0.29, right ear R2=0.35). (2) TEOAE amplitude, TEOAE contralateral acoustic stimulation (CS) amplitude, ECochG action potential (AP) amplitude, ECochG AP latency, ECochG summating potential (SP), and GDT performance progressively declined with age. (3) The ECochG SP/AP ratio showed no statistically significant difference among age groups. (4) The peripheral auditory function of the right ear declines more slowly than that of the left ear. (5) Hypofunction of the central auditory system accelerates after age 40. The results demonstrate: (1) The age-related decline in speech recognition in a noisy environment may be the most sensitive indicator of auditory function. (2) The decline of central auditory function is independent of peripheral auditory function, according to the auditory characteristics of the right ear. (3) Auditory function needs to be assessed individually to allow early prevention before age 40.
Affiliation(s)
- Minfei Qian, Qixuan Wang, Lu Yang, Zhongying Wang, Difei Hu, Bei Li, Yun Li, Hao Wu, Zhiwu Huang
- All authors: Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China; Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China; Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China; Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
18
Kegler M, Reichenbach T. Modelling the effects of transcranial alternating current stimulation on the neural encoding of speech in noise. Neuroimage 2020; 224:117427. [PMID: 33038540 DOI: 10.1016/j.neuroimage.2020.117427] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Revised: 09/11/2020] [Accepted: 10/01/2020] [Indexed: 11/29/2022] Open
Abstract
Transcranial alternating current stimulation (tACS) can non-invasively modulate neuronal activity in the cerebral cortex, in particular at the frequency of the applied stimulation. Such modulation can matter for speech processing, since speech processing involves the tracking of slow amplitude fluctuations in speech by cortical activity. tACS with a current signal that follows the envelope of a speech stimulus has indeed been found to influence cortical tracking and to modulate the comprehension of speech in background noise. However, how exactly tACS influences speech-related cortical activity, and how it causes the observed effects on speech comprehension, remains poorly understood. A computational model of cortical speech processing in a biophysically plausible spiking neural network has recently been proposed. Here we extended the model to investigate the effects of different types of stimulation waveforms, similar to those previously applied in experimental studies, on the processing of speech in noise. We assessed in particular how well speech could be decoded from the neural network activity when paired with the exogenous stimulation. We found that, in the absence of current stimulation, the speech-in-noise decoding accuracy was comparable to the speech-in-noise comprehension of human listeners. We further found that current stimulation could alter the speech decoding accuracy by a few percent, comparable to the effects of tACS on speech-in-noise comprehension. Our simulations further allowed us to identify the stimulation-waveform parameters that yielded the largest enhancement of speech-in-noise encoding. Our model thereby provides insight into the potential neural mechanisms by which weak alternating current stimulation may influence speech comprehension, and allows a large range of stimulation waveforms to be screened for their effect on speech processing.
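Envelope-following tACS waveforms of the kind described above are built from the slow amplitude envelope of the speech stimulus. A minimal sketch of extracting such an envelope (broadband Hilbert envelope, low-pass filtered to the slow-fluctuation range); the 8 Hz cutoff and the amplitude-modulated toy signal are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def speech_envelope(signal, fs, cutoff_hz=8.0):
    """Slow amplitude envelope of a speech-like signal: instantaneous
    amplitude via the Hilbert transform, then low-pass filtering to keep
    only the slow fluctuations that cortical activity is thought to track."""
    env = np.abs(hilbert(signal))            # instantaneous amplitude
    b, a = butter(4, cutoff_hz / (fs / 2))   # 4th-order low-pass
    return filtfilt(b, a, env)               # zero-phase filtering

fs = 16000
t = np.arange(2 * fs) / fs
# Toy "speech": a 300 Hz carrier modulated at a syllable-like 4 Hz rate.
carrier = np.sin(2 * np.pi * 300 * t)
modulator = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))
signal = modulator * carrier

env = speech_envelope(signal, fs)
print(f"envelope range: {env.min():.2f} to {env.max():.2f}")
```

The recovered envelope should closely follow the 4 Hz modulator; in an experiment this waveform (suitably scaled and delayed) would drive the stimulation current.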
Affiliation(s)
- Mikolaj Kegler, Tobias Reichenbach
- Both authors: Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2BU London, United Kingdom
19
Musical Experience Offsets Age-Related Decline in Understanding Speech-in-Noise: Type of Training Does Not Matter, Working Memory Is the Key. Ear Hear 2020; 42:258-270. [PMID: 32826504 DOI: 10.1097/aud.0000000000000921] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
OBJECTIVES Speech comprehension under "cocktail party" scenarios deteriorates with age even in the absence of measurable hearing loss. Musical training has been suggested to counteract the age-related decline in speech-in-noise (SIN) perception, yet which aspect of musical plasticity contributes to this compensation remains unclear. This study investigated the effects of musical experience and aging on SIN perception ability. We hypothesized a key mediating role of auditory working memory in the amelioration of deficient SIN perception in older adults by musical training. DESIGN Forty-eight older musicians, 29 older nonmusicians, 48 young musicians, and 24 young nonmusicians, all with (near) normal peripheral hearing, were recruited. The SIN task was recognizing nonsense speech sentences either perceptually colocated or separated with a noise masker (energetic masking) or a two-talker speech masker (informational masking). Auditory working memory was measured by auditory digit span. Path analysis was used to examine the direct and indirect effects of musical expertise and age on SIN perception performance. RESULTS Older musicians outperformed older nonmusicians in auditory working memory and in all SIN conditions (noise separation, noise colocation, speech separation, speech colocation), but these musician advantages were absent in young adults. Path analysis showed that age and musical training had opposite effects on auditory working memory, which played a significant mediating role in SIN perception. In addition, the type of musical training did not differentiate SIN perception regardless of age. CONCLUSIONS These results provide evidence that musical training offsets age-related speech perception deficits under adverse listening conditions by preserving auditory working memory. Our findings highlight the role of auditory working memory in supporting speech perception amid competing noise in older adults, and underline musical training as a form of "cognitive reserve" against declines in speech comprehension and cognition in aging populations.
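The mediation logic in such path analyses (musical training acts on SIN perception through auditory working memory) can be sketched with the classic product-of-coefficients approach on synthetic data. All effect sizes and variable names here are illustrative assumptions, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 149
# Hypothetical data: training (years) -> auditory working memory (digit span) -> SIN score.
training = rng.normal(10, 5, n)
wm = 0.3 * training + rng.normal(0, 1.5, n)                    # path a
sin_score = 0.5 * wm + 0.05 * training + rng.normal(0, 1, n)   # paths b and c'

def ols_slopes(y, X):
    """OLS slope coefficients (intercept excluded)."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols_slopes(wm, training.reshape(-1, 1))[0]                      # training -> WM
b, c_prime = ols_slopes(sin_score, np.column_stack([wm, training])) # WM -> SIN, direct effect
indirect = a * b  # the mediated (indirect) effect of training on SIN
print(f"indirect effect via working memory: {indirect:.2f}, direct effect: {c_prime:.2f}")
```

In a full analysis the indirect effect's significance would be assessed with a bootstrap or Sobel test; this sketch only shows where the a*b product comes from.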
20
Destoky F, Bertels J, Niesen M, Wens V, Vander Ghinst M, Leybaert J, Lallier M, Ince RAA, Gross J, De Tiège X, Bourguignon M. Cortical tracking of speech in noise accounts for reading strategies in children. PLoS Biol 2020; 18:e3000840. [PMID: 32845876 PMCID: PMC7478533 DOI: 10.1371/journal.pbio.3000840] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2020] [Revised: 09/08/2020] [Accepted: 08/12/2020] [Indexed: 11/29/2022] Open
Abstract
Humans' propensity to acquire literacy relates to several factors, including the ability to understand speech in noise (SiN). Still, the nature of the relation between reading and SiN perception abilities remains poorly understood. Here, we dissect the interplay between (1) reading abilities, (2) classical behavioral predictors of reading (phonological awareness, phonological memory, and rapid automatized naming), and (3) electrophysiological markers of SiN perception in 99 elementary school children (26 with dyslexia). We demonstrate that, in typical readers, cortical representation of the phrasal content of SiN relates to the degree of development of the lexical (but not sublexical) reading strategy. In contrast, classical behavioral predictors of reading abilities and the ability to benefit from visual speech to represent the syllabic content of SiN account for global reading performance (i.e., speed and accuracy of lexical and sublexical reading). In individuals with dyslexia, we found preserved integration of visual speech information to optimize processing of syntactic information but not to sustain acoustic/phonemic processing. Finally, within children with dyslexia, measures of cortical representation of the phrasal content of SiN were negatively related to reading speed and positively related to the compromise between reading precision and reading speed, potentially owing to compensatory attentional mechanisms. These results clarify the nature of the relation between SiN perception and reading abilities in typical child readers and children with dyslexia and identify novel electrophysiological markers of emergent literacy.
Affiliation(s)
- Florian Destoky
  - Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Julie Bertels
  - Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
  - Consciousness, Cognition and Computation group, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Maxime Niesen
  - Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
  - Service d'ORL et de chirurgie cervico-faciale, ULB-Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
- Vincent Wens
  - Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
  - Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
- Marc Vander Ghinst
  - Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Jacqueline Leybaert
  - Laboratoire Cognition Langage et Développement, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Marie Lallier
  - BCBL, Basque Center on Cognition, Brain and Language, San Sebastian, Spain
- Robin A. A. Ince
  - Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Joachim Gross
  - Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
  - Institute for Biomagnetism and Biosignal analysis, University of Muenster, Muenster, Germany
- Xavier De Tiège
  - Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
  - Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
- Mathieu Bourguignon
  - Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
  - Laboratoire Cognition Langage et Développement, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
  - BCBL, Basque Center on Cognition, Brain and Language, San Sebastian, Spain
21
Zaltz Y, Bugannim Y, Zechoval D, Kishon-Rabin L, Perez R. Listening in Noise Remains a Significant Challenge for Cochlear Implant Users: Evidence from Early Deafened and Those with Progressive Hearing Loss Compared to Peers with Normal Hearing. J Clin Med 2020; 9:jcm9051381. [PMID: 32397101 PMCID: PMC7290476 DOI: 10.3390/jcm9051381] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2020] [Revised: 04/28/2020] [Accepted: 05/05/2020] [Indexed: 01/17/2023] Open
Abstract
Cochlear implants (CIs) are the state-of-the-art therapy for individuals with severe to profound hearing loss, providing them with good functional hearing. Nevertheless, speech understanding in background noise remains a significant challenge. The purposes of this study were to: (1) conduct a novel within-study comparison of speech-in-noise performance across ages in different populations of CI and normal hearing (NH) listeners using an adaptive sentence-in-noise test, and (2) examine the relative contribution of sensory information and cognitive-linguistic factors to performance. Forty CI users (mean age 20 years) were divided into three groups: "early-implanted" (<4 years, n = 16) and "late-implanted" (>6 years, n = 11), both prelingually deafened, and "progressively deafened" (n = 13). The control group comprised 136 NH subjects (80 children, 56 adults). Testing included the Hebrew Matrix test, word recognition in quiet, and linguistic and cognitive tests. Results showed poorer performance in noise for CI users across populations and ages compared to NH peers; age at implantation and word recognition in quiet were contributing factors. For those recognizing 50% or more of the words in quiet (n = 27), non-verbal intelligence and receptive vocabulary explained 63% of the variance in performance in noise. This information helps delineate the relative contribution of top-down and bottom-up skills to speech recognition in noise and can help set expectations in CI counseling.
Affiliation(s)
- Yael Zaltz
- The Department of Communication Disorders, Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv-Yafo 6997801, Israel
- Yossi Bugannim
- The Department of Communication Disorders, Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv-Yafo 6997801, Israel
- Doreen Zechoval
- The Department of Communication Disorders, Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv-Yafo 6997801, Israel
- Liat Kishon-Rabin
- The Department of Communication Disorders, Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv-Yafo 6997801, Israel
- Ronen Perez
- Department of Otolaryngology and Head and Neck Surgery, Shaare Zedek Medical Center Affiliated to The Hebrew University Medical School, Jerusalem 9190501, Israel
22. Brief Report: Speech-in-Noise Recognition and the Relation to Vocal Pitch Perception in Adults with Autism Spectrum Disorder and Typical Development. J Autism Dev Disord 2020; 50:356-363. [PMID: 31583624] [DOI: 10.1007/s10803-019-04244-1]
Abstract
We tested the ability to recognise speech in noise and its relation to the ability to discriminate vocal pitch in adults with high-functioning autism spectrum disorder (ASD) and typically developed adults (matched pairwise on age, sex, and IQ). Typically developed individuals understood speech at higher noise levels than the ASD group. Within the control group, but not within the ASD group, better speech-in-noise recognition abilities were significantly correlated with better vocal pitch discrimination abilities. Our results show that speech-in-noise recognition is restricted in people with ASD. We speculate that perceptual impairments such as difficulties in vocal pitch perception might help explain these difficulties in ASD.
23. Sofokleous V, Marmara M, Panagiotopoulos GK, Mouza S, Tsofidou M, Sereti A, Grigoriadi I, Petridis Ε, Sidiras C, Tsiourdas M, Iliadou VV. Test-retest reliability of the Greek Speech-in-babble test (SinB) as a potential screening tool for auditory processing disorder. Int J Pediatr Otorhinolaryngol 2020; 131:109848. [PMID: 31927150] [DOI: 10.1016/j.ijporl.2019.109848]
Abstract
INTRODUCTION A specific group of people is considered to be at higher risk of having Auditory Processing Disorder (APD). These patients are often initially referred to, or managed by, professionals such as otolaryngologists, speech therapists, and occupational therapists. It is therefore essential to maintain a low threshold for referring such individuals for a formal APD diagnostic evaluation. Under these circumstances, there might be a role for the Greek Speech-in-Babble (SinB) recognition test as a screening tool for abnormal auditory processing competency. OBJECTIVE To explore the test-retest reliability of a diagnostically validated speech-in-babble test, the Greek SinB, as a potential screening tool. METHODS Ten health professionals from various disciplines administered the SinB test twice, under conditions similar to those encountered when using it as a screening tool, and test-retest reliability was assessed. The study sample comprised 93 Greek-speaking individuals: 27 adults and 66 children or young adolescents aged five years or older. RESULTS For the right ear, the Intraclass Correlation Coefficient (ICC) was 0.858, with a 95% confidence interval (CI) of 0.786-0.906. Reliability was slightly better for the left ear, with an ICC of 0.873 (95% CI = 0.809-0.916). These 95% CIs indicate a 'good' to 'excellent' level of reliability for both ears. Spearman's rho was 0.86 and 0.71 for the right and left ear, respectively. CONCLUSION Our results suggest that the test possesses the reliability required to evaluate a subject's hearing abilities under screening conditions. On these terms, it could be used to screen populations considered to be at risk for Auditory Processing Disorder. Forthcoming research should focus on establishing its efficiency by comparing the results of the screening test with those of diagnostic tests and on fine-tuning SinB as a screening tool.
Affiliation(s)
- Valentinos Sofokleous
- Psychoacoustics Laboratory, Aristotle University of Thessaloniki - Medical School, Thessaloniki, Greece; Department of Pediatric Otorhinolaryngology, Athens Children's Hospital "P. & A. Kyriacou", Athens, Greece
- Maria Marmara
- Psychoacoustics Laboratory, Aristotle University of Thessaloniki - Medical School, Thessaloniki, Greece
- Stellina Mouza
- Psychoacoustics Laboratory, Aristotle University of Thessaloniki - Medical School, Thessaloniki, Greece
- Maria Tsofidou
- Psychoacoustics Laboratory, Aristotle University of Thessaloniki - Medical School, Thessaloniki, Greece
- Afroditi Sereti
- Psychoacoustics Laboratory, Aristotle University of Thessaloniki - Medical School, Thessaloniki, Greece
- Ioanna Grigoriadi
- Psychoacoustics Laboratory, Aristotle University of Thessaloniki - Medical School, Thessaloniki, Greece
- Εleftherios Petridis
- Psychoacoustics Laboratory, Aristotle University of Thessaloniki - Medical School, Thessaloniki, Greece
- Christos Sidiras
- Psychoacoustics Laboratory, Aristotle University of Thessaloniki - Medical School, Thessaloniki, Greece
- Michael Tsiourdas
- Psychoacoustics Laboratory, Aristotle University of Thessaloniki - Medical School, Thessaloniki, Greece
- Vasiliki Vivian Iliadou
- Psychoacoustics Laboratory, Aristotle University of Thessaloniki - Medical School, Thessaloniki, Greece
24. Di Lorenzo G, Riccioni A, Ribolsi M, Siracusano M, Curatolo P, Mazzone L. Auditory Mismatch Negativity in Youth Affected by Autism Spectrum Disorder With and Without Attenuated Psychosis Syndrome. Front Psychiatry 2020; 11:555340. [PMID: 33329094] [PMCID: PMC7732489] [DOI: 10.3389/fpsyt.2020.555340]
Abstract
The present study investigates differences in auditory mismatch negativity (MMN) parameters in a sample of young subjects with autism spectrum disorder (ASD, n = 37) with or without co-occurring attenuated psychosis syndrome (APS). Our results show that ASD individuals present decreased MMN amplitude and prolonged latency, unaffected by concurrent APS. Additionally, when correlating the MMN indexes with clinical features in the ASD + APS group, we found a negative correlation between the severity of autistic symptoms and MMN latency for both frequency (f-MMN r = -0.810; p < 0.0001) and duration (d-MMN r = -0.650; p = 0.006) deviants. Thus, our results may provide a more informative characterization of the ASD sub-phenotype when associated with APS, highlighting the need for further longitudinal investigations.
Affiliation(s)
- Giorgio Di Lorenzo
- Laboratory of Psychophysiology and Cognitive Neuroscience, Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy
- Assia Riccioni
- Child Neurology and Psychiatry Unit, Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy
- Michele Ribolsi
- Psychiatry Unit, Campus Bio-Medico University of Rome, Rome, Italy
- Martina Siracusano
- Department of Biomedicine and Prevention, University of Rome Tor Vergata, Rome, Italy; Department of Biotechnological and Applied Clinical Sciences, University of L'Aquila, L'Aquila, Italy
- Paolo Curatolo
- Child Neurology and Psychiatry Unit, Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy
- Luigi Mazzone
- Child Neurology and Psychiatry Unit, Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy
25. McClaskey CM, Dias JW, Harris KC. Sustained envelope periodicity representations are associated with speech-in-noise performance in difficult listening conditions for younger and older adults. J Neurophysiol 2019; 122:1685-1696. [PMID: 31365323] [PMCID: PMC6843096] [DOI: 10.1152/jn.00845.2018]
Abstract
Temporal modulations are an important part of speech signals. An accurate perception of these time-varying qualities of sound is necessary for successful communication. The current study investigates the relationship between sustained envelope encoding and speech-in-noise perception in a cohort of normal-hearing younger (ages 18-30 yr, n = 22) and older adults (ages 55-90+ yr, n = 35) using the subcortical auditory steady-state response (ASSR). ASSRs were measured in response to the envelope of 400-ms amplitude-modulated (AM) tones with 3,000-Hz carrier frequencies and 80-Hz modulation frequencies. AM tones had modulation depths of 0, -4, and -8 dB relative to m = 1 (m = 1, 0.631, and 0.398, respectively). The robustness, strength at the modulation frequency, and synchrony of subcortical envelope encoding were quantified via time-domain correlations, spectral amplitude, and phase-locking value, respectively. Speech-in-noise ability was quantified via the QuickSIN test in the 0- and 5-dB signal-to-noise (SNR) conditions. All ASSR metrics increased with increasing modulation depth, and there were no effects of age group. ASSR metrics in response to shallow modulation depths predicted 0-dB speech scores. Results demonstrate that sustained amplitude envelope processing in the brainstem relates to speech-in-noise abilities, but primarily in difficult listening conditions at low SNRs. These findings furthermore highlight the utility of shallow modulation depths for studying temporal processing. The absence of age effects in these data demonstrates that individual differences in the robustness, strength, and specificity of subcortical envelope processing, and not age, predict speech-in-noise performance in the most difficult listening conditions.

NEW & NOTEWORTHY Failure to correctly understand speech in the presence of background noise is a significant problem for many normal-hearing adults and may impede healthy communication. The relationship between sustained envelope encoding in the brainstem and speech-in-noise perception remains to be clarified. The present study demonstrates that the strength, specificity, and robustness of the brainstem's representations of sustained stimulus periodicity relate to speech-in-noise perception in older and younger normal-hearing adults, but only in highly challenging listening environments.
Affiliation(s)
- Carolyn M McClaskey
- Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
- James W Dias
- Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
- Kelly C Harris
- Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
26. Raymer AM, Sandberg HM, Schwartz KS, Watson GS, Ringleb SI. Treatment of auditory processing in noise in individuals with mild aphasia: pilot study. Clin Arch Commun Disord 2019. [DOI: 10.21849/cacd.2019.00087]
27. McLaughlin SA, Thorne JC, Jirikowic T, Waddington T, Lee AKC, Astley Hemingway SJ. Listening Difficulties in Children With Fetal Alcohol Spectrum Disorders: More Than a Problem of Audibility. J Speech Lang Hear Res 2019; 62:1532-1548. [PMID: 31039324] [DOI: 10.1044/2018_jslhr-h-18-0359]
Abstract
Purpose Data from standardized caregiver questionnaires indicate that children with fetal alcohol spectrum disorders (FASDs) frequently exhibit atypical auditory behaviors, including reduced responsivity to spoken stimuli. Another body of evidence suggests that prenatal alcohol exposure may result in auditory dysfunction involving loss of audibility (i.e., hearing loss) and/or impaired processing of clearly audible, "suprathreshold" sounds necessary for sound-in-noise listening. Yet, the nexus between atypical auditory behavior and underlying auditory dysfunction in children with FASDs remains largely unexplored. Method To investigate atypical auditory behaviors in FASDs and explore their potential physiological bases, we examined clinical data from 325 children diagnosed with FASDs at the University of Washington using the FASD 4-Digit Diagnostic Code. Atypical behaviors reported on the "auditory filtering" domain of the Short Sensory Profile were assessed to document their prevalence across FASD diagnoses and explore their relationship to reported hearing loss and/or central nervous system measures of cognition, attention, and language function that may indicate suprathreshold processing deficits. Results Atypical auditory behavior was reported among 80% of children with FASDs, a prevalence that did not vary by FASD diagnostic severity or hearing status but was positively correlated with attention-deficit/hyperactivity disorder. In contrast, hearing loss was documented in the clinical records of 40% of children with fetal alcohol syndrome (FAS; a diagnosis on the fetal alcohol spectrum characterized by central nervous system dysfunction, facial dysmorphia, and growth deficiency), 16-fold more prevalent than for those with less severe FASDs (2.4%). Reported hearing loss was significantly associated with physical features characteristic of FAS. Conclusion Children with FAS, but not other FASDs, may be at a particular risk for hearing loss. However, listening difficulties in the absence of hearing loss (presumably related to suprathreshold processing deficits) are prevalent across the entire fetal alcohol spectrum. The nature and impact of both listening difficulties and hearing loss in FASDs warrant further investigation.
Affiliation(s)
- Susan A McLaughlin
- Institute for Learning & Brain Sciences, University of Washington, Seattle
- John C Thorne
- Department of Speech & Hearing Sciences, University of Washington, Seattle
- Tracy Jirikowic
- Division of Occupational Therapy, Department of Rehabilitation Medicine, School of Medicine, University of Washington, Seattle
- Tiffany Waddington
- Institute for Learning & Brain Sciences, University of Washington, Seattle
- Adrian K C Lee
- Institute for Learning & Brain Sciences, University of Washington, Seattle
- Department of Speech & Hearing Sciences, University of Washington, Seattle
- Susan J Astley Hemingway
- Department of Epidemiology, University of Washington, Seattle
- Department of Pediatrics, University of Washington, Seattle
28. Yellamsetty A, Bidelman GM. Brainstem correlates of concurrent speech identification in adverse listening conditions. Brain Res 2019; 1714:182-192. [PMID: 30796895] [DOI: 10.1016/j.brainres.2019.02.025]
Abstract
When two voices compete, listeners can segregate and identify concurrent speech sounds using pitch (fundamental frequency, F0) and timbre (harmonic) cues. Speech perception is also hindered by the signal-to-noise ratio (SNR). How clear and degraded concurrent speech sounds are represented at early, pre-attentive stages of the auditory system is not well understood. To this end, we measured scalp-recorded frequency-following responses (FFR) from the EEG while human listeners heard two concurrently presented, steady-state (time-invariant) vowels whose F0 differed by zero or four semitones (ST), presented diotically in either clean (no noise) or noise-degraded (+5 dB SNR) conditions. Listeners also performed a speeded double-vowel identification task in which they were required to identify both vowels correctly. Behavioral results showed that speech identification accuracy increased with F0 differences between vowels, and this perceptual F0 benefit was larger for clean compared to noise-degraded (+5 dB SNR) stimuli. Neurophysiological data demonstrated more robust FFR F0 amplitudes for single compared to double vowels and considerably weaker responses in noise. F0 amplitudes showed speech-on-speech masking effects, along with non-linear constructive interference at 0 ST and suppression effects at 4 ST. Correlations showed that FFR F0 amplitudes failed to predict listeners' identification accuracy. In contrast, FFR F1 amplitudes were associated with faster reaction times, although this correlation was limited to noise conditions. The limited number of brain-behavior associations suggests subcortical activity mainly reflects exogenous processing rather than perceptual correlates of concurrent speech perception. Collectively, our results demonstrate that FFRs reflect pre-attentive coding of concurrent auditory stimuli that only weakly predicts the success of identifying concurrent speech.
Affiliation(s)
- Anusha Yellamsetty
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Department of Communication Sciences & Disorders, University of South Florida, USA
- Gavin M Bidelman
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA
29. Jacobi I, Sheikh Rashid M, de Laat JAPM, Dreschler WA. Age Dependence of Thresholds for Speech in Noise in Normal-Hearing Adolescents. Trends Hear 2017; 21:2331216517743641. [PMID: 29212433] [PMCID: PMC5724638] [DOI: 10.1177/2331216517743641]
Abstract
Previously found effects of age on speech reception thresholds in noise in adolescents, as measured by an online screening survey, required further study in a well-controlled teenage sample. Speech reception thresholds (SRTs) of 72 normal-hearing adolescent students were analyzed by means of the online speech-in-noise screening tool Earcheck (in Dutch: Oorcheck). Screening was performed at school and included pure-tone audiometry to ensure normal hearing thresholds. The students' ages ranged from 12 to 17 years. A group of young adults was included as a control group. Data were controlled for effects of gender and level of education. SRT scores within the controlled teenage sample revealed an effect of age, an improvement on the order of -0.2 dB per year. Effects of level of education and gender were not significant. Hearing screening tools based on SRTs for speech in noise should control for an effect of age when assessing adolescents. Based on the present data, a correction factor of -0.2 dB per year between the ages of 12 and 17 is proposed. The proposed age-corrected SRT cut-off scores need to be evaluated in a larger sample including hearing-impaired adolescents.
Affiliation(s)
- Irene Jacobi
- Department of Clinical and Experimental Audiology, Academic Medical Centre, Amsterdam, The Netherlands
- Marya Sheikh Rashid
- Department of Clinical and Experimental Audiology, Academic Medical Centre, Amsterdam, The Netherlands
- Jan A P M de Laat
- Department of Audiology, Leiden University Medical Centre, Leiden, The Netherlands
- Wouter A Dreschler
- Department of Clinical and Experimental Audiology, Academic Medical Centre, Amsterdam, The Netherlands
30. Theta Coherence Asymmetry in the Dorsal Stream of Musicians Facilitates Word Learning. Sci Rep 2018; 8:4565. [PMID: 29545619] [PMCID: PMC5854697] [DOI: 10.1038/s41598-018-22942-1]
Abstract
Word learning is a human faculty that depends on two anatomically distinct processing streams projecting from posterior superior temporal (pST) and inferior parietal (IP) brain regions toward the prefrontal cortex (dorsal stream) and the temporal pole (ventral stream). The ventral stream is involved in mapping sensory and phonological information onto lexical-semantic representations, whereas the dorsal stream contributes to sound-to-motor mapping, articulation, complex sequencing in the verbal domain, and to how verbal information is encoded, stored, and rehearsed from memory. In the present source-based EEG study, we evaluated functional connectivity between the IP lobe and Broca's area while musicians and non-musicians learned pseudowords presented in the form of concatenated auditory streams. Behavioral results demonstrated that musicians outperformed non-musicians, as reflected by a higher sensitivity index (d'). This behavioral superiority was paralleled by increased left-hemispheric theta coherence in the dorsal stream, whereas non-musicians showed stronger functional connectivity in the right hemisphere. Since no between-group differences were observed either in a passive listening control condition or during rest, these results point to a task-specific intertwining between musical expertise, functional connectivity, and word learning.
31. Musical training sharpens and bonds ears and tongue to hear speech better. Proc Natl Acad Sci U S A 2017; 114:13579-13584. [PMID: 29203648] [DOI: 10.1073/pnas.1712223114]
Abstract
The idea that musical training improves speech perception in challenging listening environments is appealing and of clinical importance, yet the mechanisms of any such musician advantage are not well specified. Here, using functional magnetic resonance imaging (fMRI), we found that musicians outperformed nonmusicians in identifying syllables at varying signal-to-noise ratios (SNRs), which was associated with stronger activation of the left inferior frontal and right auditory regions in musicians compared with nonmusicians. Moreover, musicians showed greater specificity of phoneme representations in bilateral auditory and speech motor regions (e.g., premotor cortex) at higher SNRs and in the left speech motor regions at lower SNRs, as determined by multivoxel pattern analysis. Musical training also enhanced the intrahemispheric and interhemispheric functional connectivity between auditory and speech motor regions. Our findings suggest that improved speech in noise perception in musicians relies on stronger recruitment of, finer phonological representations in, and stronger functional connectivity between auditory and frontal speech motor cortices in both hemispheres, regions involved in bottom-up spectrotemporal analyses and top-down articulatory prediction and sensorimotor integration, respectively.
32. The Right Temporoparietal Junction Supports Speech Tracking During Selective Listening: Evidence from Concurrent EEG-fMRI. J Neurosci 2017; 37:11505-11516. [PMID: 29061698] [DOI: 10.1523/jneurosci.1007-17.2017]
Abstract
Listening selectively to one out of several competing speakers in a "cocktail party" situation is a highly demanding task. It relies on a widespread cortical network, including auditory sensory, but also frontal and parietal brain regions involved in controlling auditory attention. Previous work has shown that, during selective listening, ongoing neural activity in auditory sensory areas is dominated by the attended speech stream, whereas competing input is suppressed. The relationship between these attentional modulations in the sensory tracking of the attended speech stream and frontoparietal activity during selective listening is, however, not understood. We studied this question in young, healthy human participants (both sexes) using concurrent EEG-fMRI and a sustained selective listening task, in which one out of two competing speech streams had to be attended selectively. An EEG-based speech envelope reconstruction method was applied to assess the strength of the cortical tracking of the to-be-attended and the to-be-ignored stream during selective listening. Our results show that individual speech envelope reconstruction accuracies obtained for the to-be-attended speech stream were positively correlated with the amplitude of sustained BOLD responses in the right temporoparietal junction, a core region of the ventral attention network. This brain region further showed task-related functional connectivity to secondary auditory cortex and regions of the frontoparietal attention network, including the intraparietal sulcus and the inferior frontal gyrus. This suggests that the right temporoparietal junction is involved in controlling attention during selective listening, allowing for a better cortical tracking of the attended speech stream.

SIGNIFICANCE STATEMENT Listening selectively to one out of several simultaneously talking speakers in a "cocktail party" situation is a highly demanding task. It activates a widespread network of auditory sensory and hierarchically higher frontoparietal brain regions. However, how these different processing levels interact during selective listening is not understood. Here, we investigated this question using fMRI and concurrently acquired scalp EEG. We found that activation levels in the right temporoparietal junction correlate with the sensory representation of a selectively attended speech stream. In addition, this region showed significant functional connectivity to both auditory sensory and other frontoparietal brain areas during selective listening. This suggests that the right temporoparietal junction contributes to controlling selective auditory attention in "cocktail party" situations.
33. Coffey EBJ, Chepesiuk AMP, Herholz SC, Baillet S, Zatorre RJ. Neural Correlates of Early Sound Encoding and their Relationship to Speech-in-Noise Perception. Front Neurosci 2017; 11:479. [PMID: 28890684] [PMCID: PMC5575455] [DOI: 10.3389/fnins.2017.00479]
Abstract
Speech-in-noise (SIN) perception is a complex cognitive skill that affects social, vocational, and educational activities. Poor SIN ability particularly affects young and elderly populations, yet varies considerably even among healthy young adults with normal hearing. Although SIN skills are known to be influenced by top-down processes that can selectively enhance lower-level sound representations, the complementary role of feed-forward mechanisms and their relationship to musical training is poorly understood. Using a paradigm that minimizes the main top-down factors that have been implicated in SIN performance, such as working memory, we aimed to better understand how robust encoding of periodicity in the auditory system (as measured by the frequency-following response) contributes to SIN perception. Using magnetoencephalography (MEG), we found that the strength of encoding at the fundamental frequency in the brainstem, thalamus, and cortex is correlated with SIN accuracy. The amplitude of the slower cortical P2 wave was previously also shown to be related to SIN accuracy and FFR strength; we use MEG source localization to show that the P2 wave originates in a temporal region anterior to that of the cortical FFR. We also confirm that the observed enhancements were related to the extent and timing of musicianship. These results are consistent with the hypothesis that basic feed-forward sound encoding affects SIN perception by providing better information to later processing stages, and that modifying this process may be one mechanism through which musical training might enhance the auditory networks that subserve both musical and language functions.
Affiliation(s)
- Emily B J Coffey
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montréal, QC, Canada; Laboratory for Brain, Music and Sound Research, Montréal, QC, Canada; Centre for Research on Brain, Language and Music, Montréal, QC, Canada
- Alexander M P Chepesiuk
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montréal, QC, Canada
- Sibylle C Herholz
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montréal, QC, Canada; Laboratory for Brain, Music and Sound Research, Montréal, QC, Canada; Centre for Research on Brain, Language and Music, Montréal, QC, Canada; German Center for Neurodegenerative Diseases, Bonn, Germany
- Sylvain Baillet
- Centre for Research on Brain, Language and Music, Montréal, QC, Canada; McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montréal, QC, Canada
- Robert J Zatorre
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montréal, QC, Canada; Laboratory for Brain, Music and Sound Research, Montréal, QC, Canada; Centre for Research on Brain, Language and Music, Montréal, QC, Canada
Collapse
34. de Carvalho NG, Novelli CVL, Colella-Santos MF. Evaluation of speech in noise abilities in school children. Int J Pediatr Otorhinolaryngol 2017;99:66-72. PMID: 28688568; DOI: 10.1016/j.ijporl.2017.05.019.
Abstract
This study aimed to analyze the perception of speech in noise in children with poor school performance and to compare them with children with good school performance, considering gender, age and ear side as variables. Speech intelligibility was evaluated in school children using the Brazilian Hearing in Noise Test (HINT) in the conditions of quiet (Q), left ear competitive noise (NL), and right ear competitive noise (NR), as well as the global average of the other hearing conditions, denominated noise composite (NC). Ninety-seven school children between the ages of 8 and 10 were recruited in five schools in São Paulo, Brazil; the control group (CG) consisted of 54 students (23 male/31 female) without language and/or speech difficulties and with good school performance, and the study group (SG) consisted of 43 students (28 male/15 female) identified by their teachers as having poor school performance. The variables gender and ear side did not interfere with speech perception. The age variable influenced only the CG. The SG had worse performance than the CG in the Q, NF and NC conditions. NF was the most difficult for both groups. The perception of speech in noise was worst in children with poor school performance. The age variable influenced the performance of the group of children with good school performance, demonstrating better ability in older children. Speech perception in noise is more difficult for both groups when the noise affects both ears.
35.
Abstract
OBJECTIVE Review and critique of the clinical value of the AAA CAPD guidance document in light of criteria for credible and useful guidance documents, as discussed by Field and Lohr. DESIGN A qualitative review of the AAA CAPD guidelines using a framework by Field and Lohr to assess their relative value in supporting the assessment and management of CAPD referrals. STUDY SAMPLE Relevant literature available through electronic search tools and published texts was used along with the AAA CAPD guidance document and the chapter by Field and Lohr. RESULTS The AAA document does not meet many of the key requirements discussed by Field and Lohr. It does not reflect the current literature, fails to help clinicians understand for whom auditory processing testing and intervention would be most useful, includes contradictory suggestions that reduce clarity, and appears to avoid conclusions that might cast the CAPD construct in a negative light. It also does not include input from diverse affected groups. All of these reduce the document's credibility. CONCLUSIONS The AAA CAPD guidance document will need to be updated and re-conceptualised in order to provide meaningful guidance for clinicians.
Affiliation(s)
- David A DeBonis
- School of Education, The College of Saint Rose, Albany, NY, USA
36. Speech-in-noise perception in musicians: A review. Hear Res 2017;352:49-69. PMID: 28213134; DOI: 10.1016/j.heares.2017.02.006.
Abstract
The ability to understand speech in the presence of competing sound sources is an important neuroscience question in terms of how the nervous system solves this computational problem. It is also a critical clinical problem that disproportionately affects the elderly, children with language-related learning disorders, and those with hearing loss. Recent evidence that musicians have an advantage on this multifaceted skill has led to the suggestion that musical training might be used to improve or delay the decline of speech-in-noise (SIN) function. However, enhancements have not been universally reported, nor have the relative contributions of different bottom-up versus top-down processes, and their relation to preexisting factors, been disentangled. This information would help establish whether there is a real effect of experience, what exactly its nature is, and how future training-based interventions might target the most relevant components of cognitive processes. These questions are complicated by important differences in study design and uneven coverage of neuroimaging modalities. In this review, we aim to systematize recent results from studies that have specifically looked at musician-related differences in SIN by their study design properties, to summarize the findings, and to identify knowledge gaps for future work.
37. McCullagh J, Palmer SB. The effects of auditory training on dichotic listening: a neurological case study. Hearing, Balance and Communication 2017. DOI: 10.1080/21695717.2016.1269453.
38. Jamison C, Aiken SJ, Kiefte M, Newman AJ, Bance M, Sculthorpe-Petley L. Preliminary Investigation of the Passively Evoked N400 as a Tool for Estimating Speech-in-Noise Thresholds. Am J Audiol 2016;25:344-358. PMID: 27814664; DOI: 10.1044/2016_aja-15-0080.
Abstract
PURPOSE Speech-in-noise testing relies on a number of factors beyond the auditory system, such as cognitive function, compliance, and motor function. It may be possible to avoid these limitations by using electroencephalography. The present study explored this possibility using the N400. METHOD Eleven adults with typical hearing heard high-constraint sentences with congruent and incongruent terminal words in the presence of speech-shaped noise. Participants ignored all auditory stimulation and watched a video. The signal-to-noise ratio (SNR) was varied around each participant's behavioral threshold during electroencephalography recording. Speech was also heard in quiet. RESULTS The amplitude of the N400 effect exhibited a nonlinear relationship with SNR. In the presence of background noise, amplitude decreased from high (+4 dB) to low (+1 dB) SNR but increased dramatically at threshold before decreasing again at subthreshold SNR (-2 dB). CONCLUSIONS The SNR of speech in noise modulates the amplitude of the N400 effect to semantic anomalies in a nonlinear fashion. These results are the first to demonstrate modulation of the passively evoked N400 by SNR in speech-shaped noise and represent a first step toward the end goal of developing an N400-based physiological metric for speech-in-noise testing.
39. Dromey C, Scott S. The effects of noise on speech movements in young, middle-aged, and older adults. Speech, Language and Hearing 2016. DOI: 10.1080/2050571x.2015.1133757.
40. Slater J, Skoe E, Strait DL, O’Connell S, Thompson E, Kraus N. Music training improves speech-in-noise perception: Longitudinal evidence from a community-based music program. Behav Brain Res 2015;291:244-252. DOI: 10.1016/j.bbr.2015.05.026.
41. Swaminathan J, Mason CR, Streeter TM, Best V, Kidd G, Patel AD. Musical training, individual differences and the cocktail party problem. Sci Rep 2015;5:11628. PMID: 26112910; PMCID: PMC4481518; DOI: 10.1038/srep11628.
Abstract
Are musicians better able to understand speech in noise than non-musicians? Recent findings have produced contradictory results. Here we addressed this question by asking musicians and non-musicians to understand target sentences masked by other sentences presented from different spatial locations, the classical 'cocktail party problem' in speech science. We found that musicians obtained a substantial benefit in this situation, with thresholds ~6 dB better than non-musicians. Large individual differences in performance were noted particularly for the non-musically trained group. Furthermore, in different conditions we manipulated the spatial location and intelligibility of the masking sentences, thus changing the amount of 'informational masking' (IM) while keeping the amount of 'energetic masking' (EM) relatively constant. When the maskers were unintelligible and spatially separated from the target (low in IM), musicians and non-musicians performed comparably. These results suggest that the characteristics of speech maskers and the amount of IM can influence the magnitude of the differences found between musicians and non-musicians in multiple-talker "cocktail party" environments. Furthermore, considering the task in terms of the EM-IM distinction provides a conceptual framework for future behavioral and neuroscientific studies which explore the underlying sensory and cognitive mechanisms contributing to enhanced "speech-in-noise" perception by musicians.
42. DeBonis DA. It Is Time to Rethink Central Auditory Processing Disorder Protocols for School-Aged Children. Am J Audiol 2015;24:124-136. PMID: 25652246; DOI: 10.1044/2015_aja-14-0037.
Abstract
PURPOSE The purpose of this article is to review the literature that pertains to ongoing concerns regarding the central auditory processing construct among school-aged children and to assess whether the degree of uncertainty surrounding central auditory processing disorder (CAPD) warrants a change in current protocols. METHOD Methodology on this topic included a review of relevant and recent literature through electronic search tools (e.g., ComDisDome, PsycINFO, Medline, and Cochrane databases); published texts; as well as published articles from the Journal of the American Academy of Audiology; the American Journal of Audiology; the Journal of Speech, Language, and Hearing Research; and Language, Speech, and Hearing Services in Schools. RESULTS This review revealed strong support for the following: (a) Current testing of CAPD is highly influenced by nonauditory factors, including memory, attention, language, and executive function; (b) the lack of agreement regarding the performance criteria for diagnosis is concerning; (c) the contribution of auditory processing abilities to language, reading, and academic and listening abilities, as assessed by current measures, is not significant; and (d) the effectiveness of auditory interventions for improving communication abilities has not been established. CONCLUSIONS Routine use of CAPD test protocols cannot be supported, and strong consideration should be given to redirecting focus on assessing overall listening abilities. Also, intervention needs to be contextualized and functional. A suggested protocol is provided for consideration. All of these issues warrant ongoing research.
43. Engineer CT, Rahebi KC, Buell EP, Fink MK, Kilgard MP. Speech training alters consonant and vowel responses in multiple auditory cortex fields. Behav Brain Res 2015;287:256-264. PMID: 25827927; DOI: 10.1016/j.bbr.2015.03.044.
Abstract
Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination.
44. Jain C, Mohamed H, Kumar AU. The Effect of Short-Term Musical Training on Speech Perception in Noise. Audiol Res 2015;5:111. PMID: 26557359; PMCID: PMC4627120; DOI: 10.4081/audiores.2015.111.
Abstract
The aim of the study was to assess the effect of short-term musical training on speech perception in noise. In the present study, speech perception in noise was measured before and after short-term musical training. The musical training involved auditory perceptual training for identification of two Carnatic ragas, given over eight sessions. A total of 18 normal-hearing adults in the age range of 18-25 years participated in the study: group 1 consisted of ten individuals who underwent musical training, and group 2 consisted of eight individuals who did not undergo any training. Results revealed that, post-training, speech perception in noise improved significantly in group 1, whereas group 2 did not show any change in speech perception scores. Thus, short-term musical training shows an enhancement of speech perception in the presence of noise. However, generalization and long-term maintenance of these benefits need to be evaluated.
45. Carey D, Mercure E, Pizzioli F, Aydelott J. Auditory semantic processing in dichotic listening: Effects of competing speech, ear of presentation, and sentential bias on N400s to spoken words in context. Neuropsychologia 2014;65:102-112. DOI: 10.1016/j.neuropsychologia.2014.10.016.
46. Osman H, Sullivan JR. Children's auditory working memory performance in degraded listening conditions. J Speech Lang Hear Res 2014;57:1503-1511. PMID: 24686855; DOI: 10.1044/2014_jslhr-h-13-0286.
Abstract
PURPOSE The objectives of this study were to determine (a) whether school-age children with typical hearing demonstrate poorer auditory working memory performance in multitalker babble at degraded signal-to-noise ratios than in quiet; and (b) whether the amount of cognitive demand of the task contributed to differences in performance in noise. It was hypothesized that stressing the working memory system with the presence of noise would impede working memory processes in real time and result in poorer working memory performance in degraded conditions. METHOD Twenty children with typical hearing between 8 and 10 years old were tested using 4 auditory working memory tasks (Forward Digit Recall, Backward Digit Recall, Listening Recall Primary, and Listening Recall Secondary). Stimuli were from the standardized Working Memory Test Battery for Children. Each task was administered in quiet and in 4-talker babble noise at 0 dB and -5 dB signal-to-noise ratios. RESULTS Children's auditory working memory performance was systematically decreased in the presence of multitalker babble noise compared with quiet. Differences between low-complexity and high-complexity tasks were observed, with children performing more poorly on tasks with greater storage and processing demands. There was no interaction between noise and complexity of task. All tasks were negatively impacted similarly by the addition of noise. CONCLUSIONS Auditory working memory performance was negatively impacted by the presence of multitalker babble noise. Regardless of complexity of task, noise had a similar effect on performance. These findings suggest that the addition of noise inhibits auditory working memory processes in real time for school-age children.
47. Campbell J, Sharma A. Cross-modal re-organization in adults with early stage hearing loss. PLoS One 2014;9:e90594. PMID: 24587400; PMCID: PMC3938766; DOI: 10.1371/journal.pone.0090594.
Abstract
Cortical cross-modal re-organization, or recruitment of auditory cortical areas for visual processing, has been well-documented in deafness. However, the degree of sensory deprivation necessary to induce such cortical plasticity remains unclear. We recorded visual evoked potentials (VEP) using high-density electroencephalography in nine persons with adult-onset mild-moderate hearing loss and eight normal hearing control subjects. Behavioral auditory performance was quantified using a clinical measure of speech perception-in-noise. Relative to normal hearing controls, adults with hearing loss showed significantly larger P1, N1, and P2 VEP amplitudes, decreased N1 latency, and a novel positive component (P2') following the P2 VEP. Current source density reconstruction of VEPs revealed a shift toward ventral stream processing including activation of auditory temporal cortex in hearing-impaired adults. The hearing loss group showed worse than normal speech perception performance in noise, which was strongly correlated with a decrease in the N1 VEP latency. Overall, our findings provide the first evidence that visual cross-modal re-organization not only begins in the early stages of hearing impairment, but may also be an important factor in determining behavioral outcomes for listeners with hearing loss, a finding which demands further investigation.
48. Guediche S, Blumstein SE, Fiez JA, Holt LL. Speech perception under adverse conditions: insights from behavioral, computational, and neuroscience research. Front Syst Neurosci 2014;7:126. PMID: 24427119; PMCID: PMC3879477; DOI: 10.3389/fnsys.2013.00126.
Abstract
Adult speech perception reflects the long-term regularities of the native language, but it is also flexible such that it accommodates and adapts to adverse listening conditions and short-term deviations from native-language norms. The purpose of this article is to examine how the broader neuroscience literature can inform and advance research efforts in understanding the neural basis of flexibility and adaptive plasticity in speech perception. Specifically, we highlight the potential role of learning algorithms that rely on prediction error signals and discuss specific neural structures that are likely to contribute to such learning. To this end, we review behavioral studies, computational accounts, and neuroimaging findings related to adaptive plasticity in speech perception. Already, a few studies have alluded to a potential role of these mechanisms in adaptive plasticity in speech perception. Furthermore, we consider research topics in neuroscience that offer insight into how perception can be adaptively tuned to short-term deviations while balancing the need to maintain stability in the perception of learned long-term regularities. Consideration of the application and limitations of these algorithms in characterizing flexible speech perception under adverse conditions promises to inform theoretical models of speech.
49. Tarasenko MA, Swerdlow NR, Makeig S, Braff DL, Light GA. The auditory brain-stem response to complex sounds: a potential biomarker for guiding treatment of psychosis. Front Psychiatry 2014;5:142. PMID: 25352811; PMCID: PMC4195270; DOI: 10.3389/fpsyt.2014.00142.
Abstract
Cognitive deficits limit psychosocial functioning in schizophrenia. For many patients, cognitive remediation approaches have yielded encouraging results. Nevertheless, therapeutic response is variable, and outcome studies consistently identify individuals who respond minimally to these interventions. Biomarkers that can assist in identifying patients likely to benefit from particular forms of cognitive remediation are needed. Here, we describe an event-related potential (ERP) biomarker - the auditory brain-stem response (ABR) to complex sounds (cABR) - that appears to be particularly well-suited for predicting response to at least one form of cognitive remediation that targets auditory information processing. Uniquely, the cABR quantifies the fidelity of sound encoded at the level of the brainstem and midbrain. This ERP biomarker has revealed auditory processing abnormalities in various neurodevelopmental disorders, correlates with functioning across several cognitive domains, and appears to be responsive to targeted auditory training. We present preliminary cABR data from 18 schizophrenia patients and propose further investigation of this biomarker for predicting and tracking response to cognitive interventions.
50. Campbell J, Sharma A. Compensatory changes in cortical resource allocation in adults with hearing loss. Front Syst Neurosci 2013;7:71. PMID: 24478637; PMCID: PMC3905471; DOI: 10.3389/fnsys.2013.00071.
Abstract
Hearing loss has been linked to many types of cognitive decline in adults, including an association between hearing loss severity and dementia. However, it remains unclear whether cortical re-organization associated with hearing loss occurs in early stages of hearing decline and in early stages of auditory processing. In this study, we examined compensatory plasticity in adults with mild-moderate hearing loss using obligatory, passively-elicited, cortical auditory evoked potentials (CAEP). High-density EEG elicited by speech stimuli was recorded in adults with hearing loss and age-matched normal hearing controls. Latency, amplitude and source localization of the P1, N1, P2 components of the CAEP were analyzed. Adults with mild-moderate hearing loss showed increases in latency and amplitude of the P2 CAEP relative to control subjects. Current density reconstructions revealed decreased activation in temporal cortex and increased activation in frontal cortical areas for hearing-impaired listeners relative to normal hearing listeners. Participants' behavioral performance on a clinical test of speech perception in noise was significantly correlated with the increases in P2 latency. Our results indicate that changes in cortical resource allocation are apparent in early stages of adult hearing loss, and that these passively-elicited cortical changes are related to behavioral speech perception outcome.