1
Easwar V, Peng ZE, Boothalingam S, Seeto M. Neural Envelope Processing at Low Frequencies Predicts Speech Understanding of Children With Hearing Loss in Noise and Reverberation. Ear Hear 2024; 45:837-849. [PMID: 38768048] [PMCID: PMC11175738] [DOI: 10.1097/aud.0000000000001481]
Abstract
OBJECTIVE Children with hearing loss experience greater difficulty understanding speech in the presence of noise and reverberation relative to their normal hearing peers despite provision of appropriate amplification. The fidelity of fundamental frequency of voice (f0) encoding-a salient temporal cue for understanding speech in noise-could play a significant role in explaining the variance in abilities among children. However, the nature of deficits in f0 encoding and its relationship with speech understanding are poorly understood. To this end, we evaluated the influence of frequency-specific f0 encoding on speech perception abilities of children with and without hearing loss in the presence of noise and/or reverberation. METHODS In 14 school-aged children with sensorineural hearing loss fitted with hearing aids and 29 normal hearing peers, envelope following responses (EFRs) were elicited by the vowel /i/, modified to estimate f0 encoding in low (<1.1 kHz) and higher frequencies simultaneously. EFRs to /i/ were elicited in quiet, in the presence of speech-shaped noise at +5 dB signal to noise ratio, with simulated reverberation time of 0.62 sec, as well as both noise and reverberation. EFRs were recorded using single-channel electroencephalogram between the vertex and the nape while children watched a silent movie with captions. Speech discrimination accuracy was measured using the University of Western Ontario Distinctive Features Differences test in each of the four acoustic conditions. Stimuli for EFR recordings and speech discrimination were presented monaurally. RESULTS Both groups of children demonstrated a frequency-dependent dichotomy in the disruption of f0 encoding, as reflected in EFR amplitude and phase coherence. Greater disruption (i.e., lower EFR amplitudes and phase coherence) was evident in EFRs elicited by low frequencies due to noise and greater disruption was evident in EFRs elicited by higher frequencies due to reverberation. 
Relative to normal hearing peers, children with hearing loss demonstrated: (a) greater disruption of f0 encoding at low frequencies, particularly in the presence of reverberation, and (b) a positive relationship between f0 encoding at low frequencies and speech discrimination in the hardest listening condition (i.e., when both noise and reverberation were present). CONCLUSIONS Together, these results provide new evidence for the persistence of suprathreshold temporal processing deficits related to f0 encoding in children despite the provision of appropriate amplification to compensate for hearing loss. These objectively measurable deficits may underlie the greater difficulty experienced by children with hearing loss.
Affiliation(s)
- Vijayalakshmi Easwar
- Waisman Center, University of Wisconsin Madison, Madison, Wisconsin, USA
- Communication Sciences and Disorders, University of Wisconsin Madison, Madison, Wisconsin, USA
- Communication Sciences Department, National Acoustic Laboratories, Sydney, Australia
- Linguistics, Macquarie University, Sydney, Australia
- Z. Ellen Peng
- Waisman Center, University of Wisconsin Madison, Madison, Wisconsin, USA
- Boys Town National Research Hospital, Omaha, Nebraska, USA
- Sriram Boothalingam
- Waisman Center, University of Wisconsin Madison, Madison, Wisconsin, USA
- Communication Sciences and Disorders, University of Wisconsin Madison, Madison, Wisconsin, USA
- Communication Sciences Department, National Acoustic Laboratories, Sydney, Australia
- Linguistics, Macquarie University, Sydney, Australia
2
Xu C, Cheng FY, Medina S, Eng E, Gifford R, Smith S. Objective discrimination of bimodal speech using frequency following responses. Hear Res 2023; 437:108853. [PMID: 37441879] [DOI: 10.1016/j.heares.2023.108853]
Abstract
Bimodal hearing, in which a contralateral hearing aid is combined with a cochlear implant (CI), provides greater speech recognition benefits than using a CI alone. Factors predicting individual bimodal patient success are not fully understood. Previous studies have shown that bimodal benefits may be driven by a patient's ability to extract fundamental frequency (f0) and/or temporal fine structure cues (e.g., F1). Both of these features may be represented in frequency following responses (FFRs) to bimodal speech. Thus, the goals of this study were to: 1) parametrically examine neural encoding of f0 and F1 in simulated bimodal speech conditions; 2) examine objective discrimination of FFRs to bimodal speech conditions using machine learning; 3) explore whether FFRs are predictive of perceptual bimodal benefit. Three vowels (/ε/, /i/, and /ʊ/) with identical f0 were manipulated by a vocoder (right ear) and low-pass filters (left ear) to create five bimodal simulations for evoking FFRs: Vocoder-only, Vocoder +125 Hz, Vocoder +250 Hz, Vocoder +500 Hz, and Vocoder +750 Hz. Perceptual performance on the BKB-SIN test was also measured using the same five configurations. Results suggested that neural representations of the f0 and F1 FFR components were enhanced with increasing acoustic bandwidth in the simulated "non-implanted" ear. As spectral differences between vowels emerged in the FFRs with increased acoustic bandwidth, FFRs were more accurately classified and discriminated using a machine learning algorithm. Enhancement of f0 and F1 neural encoding with increasing bandwidth was collectively predictive of perceptual bimodal benefit on a speech-in-noise task. Given these results, the FFR may be a useful tool for objectively assessing individual variability in bimodal hearing.
Affiliation(s)
- Can Xu
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, 2504A Whitis Ave. (A1100), Austin, TX 78712-0114, USA
- Fan-Yin Cheng
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, 2504A Whitis Ave. (A1100), Austin, TX 78712-0114, USA
- Sarah Medina
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, 2504A Whitis Ave. (A1100), Austin, TX 78712-0114, USA
- Erica Eng
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, 2504A Whitis Ave. (A1100), Austin, TX 78712-0114, USA
- René Gifford
- Department of Speech, Language, and Hearing Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Spencer Smith
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, 2504A Whitis Ave. (A1100), Austin, TX 78712-0114, USA
3
Characterizing Electrophysiological Response Properties of the Peripheral Auditory System Evoked by Phonemes in Normal and Hearing Impaired Ears. Ear Hear 2022; 43:1526-1539. [DOI: 10.1097/aud.0000000000001213]
4
Easwar V, Purcell D, Van Eeckhoutte M, Aiken SJ. The Influence of Male- and Female-Spoken Vowel Acoustics on Envelope-Following Responses. Semin Hear 2022; 43:223-239. [PMID: 36313043] [PMCID: PMC9605803] [DOI: 10.1055/s-0042-1756165]
Abstract
The influence of male and female vowel characteristics on envelope-following responses (EFRs) is not well understood. This study explored the role of vowel characteristics on the EFR at the fundamental frequency (f0) in response to the vowel /ε/ (as in "head"). Vowel tokens were spoken by five males and five females, and EFRs were measured in 25 young adults (21 females). An auditory model was used to estimate changes in auditory processing that might account for talker effects on EFR amplitude. There were several differences between male and female vowels in relation to the EFR. For male talkers, EFR amplitudes were correlated with the bandwidth and harmonic count of the first formant, and with the amplitude of the trough below the second formant. For female talkers, EFR amplitudes were correlated with the range of f0 frequencies and the amplitude of the trough above the second formant. The model suggested that the f0 EFR reflects a wide distribution of energy in speech, with primary contributions from high-frequency harmonics mediated by cochlear regions basal to the peaks of the first and second formants, not from low-frequency harmonics with energy near f0. Vowels produced by female talkers tend to produce lower-amplitude EFRs, likely because they depend on higher-frequency harmonics, where speech sound levels tend to be lower. This work advances auditory electrophysiology by showing how the EFR evoked by speech relates to the acoustics of speech, for both male and female voices.
Affiliation(s)
- Vijayalakshmi Easwar
- Department of Communication Sciences and Disorders & Waisman Center, University of Wisconsin, Madison
- Department of Communication Sciences, National Acoustic Laboratories, Sydney, Australia
- David Purcell
- National Center for Audiology, School of Communication Sciences and Disorders, Western University, London, Canada
- Maaike Van Eeckhoutte
- Division of Hearing Systems, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
- Copenhagen Hearing and Balance Centre - Ear, Nose, Throat and Audiology Clinic, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark
- National Center for Audiology, Western University, London, Canada
- Steven J. Aiken
- School of Communication Sciences and Disorders, Departments of Surgery and Psychology and Neuroscience, Dalhousie University, Halifax, Canada
5
Simon KR, Merz EC, He X, Noble KG. Environmental noise, brain structure, and language development in children. Brain Lang 2022; 229:105112. [PMID: 35398600] [PMCID: PMC9126644] [DOI: 10.1016/j.bandl.2022.105112]
Abstract
While excessive noise exposure in childhood has been associated with reduced language ability, few studies have examined potential underlying neurobiological mechanisms that may account for noise-related differences in language skills. In this study, we tested the hypotheses that higher everyday noise exposure would be associated with 1) poorer language skills and 2) differences in language-related cortical structure. A socioeconomically diverse sample of children aged 5-9 (N = 94) completed standardized language assessments. High-resolution T1-weighted magnetic resonance imaging (MRI) scans were acquired, and surface area and cortical thickness of the left inferior frontal gyrus (IFG) and left superior temporal gyrus (STG) were extracted. Language Environmental Analysis (LENA) was used to measure levels of exposure to excessive environmental noise over the course of a typical day (n = 43 with complete LENA, MRI, and behavioral data). Results indicated that children exposed to excessive levels of noise exhibited reduced cortical thickness in the left IFG. These findings add to a growing literature that explores the extent to which home environmental factors, such as environmental noise, are associated with neurobiological development related to language development in children.
Affiliation(s)
- Katrina R Simon
- Department of Human Development, Teachers College, Columbia University, New York, NY, USA
- Emily C Merz
- Department of Psychology, Colorado State University, Fort Collins, CO, USA
- Xiaofu He
- Department of Psychiatry, The Vagelos College of Physicians and Surgeons, Columbia University and the New York State Psychiatric Institute, New York, NY, USA
- Kimberly G Noble
- Department of Human Development, Teachers College, Columbia University, New York, NY, USA
- Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, NY, USA
6
Kalaiah MK, Mishra K, Shastri U. The Relationship between Contralateral Suppression of Transient Evoked Otoacoustic Emission and Unmasking of Speech Evoked Auditory Brainstem Response. Int Arch Otorhinolaryngol 2022; 26:e676-e682. [DOI: 10.1055/s-0042-1742774]
Abstract
Introduction Several studies have shown that the efferent pathways of the auditory system improve the perception of speech in noise. However, most investigations of the role of efferent pathways in speech perception have used contralateral suppression of otoacoustic emissions as a measure of efferent activity. Studying the effect of efferent activity on the speech-evoked auditory brainstem response (ABR) could shed more light on how the efferent pathways influence the encoding of speech in the auditory pathway.
Objectives To investigate the relationship between contralateral suppression of transient evoked otoacoustic emission (CSTEOAE) and unmasking of speech ABR.
Methods A total of 23 young adults participated in the study. The CSTEOAE was measured using linear clicks at 60 dB peSPL and white noise at 60 dB sound pressure level (SPL). The speech ABR was recorded using the syllable /da/ at 80 dB SPL in quiet, ipsilateral noise, and binaural noise conditions. In the ipsilateral noise condition, white noise was presented to the test ear at 60 dB SPL, and, in the binaural noise condition, two separate white noises were presented to both ears.
Results The F0 amplitude of the speech ABR was highest in the quiet condition; however, the mean F0 amplitude was not significantly different across conditions. Correlation analysis showed a significant positive correlation between the CSTEOAE and the magnitude of unmasking of the F0 amplitude of the speech ABR.
Conclusions The findings of the present study suggest that the efferent pathways are involved in speech-in-noise processing.
Affiliation(s)
- Mohan Kumar Kalaiah
- Department of Audiology and Speech Language Pathology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Keshav Mishra
- Department of Audiology and Speech Language Pathology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Usha Shastri
- Department of Audiology and Speech Language Pathology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Manipal, Karnataka, India
7
Bsharat-Maalouf D, Karawani H. Bilinguals' speech perception in noise: Perceptual and neural associations. PLoS One 2022; 17:e0264282. [PMID: 35196339] [PMCID: PMC8865662] [DOI: 10.1371/journal.pone.0264282]
Abstract
The current study characterized subcortical speech sound processing among monolinguals and bilinguals in quiet and challenging listening conditions and examined the relation between subcortical neural processing and perceptual performance. A total of 59 normal-hearing adults, ages 19–35 years, participated in the study: 29 native Hebrew-speaking monolinguals and 30 Arabic-Hebrew-speaking bilinguals. Auditory brainstem responses to speech sounds were collected in a quiet condition and with background noise. The perception of words and sentences in quiet and background noise conditions was also examined to assess perceptual performance and to evaluate the perceptual-physiological relationship. Perceptual performance was tested among bilinguals in both languages (first language (L1-Arabic) and second language (L2-Hebrew)). The outcomes were similar between monolingual and bilingual groups in quiet. Noise, as expected, resulted in deterioration in perceptual and neural responses, which was reflected in lower accuracy in perceptual tasks compared to quiet, and in more prolonged latencies and diminished neural responses. However, a mixed picture was observed among bilinguals in perceptual and physiological outcomes in noise. In the perceptual measures, bilinguals were significantly less accurate than their monolingual counterparts. However, in neural responses, bilinguals demonstrated earlier peak latencies compared to monolinguals. Our results also showed that perceptual performance in noise was related to subcortical resilience to the disruption caused by background noise. Specifically, in noise, increased brainstem resistance (i.e., fewer changes in the fundamental frequency (F0) representations or fewer shifts in the neural timing) was related to better speech perception among bilinguals. Better perception in L1 in noise was correlated with fewer changes in F0 representations, and more accurate perception in L2 was related to minor shifts in auditory neural timing. 
This study delves into the importance of using neural brainstem responses to speech sounds to differentiate individuals with different language histories and to explain inter-subject variability in bilinguals’ perceptual abilities in daily life situations.
Affiliation(s)
- Dana Bsharat-Maalouf
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Hanin Karawani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
8
Chauvette L, Fournier P, Sharp A. The frequency-following response to assess the neural representation of spectral speech cues in older adults. Hear Res 2022; 418:108486. [DOI: 10.1016/j.heares.2022.108486]
9
Jarollahi F, Valadbeigi A, Jalaei B, Maarefvand M, Motasaddi Zarandy M, Haghani H, Shirzhiyzn Z. Comparing Sound-Field Speech-Auditory Brainstem Response Components between Cochlear Implant Users with Different Speech Recognition in Noise Scores. Iran J Child Neurol 2022; 16:93-105. [PMID: 35497112] [PMCID: PMC9047831] [DOI: 10.22037/ijcn.v16i2.27210]
Abstract
OBJECTIVES Many studies have suggested that cochlear implant (CI) users vary in terms of speech recognition in noise, and studies in this field attribute this variability partly to subcortical auditory processing. The speech-evoked auditory brainstem response (speech-ABR) provides good information about speech processing; thus, this work was designed to compare speech-ABR components between two groups of CI users with good and poor speech recognition in noise scores. MATERIALS & METHODS The present study was conducted on two groups of CI users aged 8-10 years. The first group (CI-good) consisted of 15 children with prelingual CI who had good speech recognition in noise performance. The second group (CI-poor) was matched with the first group but had poor speech recognition in noise performance. The speech-ABR test in a sound-field presentation was performed for all the participants. RESULTS The speech-ABR showed longer C, D, E, F, and O latencies in CI-poor than in CI-good users (P < 0.05), whereas no significant difference was observed in the initial waves V (t = -0.293, p = 0.771) and A (t = -1.051, p = 0.307). Analysis in the spectral domain showed a weaker representation of the fundamental frequency, as well as of the first formant and high-frequency components of the speech stimuli, in the CI users with poor auditory performance. CONCLUSIONS The results revealed that CI users with poor speech recognition in noise had deficits in encoding the periodic portion of speech signals at the brainstem level. This study may also serve as physiological evidence for poorer pitch processing in CI users with poor speech recognition in noise performance.
Affiliation(s)
- Farnoush Jarollahi
- Department of Audiology, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran
- Ayub Valadbeigi
- Department of Audiology, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran
- Bahram Jalaei
- Department of Audiology, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran
- Mohammad Maarefvand
- Department of Audiology, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran
- Masoud Motasaddi Zarandy
- Cochlear Implant Center and Department of Otorhinolaryngology, Amir Aalam Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Hamid Haghani
- Department of Biostatistics, School of Public Health, Iran University of Medical Sciences, Tehran, Iran
- Zahra Shirzhiyzn
- Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
10
Krizman J, Tierney A, Nicol T, Kraus N. Listening in the Moment: How Bilingualism Interacts With Task Demands to Shape Active Listening. Front Neurosci 2021; 15:717572. [PMID: 34955707] [PMCID: PMC8702653] [DOI: 10.3389/fnins.2021.717572]
Abstract
While there is evidence for bilingual enhancements of inhibitory control and auditory processing, two processes that are fundamental to daily communication, it is not known how bilinguals utilize these cognitive and sensory enhancements during real-world listening. To test our hypothesis that bilinguals engage their enhanced cognitive and sensory processing in real-world listening situations, bilinguals and monolinguals performed a selective attention task involving competing talkers, a common demand of everyday listening, and then later passively listened to the same competing sentences. During the active and passive listening periods, evoked responses to the competing talkers were collected to understand how online auditory processing facilitates active listening and whether this processing differs between bilinguals and monolinguals. Additionally, participants were tested on a separate measure of inhibitory control to see if inhibitory control abilities were related to performance on the selective attention task. We found that although monolinguals and bilinguals performed similarly on the selective attention task, the groups differed in the neural and cognitive processes engaged to perform this task, compared to when they were passively listening to the talkers. Specifically, during active listening monolinguals had enhanced cortical phase consistency, while bilinguals demonstrated enhanced subcortical phase consistency in the response to the pitch contours of the sentences, particularly during passive listening. Moreover, bilinguals' performance on the inhibitory control test was related to performance on the selective attention test, a relationship that was not seen for monolinguals. These results are consistent with the hypothesis that bilinguals utilize inhibitory control and enhanced subcortical auditory processing in everyday listening situations to engage with sound in ways that are different from monolinguals.
Affiliation(s)
- Jennifer Krizman
- Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Adam Tierney
- The ALPHALAB, Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- Trent Nicol
- Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Nina Kraus
- Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Departments of Neurobiology and Otolaryngology, Northwestern University, Evanston, IL, United States
11
The effect of harmonic training on speech perception in noise in hearing-impaired children. Int J Pediatr Otorhinolaryngol 2021; 149:110845. [PMID: 34293627] [DOI: 10.1016/j.ijporl.2021.110845]
Abstract
OBJECTIVE Speech perception in noise is a highly challenging situation experienced by hearing-impaired children (HIC). Despite advances in hearing aid technologies, speech perception in noise still poses challenges. Pitch-based training improves pitch discrimination and speech perception and may facilitate concurrent sound segregation. Considering the role of harmonics in the analysis of concurrent sounds, we performed a harmonic assessment, examined the role of harmonic training in the rehabilitation of moderate-to-severe HIC, and investigated its effect on their speech perception in noise. METHODS The participants were 57 normally hearing children (NHC) with a mean age of 7.73 ± 1.57 years and 18 HIC with a mean age of 7.94 ± 1.47 years. The two groups were compared in terms of harmonic assessment, the Pitch Pattern Sequence Test (PPST), the Consonant-Vowel in Noise (CV in noise) test, and the Bamford-Kowal Bench (BKB) test. Subsequently, the HIC underwent harmonic training, and the results of the pre- and post-harmonic training assessments were compared. RESULTS HIC displayed poorer harmonic discrimination than NHC at all harmonics (P < 0.05). They also showed lower scores in PPST, CV in noise, and BKB tests compared to NHC (P < 0.05). Harmonic training led to HIC's better performance in harmonic assessment, PPST, and CV in noise test (P < 0.05). However, the BKB test results pre- and post-training did not significantly differ (P > 0.05). CONCLUSION Harmonic training plays a significant role in improving the HIC's temporal processing of the PPST and CV in noise test; therefore, it can serve as a rehabilitation method to enhance temporal processing and auditory scene analysis.
12
Jain S, Cherian R, Nataraja NP, Narne VK. The Relationship Between Tinnitus Pitch, Audiogram Edge Frequency, and Auditory Stream Segregation Abilities in Individuals With Tinnitus. Am J Audiol 2021; 30:524-534. [PMID: 34139145] [DOI: 10.1044/2021_aja-20-00087]
Abstract
Purpose Around 80%-93% of individuals with tinnitus have hearing loss. Researchers have found that tinnitus pitch is related to the frequencies of hearing loss, but the relationship between tinnitus pitch and audiogram edge frequency remains unclear. The comorbidity of tinnitus and speech-perception-in-noise problems has also been reported, but the relationship between tinnitus pitch and speech perception in noise has seldom been investigated. This study was designed to estimate the relationship between tinnitus pitch, audiogram edge frequency, and speech perception in noise, with speech perception in noise measured using an auditory stream segregation paradigm. Method Thirteen individuals with bilateral mild-to-severe tonal tinnitus and minimal-to-mild cochlear hearing loss were selected, along with thirteen individuals with hearing loss but without tinnitus. The audiogram of each participant with tinnitus was matched with that of a participant without tinnitus. The tinnitus pitch of the participants with tinnitus was measured and compared with the audiogram edge frequency. Stream segregation thresholds were calculated at each participant's admitted tinnitus pitch and at one octave below the tinnitus pitch, and were estimated at the fission and fusion boundaries using pure-tone stimuli in an ABA paradigm. Results A high correlation between tinnitus pitch and audiogram edge frequency was noted. Overall stream segregation thresholds were higher for individuals with tinnitus; higher thresholds indicate poorer stream segregation abilities. Within the tinnitus group, thresholds were significantly lower at the frequency corresponding to the admitted tinnitus pitch than at one octave below the tinnitus pitch. Conclusions The information from this study may be helpful in educating patients about the relationship between hearing loss and tinnitus.
The findings may also account for the speech-perception-in-noise difficulties often reported by individuals with tinnitus.
Affiliation(s)
- Saransh Jain
- Department of Speech and Hearing, Jagadguru Sri Shivarathreeshwara Institute of Speech and Hearing, Mysuru, India
- Riya Cherian
- Department of ENT, Sree Gokulam Medical College & Research Foundation, Venjaranmood, India
- Nuggehalli P. Nataraja
- Department of Speech and Hearing, Jagadguru Sri Shivarathreeshwara Institute of Speech and Hearing, Mysuru, India
- Vijaya Kumar Narne
- Department of Mechanical Engineering, Indian Institute of Technology Kanpur, India
13
Thompson EC, Estabrook R, Krizman J, Smith S, Huang S, White-Schwoch T, Nicol T, Kraus N. Auditory neurophysiological development in early childhood: A growth curve modeling approach. Clin Neurophysiol 2021; 132:2110-2122. [PMID: 34284246] [DOI: 10.1016/j.clinph.2021.05.025]
Abstract
OBJECTIVE During early childhood, the development of communication skills, such as language and speech perception, relies in part on auditory system maturation. Because auditory behavioral tests engage cognition, mapping auditory maturation in the absence of cognitive influence remains a challenge. Furthermore, longitudinal investigations that capture auditory maturation within and between individuals in this age group are scarce. The goal of this study is to longitudinally measure auditory system maturation in early childhood using an objective approach. METHODS We collected frequency-following responses (FFR) to speech in 175 children, ages 3-8 years, annually for up to five years. The FFR is an objective measure of sound encoding that predominantly reflects auditory midbrain activity. Eliciting FFRs to speech provides rich details of various aspects of sound processing, namely, neural timing, spectral coding, and response stability. We used growth curve modeling to answer three questions: 1) does sound encoding change across childhood? 2) are there individual differences in sound encoding? and 3) are there individual differences in the development of sound encoding? RESULTS Subcortical auditory maturation develops linearly from 3-8 years. With age, FFRs became faster, more robust, and more consistent. Individual differences were evident in each aspect of sound processing, while individual differences in rates of change were observed for spectral coding alone. CONCLUSIONS By using an objective measure and a longitudinal approach, these results suggest subcortical auditory development continues throughout childhood, and that different facets of auditory processing follow distinct developmental trajectories. SIGNIFICANCE The present findings improve our understanding of auditory system development in typically-developing children, opening the door for future investigations of disordered sound processing in clinical populations.
Affiliation(s)
- Elaine C Thompson: Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Department of Communication Sciences, Northwestern University, Evanston, IL, USA
- Ryne Estabrook: Department of Psychology, University of Illinois at Chicago, Chicago, IL, USA
- Jennifer Krizman: Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Department of Communication Sciences, Northwestern University, Evanston, IL, USA
- Spencer Smith: Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Department of Communication Sciences, Northwestern University, Evanston, IL, USA
- Stephanie Huang: Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA
- Travis White-Schwoch: Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Department of Communication Sciences, Northwestern University, Evanston, IL, USA
- Trent Nicol: Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Department of Communication Sciences, Northwestern University, Evanston, IL, USA
- Nina Kraus: Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Department of Communication Sciences, Northwestern University, Evanston, IL, USA; Institute for Neuroscience, Northwestern University, Evanston, IL, USA; Department of Neurobiology, Northwestern University, Evanston, IL, USA; Department of Otolaryngology, Northwestern University, Chicago, IL, USA

14
Coffey EBJ, Arseneau-Bruneau I, Zhang X, Baillet S, Zatorre RJ. Oscillatory Entrainment of the Frequency-following Response in Auditory Cortical and Subcortical Structures. J Neurosci 2021; 41:4073-4087. [PMID: 33731448 PMCID: PMC8176755 DOI: 10.1523/jneurosci.2313-20.2021] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Received: 08/20/2020] [Revised: 02/23/2021] [Accepted: 02/24/2021] [Indexed: 11/21/2022]
Abstract
There is much debate about the existence and function of neural oscillatory mechanisms in the auditory system. The frequency-following response (FFR) is an index of neural periodicity encoding that can provide a vehicle to study entrainment in frequency ranges relevant to speech and music processing. Criteria for entrainment include the presence of poststimulus oscillations and phase alignment between stimulus and endogenous activity. To test the hypothesis of entrainment, in experiment 1 we collected FFR data for a repeated syllable using magnetoencephalography (MEG) and electroencephalography in 20 male and female human adults. We observed significant oscillatory activity after stimulus offset in auditory cortex and subcortical auditory nuclei, consistent with entrainment. In these structures, the FFR fundamental frequency converged from a lower value over 100 ms to the stimulus frequency, consistent with phase alignment, and diverged to a lower value after offset, consistent with relaxation to a preferred frequency. In experiment 2, we tested how transitions between stimulus frequencies affected the MEG FFR to a train of tone pairs in 30 people. We found that the FFR was affected by the frequency of the preceding tone for up to 40 ms at subcortical levels, and even longer durations at cortical levels. Our results suggest that oscillatory entrainment may be an integral part of periodic sound representation throughout the auditory neuraxis. The functional role of this mechanism is unknown, but it could serve as a fine-scale temporal predictor for frequency information, enhancing stability and reducing susceptibility to degradation that could be useful in real-life noisy environments. SIGNIFICANCE STATEMENT Neural oscillations are proposed to be a ubiquitous aspect of neural function, but their contribution to auditory encoding is not clear, particularly at higher frequencies associated with pitch encoding.
In a magnetoencephalography experiment, we found converging evidence that the frequency-following response has an oscillatory component according to established criteria: poststimulus resonance, progressive entrainment of the neural frequency to the stimulus frequency, and relaxation toward the original state on stimulus offset. In a second experiment, we found that the frequency and amplitude of the frequency-following response to tones are affected by preceding stimuli. These findings support the contribution of intrinsic oscillations to the encoding of sound, and raise new questions about their functional roles, possibly including stabilization and low-level predictive coding.
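Phase alignment across trials, one of the entrainment criteria discussed above, is commonly quantified with an inter-trial phase-locking value (PLV). Below is a minimal sketch on synthetic data; all parameters (sampling rate, stimulus frequency, trial counts) are invented for illustration and this is not the authors' MEG pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f0, dur, n_trials = 4000.0, 100.0, 0.2, 200
t = np.arange(int(fs * dur)) / fs

# Simulated trials: a weak 100 Hz phase-locked component buried in noise.
signal = 0.2 * np.sin(2 * np.pi * f0 * t)
trials = signal + rng.normal(0.0, 1.0, (n_trials, t.size))

def plv(trials, fs, freq):
    """Inter-trial phase coherence at one frequency (0 = random, 1 = perfect)."""
    n = trials.shape[1]
    k = int(round(freq * n / fs))              # FFT bin nearest the target
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))

print(f"PLV at {f0:.0f} Hz: {plv(trials, fs, f0):.2f}")     # high: phase-locked
print(f"PLV at 137 Hz: {plv(trials, fs, 137.0):.2f}")       # low: noise only
```

High PLV at the stimulus frequency but not at a control frequency is the kind of evidence a phase-alignment criterion rests on.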
Affiliation(s)
- Emily B J Coffey: Department of Psychology, Concordia University, Montreal, Quebec H4B 1R6, Canada; Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada; Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Quebec H3C 3J7, Canada; Centre for Research on Brain, Language and Music (CRBLM), Montreal, Quebec H3G 2A8, Canada
- Isabelle Arseneau-Bruneau: Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada; Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Quebec H3C 3J7, Canada; Centre for Research on Brain, Language and Music (CRBLM), Montreal, Quebec H3G 2A8, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), McGill University, Montreal, Quebec H3A 1E3, Canada
- Xiaochen Zhang: Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada; Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai 200030, People's Republic of China
- Sylvain Baillet: Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada; Centre for Research on Brain, Language and Music (CRBLM), Montreal, Quebec H3G 2A8, Canada
- Robert J Zatorre: Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada; Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Quebec H3C 3J7, Canada; Centre for Research on Brain, Language and Music (CRBLM), Montreal, Quebec H3G 2A8, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), McGill University, Montreal, Quebec H3A 1E3, Canada

15
Elmahallawi TH, Gabr TA, Darwish ME, Seleem FM. Children with developmental language disorder: a frequency following response in the noise study. Braz J Otorhinolaryngol 2021; 88:954-961. [PMID: 33766501 PMCID: PMC9615520 DOI: 10.1016/j.bjorl.2021.01.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/16/2020] [Revised: 12/21/2020] [Accepted: 01/31/2021] [Indexed: 11/27/2022]
Abstract
Introduction Children with developmental language disorder have been reported to have poor temporal auditory processing. Objective This work aimed to investigate speech processing in quiet and in noise in these children using the frequency following response. Methods Two groups of children were included: a control group (15 children with normal language development) and a study group (25 children diagnosed with developmental language disorder). All children underwent an intelligence scale, language assessment, full audiological evaluation, and frequency following response testing in quiet and in noise (+5QNR and +10QNR). Results There was no statistically significant difference between the groups with regard to IQ or pure-tone average. In the study group, analysis of the frequency following response showed reduced F0 and F2 amplitudes. Noise also affected both the transient and sustained components of the frequency following response in this group. Conclusion Children with developmental language disorder have difficulty with speech processing, especially in the presence of background noise. The frequency following response is an efficient procedure for addressing speech processing problems in children with developmental language disorder.
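F0 and F2 amplitudes of the kind reported here are typically read off the amplitude spectrum of the averaged response. The sketch below uses a synthetic "averaged FFR" with invented frequencies and amplitudes; it illustrates the general spectral-amplitude measure, not this study's exact analysis.

```python
import numpy as np

fs = 8000.0
t = np.arange(int(fs * 0.25)) / fs        # 250 ms averaged response
f0, f2 = 100.0, 2500.0                    # hypothetical F0 and F2 (Hz)

# Synthetic "averaged FFR": strong energy at F0, weaker energy at a formant.
avg_ffr = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * f2 * t)

# One-sided amplitude spectrum (scaled so a pure tone reads as its amplitude).
spectrum = np.abs(np.fft.rfft(avg_ffr)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(freq):
    """Amplitude at the spectral bin nearest the requested frequency."""
    return spectrum[np.argmin(np.abs(freqs - freq))]

print(f"F0 amplitude: {amp_at(f0):.2f}")   # ~1.0
print(f"F2 amplitude: {amp_at(f2):.2f}")   # ~0.3
```

Group differences such as "reduced F0 and F2 amplitudes" correspond to these bin amplitudes being smaller in one group's averaged responses.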
Affiliation(s)
- Trandil H Elmahallawi: Tanta University Hospitals, Otolaryngology Head and Neck Surgery Department, Audiovestibular Unit, Tanta, Egypt
- Takwa A Gabr: Kafrelsheikh University Hospitals, Otolaryngology Head and Neck Surgery Department, Audiovestibular Unit, Kafrelsheikh, Egypt
- Mohamed E Darwish: Tanta University Hospitals, Otolaryngology Head and Neck Surgery Department, Phoniatrics Unit, Tanta, Egypt
- Fatma M Seleem: Tanta University Hospitals, Otolaryngology Head and Neck Surgery Department, Audiovestibular Unit, Tanta, Egypt

16
Rauterkus G, Moncrieff D, Stewart G, Skoe E. Baseline, retest, and post-injury profiles of auditory neural function in collegiate football players. Int J Audiol 2021; 60:650-662. [PMID: 33439060 DOI: 10.1080/14992027.2020.1860261] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Indexed: 10/22/2022]
Abstract
OBJECTIVES Recent retrospective studies report differences in auditory neurophysiology between concussed athletes and uninjured controls using the frequency-following response (FFR). Adopting a prospective design in college football players, we compared FFRs before and after a concussion and evaluated test-retest reliability in non-concussed teammates. DESIGN Testing took place in a locker room. We analysed the FFR to the fundamental frequency (F0) (FFR-F0) of a speech stimulus, previously identified as a potential concussion biomarker. Baseline FFRs were obtained during the football pre-season. In athletes diagnosed with concussions during the season, FFRs were measured days after injury and compared to pre-season baseline. In uninjured controls, comparisons were made between pre- and post-season. STUDY SAMPLE Participants were Tulane University football athletes (n = 65). RESULTS In concussed athletes, there was a significant group-level decrease in FFR-F0 from baseline (26% decrease on average). By contrast, the control group's change from baseline was not statistically significant, and comparisons of pre- and post-season had good repeatability (intraclass correlation coefficient = 0.75). CONCLUSIONS Results converge with previous work showing suppression of the FFR-F0 following concussion. This preliminary study paves the way for larger-scale clinical evaluation of the specificity and reliability of the FFR as a concussion diagnostic. Highlights: This prospective study reveals suppressed neural responses to sound in concussed athletes compared to baseline. Neural responses to sound show good repeatability in uninjured athletes tested in a locker-room setting. Results support the feasibility of recording frequency-following responses in non-laboratory conditions.
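Test-retest repeatability of the kind summarized above is commonly expressed as an intraclass correlation coefficient. The abstract does not specify which ICC variant was used, so the one-way random-effects ICC(1,1) below, applied to simulated pre/post-season data with invented parameters, is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated pre/post-season FFR-F0 amplitudes for uninjured controls:
# a stable subject trait plus independent measurement noise per session.
n_subj, k = 50, 2
trait = rng.normal(100.0, 15.0, n_subj)                # between-subject spread
x = trait[:, None] + rng.normal(0.0, 8.0, (n_subj, k)) # within-subject noise

def icc1(x):
    """One-way random-effects ICC(1,1) from a (subjects x sessions) array."""
    n, k = x.shape
    m = x.mean(axis=1)
    grand = x.mean()
    msb = k * np.sum((m - grand) ** 2) / (n - 1)          # between-subject MS
    msw = np.sum((x - m[:, None]) ** 2) / (n * (k - 1))   # within-subject MS
    return (msb - msw) / (msb + (k - 1) * msw)

print(f"ICC = {icc1(x):.2f}")   # expected near 15**2 / (15**2 + 8**2) ≈ 0.78
```

An ICC around 0.75, as reported for the controls, indicates that most measured variance reflects stable between-subject differences rather than session noise.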
Affiliation(s)
- Grant Rauterkus: Center for Sport, Tulane University School of Medicine, New Orleans, LA, USA
- Deborah Moncrieff: School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA
- Gregory Stewart: Department of Orthopaedics, Tulane University School of Medicine, New Orleans, LA, USA
- Erika Skoe: Department of Speech, Language, and Hearing Sciences, Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT, USA

17
Kessler DM, Ananthakrishnan S, Smith SB, D'Onofrio K, Gifford RH. Frequency Following Response and Speech Recognition Benefit for Combining a Cochlear Implant and Contralateral Hearing Aid. Trends Hear 2020; 24:2331216520902001. [PMID: 32003296 PMCID: PMC7257083 DOI: 10.1177/2331216520902001] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Indexed: 12/15/2022]
Abstract
Multiple studies have shown significant speech recognition benefit when acoustic hearing is combined with a cochlear implant (CI) for a bimodal hearing configuration. However, this benefit varies greatly between individuals. There are few clinical measures correlated with bimodal benefit, and those correlations are driven by extreme values, prohibiting data-driven clinical counseling. This study evaluated the relationship between neural representation of fundamental frequency (F0) and temporal fine structure via the frequency following response (FFR) in the nonimplanted ear, as well as spectral and temporal resolution of the nonimplanted ear, and bimodal benefit for speech recognition in quiet and noise. Participants included 14 unilateral CI users who wore a hearing aid (HA) in the nonimplanted ear. Testing included speech recognition in quiet and in noise with the HA-alone, CI-alone, and in the bimodal condition (i.e., CI + HA), measures of spectral and temporal resolution in the nonimplanted ear, and FFR recording for a 170-ms /da/ stimulus in the nonimplanted ear. Even after controlling for four-frequency pure-tone average, there was a significant correlation (r = .83) between FFR F0 amplitude in the nonimplanted ear and bimodal benefit. Other measures of auditory function of the nonimplanted ear were not significantly correlated with bimodal benefit. The FFR holds potential as an objective tool that may allow data-driven counseling regarding expected benefit from the nonimplanted ear. It is possible that this information may eventually be used for clinical decision-making, particularly in difficult-to-test populations such as young children, regarding effectiveness of bimodal hearing versus bilateral CI candidacy.
Affiliation(s)
- David M Kessler: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Spencer B Smith: Department of Communication Sciences and Disorders, The University of Texas at Austin, TX, USA
- Kristen D'Onofrio: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- René H Gifford: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA

18
Tecoulesco L, Skoe E, Naigles LR. Phonetic discrimination mediates the relationship between auditory brainstem response stability and syntactic performance. Brain Lang 2020; 208:104810. [PMID: 32683226 DOI: 10.1016/j.bandl.2020.104810] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Received: 08/08/2019] [Revised: 02/03/2020] [Accepted: 04/27/2020] [Indexed: 06/11/2023]
Abstract
Syntactic, lexical, and phonological/phonetic knowledge are vital aspects of macro level language ability. Prior research has predominantly focused on environmental or cortical sources of individual differences in these areas; however, a growing literature suggests an auditory brainstem contribution to language performance in both typically developing (TD) populations and children with autism spectrum disorder (ASD). This study investigates whether one aspect of auditory brainstem responses (ABRs), neural response stability, which is a metric reflecting trial-by-trial consistency in the neural encoding of sound, can predict syntactic, lexical, and phonetic performance in TD and ASD school-aged children. Pooling across children with ASD and TD, results showed that higher neural stability in response to the syllable /da/ was associated with better phonetic discrimination, and with better syntactic performance on a standardized measure. Furthermore, phonetic discrimination was a successful mediator of the relationship between neural stability and syntactic performance. This study supports the growing body of literature that stable subcortical neural encoding of sound is important for successful language performance.
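Neural response stability, the trial-by-trial consistency metric in this study, can be approximated by correlating sub-averages of interleaved trials. The toy sketch below uses invented parameters and a simple even/odd split; published stability measures often average correlations over many random trial splits instead, so treat this as the general idea rather than the study's exact computation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_samp = 300, 500
t = np.arange(n_samp) / 4000.0

# Simulated single-trial responses: a common evoked waveform (a damped
# 110 Hz oscillation, purely illustrative) plus independent trial noise.
evoked = np.sin(2 * np.pi * 110 * t) * np.exp(-t * 10)
trials = evoked + rng.normal(0.0, 2.0, (n_trials, n_samp))

def stability(trials):
    """Correlate the averages of even- and odd-numbered trials."""
    a, b = trials[0::2].mean(axis=0), trials[1::2].mean(axis=0)
    return np.corrcoef(a, b)[0, 1]

print(f"split-half stability r = {stability(trials):.2f}")
```

A child whose trial-to-trial encoding is more consistent yields a higher split-half correlation, which is the quantity linked here to phonetic and syntactic performance.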
Affiliation(s)
- Lisa Tecoulesco: University of Connecticut, Psychological Sciences, United States
- Erika Skoe: University of Connecticut, Speech Language and Hearing Sciences, United States

19
Heidari A, Moossavi A, Yadegari F, Bakhshi E, Ahadi M. Effect of Vowel Auditory Training on the Speech-In-Noise Perception among Older Adults with Normal Hearing. Iran J Otorhinolaryngol 2020; 32:229-236. [PMID: 32850511 PMCID: PMC7423087 DOI: 10.22038/ijorl.2019.33433.2110] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/28/2022]
Abstract
Introduction: Aging reduces the ability to understand speech in noise. Hearing rehabilitation is one way to help older people communicate effectively. This study aimed to investigate the effect of vowel auditory training on the improvement of speech-in-noise (SIN) perception among elderly listeners. Materials and Methods: This study was conducted on 36 elderly listeners (17 males and 15 females) with a mean ± SD age of 67.6 ± 6.33 years. They had normal peripheral hearing but had difficulties with SIN perception. The participants were randomly divided into an intervention group and a control group. The intervention group underwent vowel auditory training, whereas the control group received no training. Results: After vowel auditory training, the intervention group showed significant changes relative to the control group in the results of the SIN test at signal-to-noise ratios of 0 and -10 dB and on the Iranian version of the Speech, Spatial, and Qualities of Hearing Scale (P<0.001). On the speech auditory brainstem response test, the F0 magnitude was higher in the intervention group (8.42±2.26) than in the control group (6.68±1.87) (P<0.011). Conclusion: Vowel auditory training improved SIN perception, probably through better F0 encoding. This enhancement made speech easier to perceive and to separate from background noise, which in turn improved the listeners' ability to follow a specific talker and track a conversation.
Affiliation(s)
- Atta Heidari: Department of Audiology, Faculty of Rehabilitation, Hamadan University of Medical Sciences, Hamadan, Iran
- Abdollah Moossavi: Department of Otolaryngology and Head and Neck Surgery, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Fariba Yadegari: Department of Speech Therapy, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Enayatollah Bakhshi: Department of Biostatistics, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Mohsen Ahadi: Department of Audiology, Rehabilitation Research Center, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran

20
Richard C, Neel ML, Jeanvoine A, Connell SM, Gehred A, Maitre NL. Characteristics of the Frequency-Following Response to Speech in Neonates and Potential Applicability in Clinical Practice: A Systematic Review. J Speech Lang Hear Res 2020; 63:1618-1635. [PMID: 32407639 DOI: 10.1044/2020_jslhr-19-00322] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Indexed: 06/11/2023]
Abstract
Purpose We sought to critically analyze and evaluate published evidence regarding the feasibility and clinical potential of frequency-following responses (FFRs) to speech recordings for predicting neurodevelopmental outcomes in neonates (birth to 28 days). Method A systematic search of MeSH terms in the Cumulative Index to Nursing and Allied Health Literature, Embase, Google Scholar, Ovid Medline (R) and E-Pub Ahead of Print, In-Process & Other Non-Indexed Citations and Daily, Web of Science, SCOPUS, COCHRANE Library, and ClinicalTrials.gov was performed. Manual review of all items identified in the search was performed by two independent reviewers. Articles were evaluated based on the level of methodological quality and evidence according to the RTI item bank. Results Seven articles met inclusion criteria. None of the included studies reported neurodevelopmental outcomes past 3 months of age. Quality of the evidence ranged from moderate to high. Protocol variations were frequent. Conclusions Based on this systematic review, the FFR to speech can capture both temporal and spectral acoustic features in neonates. It can be recorded accurately, quickly, and easily at the infant's bedside. However, at this time, further studies are needed to identify and validate which FFR features could be incorporated into the standard evaluation of infant sound processing in subcortico-cortical networks. This review identifies the need for further research focused on identifying specific features of neonatal FFRs, particularly those with predictive value for early childhood outcomes, to help guide targeted early speech and hearing interventions.
Affiliation(s)
- Céline Richard: Center for Perinatal Research and Department of Pediatrics, Nationwide Children's Hospital, Columbus, OH; Laboratory for Investigative Neurophysiology, Department of Radiology and Department of Clinical Neurosciences, University Hospital Center and University of Lausanne, Switzerland
- Mary Lauren Neel: Center for Perinatal Research and Department of Pediatrics, Nationwide Children's Hospital, Columbus, OH
- Arnaud Jeanvoine: Center for Perinatal Research and Department of Pediatrics, Nationwide Children's Hospital, Columbus, OH
- Sharon Mc Connell: Center for Perinatal Research and Department of Pediatrics, Nationwide Children's Hospital, Columbus, OH
- Alison Gehred: Medical Library Division, Nationwide Children's Hospital, Columbus, OH
- Nathalie L Maitre: Center for Perinatal Research and Department of Pediatrics, Nationwide Children's Hospital, Columbus, OH; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN

21
De Vos A, Vanvooren S, Ghesquière P, Wouters J. Subcortical auditory neural synchronization is deficient in pre-reading children who develop dyslexia. Dev Sci 2020; 23:e12945. [PMID: 32034978 DOI: 10.1111/desc.12945] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Received: 03/23/2018] [Revised: 02/03/2020] [Accepted: 02/04/2020] [Indexed: 01/19/2023]
Abstract
Auditory processing of temporal information in speech is sustained by synchronized firing of neurons along the entire auditory pathway. In school-aged children and adults with dyslexia, neural synchronization deficits have been found at cortical levels of the auditory system; however, these deficits do not appear to be present in pre-reading children. An alternative role for subcortical synchronization in reading development and dyslexia has been suggested, but remains debated. By means of a longitudinal study, we assessed cognitive reading-related skills and subcortical auditory steady-state responses (80 Hz ASSRs) in a group of children before formal reading instruction (pre-reading), after 1 year of formal reading instruction (beginning reading), and after 3 years of formal reading instruction (more advanced reading). Children were retrospectively classified into three groups based on family risk and literacy achievement: typically developing children without a family risk for dyslexia, typically developing children with a family risk for dyslexia, and children who developed dyslexia. Our results reveal that children who developed dyslexia demonstrate decreased 80 Hz ASSRs at the pre-reading stage. This effect is no longer present after the onset of reading instruction, due to an atypical developmental increase in 80 Hz ASSRs between the pre-reading and the beginning reading stage. A forward stepwise logistic regression analysis showed that literacy achievement was predictable with an accuracy of 90.4% based on a model including three significant predictors, that is, family risk for dyslexia (R = .31), phonological awareness (R = .23), and 80 Hz ASSRs (R = .26).
Given that (1) abnormalities in subcortical ASSRs preceded reading acquisition in children who developed dyslexia and (2) subcortical ASSRs contributed to the prediction of literacy achievement, subcortical auditory synchronization deficits may constitute a pre-reading risk factor in the emergence of dyslexia.
Affiliation(s)
- Astrid De Vos: Department of Neurosciences, Research Group Experimental ORL, KU Leuven - University of Leuven, Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven - University of Leuven, Leuven, Belgium
- Sophie Vanvooren: Department of Neurosciences, Research Group Experimental ORL, KU Leuven - University of Leuven, Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven - University of Leuven, Leuven, Belgium
- Pol Ghesquière: Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven - University of Leuven, Leuven, Belgium
- Jan Wouters: Department of Neurosciences, Research Group Experimental ORL, KU Leuven - University of Leuven, Leuven, Belgium

22
BinKhamis G, Elia Forte A, Reichenbach T, O'Driscoll M, Kluk K. Speech Auditory Brainstem Responses in Adult Hearing Aid Users: Effects of Aiding and Background Noise, and Prediction of Behavioral Measures. Trends Hear 2019; 23:2331216519848297. [PMID: 31264513 PMCID: PMC6607564 DOI: 10.1177/2331216519848297] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Indexed: 11/15/2022]
Abstract
Evaluation of patients who are unable to provide behavioral responses on standard clinical measures is challenging due to the lack of standard objective (non-behavioral) clinical audiological measures that assess the outcome of an intervention (e.g., hearing aids). Brainstem responses to short consonant-vowel stimuli (speech-auditory brainstem responses [speech-ABRs]) have been proposed as a measure of subcortical encoding of speech, speech detection, and speech-in-noise performance in individuals with normal hearing. Here, we investigated the potential application of speech-ABRs as an objective clinical outcome measure of speech detection, speech-in-noise detection and recognition, and self-reported speech understanding in 98 adults with sensorineural hearing loss. We compared aided and unaided speech-ABRs, and speech-ABRs in quiet and in noise. In addition, we evaluated whether speech-ABR F0 encoding (obtained from the complex cross-correlation with the 40 ms [da] fundamental waveform) predicted aided behavioral speech recognition in noise or aided self-reported speech understanding. Results showed that (a) aided speech-ABRs had earlier peak latencies, larger peak amplitudes, and larger F0 encoding amplitudes compared to unaided speech-ABRs; (b) the addition of background noise resulted in later F0 encoding latencies but did not have an effect on peak latencies and amplitudes or on F0 encoding amplitudes; and (c) speech-ABRs were not a significant predictor of any of the behavioral or self-report measures. These results show that speech-ABR F0 encoding is not a good predictor of speech-in-noise recognition or self-reported speech understanding with hearing aids. However, our results suggest that speech-ABRs may have potential for clinical application as an objective measure of speech detection with hearing aids.
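The F0 encoding amplitude described above, "obtained from the complex cross-correlation with the fundamental waveform," can be illustrated in spirit as the magnitude of a complex inner product between the response and a template at F0; the magnitude is invariant to response phase/latency. All parameters below are invented, and the paper's exact procedure (correlating against the stimulus fundamental waveform over a range of lags) may differ from this simplified single-frequency sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
fs, f0 = 8000.0, 100.0
t = np.arange(int(fs * 0.04)) / fs            # 40 ms analysis window

# Complex template at F0, and a simulated averaged response containing the
# F0 component at an arbitrary unknown phase plus residual noise.
template = np.exp(2j * np.pi * f0 * t)
response = 0.5 * np.sin(2 * np.pi * f0 * t + 0.9) + rng.normal(0.0, 0.2, t.size)

# |complex inner product| recovers the F0 component's amplitude regardless
# of its phase (np.vdot conjugates the first argument).
f0_amp = 2 * np.abs(np.vdot(template, response)) / t.size
print(f"estimated F0 amplitude: {f0_amp:.2f}")   # ~0.5
```

Larger values of such an F0 amplitude in the nonimplanted ear are the kind of measure correlated (or, in this study, not correlated) with behavioral outcomes.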
Affiliation(s)
- Ghada BinKhamis: Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK; Department of Communication and Swallowing Disorders, King Fahad Medical City, Riyadh, Saudi Arabia
- Antonio Elia Forte: John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Tobias Reichenbach: Department of Bioengineering, Centre for Neurotechnology, Imperial College London, London, UK
- Martin O'Driscoll: Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK; Manchester Auditory Implant Centre, Manchester University Hospitals NHS Foundation Trust, Manchester, UK
- Karolina Kluk: Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK

23
Moossavi A, Lotfi Y, Javanbakht M, Faghihzadeh S. Speech-evoked auditory brainstem response; electrophysiological evidence of upper brainstem facilitative role on sound lateralization in noise. Neurol Sci 2019; 41:611-617. [PMID: 31732889 DOI: 10.1007/s10072-019-04102-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/24/2018] [Accepted: 10/04/2019] [Indexed: 11/29/2022]
Abstract
BACKGROUND AND AIM Sound lateralization/localization is one of the most important auditory processing abilities and plays a proven role in auditory streaming and speech perception in challenging situations such as noisy places. In addition to the central role of lower brainstem centers such as the superior olivary complex in sound lateralization, the efferent auditory system has been shown to improve auditory skills in everyday challenging listening situations. This study evaluated the effects of noise on lateralization scores in relation to an objective electrophysiological test (speech-ABR in noise), which reflects the cumulative effects of the afferent and efferent auditory systems at the inferior colliculus and upper brainstem pathway. METHOD Fourteen normal-hearing subjects aged 18 to 25 years participated in this study. Lateralization scores were evaluated in quiet and in noise. Speech-ABRs were also recorded in both ears in quiet and at three contralateral noise levels (SNR = +5, 0, -5 dB). The correlation between lateralization scores and speech-ABR changes in noise was studied. RESULTS With noise presentation, lateralization scores decreased significantly while the latencies of the speech-ABR transient peaks (V, A, O) increased and their amplitudes decreased. At low signal-to-noise ratios, the decrease in lateralization was highly positively correlated with the latency increase of the onset peaks (V, A) and the amplitude decrease of the transient peaks (V, A, O). CONCLUSION The study revealed that in highly challenging auditory situations, such as lateralization in noise, upper brainstem centers and pathways play a facilitative role for the main lateralization centers at lower levels.
Affiliation(s)
- Abdollah Moossavi, Department of Otolaryngology, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Yones Lotfi, Department of Audiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Mohanna Javanbakht, Department of Audiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Soghrat Faghihzadeh, Department of Biostatistics and Epidemiology, Zanjan University of Medical Sciences, Zanjan, Iran
24. Rosenthal MA. A systematic review of the voice-tagging hypothesis of speech-in-noise perception. Neuropsychologia 2019; 136:107256. [PMID: 31715197] [DOI: 10.1016/j.neuropsychologia.2019.107256]
Abstract
The voice-tagging hypothesis claims that individuals who better represent pitch information in a speaker's voice, as measured with the frequency following response (FFR), will be better at speech-in-noise perception. The hypothesis has been proposed to explain how music training might improve speech-in-noise perception. This paper reviews studies that are relevant to the voice-tagging hypothesis, including studies on musicians and nonmusicians. Most studies on musicians show greater f0 amplitude compared to controls. Most studies on nonmusicians do not show group differences in f0 amplitude. Across all studies reviewed, f0 amplitude does not consistently predict accuracy in speech-in-noise perception. The evidence suggests that music training does not improve speech-in-noise perception via enhanced subcortical representation of the f0.
Affiliation(s)
- Matthew A Rosenthal, University of Kansas, 1450 Jayhawk Blvd, Lawrence, KS, 66045, Department of Psychology, United States
25. Lotfi Y, Moossavi A, Javanbakht M, Faghih Zadeh S. Speech-ABR in contralateral noise: A potential tool to evaluate rostral part of the auditory efferent system. Med Hypotheses 2019; 132:109355. [DOI: 10.1016/j.mehy.2019.109355]
26. Krizman J, Kraus N. Analyzing the FFR: A tutorial for decoding the richness of auditory function. Hear Res 2019; 382:107779. [PMID: 31505395] [PMCID: PMC6778514] [DOI: 10.1016/j.heares.2019.107779]
Abstract
The frequency-following response, or FFR, is a neurophysiological response to sound that precisely reflects the ongoing dynamics of sound. It can be used to study the integrity and malleability of neural encoding of sound across the lifespan. Sound processing in the brain can be impaired with pathology and enhanced through expertise. The FFR can index linguistic deprivation, autism, concussion, and reading impairment, and can reflect the impact of enrichment with short-term training, bilingualism, and musicianship. Because of this vast potential, interest in the FFR has grown considerably in the decade since our first tutorial. Despite its widespread adoption, there remains a gap in the current knowledge of its analytical potential. This tutorial aims to bridge this gap. Using recording methods we have employed for the last 20+ years, we have explored many analysis strategies. In this tutorial, we review what we have learned and what we think constitutes the most effective ways of capturing what the FFR can tell us. The tutorial covers FFR components (timing, fundamental frequency, harmonics) and factors that influence FFR (stimulus polarity, response averaging, and stimulus presentation/recording jitter). The spotlight is on FFR analyses, including ways to analyze FFR timing (peaks, autocorrelation, phase consistency, cross-phaseogram), magnitude (RMS, SNR, FFT), and fidelity (stimulus-response correlations, response-to-response correlations and response consistency). The wealth of information contained within an FFR recording brings us closer to understanding how the brain reconstructs our sonic world.
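As a rough illustration of the FFT-based magnitude measures this tutorial catalogues (F0 amplitude and spectral SNR), here is a minimal Python/NumPy sketch. The bin widths, flanking noise-band offsets, and the toy response below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def spectral_amplitude(signal, fs, freq, bandwidth=5.0):
    """Mean FFT magnitude within +/- bandwidth/2 Hz of freq."""
    n = signal.size
    spectrum = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs >= freq - bandwidth / 2) & (freqs <= freq + bandwidth / 2)
    return float(spectrum[band].mean())

def f0_snr_db(signal, fs, f0, noise_offsets=(20.0, 40.0)):
    """F0 amplitude relative to the mean amplitude of flanking noise bands, in dB."""
    sig = spectral_amplitude(signal, fs, f0)
    noise = np.mean([spectral_amplitude(signal, fs, f0 + off)
                     for off in (-noise_offsets[1], -noise_offsets[0],
                                 noise_offsets[0], noise_offsets[1])])
    return 20.0 * np.log10(sig / noise)

# Toy "response": a 100 Hz fundamental buried in low-level noise.
fs = 8000
t = np.arange(0, 0.25, 1 / fs)
rng = np.random.default_rng(0)
resp = np.sin(2 * np.pi * 100 * t) + 0.1 * rng.standard_normal(t.size)
```

The phase-consistency and autocorrelation measures the tutorial also covers would be computed analogously from single-trial spectra and lagged products rather than from this averaged-magnitude view.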
Affiliation(s)
- Jennifer Krizman, Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, 60208, USA. https://www.brainvolts.northwestern.edu
- Nina Kraus, Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, 60208, USA; Department of Neurobiology, Northwestern University, Evanston, IL, 60208, USA
27. Thompson EC, Krizman J, White-Schwoch T, Nicol T, Estabrook R, Kraus N. Neurophysiological, linguistic, and cognitive predictors of children's ability to perceive speech in noise. Dev Cogn Neurosci 2019; 39:100672. [PMID: 31430627] [PMCID: PMC6886664] [DOI: 10.1016/j.dcn.2019.100672]
Abstract
Hearing in noisy environments is a complicated task that engages attention, memory, linguistic knowledge, and precise auditory-neurophysiological processing of sound. Accumulating evidence in school-aged children and adults suggests these mechanisms vary with the task’s demands. For instance, co-located speech and noise demands a large cognitive load and recruits working memory, while spatially separating speech and noise diminishes this load and draws on alternative skills. Past research has focused on one or two mechanisms underlying speech-in-noise perception in isolation; few studies have considered multiple factors in tandem, or how they interact during critical developmental years. This project sought to test complementary hypotheses involving neurophysiological, cognitive, and linguistic processes supporting speech-in-noise perception in young children under different masking conditions (co-located, spatially separated). Structural equation modeling was used to identify latent constructs and examine their contributions as predictors. Results reveal cognitive and language skills operate as a single factor supporting speech-in-noise perception under different masking conditions. While neural coding of the F0 supports perception in both co-located and spatially separated conditions, neural timing predicts perception of spatially separated listening exclusively. Together, these results suggest co-located and spatially separated speech-in-noise perception draw on similar cognitive/linguistic skills, but distinct neural factors, in early childhood.
Affiliation(s)
- Elaine C Thompson, Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Department of Communication Sciences, Northwestern University, Evanston, IL, USA
- Jennifer Krizman, Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Department of Communication Sciences, Northwestern University, Evanston, IL, USA
- Travis White-Schwoch, Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Department of Communication Sciences, Northwestern University, Evanston, IL, USA
- Trent Nicol, Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Department of Communication Sciences, Northwestern University, Evanston, IL, USA
- Ryne Estabrook, Department of Medical Social Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Nina Kraus, Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Department of Communication Sciences, Northwestern University, Evanston, IL, USA; Institute for Neuroscience, Northwestern University, Evanston, IL, USA; Department of Neurobiology, Northwestern University, Evanston, IL, USA; Department of Otolaryngology, Northwestern University, Chicago, IL, USA
28. Hayakawa S, Marian V. Consequences of multilingualism for neural architecture. Behav Brain Funct 2019; 15:6. [PMID: 30909931] [PMCID: PMC6432751] [DOI: 10.1186/s12993-019-0157-z]
Abstract
Language has the power to shape cognition, behavior, and even the form and function of the brain. Technological and scientific developments have recently yielded an increasingly diverse set of tools with which to study the way language changes neural structures and processes. Here, we review research investigating the consequences of multilingualism as revealed by brain imaging. A key feature of multilingual cognition is that two or more languages can become activated at the same time, requiring mechanisms to control interference. Consequently, extensive experience managing multiple languages can influence cognitive processes as well as their neural correlates. We begin with a brief discussion of how bilinguals activate language, and of the brain regions implicated in resolving language conflict. We then review evidence for the pervasive impact of bilingual experience on the function and structure of neural networks that support linguistic and non-linguistic cognitive control, speech processing and production, and language learning. We conclude that even seemingly distinct effects of language on cognitive operations likely arise from interdependent functions, and that future work directly exploring the interactions between multiple levels of processing could offer a more comprehensive view of how language molds the mind.
Affiliation(s)
- Sayuri Hayakawa, Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL, 60208, USA
- Viorica Marian, Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL, 60208, USA
29. Yellamsetty A, Bidelman GM. Brainstem correlates of concurrent speech identification in adverse listening conditions. Brain Res 2019; 1714:182-192. [PMID: 30796895] [DOI: 10.1016/j.brainres.2019.02.025]
Abstract
When two voices compete, listeners can segregate and identify concurrent speech sounds using pitch (fundamental frequency, F0) and timbre (harmonic) cues. Speech perception is also hindered by the signal-to-noise ratio (SNR). How clear and degraded concurrent speech sounds are represented at early, pre-attentive stages of the auditory system is not well understood. To this end, we measured scalp-recorded frequency-following responses (FFR) from the EEG while human listeners heard two concurrently presented, steady-state (time-invariant) vowels whose F0 differed by zero or four semitones (ST) presented diotically in either clean (no noise) or noise-degraded (+5 dB SNR) conditions. Listeners also performed a speeded double vowel identification task in which they were required to identify both vowels correctly. Behavioral results showed that speech identification accuracy increased with F0 differences between vowels, and this perceptual F0 benefit was larger for clean compared to noise-degraded (+5 dB SNR) stimuli. Neurophysiological data demonstrated more robust FFR F0 amplitudes for single compared to double vowels and considerably weaker responses in noise. F0 amplitudes showed speech-on-speech masking effects, along with a non-linear constructive interference at 0 ST, and suppression effects at 4 ST. Correlations showed that FFR F0 amplitudes failed to predict listeners' identification accuracy. In contrast, FFR F1 amplitudes were associated with faster reaction times, although this correlation was limited to noise conditions. The limited number of brain-behavior associations suggests subcortical activity mainly reflects exogenous processing rather than perceptual correlates of concurrent speech perception. Collectively, our results demonstrate that FFRs reflect pre-attentive coding of concurrent auditory stimuli that only weakly predict the success of identifying concurrent speech.
Affiliation(s)
- Anusha Yellamsetty, School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Department of Communication Sciences & Disorders, University of South Florida, USA
- Gavin M Bidelman, School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA
30. Liu D, Hu J, Dong R, Chen J, Musacchia G, Wang S. Effects of Inter-Stimulus Interval on Speech-Evoked Frequency-Following Response in Elderly Adults. Front Aging Neurosci 2018; 10:357. [PMID: 30467474] [PMCID: PMC6236020] [DOI: 10.3389/fnagi.2018.00357]
Abstract
Background: The speech-evoked frequency-following response (FFR) has been shown to be useful in assessing complex auditory processing abilities across different age groups. While many aspects of the FFR have been studied extensively, the effect of timing, as measured by the inter-stimulus interval (ISI), has yet to be thoroughly investigated, especially in the older adult population. Objective: The purpose of this study was to examine the effects of different ISIs on the speech-evoked FFR in older and younger adults who speak a tonal language, and to investigate whether the older adults' FFRs were more susceptible to changes in ISI. Materials and Methods: Twenty-two normal-hearing participants were recruited, including 11 young adults and 11 elderly adults. An Intelligent Hearing Systems Smart EP evoked potential system was used to record the FFR in four ISI conditions (40, 80, 120, and 160 ms). A recorded natural speech token with a falling tone /yi/ was used as the stimulus. Two indices, the stimulus-to-response correlation coefficient and pitch strength, were used to quantify the FFR responses. Two-way analysis of variance (ANOVA) was used to analyze differences between age groups and among ISI conditions. Results: There was no significant difference in stimulus-to-response correlation coefficient or pitch strength among the ISI conditions in either age group. Older adults showed weaker FFRs than their younger adult counterparts in all ISI conditions. Conclusion: Shorter ISIs did not result in worse FFRs in older or younger adults. For speech-evoked FFR using a recorded natural speech token 250 ms in length, an ISI as short as 40 ms appeared to be sufficient and effective for recording FFR in elderly adults.
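The two indices used here, the stimulus-to-response correlation coefficient and autocorrelation-based pitch strength, can be sketched roughly as follows (Python/NumPy; the lag ranges, sampling rate, and toy signals are my assumptions, not the study's parameters):

```python
import numpy as np

def pitch_strength(x, fs, fmin=80.0, fmax=400.0):
    """Height of the tallest normalized-autocorrelation peak in the pitch-lag range."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    ac = ac / ac[0]                       # lag-0 normalization -> values in [-1, 1]
    lo, hi = int(fs / fmax), int(fs / fmin)
    return float(ac[lo:hi + 1].max())

def stim_response_corr(stim, resp, fs, max_lag_ms=12.0):
    """Best Pearson r between stimulus and response over plausible neural lags."""
    best = 0.0
    for lag in range(int(max_lag_ms * fs / 1000) + 1):
        r = np.corrcoef(stim[:stim.size - lag], resp[lag:])[0, 1]
        best = max(best, abs(float(r)))
    return best

# Toy demo: a 120 Hz "FFR" delayed by 8 ms relative to its stimulus.
fs = 8000
t = np.arange(0, 0.25, 1 / fs)
stim = np.sin(2 * np.pi * 120 * t)
delay = int(0.008 * fs)
rng = np.random.default_rng(1)
resp = np.concatenate([np.zeros(delay), stim[:-delay]]) + 0.2 * rng.standard_normal(t.size)
```

Stronger periodicity and a tighter match to the stimulus both push these indices toward 1, which is why they are natural summary measures of FFR quality.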
Affiliation(s)
- Dongxin Liu, Otolaryngology—Head & Neck Surgery, Beijing Tongren Hospital, Beijing Institute of Otolaryngology, Capital Medical University, Beijing, China
- Jiong Hu, Department of Audiology, University of the Pacific, San Francisco, CA, United States
- Ruijuan Dong, Otolaryngology—Head & Neck Surgery, Beijing Tongren Hospital, Beijing Institute of Otolaryngology, Capital Medical University, Beijing, China
- Jing Chen, Otolaryngology—Head & Neck Surgery, Beijing Tongren Hospital, Beijing Institute of Otolaryngology, Capital Medical University, Beijing, China
- Gabriella Musacchia, Department of Audiology, University of the Pacific, San Francisco, CA, United States
- Shuo Wang, Otolaryngology—Head & Neck Surgery, Beijing Tongren Hospital, Beijing Institute of Otolaryngology, Capital Medical University, Beijing, China
31. Tabachnick AR, Toscano JC. Perceptual Encoding in Auditory Brainstem Responses: Effects of Stimulus Frequency. J Speech Lang Hear Res 2018; 61:2364-2375. [PMID: 30193361] [DOI: 10.1044/2018_jslhr-h-17-0486]
Abstract
PURPOSE A central question about auditory perception concerns how acoustic information is represented at different stages of processing. The auditory brainstem response (ABR) provides a potentially useful index of the earliest stages of this process. However, it is unclear how basic acoustic characteristics (e.g., differences in tones spanning a wide range of frequencies) are indexed by ABR components. This study addresses this by investigating how ABR amplitude and latency track stimulus frequency for tones ranging from 250 to 8000 Hz. METHOD In a repeated-measures experimental design, listeners were presented with brief tones (250, 500, 1000, 2000, 4000, and 8000 Hz) in random order while electroencephalography was recorded. ABR latencies and amplitudes for Wave V (6-9 ms) and in the time window following the Wave V peak (labeled as Wave VI; 9-12 ms) were measured. RESULTS Wave V latency decreased with increasing frequency, replicating previous work. In addition, Waves V and VI amplitudes tracked differences in tone frequency, with a nonlinear response from 250 to 8000 Hz and a clear log-linear response to tones from 500 to 8000 Hz. CONCLUSIONS Results demonstrate that the ABR provides a useful measure of early perceptual encoding for stimuli varying in frequency and that the tonotopic organization of the auditory system is preserved at this stage of processing for stimuli from 500 to 8000 Hz. Such a measure may serve as a useful clinical tool for evaluating a listener's ability to encode specific frequencies in sounds. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.6987422.
Affiliation(s)
- Joseph C Toscano, Department of Psychological and Brain Sciences, Villanova University, PA
32. Phase delays between tone pairs reveal interactions in scalp-recorded envelope following responses. Neurosci Lett 2018; 665:257-262. [DOI: 10.1016/j.neulet.2017.12.014]
33. Abd El-Ghaffar NM, El-Gharib AM, Kolkaila EA, Elmahallawy TH. Speech-evoked auditory brainstem response with ipsilateral noise in adults with unilateral hearing loss. Acta Otolaryngol 2018; 138:145-152. [PMID: 29022419] [DOI: 10.1080/00016489.2017.1380311]
Abstract
INTRODUCTION Subjects with unilateral hearing loss (UHL) report difficulties understanding speech in noise. The speech-evoked auditory brainstem response (S-ABR) provides cues for the temporal and spectral encoding of speech in the brainstem, and recording the S-ABR in noise increases its sensitivity for evaluating auditory processing and related disorders. OBJECTIVES To study speech encoding at the level of the brainstem when the auditory system relies on one ear, and to examine the effect of noise on this encoding. SUBJECTS AND METHOD This study included two groups: a control group of 15 adults with normal hearing sensitivity and a study group of 30 adults with UHL. The study group was further subdivided into two subgroups: study subgroup A (SG A), 15 adults with right functioning ears, and study subgroup B (SG B), 15 adults with left functioning ears. The S-ABR was recorded in quiet and with ipsilateral noise in both groups using the complex ABR advanced auditory research module. RESULTS In UHL, there was a statistically significant delay in S-ABR onset and offset in noise compared to quiet. Moreover, the quiet-noise (+5 SNR) correlation was significantly lower than in normal-hearing subjects. Furthermore, pitch representation (F0 amplitude) was significantly degraded by noise, and there was a statistically significant noise-induced phase shift in the transition region of the speech syllable. CONCLUSION In monaural processing, pitch representation (F0 amplitude) and the cross-phaseogram were the main affected domains. Speech phonemes of transient origin can be confused by subjects with UHL.
Affiliation(s)
- Enaas A. Kolkaila, Audiology Unit, ENT Department, Faculty of Medicine, Tanta University, Tanta, Egypt
34. Kim SG, Lepsien J, Fritz TH, Mildner T, Mueller K. Dissonance encoding in human inferior colliculus covaries with individual differences in dislike of dissonant music. Sci Rep 2017; 7:5726. [PMID: 28720776] [PMCID: PMC5516034] [DOI: 10.1038/s41598-017-06105-2]
Abstract
Harmony is one of the most fundamental elements of music that evoke emotional responses. The inferior colliculus (IC) has been known to detect poor agreement of the harmonics of sound, that is, dissonance. Electrophysiological evidence has implicated a relationship between a sustained auditory response, mainly from the brainstem, and the unpleasant emotion induced by dissonant harmony. Interestingly, an individual's dislike of dissonant harmony correlated with a reduced sustained auditory response. In the current paper, we report novel evidence based on functional magnetic resonance imaging (fMRI) for such a relationship between individual variability in dislike of dissonance and IC activation. Furthermore, for the first time, we show how dissonant harmony modulates functional connectivity of the IC and its association with behaviourally reported unpleasantness. The current findings support important contributions of low-level auditory processing and corticofugal interaction to musical harmony preference.
Affiliation(s)
- Seung-Goo Kim, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jöran Lepsien, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Thomas Hans Fritz, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Institute for Psychoacoustics and Electronic Music, University of Ghent, Ghent, Belgium
- Toralf Mildner, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Karsten Mueller, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
35. Koerner TK, Zhang Y, Nelson PB, Wang B, Zou H. Neural indices of phonemic discrimination and sentence-level speech intelligibility in quiet and noise: A P3 study. Hear Res 2017; 350:58-67. [DOI: 10.1016/j.heares.2017.04.009]
36. Skoe E, Burakiewicz E, Figueiredo M, Hardin M. Basic neural processing of sound in adults is influenced by bilingual experience. Neuroscience 2017; 349:278-290. [DOI: 10.1016/j.neuroscience.2017.02.049]
37. Abrams DA, Nicol T, White-Schwoch T, Zecker S, Kraus N. Population responses in primary auditory cortex simultaneously represent the temporal envelope and periodicity features in natural speech. Hear Res 2017; 348:31-43. [PMID: 28216125] [DOI: 10.1016/j.heares.2017.02.010]
Abstract
Speech perception relies on a listener's ability to simultaneously resolve multiple temporal features in the speech signal. Little is known regarding neural mechanisms that enable the simultaneous coding of concurrent temporal features in speech. Here we show that two categories of temporal features in speech, the low-frequency speech envelope and periodicity cues, are processed by distinct neural mechanisms within the same population of cortical neurons. We measured population activity in primary auditory cortex of anesthetized guinea pig in response to three variants of a naturally produced sentence. Results show that the envelope of population responses closely tracks the speech envelope, and this cortical activity more closely reflects wider bandwidths of the speech envelope compared to narrow bands. Additionally, neuronal populations represent the fundamental frequency of speech robustly with phase-locked responses. Importantly, these two temporal features of speech are simultaneously observed within neuronal ensembles in auditory cortex in response to clear, conversation, and compressed speech exemplars. Results show that auditory cortical neurons are adept at simultaneously resolving multiple temporal features in extended speech sentences using discrete coding mechanisms.
Affiliation(s)
- Daniel A Abrams, Auditory Neuroscience Laboratory, The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL, 60208, USA
- Trent Nicol, Auditory Neuroscience Laboratory, The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL, 60208, USA
- Travis White-Schwoch, Auditory Neuroscience Laboratory, The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL, 60208, USA
- Steven Zecker, Auditory Neuroscience Laboratory, The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL, 60208, USA
- Nina Kraus, Auditory Neuroscience Laboratory, The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL, 60208, USA; Departments of Neurobiology and Physiology, Northwestern University, 2240 Campus Drive, Evanston, IL, 60208, USA; Department of Otolaryngology, Northwestern University, 2240 Campus Drive, Evanston, IL, 60208, USA
38. The Janus Face of Auditory Learning: How Life in Sound Shapes Everyday Communication. In: The Frequency-Following Response, 2017. [DOI: 10.1007/978-3-319-47944-6_6]
39. Crosslinguistic Intelligibility of Russian and German Speech in Noisy Environment. Journal of Electrical and Computer Engineering 2017. [DOI: 10.1155/2017/1831856]
Abstract
This paper discusses the results of a pilot experimental study of speech recognition and perception of the semantic content of utterances in a noisy environment. The experiment included perceptual-auditory analysis of words and phrases in Russian and German (in comparison) in the same noisy environment: pink and white noise at various signal-to-noise ratios. The statistical analysis showed that speech intelligibility and perception in noise are influenced not only by noise type and signal-to-noise ratio, but also by linguistic and extralinguistic factors, such as the redundancy of a particular language at various levels of linguistic structure, changes in the speaker's acoustic characteristics when switching from one language to another, the speaker's and listener's proficiency in a specific language, and the acoustic characteristics of the speaker's voice.
40. Thompson EC, Woodruff Carr K, White-Schwoch T, Otto-Meyer S, Kraus N. Individual differences in speech-in-noise perception parallel neural speech processing and attention in preschoolers. Hear Res 2016; 344:148-157. [PMID: 27864051] [DOI: 10.1016/j.heares.2016.11.007]
Abstract
From bustling classrooms to unruly lunchrooms, school settings are noisy. To learn effectively in the unwelcome company of numerous distractions, children must clearly perceive speech in noise. In older children and adults, speech-in-noise perception is supported by sensory and cognitive processes, but the correlates underlying this critical listening skill in young children (3-5 year olds) remain undetermined. Employing a longitudinal design (two evaluations separated by ∼12 months), we followed a cohort of 59 preschoolers, ages 3.0-4.9, assessing word-in-noise perception, cognitive abilities (intelligence, short-term memory, attention), and neural responses to speech. Results reveal changes in word-in-noise perception parallel changes in processing of the fundamental frequency (F0), an acoustic cue known for playing a role central to speaker identification and auditory scene analysis. Four unique developmental trajectories (speech-in-noise perception groups) confirm this relationship, in that improvements and declines in word-in-noise perception couple with enhancements and diminishments of F0 encoding, respectively. Improvements in word-in-noise perception also pair with gains in attention. Word-in-noise perception does not relate to strength of neural harmonic representation or short-term memory. These findings reinforce previously-reported roles of F0 and attention in hearing speech in noise in older children and adults, and extend this relationship to preschool children.
Affiliation(s)
- Elaine C Thompson, Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL 60208, USA; Department of Communication Sciences, Northwestern University, Evanston, IL 60208, USA
- Kali Woodruff Carr, Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL 60208, USA; Department of Communication Sciences, Northwestern University, Evanston, IL 60208, USA
- Travis White-Schwoch, Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL 60208, USA; Department of Communication Sciences, Northwestern University, Evanston, IL 60208, USA
- Sebastian Otto-Meyer, Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL 60208, USA; Department of Communication Sciences, Northwestern University, Evanston, IL 60208, USA
- Nina Kraus, Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL 60208, USA; Department of Communication Sciences, Northwestern University, Evanston, IL 60208, USA; Institute for Neuroscience, Northwestern University, Evanston, IL 60208, USA; Department of Neurobiology & Physiology, Northwestern University, Evanston, IL 60208, USA; Department of Otolaryngology, Northwestern University, Evanston, IL 60208, USA
41
Koerner TK, Zhang Y, Nelson PB, Wang B, Zou H. Neural indices of phonemic discrimination and sentence-level speech intelligibility in quiet and noise: A mismatch negativity study. Hear Res 2016; 339:40-9. [PMID: 27267705 DOI: 10.1016/j.heares.2016.06.001] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/27/2016] [Revised: 05/16/2016] [Accepted: 06/02/2016] [Indexed: 11/17/2022]
Abstract
Successful speech communication requires the extraction of important acoustic cues from irrelevant background noise. In order to better understand this process, this study examined the effects of background noise on mismatch negativity (MMN) latency, amplitude, and spectral power measures, as well as on behavioral speech intelligibility tasks. Auditory event-related potentials (AERPs) were obtained from 15 normal-hearing participants to determine whether pre-attentive MMN measures recorded in response to a consonant change (from /ba/ to /da/) and a vowel change (from /ba/ to /bu/) in a double-oddball paradigm can predict sentence-level speech perception. The results showed that background noise increased MMN latencies and decreased MMN amplitudes, with a reduction in theta-band power. Differential noise-induced effects were observed for the pre-attentive processing of consonant and vowel changes due to different degrees of signal degradation by noise. Linear mixed-effects models further revealed significant correlations between the MMN measures and speech intelligibility scores across conditions and stimuli. These results confirm the utility of the MMN as an objective neural marker for understanding noise-induced variations, as well as individual differences, in speech perception, which has important implications for potential clinical applications.
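The MMN itself is the deviant-minus-standard difference wave; its amplitude and latency are read from the most negative point within a post-stimulus window. A schematic sketch with simulated waveforms (the window bounds here are illustrative, not the study's analysis window):

```python
import numpy as np

def mmn_measures(standard_erp, deviant_erp, fs, window=(0.1, 0.25)):
    """Peak MMN amplitude and latency from the difference wave."""
    diff = deviant_erp - standard_erp          # deviant minus standard
    t = np.arange(len(diff)) / fs
    in_win = (t >= window[0]) & (t <= window[1])
    peak = np.flatnonzero(in_win)[np.argmin(diff[in_win])]
    return diff[peak], t[peak]                 # amplitude, latency (s)

# Toy example: a 2-unit negativity centered at 150 ms in the deviant response
fs = 1000
t = np.arange(500) / fs
standard = np.zeros_like(t)
deviant = -2.0 * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))
mmn_amp, mmn_lat = mmn_measures(standard, deviant, fs)
```

Noise-induced MMN changes of the kind the paper reports would appear here as a smaller (less negative) `mmn_amp` and a later `mmn_lat`.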
Affiliation(s)
- Tess K Koerner
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN 55455, USA
- Yang Zhang
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN 55455, USA; Center for Neurobehavioral Development, University of Minnesota, Minneapolis, MN 55455, USA; Center for Applied Translational Sensory Science, University of Minnesota, Minneapolis, MN 55455, USA.
- Peggy B Nelson
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN 55455, USA; Center for Applied Translational Sensory Science, University of Minnesota, Minneapolis, MN 55455, USA
- Boxiang Wang
- School of Statistics, University of Minnesota, Minneapolis, MN 55455, USA
- Hui Zou
- School of Statistics, University of Minnesota, Minneapolis, MN 55455, USA
42
Auditory training program for Arabic-speaking children with auditory figure-ground deficits. Int J Pediatr Otorhinolaryngol 2016; 83:160-7. [PMID: 26968071 DOI: 10.1016/j.ijporl.2016.02.003] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/20/2015] [Revised: 02/01/2016] [Accepted: 02/04/2016] [Indexed: 10/22/2022]
Abstract
OBJECTIVE Listening to speech in noise poses a great challenge for school children with auditory processing disorders, mainly those with a deficit in auditory figure-ground (AFG) ability. These children are candidates for auditory training programs targeting AFG, such as noise-desensitization programs. This work aimed to develop new training material in the Arabic language targeting this ability. METHODS A semi-formal noise-desensitization training program was developed and standardized on normal children in a pilot study preceding the main one. Seventeen school children with an AFG deficit completed the program over eight weeks and were then reevaluated. RESULTS Paired-sample t-tests revealed significant improvement in the psychophysical and electrophysiological results of all trained children after the training period. The electrophysiological signal-to-noise-ratio threshold decreased from -5.3 dB to -11.3 dB after training. CONCLUSION The newly developed training material proved effective in managing children with an AFG deficit. The other affected auditory abilities also improved, owing to the program's multi-ability design.
43
Jeng FC, Lin CD, Chou MS, Hollister GR, Sabol JT, Mayhugh GN, Wang TC, Wang CY. Development of Subcortical Pitch Representation in Three-Month-Old Chinese Infants. Percept Mot Skills 2016; 122:123-35. [PMID: 27420311 DOI: 10.1177/0031512516631054] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
This study investigated the development of subcortical pitch processing, as reflected by the scalp-recorded frequency-following response, during early infancy. Thirteen Chinese infants who were born and raised in Mandarin-speaking households were recruited for this study. Through a prospective, longitudinal study design, infants were tested twice: at 1-3 days after birth and at three months of age. A set of four contrastive Mandarin pitch contours was used to elicit frequency-following responses. Frequency Error and Pitch Strength were derived to represent the accuracy and magnitude of the elicited responses. Paired-samples t tests demonstrated a significant decrease in Frequency Error and a significant increase in Pitch Strength at three months of age compared to 1-3 days after birth. These results trace the developmental trajectory of subcortical pitch processing during the first three months of life.
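Frequency Error and Pitch Strength are pitch-tracking metrics: how far a response's estimated pitch deviates from the stimulus contour, and how periodic the response is. One generic way to obtain both for a single analysis frame is short-lag autocorrelation — a sketch of that operationalization, not necessarily the authors' exact procedure:

```python
import numpy as np

def frame_pitch(frame, fs, fmin=80.0, fmax=400.0):
    """Autocorrelation pitch estimate (Hz) and a 0-1 'pitch strength'
    (normalized autocorrelation at the best lag) for one analysis frame."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac = ac / ac[0]                      # lag-0 normalized to 1
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi + 1])  # best lag in the pitch search range
    return fs / lag, ac[lag]

# 50-ms frame of a 120-Hz tone: the estimate should land near 120 Hz
fs = 8000
frame = np.sin(2 * np.pi * 120 * np.arange(int(0.05 * fs)) / fs)
f_est, strength = frame_pitch(frame, fs)
freq_error = abs(f_est - 120.0)          # per-frame Frequency Error analogue
```

Across a full pitch contour, Frequency Error would be the mean of `freq_error` over sliding frames, compared against the stimulus F0 track.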
Affiliation(s)
- Chia-Der Lin
- Department of Otolaryngology-HNS, China Medical University Hospital, Taiwan; School of Medicine, China Medical University, Taiwan
- Meng-Shih Chou
- Department of Otolaryngology-HNS, China Medical University Hospital, Taiwan; School of Medicine, China Medical University, Taiwan
- John T Sabol
- Communication Sciences and Disorders, Ohio University, USA
- Tang-Chuan Wang
- Department of Otolaryngology-HNS, China Medical University Hospital, Taiwan; School of Medicine, China Medical University, Taiwan
- Ching-Yuan Wang
- Department of Otolaryngology-HNS, China Medical University Hospital, Taiwan; School of Medicine, China Medical University, Taiwan
44
Auditory Processing Disorder: Biological Basis and Treatment Efficacy. TRANSLATIONAL RESEARCH IN AUDIOLOGY, NEUROTOLOGY, AND THE HEARING SCIENCES 2016. [DOI: 10.1007/978-3-319-40848-4_3] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
45
Jeng FC, Lin CD, Sabol JT, Hollister GR, Chou MS, Chen CH, Kenny JE, Tsou YA. Pitch perception and frequency-following responses elicited by lexical-tone chimeras. Int J Audiol 2015; 55:53-63. [PMID: 26305289 DOI: 10.3109/14992027.2015.1072774] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
OBJECTIVE Previous research has shown the usefulness of auditory chimeras for assessing a listener's perception of the envelope and fine structure of an acoustic stimulus. However, research comparing and contrasting behavioral and electrophysiological responses to this stimulus type is scarce. DESIGN Two sets of chimeric stimuli were constructed by interchanging the envelopes and fine structures of the rising /yi(2)/ and falling /yi(4)/ Mandarin pitch contours, filtered through 1, 2, 4, 8, 16, 32, and 64 frequency banks. Behavioral pitch-perception tasks were administered through a two-alternative forced-choice paradigm. Electrophysiological responses were measured through scalp-recorded frequency-following responses (FFRs) to the lexical-tone chimeras. STUDY SAMPLE Twenty American and twenty Chinese adults were recruited. RESULTS A two-way analysis of variance showed significance (p < 0.05) within and across the filter-bank and language-background factors for the behavioral measurements, while the frequency-following responses demonstrated significance only across the filter banks. CONCLUSIONS The perceptual importance of envelope cues increases starting from 16 filter banks, while FFR accuracy and magnitude decrease with an increasing number of filter banks. These results can be useful in assessing experience-dependent neuroplasticity and in designing speech-processing strategies for cochlear-implant users who speak tonal or non-tonal languages around the globe.
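An auditory chimera pairs the Hilbert envelope of one signal with the temporal fine structure of another within each filter band. A single-band sketch using the FFT-based analytic signal (the study used filterbanks of 1-64 channels; the tones below are just placeholders for the band-limited signals):

```python
import numpy as np

def analytic(x):
    """Analytic signal via the frequency-domain (Hilbert) method."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0                  # keep the Nyquist bin for even n
    return np.fft.ifft(np.fft.fft(x) * h)

def chimera(env_src, fine_src):
    """Envelope of env_src riding on the fine structure of fine_src."""
    env = np.abs(analytic(env_src))              # Hilbert envelope
    fine = np.cos(np.angle(analytic(fine_src)))  # cosine of instantaneous phase
    return env * fine

# Constant-envelope demo: a 2x-amplitude envelope imposed on a 100-Hz carrier
fs = 1000
t = np.arange(fs) / fs
y = chimera(2.0 * np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 100 * t))
```

For the multi-band stimuli in the study, each signal would first be split into bands, `chimera` applied per band, and the bands summed.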
Affiliation(s)
- Fuh-Cherng Jeng
- Communication Sciences and Disorders, Ohio University, Athens, USA
- Chia-Der Lin
- Department of Otolaryngology-HNS, China Medical University Hospital, Taichung, Taiwan
- John T Sabol
- Communication Sciences and Disorders, Ohio University, Athens, USA
- Grant R Hollister
- Communication Sciences and Disorders, Ohio University, Athens, USA
- Meng-Shih Chou
- Department of Otolaryngology-HNS, China Medical University Hospital, Taichung, Taiwan
- Ching-Hua Chen
- Department of Otolaryngology-HNS, China Medical University Hospital, Taichung, Taiwan
- Jessica E Kenny
- Communication Sciences and Disorders, Ohio University, Athens, USA
- Yung-An Tsou
- Department of Otolaryngology-HNS, China Medical University Hospital, Taichung, Taiwan
46
Kumar P, Singh NK. BioMARK as electrophysiological tool for assessing children at risk for (central) auditory processing disorders without reading deficits. Hear Res 2015; 324:54-8. [DOI: 10.1016/j.heares.2015.03.001] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/03/2014] [Revised: 02/10/2015] [Accepted: 03/01/2015] [Indexed: 10/23/2022]
47
Nuttall HE, Moore DR, Barry JG, Krumbholz K, de Boer J. The influence of cochlear spectral processing on the timing and amplitude of the speech-evoked auditory brain stem response. J Neurophysiol 2015; 113:3683-91. [PMID: 25787954 DOI: 10.1152/jn.00548.2014] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2014] [Accepted: 03/12/2015] [Indexed: 12/16/2022] Open
Abstract
The speech-evoked auditory brain stem response (speech ABR) is widely considered to provide an index of the quality of neural temporal encoding in the central auditory pathway. The aim of the present study was to evaluate the extent to which the speech ABR is shaped by spectral processing in the cochlea. High-pass noise masking was used to record speech ABRs from delimited octave-wide frequency bands between 0.5 and 8 kHz in normal-hearing young adults. The latency of the frequency-delimited responses decreased from the lowest to the highest frequency band by up to 3.6 ms. The observed frequency-latency function was compatible with model predictions based on wave V of the click ABR. The frequency-delimited speech ABR amplitude was largest in the 2- to 4-kHz frequency band and decreased toward both higher and lower frequency bands despite the predominance of low-frequency energy in the speech stimulus. We argue that the frequency dependence of speech ABR latency and amplitude results from the decrease in cochlear filter width with decreasing frequency. The results suggest that the amplitude and latency of the speech ABR may reflect interindividual differences in cochlear, as well as central, processing. The high-pass noise-masking technique provides a useful tool for differentiating between peripheral and central effects on the speech ABR. It can be used for further elucidating the neural basis of the perceptual speech deficits that have been associated with individual differences in speech ABR characteristics.
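The high-pass noise-masking logic is a subtraction: a masker cut off at f_hi masks cochlear regions above f_hi, so subtracting the response recorded with a lower cutoff f_lo isolates the contribution of the f_lo-f_hi band. A schematic sketch with synthetic waveforms (waveform shapes and the latency search window are illustrative, not the study's parameters):

```python
import numpy as np

def derived_band(resp_cutoff_hi, resp_cutoff_lo):
    """Derived-band response for the region between two masker cutoffs."""
    return resp_cutoff_hi - resp_cutoff_lo

def peak_latency(resp, fs, window=(0.004, 0.012)):
    """Latency (s) of the largest positive peak within a search window."""
    t = np.arange(len(resp)) / fs
    in_win = (t >= window[0]) & (t <= window[1])
    return t[np.flatnonzero(in_win)[np.argmax(resp[in_win])]]

# Synthetic demo: the higher cutoff admits an extra, later-peaking component
fs = 20000
t = np.arange(int(0.02 * fs)) / fs
low_band = np.exp(-((t - 0.006) ** 2) / (2 * 0.0005 ** 2))  # peaks at 6 ms
extra = np.exp(-((t - 0.008) ** 2) / (2 * 0.0005 ** 2))     # peaks at 8 ms
band = derived_band(low_band + extra, low_band)             # isolates 'extra'
lat = peak_latency(band, fs)
```

Repeating this across octave-spaced cutoffs yields the frequency-latency function the study reports (here latency increases, consistent with lower-frequency bands responding later).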
Affiliation(s)
- Helen E Nuttall
- MRC Institute of Hearing Research, University Park, Nottingham, United Kingdom; Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
- David R Moore
- MRC Institute of Hearing Research, University Park, Nottingham, United Kingdom; Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio
- Johanna G Barry
- MRC Institute of Hearing Research, University Park, Nottingham, United Kingdom
- Katrin Krumbholz
- MRC Institute of Hearing Research, University Park, Nottingham, United Kingdom
- Jessica de Boer
- MRC Institute of Hearing Research, University Park, Nottingham, United Kingdom
48
Bernard S, Proust J, Clément F. The medium helps the message: Early sensitivity to auditory fluency in children's endorsement of statements. Front Psychol 2014; 5:1412. [PMID: 25538662 PMCID: PMC4255489 DOI: 10.3389/fpsyg.2014.01412] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2014] [Accepted: 11/18/2014] [Indexed: 11/13/2022] Open
Abstract
Recently, a growing number of studies have investigated the cues used by children to selectively accept testimony. In parallel, several studies with adults have shown that the fluency with which information is provided influences message evaluation: adults evaluate fluent information as more credible than dysfluent information. It is therefore plausible that the fluency of a message could also influence children's endorsement of statements. Three experiments were designed to test this hypothesis with 3- to 5-year-olds, in which the auditory fluency of a message was manipulated by adding different levels of noise to recorded statements. The results show that 4- and 5-year-old children, but not 3-year-olds, are more likely to endorse a fluent statement than a dysfluent one. The present study constitutes a first attempt to show that fluency, i.e., ease of processing, is recruited as a cue to guide epistemic decisions in children. An interpretation of the age difference based on the way cues are processed by younger children is suggested.
Affiliation(s)
- Joëlle Proust
- Institut Jean Nicod, École Normale Supérieure, Paris, France
49
Ghannoum MT, Shalaby AA, Dabbous AO, Abd-El-Raouf ER, Abd-El-Hady HS. Central auditory processing functions in learning disabled children assessed by behavioural tests. HEARING, BALANCE AND COMMUNICATION 2014; 12:143-154. [DOI: 10.3109/21695717.2014.938908] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
50
Fujihira H, Shiraishi K. Correlations between word intelligibility under reverberation and speech auditory brainstem responses in elderly listeners. Clin Neurophysiol 2014; 126:96-102. [PMID: 24906808 DOI: 10.1016/j.clinph.2014.05.001] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2013] [Revised: 04/07/2014] [Accepted: 05/05/2014] [Indexed: 11/20/2022]
Abstract
OBJECTIVE To investigate the relationship between speech auditory brainstem responses (speech ABRs) and word intelligibility under reverberation in elderly adults. METHODS Word intelligibility for words under four reverberation times (RTs) of 0, 0.5, 1.0, and 1.5 s, and speech ABRs to the speech syllable /da/, were obtained from 30 elderly listeners. Root mean square (RMS) amplitudes and discrete Fourier transform (DFT) amplitudes were calculated for the ADD and SUB responses of the speech ABRs. RESULTS No significant correlations were found between the word intelligibility scores under reverberation and the ADD response components. However, for the SUB responses, the DFT amplitudes associated with H4-SUB, H5-SUB, H8-SUB, H9-SUB, and H10-SUB correlated significantly with the intelligibility scores for words under reverberation. After Bonferroni correction, the correlations between the DFT amplitudes for H5-SUB and the intelligibility scores for words with RTs of 0.5 and 1.5 s remained significant. CONCLUSIONS Word intelligibility under reverberation in elderly listeners is related to their ability to encode the temporal fine structure of speech. SIGNIFICANCE The results expand knowledge about subcortical responses of elderly listeners in daily-life listening situations. The SUB responses of the speech ABR could be useful as an objective indicator to predict word intelligibility under reverberation.
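The ADD/SUB decomposition comes from recording responses to opposite-polarity stimuli: adding the two emphasizes envelope-related activity, subtracting emphasizes spectral fine structure, and harmonic amplitudes (H1, H2, ...) are then read off the DFT. A minimal sketch with synthetic components (stimulus and recording parameters here are placeholders, not the authors'):

```python
import numpy as np

def add_sub(resp_pos, resp_neg):
    """ADD/SUB decomposition of responses to opposite-polarity stimuli."""
    return (resp_pos + resp_neg) / 2.0, (resp_pos - resp_neg) / 2.0

def harmonic_amp(x, fs, f0, k):
    """Single-sided DFT amplitude at the k-th harmonic of f0."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec = 2.0 * np.abs(np.fft.rfft(x)) / n
    return spec[np.argmin(np.abs(freqs - k * f0))]

# Synthetic responses: an envelope component at F0 (100 Hz) keeps its sign
# across stimulus polarity, while a fine-structure component at H5 flips.
fs, f0 = 2000, 100.0
t = np.arange(fs) / fs
env_part = np.sin(2 * np.pi * f0 * t)
fine_part = np.sin(2 * np.pi * 5 * f0 * t)
add, sub = add_sub(env_part + fine_part, env_part - fine_part)
h1_add = harmonic_amp(add, fs, f0, 1)   # envelope energy lands in ADD
h5_sub = harmonic_amp(sub, fs, f0, 5)   # fine-structure energy lands in SUB
```

This is why the paper's H4-H10 SUB amplitudes index temporal-fine-structure encoding: polarity-inverting components cancel in ADD and survive in SUB.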
Affiliation(s)
- H Fujihira
- Department of Human Science, Graduate School of Design, Kyushu University, 4-9-1 Shiobaru, Minami-ku, Fukuoka 815-8540, Japan.
- K Shiraishi
- Department of Communication Design Science, Faculty of Design, Kyushu University, 4-9-1 Shiobaru, Minami-ku, Fukuoka 815-8540, Japan