1. Suite L, Freiwirth G, Babel M. Receptive vocabulary predicts multilinguals' recognition skills in adverse listening conditions. J Acoust Soc Am. 2023;154:3916-3930. PMID: 38126803. DOI: 10.1121/10.0023960.
Abstract
Adverse listening conditions are known to affect bilingual listeners' intelligibility scores more than those of monolingual listeners. To advance theoretical understanding of the mechanisms underpinning bilinguals' challenges in adverse listening conditions, vocabulary size and language entropy are compared as predictors in a sentence transcription task with a heterogeneous multilingual population representative of a speech community. Adverse listening was induced through noise type, bandwidth manipulations, and sentences varying in their semantic predictability. Overall, the results generally confirm anticipated patterns with respect to sentence type, noise masking, and bandwidth. Listeners show better comprehension of semantically coherent utterances without masking and with a full spectrum. Crucially, listeners with larger receptive vocabularies and lower language entropy, a measure of the predictability of one's language use, showed improved performance in adverse listening conditions. Vocabulary size had a substantially larger effect size, indicating that vocabulary size has more impact on performance in adverse listening conditions than bilingual language use. These results suggest that the mechanism behind the bilingual disadvantage in adverse listening conditions may be rooted in bilinguals' smaller language-specific receptive vocabularies, offering a harmonious explanation for challenges in adverse listening conditions experienced by monolinguals and multilinguals.
Affiliation(s)
- Lexia Suite: Department of Linguistics, University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada
- Galia Freiwirth: Department of Linguistics, University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada
- Molly Babel: Department of Linguistics, University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada
2. Chen Y, Guan L, Chen J, Han K, Yu Q, Zhou J, Wang X, Ma Y, Ji X, Zhao Z, Shen Q, Wang A, Wang M, Li J, Yu J, Zhang Y, Xu S, Liu J, Lu W, Ye B, Fang Y, Hu H, Shi H, Xiang M, Li X, Li Y, Wu H. Hearing intervention for decreasing risk of developing dementia in elders with mild cognitive impairment: study protocol of a multicenter randomized controlled trial for Chinese Hearing Solution for Improvement of Cognition in Elders (CHOICE). Trials. 2023;24:767. PMID: 38017543. PMCID: PMC10685713. DOI: 10.1186/s13063-023-07813-z.
Abstract
BACKGROUND Age-related hearing loss (ARHL) refers to the bilateral, symmetrical, sensorineural hearing loss that commonly occurs in elderly individuals. Several studies have suggested a higher risk of dementia among patients diagnosed with ARHL. Although the precise causal association between ARHL and cognitive decline remains unclear, ARHL has been recognized as one of the most significant modifiable factors for potentially reducing the risk of developing dementia. Mild cognitive impairment (MCI) typically serves as the initial stage in the transition from normal cognitive function to dementia. Consequently, the objective of our randomized controlled trial (RCT) is to further investigate whether the use of hearing aids can enhance cognitive function in older adults diagnosed with ARHL and MCI. METHODS AND DESIGN This study is a parallel-arm, randomized controlled trial conducted at multiple centers in Shanghai, China. We aim to enlist a total of 688 older adults (age ≥ 60) diagnosed with moderate-to-severe ARHL and MCI from our four research centers. Participants will be assigned randomly to either the hearing aid fitting group or the health education group using block randomization with varying block sizes. Audiometry, cognitive function assessments, and other relevant data will be collected at baseline, as well as at 6, 12, and 24 months post-intervention by audiologists and trained researchers. The primary outcome of our study is the rate of progression to dementia in the two groups of participants. Additionally, various evaluations will be conducted to measure hearing improvement and changes in cognitive function. Apart from the final study results, we also plan to conduct an interim analysis using data from the 12-month follow-up. DISCUSSION In recent years, there has been a notable lack of randomized controlled trials (RCTs) investigating the possible causal relationship between hearing aid fitting and the improvement of cognitive function. Our findings may demonstrate that hearing rehabilitation can be a valuable tool in managing ARHL and preventing cognitive decline, which will contribute to the development of a comprehensive framework for the prevention and control of cognitive decline. TRIAL REGISTRATION Chinese Clinical Trial Registry chictr.org.cn ChiCTR2000036139. Registered on 21 August 2020.
Affiliation(s)
- Ying Chen, Lei Guan, Jie Chen, Kun Han, Qiongfei Yu, Jin Zhou, Xue Wang, Yunqian Ma, Xiangyu Ji, Zhonglu Zhao, Qiyue Shen, Anxian Wang, Mengping Wang, Jin Li, Jiali Yu, Yiwen Zhang, Sijia Xu, Jie Liu, Yun Li, Hao Wu: Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; and Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases (14DZ2260300), Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Wen Lu, Haibo Shi: Department of Otolaryngology-Head and Neck Surgery, Shanghai Sixth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Bin Ye, Haixia Hu, Mingliang Xiang: Department of Otolaryngology & Head and Neck Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yuan Fang, Xia Li: Department of Geriatric Psychiatry, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
3. Khayr R, Karawani H, Banai K. Implicit learning and individual differences in speech recognition: an exploratory study. Front Psychol. 2023;14:1238823. PMID: 37744578. PMCID: PMC10513179. DOI: 10.3389/fpsyg.2023.1238823.
Abstract
Individual differences in speech recognition in challenging listening environments are pronounced. Studies suggest that implicit learning is one variable that may contribute to this variability. Here, we explored the unique contributions of three indices of implicit learning to individual differences in the recognition of challenging speech. To this end, we assessed three indices of implicit learning (perceptual, statistical, and incidental), three types of challenging speech (natural fast, vocoded, and speech in noise), and cognitive factors associated with speech recognition (vocabulary, working memory, and attention) in a group of 51 young adults. Speech recognition was modeled as a function of the cognitive factors and learning, and the unique contribution of each index of learning was statistically isolated. The three indices of learning were uncorrelated. Whereas all indices of learning had unique contributions to the recognition of natural-fast speech, only statistical learning had a unique contribution to the recognition of speech in noise and vocoded speech. These data suggest that although implicit learning may contribute to the recognition of challenging speech, the contribution may depend on the type of speech challenge and on the learning task.
Affiliation(s)
- Ranin Khayr: Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
4. Lansford KL, Barrett TS, Borrie SA. Cognitive Predictors of Perception and Adaptation to Dysarthric Speech in Young Adult Listeners. J Speech Lang Hear Res. 2023;66:30-47. PMID: 36480697. PMCID: PMC10023189. DOI: 10.1044/2022_jslhr-22-00391.
Abstract
PURPOSE Although recruitment of cognitive-linguistic resources to support dysarthric speech perception and adaptation is presumed by theoretical accounts of effortful listening and supported by cross-disciplinary empirical findings, prospective relationships have received limited attention in the disordered speech literature. This study aimed to examine the predictive relationships between cognitive-linguistic parameters and intelligibility outcomes associated with familiarization with dysarthric speech in young adult listeners. METHOD A cohort of 156 listener participants between the ages of 18 and 50 years completed a three-phase perceptual training protocol (pretest, training, and posttest) with one of three speakers with dysarthria. Additionally, listeners completed the National Institutes of Health Toolbox Cognition Battery to obtain measures of the following cognitive-linguistic constructs: working memory, inhibitory control of attention, cognitive flexibility, processing speed, and vocabulary knowledge. RESULTS Elastic net regression models revealed that select cognitive-linguistic measures and their two-way interactions predicted both initial intelligibility and intelligibility improvement of dysarthric speech. While some consistency across models was shown, unique constellations of select cognitive factors and their interactions predicted initial intelligibility and intelligibility improvement of the three different speakers with dysarthria. CONCLUSIONS Current findings extend empirical support for theoretical models of speech perception in adverse listening conditions to dysarthric speech signals. Although predictive relationships were complex, vocabulary knowledge, working memory, and cognitive flexibility often emerged as important variables across the models.
Affiliation(s)
- Kaitlin L. Lansford: School of Communication Science & Disorders, Florida State University, Tallahassee
- Stephanie A. Borrie: Department of Communicative Disorders and Deaf Education, Utah State University, Logan
5. Wang H, Chen R, Yan Y, McGettigan C, Rosen S, Adank P. Perceptual Learning of Noise-Vocoded Speech Under Divided Attention. Trends Hear. 2023;27:23312165231192297. PMID: 37547940. PMCID: PMC10408355. DOI: 10.1177/23312165231192297.
Abstract
Speech perception performance for degraded speech can improve with practice or exposure. Such perceptual learning is thought to be reliant on attention, and theoretical accounts like the predictive coding framework suggest a key role for attention in supporting learning. However, it is unclear whether speech perceptual learning requires undivided attention. We evaluated the role of divided attention in speech perceptual learning in two online experiments (N = 336). Experiment 1 tested the reliance of perceptual learning on undivided attention. Participants completed a speech recognition task where they repeated forty noise-vocoded sentences in a between-group design. Participants performed the speech task alone or concurrently with a domain-general visual task (dual task) at one of three difficulty levels. We observed perceptual learning under divided attention for all four groups, moderated by dual-task difficulty. Listeners in the easy and intermediate visual conditions improved as much as the single-task group. Those who completed the most challenging visual task showed faster learning and achieved similar ending performance compared to the single-task group. Experiment 2 tested whether learning relies on domain-specific or domain-general processes. Participants completed a single speech task or performed this task together with a secondary task aiming to recruit domain-specific (lexical or phonological) or domain-general (visual) processes. All secondary task conditions produced patterns and amounts of learning comparable to the single speech task. Our results demonstrate that the impact of divided attention on perceptual learning is not strictly dependent on domain-general or domain-specific processes, and speech perceptual learning persists under divided attention.
Affiliation(s)
- Han Wang, Rongru Chen, Yu Yan, Carolyn McGettigan, Stuart Rosen, Patti Adank: Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
6. Francis AL. Adding noise is a confounded nuisance. J Acoust Soc Am. 2022;152:1375. PMID: 36182286. DOI: 10.1121/10.0013874.
Abstract
A wide variety of research and clinical assessments involve presenting speech stimuli in the presence of some kind of noise. Here, I selectively review two theoretical perspectives and discuss ways in which these perspectives may help researchers understand the consequences for listeners of adding noise to a speech signal. I argue that adding noise changes more about the listening task than merely making the signal more difficult to perceive. To fully understand the effects of an added noise on speech perception, we must consider not just how much the noise affects task difficulty, but also how it affects all of the systems involved in understanding speech: increasing message uncertainty, modifying attentional demand, altering affective response, and changing motivation to perform the task.
Affiliation(s)
- Alexander L Francis: Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, Indiana 47907, USA
7. Li MM, Moberly AC, Tamati TN. Factors affecting talker discrimination ability in adult cochlear implant users. J Commun Disord. 2022;99:106255. PMID: 35988314. PMCID: PMC10659049. DOI: 10.1016/j.jcomdis.2022.106255.
Abstract
INTRODUCTION Real-world speech communication involves interacting with many talkers with diverse voices and accents. Many adults with cochlear implants (CIs) demonstrate poor talker discrimination, which may contribute to real-world communication difficulties. However, the factors contributing to talker discrimination ability, and how discrimination ability relates to speech recognition outcomes in adult CI users, are still unknown. The current study investigated talker discrimination ability in adult CI users, and the contributions of age, auditory sensitivity, and neurocognitive skills. In addition, the relation between talker discrimination ability and multiple-talker sentence recognition was explored. METHODS Fourteen post-lingually deaf adult CI users (3 female, 11 male) with ≥1 year of CI use completed a talker discrimination task. Participants listened to two monosyllabic English words, produced by the same talker or by two different talkers, and indicated if the words were produced by the same or different talkers. Nine female and nine male native English talkers were paired, resulting in same- and different-talker pairs as well as same-gender and mixed-gender pairs. Participants also completed measures of spectro-temporal processing, neurocognitive skills, and multiple-talker sentence recognition. RESULTS CI users showed poor same-gender talker discrimination, but relatively good mixed-gender talker discrimination. Older age and weaker neurocognitive skills, in particular inhibitory control, were associated with less accurate mixed-gender talker discrimination. Same-gender discrimination was significantly related to multiple-talker sentence recognition accuracy. CONCLUSION Adult CI users demonstrate overall poor talker discrimination ability. Individual differences in mixed-gender discrimination ability were related to age and neurocognitive skills, suggesting that these factors contribute to the ability to make use of available, degraded talker characteristics. Same-gender talker discrimination was associated with multiple-talker sentence recognition, suggesting that access to subtle talker-specific cues may be important for speech recognition in challenging listening conditions.
Affiliation(s)
- Michael M Li, Aaron C Moberly: The Ohio State University Wexner Medical Center, Department of Otolaryngology - Head & Neck Surgery, Columbus, OH, USA
- Terrin N Tamati: The Ohio State University Wexner Medical Center, Department of Otolaryngology - Head & Neck Surgery, Columbus, OH, USA; University Medical Center Groningen, University of Groningen, Department of Otorhinolaryngology/Head and Neck Surgery, Groningen, the Netherlands
8. Heffner CC, Myers EB, Gracco VL. Impaired perceptual phonetic plasticity in Parkinson's disease. J Acoust Soc Am. 2022;152:511. PMID: 35931533. PMCID: PMC9299957. DOI: 10.1121/10.0012884.
Abstract
Parkinson's disease (PD) is a neurodegenerative condition primarily associated with its motor consequences. Although much of the work within the speech domain has focused on PD's consequences for production, people with PD have been shown to differ from age-matched controls in the perception of emotional prosody, loudness, and speech rate. The current study targeted the effect of PD on perceptual phonetic plasticity, defined as the ability to learn and adjust to novel phonetic input, both in second language and native language contexts. People with PD were compared to age-matched controls (and, for three of the studies, a younger control population) in tasks of explicit non-native speech learning and adaptation to variation in native speech (compressed rate, accent, and the use of timing information within a sentence to parse ambiguities). The participants with PD showed significantly worse performance on the task of compressed rate and used the duration of an ambiguous fricative to segment speech to a lesser degree than age-matched controls, indicating impaired speech perceptual abilities. Exploratory comparisons also showed people with PD who were on medication performed significantly worse than their peers off medication on those two tasks and the task of explicit non-native learning.
Affiliation(s)
- Christopher C Heffner, Emily B Myers: Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, Connecticut 06269, USA
9. Bieber RE, Brodbeck C, Anderson S. Examining the context benefit in older adults: A combined behavioral-electrophysiologic word identification study. Neuropsychologia. 2022;170:108224. PMID: 35346650. DOI: 10.1016/j.neuropsychologia.2022.108224.
Abstract
When listening to degraded speech, listeners can use high-level semantic information to support recognition. The literature contains conflicting findings regarding older listeners' ability to benefit from semantic cues in recognizing speech, relative to younger listeners. Electrophysiologic (EEG) measures of lexical access (N400) often show that semantic context does not facilitate lexical access in older listeners; in contrast, auditory behavioral studies indicate that semantic context improves speech recognition in older listeners as much as or more than in younger listeners. Many behavioral studies of aging and the context benefit have employed signal degradation or alteration, whereas this stimulus manipulation has been absent in the EEG literature, a possible reason for the inconsistencies between studies. Here we compared the context benefit as a function of age and signal type, using EEG combined with behavioral measures. Non-native accent, a common form of signal alteration which many older adults report as a challenge in daily speech recognition, was utilized for testing. The stimuli included English sentences produced by native speakers of English and Spanish, containing target words differing in cloze probability. Listeners performed a word identification task while 32-channel cortical responses were recorded. Results show that older adults' word identification performance was poorer in the low-predictability and non-native talker conditions than that of the younger adults, replicating earlier behavioral findings. However, older adults did not show reductions or delays in the average N400 response as compared to younger listeners, suggesting no age-related reduction in predictive processing capability. Potential sources for discrepancies in the prior literature are discussed.
Affiliation(s)
- Rebecca E Bieber, Samira Anderson: Department of Hearing and Speech Sciences, 0100 Lefrak Hall, University of Maryland College Park, College Park, MD 20740, USA
- Christian Brodbeck: Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
10. Braza MD, Porter HL, Buss E, Calandruccio L, McCreery RW, Leibold LJ. Effects of word familiarity and receptive vocabulary size on speech-in-noise recognition among young adults with normal hearing. PLoS One. 2022;17:e0264581. PMID: 35271608. PMCID: PMC8912124. DOI: 10.1371/journal.pone.0264581.
Abstract
Having a large receptive vocabulary benefits speech-in-noise recognition for young children, though this is not always the case for older children or adults. These observations could indicate that effects of receptive vocabulary size on speech-in-noise recognition differ depending on familiarity of the target words, with effects observed only for more recently acquired and less frequent words. Two experiments were conducted to evaluate effects of vocabulary size on open-set speech-in-noise recognition for adults with normal hearing. Targets were words acquired at 4, 9, 12 and 15 years of age, and they were presented at signal-to-noise ratios (SNRs) of -5 and -7 dB. Percent correct scores tended to fall with increasing age of acquisition (AoA), with the caveat that performance at -7 dB SNR was better for words acquired at 9 years of age than earlier- or later-acquired words. Similar results were obtained whether the AoA of the target words was blocked or mixed across trials. Differences in word duration appear to account for nonmonotonic effects of AoA. For all conditions, a positive correlation was observed between recognition and vocabulary size irrespective of target word AoA, indicating that effects of vocabulary size are not limited to recently acquired words. This dataset does not support differential assessment of AoA, lexical frequency, and other stimulus features known to affect lexical access.
Collapse
Affiliation(s)
- Meredith D. Braza
- Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, United States of America
| | - Heather L. Porter
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, United States of America
| | - Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America
| | - Lauren Calandruccio
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, Ohio, United States of America
| | - Ryan W. McCreery
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, United States of America
| | - Lori J. Leibold
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, United States of America
| |
Collapse
|
11
|
Heffner CC, Fuhrmeister P, Luthra S, Mechtenberg H, Saltzman D, Myers EB. Reliability and validity for perceptual flexibility in speech. BRAIN AND LANGUAGE 2022; 226:105070. [PMID: 35026449 DOI: 10.1016/j.bandl.2021.105070] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Revised: 12/10/2021] [Accepted: 12/31/2021] [Indexed: 06/08/2023]
Abstract
The study of perceptual flexibility in speech depends on a variety of tasks that feature a large degree of variability between participants. Of critical interest is whether measures are consistent within an individual or across stimulus contexts. This is particularly key for individual difference designs that are deployed to examine the neural basis or clinical consequences of perceptual flexibility. In the present set of experiments, we assess the split-half reliability and construct validity of five measures of perceptual flexibility: three of learning in a native language context (e.g., understanding someone with a foreign accent) and two of learning in a non-native context (e.g., learning to categorize non-native speech sounds). We find that most of these tasks show an appreciable level of split-half reliability, although construct validity was sometimes weak. This provides good evidence for reliability for these tasks, while highlighting possible upper limits on expected effect sizes involving each measure.
Collapse
Affiliation(s)
- Christopher C Heffner
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, CT 06269, United States; Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269, United States; Department of Communicative Disorders and Sciences, University at Buffalo, Buffalo, NY 14214, United States.
| | - Pamela Fuhrmeister
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, CT 06269, United States; Department of Linguistics, University of Potsdam, 11476 Potsdam, Germany
| | - Sahil Luthra
- Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269, United States; Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, United States
| | - Hannah Mechtenberg
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, United States
| | - David Saltzman
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, United States
| | - Emily B Myers
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, CT 06269, United States; Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269, United States; Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, United States
| |
Collapse
|
12
|
Cutting Through the Noise: Noise-Induced Cochlear Synaptopathy and Individual Differences in Speech Understanding Among Listeners With Normal Audiograms. Ear Hear 2022; 43:9-22. [PMID: 34751676 PMCID: PMC8712363 DOI: 10.1097/aud.0000000000001147] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
Abstract
Following a conversation in a crowded restaurant or at a lively party poses immense perceptual challenges for some individuals with normal hearing thresholds. A number of studies have investigated whether noise-induced cochlear synaptopathy (CS; damage to the synapses between cochlear hair cells and the auditory nerve following noise exposure that does not permanently elevate hearing thresholds) contributes to this difficulty. A few studies have observed correlations between proxies of noise-induced CS and speech perception in difficult listening conditions, but many have found no evidence of a relationship. To understand these mixed results, we reviewed previous studies that have examined noise-induced CS and performance on speech perception tasks in adverse listening conditions in adults with normal or near-normal hearing thresholds. Our review suggests that superficially similar speech perception paradigms used in previous investigations actually placed very different demands on sensory, perceptual, and cognitive processing. Speech perception tests that use low signal-to-noise ratios and maximize the importance of fine sensory details, specifically by using test stimuli for which lexical, syntactic, and semantic cues do not contribute to performance, are more likely to show a relationship to estimated CS levels. Thus, the current controversy as to whether or not noise-induced CS contributes to individual differences in speech perception under challenging listening conditions may be due in part to the fact that many of the speech perception tasks used in past studies are relatively insensitive to CS-induced deficits.
Collapse
|
13
|
Bieber RE, Gordon-Salant S. Semantic context and stimulus variability independently affect rapid adaptation to non-native English speech in young adults. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2022; 151:242. [PMID: 35104999 PMCID: PMC8769767 DOI: 10.1121/10.0009170] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/16/2021] [Revised: 11/26/2021] [Accepted: 12/07/2021] [Indexed: 06/14/2023]
Abstract
When speech is degraded or challenging to recognize, young adult listeners with normal hearing are able to quickly adapt, improving their recognition of the speech over a short period of time. This rapid adaptation is robust, but the factors influencing rate, magnitude, and generalization of improvement have not been fully described. Two factors of interest are lexico-semantic information and talker and accent variability; lexico-semantic information promotes perceptual learning for acoustically ambiguous speech, while talker and accent variability are beneficial for generalization of learning. In the present study, rate and magnitude of adaptation were measured for speech varying in level of semantic context, and in the type and number of talkers. Generalization of learning to an unfamiliar talker was also assessed. Results indicate that rate of rapid adaptation was slowed for semantically anomalous sentences, as compared to semantically intact or topic-grouped sentences; however, generalization was seen in the anomalous conditions. Magnitude of adaptation was greater for non-native as compared to native talker conditions, with no difference between single and multiple non-native talker conditions. These findings indicate that the previously documented benefit of lexical information in supporting rapid adaptation is not enhanced by the addition of supra-sentence context.
Collapse
Affiliation(s)
- Rebecca E Bieber
- Department of Hearing and Speech Sciences, University of Maryland College Park, College Park, Maryland 20742, USA
| | - Sandra Gordon-Salant
- Department of Hearing and Speech Sciences, University of Maryland College Park, College Park, Maryland 20742, USA
| |
Collapse
|
14
|
Heffner CC, Myers EB. Individual Differences in Phonetic Plasticity Across Native and Nonnative Contexts. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:3720-3733. [PMID: 34525309 DOI: 10.1044/2021_jslhr-21-00004] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Purpose Individuals vary in their ability to learn the sound categories of nonnative languages (nonnative phonetic learning) and to adapt to systematic differences, such as accent or talker differences, in the sounds of their native language (native phonetic learning). Difficulties with both native and nonnative learning are well attested in people with speech and language disorders relative to healthy controls, but substantial variability in these skills is also present in the typical population. This study examines whether this individual variability can be organized around a common ability that we label "phonetic plasticity." Method A group of healthy young adult participants (N = 80), who attested they had no history of speech, language, neurological, or hearing deficits, completed two tasks of nonnative phonetic category learning, two tasks of learning to cope with variation in their native language, and seven tasks of other cognitive functions, distributed across two sessions. Performance on these 11 tasks was compared, and exploratory factor analysis was used to assess the extent to which performance on each task was related to the others. Results Performance on both tasks of native learning and an explicit task of nonnative learning patterned together, suggesting that native and nonnative phonetic learning tasks rely on a shared underlying capacity, which is termed "phonetic plasticity." Phonetic plasticity was also associated with vocabulary, comprehension of words in background noise, and, more weakly, working memory. Conclusions Nonnative sound learning and native language speech perception may rely on shared phonetic plasticity. The results suggest that good learners of native language phonetic variation are also good learners of nonnative phonetic contrasts. Supplemental Material https://doi.org/10.23641/asha.16606778.
Collapse
Affiliation(s)
- Christopher C Heffner
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs
- Institute for Brain and Cognitive Sciences, University of Connecticut, Storrs
- Department of Communicative Disorders and Sciences, University at Buffalo, NY
- Center for Cognitive Science, University at Buffalo, NY
| | - Emily B Myers
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs
- Institute for Brain and Cognitive Sciences, University of Connecticut, Storrs
- Department of Psychological Sciences, University of Connecticut, Storrs
| |
Collapse
|
15
|
Banks B, Gowen E, Munro KJ, Adank P. Eye Gaze and Perceptual Adaptation to Audiovisual Degraded Speech. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:3432-3445. [PMID: 34463528 DOI: 10.1044/2021_jslhr-21-00106] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Purpose Visual cues from a speaker's face may benefit perceptual adaptation to degraded speech, but current evidence is limited. We aimed to replicate results from previous studies to establish the extent to which visual speech cues can lead to greater adaptation over time, extending existing results to a real-time adaptation paradigm (i.e., without a separate training period). A second aim was to investigate whether eye gaze patterns toward the speaker's mouth were related to better perception, hypothesizing that listeners who looked more at the speaker's mouth would show greater adaptation. Method A group of listeners (n = 30) was presented with 90 noise-vocoded sentences in audiovisual format, whereas a control group (n = 29) was presented with the audio signal only. Recognition accuracy was measured throughout and eye tracking was used to measure fixations toward the speaker's eyes and mouth in the audiovisual group. Results Previous studies were partially replicated: The audiovisual group had better recognition throughout and adapted slightly more rapidly, but both groups showed an equal amount of improvement overall. Longer fixations on the speaker's mouth in the audiovisual group were related to better overall accuracy. An exploratory analysis further demonstrated that the duration of fixations to the speaker's mouth decreased over time. Conclusions The results suggest that visual cues may not benefit adaptation to degraded speech as much as previously thought. Longer fixations on a speaker's mouth may play a role in successfully decoding visual speech cues; however, this will need to be confirmed in future research to fully understand how patterns of eye gaze are related to audiovisual speech recognition. All materials, data, and code are available at https://osf.io/2wqkf/.
Collapse
Affiliation(s)
- Briony Banks
- Division of Neuroscience and Experimental Psychology, Faculty of Biology, Medicine and Health, The University of Manchester, United Kingdom
| | - Emma Gowen
- Division of Neuroscience and Experimental Psychology, Faculty of Biology, Medicine and Health, The University of Manchester, United Kingdom
| | - Kevin J Munro
- Manchester Centre for Audiology and Deafness, Faculty of Biology, Medicine and Health, The University of Manchester, United Kingdom
- Manchester University NHS Foundation Trust, Manchester Academic Health Science Centre, United Kingdom
| | - Patti Adank
- Speech, Hearing and Phonetic Sciences, University College London, United Kingdom
| |
Collapse
|
16
|
Tamati TN, Moberly AC. Talker Adaptation and Lexical Difficulty Impact Word Recognition in Adults with Cochlear Implants. Audiol Neurootol 2021; 27:260-270. [PMID: 34535583 DOI: 10.1159/000518643] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Accepted: 07/19/2021] [Indexed: 11/19/2022] Open
Abstract
INTRODUCTION Talker-specific adaptation facilitates speech recognition in normal-hearing listeners. This study examined talker adaptation in adult cochlear implant (CI) users. Three hypotheses were tested: (1) high-performing adult CI users show improved word recognition following exposure to a talker ("talker adaptation"), particularly for lexically hard words, (2) individual performance is determined by auditory sensitivity and neurocognitive skills, and (3) individual performance relates to real-world functioning. METHODS Fifteen high-performing, post-lingually deaf adult CI users completed a word recognition task consisting of 6 single-talker blocks (3 female/3 male native English speakers); words were lexically "easy" and "hard." Recognition accuracy was assessed "early" and "late" (first vs. last 10 trials); adaptation was assessed as the difference between late and early accuracy. Participants also completed measures of spectral-temporal processing and neurocognitive skills, as well as real-world measures of multiple-talker sentence recognition and quality of life (QoL). RESULTS CI users showed limited talker adaptation overall, but performance improved for lexically hard words. Stronger spectral-temporal processing and neurocognitive skills were weakly to moderately associated with more accurate word recognition and greater talker adaptation for hard words. Finally, word recognition accuracy for hard words was moderately related to multiple-talker sentence recognition and QoL. CONCLUSION Findings demonstrate a limited talker adaptation benefit for recognition of hard words in adult CI users. Both auditory sensitivity and neurocognitive skills contribute to performance, suggesting additional benefit from adaptation for individuals with stronger skills. Finally, processing differences related to talker adaptation and lexical difficulty may be relevant to real-world functioning.
Collapse
Affiliation(s)
- Terrin N Tamati
- Department of Otolaryngology, Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA; Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| | - Aaron C Moberly
- Department of Otolaryngology, Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
| |
Collapse
|
17
|
Abstract
The study examines school-aged L2 listeners' adaptation to an unfamiliar L2 accent and learner variables predicting such adaptation. Fourth-grade Mandarin L1 learners of English as a foreign language (N = 117) listened to a story twice in one of three accent conditions. In the single-talker condition, the story was produced by an Indian English (IE) speaker. In the multi-talker condition, the story was produced by two IE speakers. In the control condition, the story was produced by a Mandarin-accented speaker. Children's (re)interpretation of IE words/nonwords was assessed by referent selection tests administered before and after the first and the second exposures to the story. Repeated exposure to IE-accented speech forms influenced performance: the participants demonstrated better recognition of IE words across the referent selection tests but worse (re)interpretation of IE nonwords sounding similar to existing lexical items. Exposure to an IE-accented story yielded an additional advantage in word recognition, but the advantage was limited to words heard in the story. Furthermore, children's English phonological awareness, phonological memory, and vocabulary predicted their reinterpretation performance of the accented forms. These results suggest that school-aged L2 listeners with better phono-lexical representations develop better capacity in adapting to an unfamiliar accent of a foreign language by loosening their acceptability criteria for word recognition, but the adaptation does not necessarily entail perceptual tuning to the specific phonological categories of the accent.
Collapse
Affiliation(s)
- Chieh-Fang Hu
- Department of English Instruction, University of Taipei, Taiwan
| |
Collapse
|
18
|
Trotter AS, Banks B, Adank P. The Relevance of the Availability of Visual Speech Cues During Adaptation to Noise-Vocoded Speech. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:2513-2528. [PMID: 34161748 DOI: 10.1044/2021_jslhr-20-00575] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Purpose This study first aimed to establish whether viewing specific parts of the speaker's face (eyes or mouth), compared to viewing the whole face, affected adaptation to distorted noise-vocoded sentences. Second, this study also aimed to replicate results on processing of distorted speech from lab-based experiments in an online setup. Method We monitored recognition accuracy online while participants were listening to noise-vocoded sentences. We first established if participants were able to perceive and adapt to audiovisual four-band noise-vocoded sentences when the entire moving face was visible (AV Full). Four further groups were then tested: a group in which participants viewed the moving lower part of the speaker's face (AV Mouth), a group in which participants viewed only the moving upper part of the face (AV Eyes), a group in which participants could not see the moving lower or upper face (AV Blocked), and a group in which participants saw an image of a still face (AV Still). Results Participants repeated around 40% of the key words correctly and adapted during the experiment, but only when the moving mouth was visible. In contrast, performance was at floor level, and no adaptation took place, in conditions when the moving mouth was occluded. Conclusions The results show the importance of being able to observe relevant visual speech information from the speaker's mouth region, but not the eyes/upper face region, when listening and adapting to distorted sentences online. Second, the results also demonstrated that it is feasible to run speech perception and adaptation studies online, but that not all findings reported for lab studies replicate. Supplemental Material https://doi.org/10.23641/asha.14810523.
Collapse
Affiliation(s)
- Antony S Trotter
- Speech, Hearing and Phonetic Sciences, University College London, United Kingdom
| | - Briony Banks
- Department of Psychology, Lancaster University, United Kingdom
| | - Patti Adank
- Speech, Hearing and Phonetic Sciences, University College London, United Kingdom
| |
Collapse
|
19
|
Bieber RE, Tinnemore AR, Yeni-Komshian G, Gordon-Salant S. Younger and older adults show non-linear, stimulus-dependent performance during early stages of auditory training for non-native English. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 149:4348. [PMID: 34241442 PMCID: PMC8214469 DOI: 10.1121/10.0005279] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Revised: 05/24/2021] [Accepted: 05/25/2021] [Indexed: 06/13/2023]
Abstract
Older adults often report difficulty understanding speech produced by non-native talkers. These listeners can achieve rapid adaptation to non-native speech, but few studies have assessed auditory training protocols to improve non-native speech recognition in older adults. In this study, a word-level training paradigm was employed, targeting improved recognition of Spanish-accented English. Younger and older adults were trained on Spanish-accented monosyllabic word pairs containing four phonemic contrasts (initial s/z, initial f/v, final b/p, final d/t) produced in English by multiple male native Spanish speakers. Listeners completed pre-testing, training, and post-testing over two sessions. Statistical methods, such as growth curve modeling and generalized additive mixed models, were employed to describe the patterns of rapid adaptation and how they varied between listener groups and phonemic contrasts. While the training protocol failed to elicit post-test improvements for recognition of Spanish-accented speech, examination of listeners' performance during the pre-testing period showed patterns of rapid adaptation that differed, depending on the nature of the phonemes to be learned and the listener group. Normal-hearing younger and older adults showed a faster rate of adaptation for non-native stimuli that were more nativelike in their productions, while older adults with hearing impairment did not realize this benefit.
Collapse
Affiliation(s)
- Rebecca E Bieber
- Department of Hearing and Speech Sciences, University of Maryland College Park, College Park, Maryland 20742, USA
| | - Anna R Tinnemore
- Department of Hearing and Speech Sciences, University of Maryland College Park, College Park, Maryland 20742, USA
| | - Grace Yeni-Komshian
- Department of Hearing and Speech Sciences, University of Maryland College Park, College Park, Maryland 20742, USA
| | - Sandra Gordon-Salant
- Department of Hearing and Speech Sciences, University of Maryland College Park, College Park, Maryland 20742, USA
| |
Collapse
|
20
|
Francis AL, Bent T, Schumaker J, Love J, Silbert N. Listener characteristics differentially affect self-reported and physiological measures of effort associated with two challenging listening conditions. Atten Percept Psychophys 2021; 83:1818-1841. [PMID: 33438149 PMCID: PMC8084824 DOI: 10.3758/s13414-020-02195-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/16/2020] [Indexed: 12/14/2022]
Abstract
Listeners vary in their ability to understand speech in adverse conditions. Differences in both cognitive and linguistic capacities play a role, but increasing evidence suggests that such factors may contribute differentially depending on the listening challenge. Here, we used multilevel modeling to evaluate contributions of individual differences in age, hearing thresholds, vocabulary, selective attention, working memory capacity, personality traits, and noise sensitivity to variability in measures of comprehension and listening effort in two listening conditions. A total of 35 participants completed a battery of cognitive and linguistic tests as well as a spoken story comprehension task using (1) native-accented English speech masked by speech-shaped noise and (2) nonnative accented English speech without masking. Masker levels were adjusted individually to ensure each participant would show (close to) equivalent word recognition performance across the two conditions. Dependent measures included comprehension test results, self-rated effort, and electrodermal, cardiovascular, and facial electromyographic measures associated with listening effort. Results showed varied patterns of responsivity across different dependent measures as well as across listening conditions. In particular, results suggested that working memory capacity may play a greater role in the comprehension of nonnative accented speech than noise-masked speech, while hearing acuity and personality may have a stronger influence on physiological responses affected by demands of understanding speech in noise. Furthermore, electrodermal measures may be more strongly affected by affective response to noise-related interference while cardiovascular responses may be more strongly affected by demands on working memory and lexical access.
Collapse
Affiliation(s)
- Alexander L Francis
- Department of Speech, Language and Hearing Sciences, Purdue University, Lyles-Porter Hall, 715 Clinic Dr., West Lafayette, IN, 47907, USA.
| | - Tessa Bent
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
| | - Jennifer Schumaker
- Department of Speech, Language and Hearing Sciences, Purdue University, Lyles-Porter Hall, 715 Clinic Dr., West Lafayette, IN, 47907, USA
| | - Jordan Love
- Department of Speech, Language and Hearing Sciences, Purdue University, Lyles-Porter Hall, 715 Clinic Dr., West Lafayette, IN, 47907, USA
| | - Noah Silbert
- Applied Research Laboratory for Intelligence and Security, University of Maryland, College Park, MD, USA
| |
Collapse
|
21
|
Bieber RE, Gordon-Salant S. Improving older adults' understanding of challenging speech: Auditory training, rapid adaptation and perceptual learning. Hear Res 2021; 402:108054. [PMID: 32826108 PMCID: PMC7880302 DOI: 10.1016/j.heares.2020.108054] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/27/2020] [Revised: 07/21/2020] [Accepted: 08/02/2020] [Indexed: 12/13/2022]
Abstract
The literature surrounding auditory perceptual learning and auditory training for challenging speech signals in older adult listeners is highly varied, in terms of both study methodology and reported outcomes. In this review, we discuss some of the pertinent features of listener, stimulus, and training protocol. Literature regarding the elicitation of auditory perceptual learning for time-compressed speech, non-native speech, and noise-vocoded speech is reviewed, as are auditory training protocols designed to improve speech-in-noise recognition. The literature is synthesized to establish some over-arching findings for the aging population, including an intact capacity for auditory perceptual learning, but a limited transfer of learning to untrained stimuli.
Collapse
Affiliation(s)
- Rebecca E Bieber
- Department of Hearing and Speech Sciences, University of Maryland, 0100 LeFrak Hall, 7251 Preinkert Drive, College Park, MD 20742, United States.
| | - Sandra Gordon-Salant
- Department of Hearing and Speech Sciences, University of Maryland, 0100 LeFrak Hall, 7251 Preinkert Drive, College Park, MD 20742, United States
| |
Collapse
|
22
|
Laturnus R. Comparative Acoustic Analyses of L2 English: The Search for Systematic Variation. PHONETICA 2020; 77:441-479. [PMID: 32694252 DOI: 10.1159/000508387] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/15/2017] [Accepted: 05/01/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND/AIMS Previous research has shown that exposure to multiple foreign accents facilitates adaptation to an untrained novel accent. One explanation is that L2 speech varies systematically such that there are commonalities in the productions of nonnative speakers, regardless of their language background. METHODS A systematic acoustic comparison was conducted between 3 native English speakers and 6 nonnative accents. Voice onset time, unstressed vowel duration, and formant values of stressed and unstressed vowels were analyzed, comparing each nonnative accent to the native English talkers. A subsequent perception experiment tests what effect training on regionally accented voices has on the participant's comprehension of nonnative accented speech to investigate the importance of within-speaker variation on attunement and generalization. RESULTS Data for each measure show substantial variability across speakers, reflecting phonetic transfer from individual L1s, as well as substantial inconsistency and variability in pronunciation, rather than commonalities in their productions. Training on native English varieties did not improve participants' accuracy in understanding nonnative speech. CONCLUSION These findings are more consistent with a hypothesis of accent attunement wherein listeners track general patterns of nonnative speech rather than relying on overlapping acoustic signals between speakers.
Collapse
Affiliation(s)
- Rebecca Laturnus
- Department of Linguistics, New York University, New York, New York, USA
| |
Collapse
|
23
|
Rotman T, Lavie L, Banai K. Rapid Perceptual Learning: A Potential Source of Individual Differences in Speech Perception Under Adverse Conditions? Trends Hear 2020; 24:2331216520930541. [PMID: 32552477 PMCID: PMC7303778 DOI: 10.1177/2331216520930541] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Challenging listening situations (e.g., when speech is rapid or noisy) result in substantial individual differences in speech perception. We propose that rapid auditory perceptual learning is one of the factors contributing to those individual differences. To explore this proposal, we assessed rapid perceptual learning of time-compressed speech in young adults with normal hearing and in older adults with age-related hearing loss. We also assessed the contribution of this learning as well as that of hearing and cognition (vocabulary, working memory, and selective attention) to the recognition of natural-fast speech (NFS; both groups) and speech in noise (younger adults). In young adults, rapid learning and vocabulary were significant predictors of NFS and speech in noise recognition. In older adults, hearing thresholds, vocabulary, and rapid learning were significant predictors of NFS recognition. In both groups, models that included learning fitted the speech data better than models that did not include learning. Therefore, under adverse conditions, rapid learning may be one of the skills listeners could employ to support speech recognition.
Affiliation(s)
- Tali Rotman
- Department of Communication Sciences and Disorders, University of Haifa
- Limor Lavie
- Department of Communication Sciences and Disorders, University of Haifa
- Karen Banai
- Department of Communication Sciences and Disorders, University of Haifa
24
Brown VA, McLaughlin DJ, Strand JF, Van Engen KJ. Rapid adaptation to fully intelligible nonnative-accented speech reduces listening effort. Q J Exp Psychol (Hove) 2020; 73:1431-1443. [DOI: 10.1177/1747021820916726] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
In noisy settings or when listening to an unfamiliar talker or accent, it can be difficult to understand spoken language. This difficulty typically results in reductions in speech intelligibility, but may also increase the effort necessary to process the speech even when intelligibility is unaffected. In this study, we used a dual-task paradigm and pupillometry to assess the cognitive costs associated with processing fully intelligible accented speech, predicting that rapid perceptual adaptation to an accent would result in decreased listening effort over time. The behavioural and physiological paradigms provided converging evidence that listeners expend greater effort when processing nonnative- relative to native-accented speech, and both experiments also revealed an overall reduction in listening effort over the course of the experiment. Only the pupillometry experiment, however, revealed greater adaptation to nonnative- relative to native-accented speech. An exploratory analysis of the dual-task data that attempted to minimise practice effects revealed weak evidence for greater adaptation to the nonnative accent. These results suggest that even when speech is fully intelligible, resolving deviations between the acoustic input and stored lexical representations incurs a processing cost, and adaptation may attenuate this cost.
Affiliation(s)
- Violet A Brown
- Department of Psychological & Brain Sciences, Washington University in St. Louis, Saint Louis, MO, USA
- Drew J McLaughlin
- Department of Psychological & Brain Sciences, Washington University in St. Louis, Saint Louis, MO, USA
- Julia F Strand
- Department of Psychology, Carleton College, Northfield, MN, USA
- Kristin J Van Engen
- Department of Psychological & Brain Sciences, Washington University in St. Louis, Saint Louis, MO, USA
25
Paulus M, Hazan V, Adank P. The relationship between talker acoustics, intelligibility, and effort in degraded listening conditions. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 147:3348. [PMID: 32486777 DOI: 10.1121/10.0001212] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/10/2019] [Accepted: 04/20/2020] [Indexed: 06/11/2023]
Abstract
Listening to degraded speech is associated with decreased intelligibility and increased effort. However, listeners are generally able to adapt to certain types of degradations. While intelligibility of degraded speech is modulated by talker acoustics, it is unclear whether talker acoustics also affect effort and adaptation. Moreover, it has been demonstrated that talker differences are preserved across spectral degradations, but it is not known whether this effect extends to temporal degradations and which acoustic-phonetic characteristics are responsible. In a listening experiment combined with pupillometry, participants were presented with speech in quiet as well as in masking noise, time-compressed, and noise-vocoded speech by 16 Southern British English speakers. Results showed that intelligibility, but not adaptation, was modulated by talker acoustics. Talkers who were more intelligible under noise-vocoding were also more intelligible under masking and time-compression. This effect was linked to acoustic-phonetic profiles with greater vowel space dispersion (VSD) and energy in mid-range frequencies, as well as slower speaking rate. While pupil dilation indicated increasing effort with decreasing intelligibility, this study also linked reduced effort in quiet to talkers with greater VSD. The results emphasize the relevance of talker acoustics for intelligibility and effort in degraded listening conditions.
Affiliation(s)
- Maximillian Paulus
- Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
- Valerie Hazan
- Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
- Patti Adank
- Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
26
Kennedy-Higgins D, Devlin JT, Adank P. Cognitive mechanisms underpinning successful perception of different speech distortions. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 147:2728. [PMID: 32359293 DOI: 10.1121/10.0001160] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2019] [Accepted: 04/08/2020] [Indexed: 06/11/2023]
Abstract
Few studies thus far have investigated whether perception of distorted speech is consistent across different types of distortion. This study investigated whether participants show a consistent perceptual profile across three speech distortions: time-compressed, noise-vocoded, and speech in noise. Additionally, this study investigated whether and how individual differences in performance on a battery of audiological and cognitive tasks link to perception. Eighty-eight participants completed a speeded sentence-verification task with increases in accuracy and reductions in response times used to indicate performance. Audiological and cognitive task measures included pure tone audiometry, speech recognition threshold, working memory, vocabulary knowledge, attention switching, and pattern analysis. Despite previous studies suggesting that temporal and spectral/environmental perception require different lexical or phonological mechanisms, this study shows significant positive correlations in accuracy and response time performance across all distortions. Results of a principal component analysis and multiple linear regressions suggest that a component based on vocabulary knowledge and working memory predicted performance in the speech-in-quiet, time-compressed, and speech-in-noise conditions. These results suggest that listeners employ a similar cognitive strategy to perceive different temporal and spectral/environmental speech distortions and that this mechanism is supported by vocabulary knowledge and working memory.
Affiliation(s)
- Dan Kennedy-Higgins
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London, WC1N 1PF, United Kingdom
- Joseph T Devlin
- Department of Experimental Psychology, University College London, 26 Bedford Way, London, WC1H 0AP, United Kingdom
- Patti Adank
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London, WC1N 1PF, United Kingdom
27
Hau JA, Holt CM, Finch S, Dowell RC. The Adaptation to Mandarin-Accented English by Older, Hearing-Impaired Listeners Following Brief Exposure to the Accent. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:858-871. [PMID: 32109171 DOI: 10.1044/2019_jslhr-19-00136] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Purpose The language processing of Mandarin-accented English (MAE) by older hearing-impaired (OHI), older normally hearing (NH), and younger NH listeners was explored. We examined whether OHI adults have more difficulty than NH listeners in recognizing and adapting to MAE speech productions after receiving brief training with the accent. Method Talker-independent adaptation was evaluated in an exposure training study design. Listeners were trained either by four MAE talkers or four Australian English talkers (control group) before listening to sentences presented by a novel MAE talker. Speech recognition for both the training sentences and the experimental sentences was compared between listener groups and between the training accents. Results Listeners in all three groups (OHI, older NH, younger NH) who had been trained by the MAE talkers showed higher odds of speech recognition than listeners trained by the Australian English talkers. The OHI listeners adapted to MAE to the same degree as the NH groups despite returning lower overall odds of recognizing MAE speech. Conclusions Older listeners with mild-to-moderate hearing loss were able to benefit as much from brief exposure to MAE as did the NH groups. This encouraging result suggests that OHI listeners have access to and can exploit the information present in a relatively brief sample of accented speech and generalize their learning to a novel MAE talker.
Affiliation(s)
- Jutta A Hau
- Department of Audiology and Speech Pathology, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Victoria, Australia
- Colleen M Holt
- Department of Audiology and Speech Pathology, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Victoria, Australia
- Office of the Pro Vice-Chancellor, College of Science, Health and Engineering, La Trobe University, Bundoora, Victoria, Australia
- Sue Finch
- Statistical Consulting Centre, The University of Melbourne, Victoria, Australia
- Richard C Dowell
- Department of Audiology and Speech Pathology, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Victoria, Australia
28
Big data suggest strong constraints of linguistic similarity on adult language learning. Cognition 2019; 194:104056. [PMID: 31733600 DOI: 10.1016/j.cognition.2019.104056] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2018] [Revised: 08/13/2019] [Accepted: 08/19/2019] [Indexed: 01/09/2023]
Abstract
When adults learn new languages, their speech often remains noticeably non-native even after years of exposure. These non-native variants ('accents') can have far-reaching socio-economic consequences for learners. Many factors have been found to contribute to a learner's proficiency in the new language. Here we examine a factor that is outside of the control of the learner: linguistic similarities between the learner's native language (L1) and the new language (Ln). We analyze the (open access) speaking proficiencies of about 50,000 Ln learners of Dutch with 62 diverse L1s. We find that a learner's L1 accounts for 9-22% of the variance in Ln speaking proficiency. This corresponds to 28-69% of the variance explained by a model with controls for other factors known to affect language learning, such as education, age of acquisition and length of exposure. We also find that almost 80% of the effect of L1 can be explained by combining measures of phonological, morphological, and lexical similarity between the L1 and the Ln. These results highlight the constraints that a learner's native language imposes on language learning, and inform theories of L1-to-Ln transfer during Ln learning and use. As predicted by some proposals, we also find that L1-Ln phonological similarity is better captured when subcategorical properties (phonological features) are considered in the calculation of phonological similarities.
29
Fletcher A, McAuliffe M, Kerr S, Sinex D. Effects of Vocabulary and Implicit Linguistic Knowledge on Speech Recognition in Adverse Listening Conditions. Am J Audiol 2019; 28:742-755. [PMID: 32271121 DOI: 10.1044/2019_aja-heal18-18-0169] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
Purpose This study aims to examine the combined influence of vocabulary knowledge and statistical properties of language on speech recognition in adverse listening conditions. Furthermore, it aims to determine whether any effects identified are more salient at particular levels of signal degradation. Method One hundred three young healthy listeners transcribed phrases presented at 4 different signal-to-noise ratios, which were coded for recognition accuracy. Participants also completed tests of hearing acuity, vocabulary knowledge, nonverbal intelligence, processing speed, and working memory. Results Vocabulary knowledge and working memory demonstrated independent effects on word recognition accuracy when controlling for hearing acuity, nonverbal intelligence, and processing speed. These effects were strongest at the same moderate level of signal degradation. Although listener variables were statistically significant, their effects were subtle in comparison to the influence of word frequency and phonological content. These language-based factors had large effects on word recognition at all signal-to-noise ratios. Discussion Language experience and working memory may have complementary effects on accurate word recognition. However, adequate glimpses of acoustic information appear necessary for speakers to leverage vocabulary knowledge when processing speech in adverse conditions.
Affiliation(s)
- Annalise Fletcher
- Department of Audiology & Speech-Language Pathology, University of North Texas, Denton
- Megan McAuliffe
- Department of Communication Disorders, University of Canterbury, Christchurch, New Zealand
- Sarah Kerr
- Department of Communication Disorders, University of Canterbury, Christchurch, New Zealand
- Donal Sinex
- Department of Speech, Language, and Hearing Science, University of Florida, Gainesville
30
Hernández M, Ventura-Campos N, Costa A, Miró-Padilla A, Ávila C. Brain networks involved in accented speech processing. BRAIN AND LANGUAGE 2019; 194:12-22. [PMID: 30959385 DOI: 10.1016/j.bandl.2019.03.003] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/02/2018] [Revised: 03/19/2019] [Accepted: 03/21/2019] [Indexed: 06/09/2023]
Abstract
We investigated the neural correlates of accented speech processing (ASP) with an fMRI study that overcame prior limitations in this line of research: we preserved intelligibility by using two regional accents that differ in prosody but only mildly in phonetics (Latin American and Castilian Spanish), and we used independent component analysis to identify brain networks as opposed to isolated regions. ASP engaged a speech perception network composed primarily of structures related to the processing of prosody (cerebellum, putamen, and thalamus). This network also included anterior fronto-temporal areas associated with lexical-semantic processing and a portion of the inferior frontal gyrus linked to executive control. ASP also recruited domain-general executive control networks related to cognitive demands (dorsal attentional and default mode networks) and the processing of salient events (salience network). Finally, the reward network showed a preference for the native accent, presumably revealing people's sense of social belonging.
Affiliation(s)
- Mireia Hernández
- Section of Cognitive Processes, Department of Cognition, Development, and Educational Psychology, Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain.
- Noelia Ventura-Campos
- Neuropsychology and Functional Imaging Group, Department of Basic Psychology, Clinical Psychology, and Psychobiology, Universitat Jaume I, Castellón, Spain; Department of Education and Specific Didactics, Universitat Jaume I, Castellón, Spain
- Albert Costa
- Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Anna Miró-Padilla
- Neuropsychology and Functional Imaging Group, Department of Basic Psychology, Clinical Psychology, and Psychobiology, Universitat Jaume I, Castellón, Spain
- César Ávila
- Neuropsychology and Functional Imaging Group, Department of Basic Psychology, Clinical Psychology, and Psychobiology, Universitat Jaume I, Castellón, Spain
31
Vaughn CR. Expectations about the source of a speaker's accent affect accent adaptation. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 145:3218. [PMID: 31153344 DOI: 10.1121/1.5108831] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/28/2018] [Accepted: 04/30/2019] [Indexed: 06/09/2023]
Abstract
When encountering speakers whose accents differ from the listener's own, listeners initially show a processing cost, but that cost can be attenuated after short-term exposure. The extent to which processing foreign accents (L2-accents) and within-language accents (L1-accents) is similar is still an open question. This study considers whether listeners' expectations about the source of a speaker's accent (whether the speaker is purported to be an L1 or an L2 speaker) affect intelligibility. Prior work has indirectly manipulated expectations about a speaker's accent through photographs, but the present study primes listeners with a description of the speaker's accent itself. In experiment 1, native English listeners transcribed Spanish-accented English sentences in noise under three different conditions (speaker's accent: monolingual L1 Latinx English, L1-Spanish/L2-English, no information given). Results indicate that, by the end of the experiment, listeners given some information about the accent outperformed listeners given no information, and listeners told the speaker was L1-accented outperformed listeners told to expect L2-accented speech. Findings are interpreted in terms of listeners' expectations about task difficulty, and a follow-up experiment (experiment 2) found that priming listeners to expect that their ability to understand L2-accented speech can improve does in fact improve intelligibility.
Affiliation(s)
- Charlotte R Vaughn
- Department of Linguistics, University of Oregon, 1290 University of Oregon, Eugene, Oregon 97403-1290, USA
32
Gordon-Salant S, Yeni-Komshian GH, Bieber RE, Jara Ureta DA, Freund MS, Fitzgibbons PJ. Effects of Listener Age and Native Language Experience on Recognition of Accented and Unaccented English Words. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2019; 62:1131-1143. [PMID: 31026190 PMCID: PMC6802876 DOI: 10.1044/2018_jslhr-h-ascc7-18-0122] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/04/2018] [Revised: 07/19/2018] [Accepted: 11/26/2018] [Indexed: 06/09/2023]
Abstract
Purpose Older native speakers of English have difficulty in understanding Spanish-accented English compared to younger native English speakers. However, it is unclear if this age effect would be observed among native speakers of Spanish. The current study investigates the effects of age and native language experience with Spanish on the ability to recognize words spoken in English by Spanish-accented and unaccented talkers. Method English monosyllabic words, recorded by native speakers of English and Spanish, were presented to 4 groups of listeners with normal hearing: younger native Spanish listeners (n = 15), older native Spanish listeners (n = 16), younger native English listeners (n = 15), and older native English listeners (n = 15). Speech recognition accuracy was assessed for the unaccented and accented words in both quiet and noise. Results In all conditions, the native English listeners performed better than the native Spanish listeners. More specifically, the native speakers of Spanish consistently recognized accented English less accurately than the native speakers of English, demonstrating no advantage of shared native language experience between nonnative listeners and accented talkers. Older listeners in the native Spanish language group also performed less accurately than their younger counterparts, for English words spoken by both unaccented and accented talkers. Finally, whereas listeners who were native speakers of English showed marked declines in recognition of Spanish-accented English relative to unaccented English, listeners who were native speakers of Spanish (both younger and older) showed less decline. Conclusions The general pattern of results suggests that both native language experience in a language other than English and age limit the ability to recognize Spanish-accented English. The implication of the overall findings is that older nonnative listeners will have considerable difficulty in understanding English, regardless of the talker's accent, in both clinical and everyday listening situations.
Affiliation(s)
- Rebecca E. Bieber
- Department of Hearing and Speech Sciences, University of Maryland, College Park
- David A. Jara Ureta
- Department of Hearing and Speech Sciences, University of Maryland, College Park
- Maya S. Freund
- Department of Hearing and Speech Sciences, University of Maryland, College Park
33
Cieśla K, Wolak T, Lorens A, Heimler B, Skarżyński H, Amedi A. Immediate improvement of speech-in-noise perception through multisensory stimulation via an auditory to tactile sensory substitution. Restor Neurol Neurosci 2019; 37:155-166. [PMID: 31006700 PMCID: PMC6598101 DOI: 10.3233/rnn-190898] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
BACKGROUND Hearing loss is becoming a real social and health problem. Its prevalence in the elderly is an epidemic. The risk of developing hearing loss is also growing among younger people. If left untreated, hearing loss can perpetuate development of neurodegenerative diseases, including dementia. Despite recent advancements in hearing aid (HA) and cochlear implant (CI) technologies, hearing impaired users still encounter significant practical and social challenges, with or without aids. In particular, they all struggle with understanding speech in challenging acoustic environments, especially in presence of a competing speaker. OBJECTIVES In the current proof-of-concept study we tested whether multisensory stimulation, pairing audition and a minimal-size touch device would improve intelligibility of speech in noise. METHODS To this aim we developed an audio-to-tactile sensory substitution device (SSD) transforming low-frequency speech signals into tactile vibrations delivered on two finger tips. Based on the inverse effectiveness law, i.e., multisensory enhancement is strongest when signal-to-noise ratio is lowest between senses, we embedded non-native language stimuli in speech-like noise and paired it with a low-frequency input conveyed through touch. RESULTS We found immediate and robust improvement in speech recognition (i.e. in the Signal-To-Noise-ratio) in the multisensory condition without any training, at a group level as well as in every participant. The reported improvement at the group-level of 6 dB was indeed major considering that an increase of 10 dB represents a doubling of the perceived loudness. CONCLUSIONS These results are especially relevant when compared to previous SSD studies showing effects in behavior only after a demanding cognitive training. We discuss the implications of our results for development of SSDs and of specific rehabilitation programs for the hearing impaired either using or not using HAs or CIs. 
We also discuss the potential application of such a set-up for sense augmentation, such as when learning a new language.
Affiliation(s)
- Katarzyna Cieśla
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Tomasz Wolak
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland
- Artur Lorens
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland
- Benedetta Heimler
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Henryk Skarżyński
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland
- Amir Amedi
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
34
Bieber RE, Yeni-Komshian GH, Freund MS, Fitzgibbons PJ, Gordon-Salant S. Effects of listener age and native language on perception of accented and unaccented sentences. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2018; 144:3191. [PMID: 30599683 PMCID: PMC6286185 DOI: 10.1121/1.5081711] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/04/2018] [Revised: 11/06/2018] [Accepted: 11/13/2018] [Indexed: 06/09/2023]
Abstract
Degradations to auditory input have deleterious effects on speech recognition performance, especially by older listeners. Alterations to timing information in speech, such as occurs in rapid or foreign-accented speech, can be particularly difficult for older people to resolve. It is currently unclear how prior language experience modulates performance with temporally altered sentence-length speech utterances. The principal hypothesis is that prior experience with a foreign language affords an advantage for recognition of accented English when the talker and listener share the same native language, which may minimize age-related differences in performance with temporally altered speech. A secondary hypothesis is that native language experience with a syllable-timed language (Spanish) is advantageous for recognizing rapid English speech. Native speakers of English and Spanish completed speech recognition tasks with both accented and unaccented English sentences presented in various degrees of time compression (TC). Native English listeners showed higher or equivalent recognition of accented and unaccented English speech compared to native Spanish listeners in all TC conditions. Additionally, significant effects of aging were seen for native Spanish listeners on all tasks. Overall, the results did not support the hypotheses for a benefit of shared language experience for non-native speakers of English, particularly older native Spanish listeners.
Affiliation(s)
- Rebecca E Bieber
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Grace H Yeni-Komshian
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Maya S Freund
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Peter J Fitzgibbons
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Sandra Gordon-Salant
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
35
Bent T. Development of unfamiliar accent comprehension continues through adolescence. JOURNAL OF CHILD LANGUAGE 2018; 45:1400-1411. [PMID: 29619915 DOI: 10.1017/s0305000918000053] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
School-age children's understanding of unfamiliar accents is not adult-like, and the age at which this ability fully matures is unknown. To address this gap, eight- to fifteen-year-old children's (n = 74) understanding of native- and non-native-accented sentences in quiet and noise was assessed. Children's performance was adult-like by eleven to twelve years for the native accent in noise and by fourteen to fifteen years for the non-native accent in quiet. However, fourteen- to fifteen-year-olds' performance was not adult-like for the non-native accent in noise. Thus, adult-like comprehension of unfamiliar accents may require greater exposure to linguistic variability or additional cognitive-linguistic growth.
Affiliation(s)
- Tessa Bent
- Department of Speech and Hearing Sciences, Indiana University, 200 S. Jordan Ave., Bloomington, IN 47405
36
Coping with adversity: Individual differences in the perception of noisy and accented speech. Atten Percept Psychophys 2018; 80:1559-1570. [DOI: 10.3758/s13414-018-1537-4] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
37
Van Engen KJ. Clear speech and lexical competition in younger and older adult listeners. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2017; 142:1067. [PMID: 28863602 DOI: 10.1121/1.4998708] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
This study investigated whether clear speech reduces the cognitive demands of lexical competition by crossing speaking style with lexical difficulty. Younger and older adults identified more words in clear versus conversational speech and more easy words than hard words. An initial analysis suggested that the effect of lexical difficulty was reduced in clear speech, but more detailed analyses within each age group showed this interaction was significant only for older adults. The results also showed that both groups improved over the course of the task and that clear speech was particularly helpful for individuals with poorer hearing: for younger adults, clear speech eliminated hearing-related differences that affected performance on conversational speech. For older adults, clear speech was generally more helpful to listeners with poorer hearing. These results suggest that clear speech affords perceptual benefits to all listeners and, for older adults, mitigates the cognitive challenge associated with identifying words with many phonological neighbors.
Affiliation(s)
- Kristin J Van Engen
- Department of Psychological and Brain Sciences, Washington University in St. Louis, One Brookings Drive, St. Louis, Missouri 63130, USA
38
Mainz N, Shao Z, Brysbaert M, Meyer AS. Vocabulary Knowledge Predicts Lexical Processing: Evidence from a Group of Participants with Diverse Educational Backgrounds. Front Psychol 2017; 8:1164. [PMID: 28751871 PMCID: PMC5507948 DOI: 10.3389/fpsyg.2017.01164] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2017] [Accepted: 06/26/2017] [Indexed: 11/13/2022] Open
Abstract
Vocabulary knowledge is central to a speaker's command of their language. In previous research, greater vocabulary knowledge has been associated with advantages in language processing. In this study, we examined the relationship between individual differences in vocabulary and language processing performance more closely by (i) using a battery of vocabulary tests instead of just one test, and (ii) testing not only university students (Experiment 1) but young adults from a broader range of educational backgrounds (Experiment 2). Five vocabulary tests were developed, including multiple-choice and open antonym and synonym tests and a definition test, and administered together with two established measures of vocabulary. Language processing performance was measured using a lexical decision task. In Experiment 1, vocabulary and word frequency were found to predict word recognition speed, but we did not observe an interaction between the two effects. In Experiment 2, word recognition performance was predicted by word frequency and the interaction between word frequency and vocabulary, with high-vocabulary individuals showing smaller frequency effects. While overall the individual vocabulary tests were correlated and showed similar relationships with language processing as a composite measure of all tests, they appeared to share less variance in Experiment 2 than in Experiment 1. Implications of our findings for the assessment of vocabulary size in individual differences studies and for the investigation of individuals from more varied backgrounds are discussed.
Affiliation(s)
- Nina Mainz
- Psychology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Zeshu Shao
- Psychology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Marc Brysbaert
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Antje S. Meyer
- Psychology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
39
Tao L, Taft M. Influences of Cognitive Processing Capacities on Speech Perception in Young Adults. Front Psychol 2017; 8:266. [PMID: 28286491] [PMCID: PMC5323404] [DOI: 10.3389/fpsyg.2017.00266]
Abstract
Foreign accent in speech often presents listeners with challenging listening conditions. Consequently, listeners may need to draw on additional cognitive resources in order to perceive and comprehend such speech. Previous research has shown that, for older adults, executive functions predicted perception of speech material spoken in a novel, artificially created (and therefore unfamiliar) accent. The present study investigates the influences of executive functions, information processing speed, and working memory on perception of unfamiliar foreign-accented speech in healthy young adults. The results showed that the executive processes of inhibition and switching, as well as information processing speed, predict response times to both accented and standard sentence stimuli, while inhibition and information processing speed predict speed of responding to accented word stimuli. Inhibition and switching further predict accuracy in responding to accented word and standard sentence stimuli with increased processing demands (i.e., nonwords and sentences with unexpected semantic content). These findings suggest that stronger abilities in aspects of cognitive functioning may be helpful for matching variable pronunciations of speech sounds to stored representations, for example by managing the activation of incorrect competing representations and shifting to other possible matches.
Affiliation(s)
- Lily Tao
- Institute of Cognitive Neuroscience, East China Normal University, Shanghai, China; School of Psychology, University of New South Wales, Sydney, NSW, Australia
- Marcus Taft
- School of Psychology, University of New South Wales, Sydney, NSW, Australia
40
Borrie SA, Lansford KL, Barrett TS. Rhythm Perception and Its Role in Perception and Learning of Dysrhythmic Speech. J Speech Lang Hear Res 2017; 60:561-570. [PMID: 28241307] [DOI: 10.1044/2016_jslhr-s-16-0094]
Abstract
PURPOSE The perception of rhythm cues plays an important role in recognizing spoken language, especially in adverse listening conditions. Indeed, this has been shown to hold true even when the rhythm cues themselves are dysrhythmic. This study investigates whether expertise in rhythm perception provides a processing advantage for perception (initial intelligibility) and learning (intelligibility improvement) of naturally dysrhythmic speech, dysarthria. METHOD Fifty young adults with typical hearing participated in 3 key tests, including a rhythm perception test, a receptive vocabulary test, and a speech perception and learning test, with standard pretest, familiarization, and posttest phases. Initial intelligibility scores were calculated as the proportion of correct pretest words, while intelligibility improvement scores were calculated by subtracting this proportion from the proportion of correct posttest words. RESULTS Rhythm perception scores predicted intelligibility improvement scores but not initial intelligibility. On the other hand, receptive vocabulary scores predicted initial intelligibility scores but not intelligibility improvement. CONCLUSIONS Expertise in rhythm perception appears to provide an advantage for processing dysrhythmic speech, but a familiarization experience is required for the advantage to be realized. Findings are discussed in relation to the role of rhythm in speech processing and shed light on processing models that consider the consequence of rhythm abnormalities in dysarthria.
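The pretest/posttest scoring described in the abstract can be sketched in a few lines (function and variable names here are illustrative, not taken from the study):

```python
def intelligibility_scores(pretest_correct, posttest_correct, n_words):
    """Scores as described in the abstract: initial intelligibility is the
    proportion of pretest words transcribed correctly; improvement is the
    posttest proportion minus that pretest proportion."""
    initial = pretest_correct / n_words
    improvement = posttest_correct / n_words - initial
    return initial, improvement

# e.g. 30 of 100 pretest words and 45 of 100 posttest words correct
initial, gain = intelligibility_scores(30, 45, 100)
```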
Affiliation(s)
- Stephanie A Borrie
- Department of Communicative Disorders and Deaf Education, Utah State University, Logan
- Kaitlin L Lansford
- School of Communication Sciences and Disorders, Florida State University, Tallahassee
41
Bent T, Atagi E. Perception of Nonnative-accented Sentences by 5- to 8-Year-olds and Adults: The Role of Phonological Processing Skills. Lang Speech 2017; 60:110-122. [PMID: 28326989] [DOI: 10.1177/0023830916645374]
Abstract
To acquire language and successfully communicate in multicultural and multilingual societies, children must learn to understand speakers with various accents and dialects. This study investigated adults' and 5- to 8-year-old children's perception of native- and nonnative-accented English sentences in noise. Participants' phonological memory and phonological awareness were assessed to investigate factors associated with individual differences in word recognition. Although both adults and children performed less accurately with nonnative talkers than native talkers, children showed greater performance decrements. Further, phonological memory was more closely tied to perception of native talkers whereas phonological awareness was more closely related to perception of nonnative talkers. These results suggest that the ability to recognize words produced in unfamiliar accents continues to develop beyond the early school-age years. Additionally, the linguistic skills most related to word recognition in adverse listening conditions may differ depending on the source of the challenge (i.e., noise, talker, or a combination).
42
Finke M, Sandmann P, Bönitz H, Kral A, Büchner A. Consequences of Stimulus Type on Higher-Order Processing in Single-Sided Deaf Cochlear Implant Users. Audiol Neurootol 2016; 21:305-315. [DOI: 10.1159/000452123]
Abstract
Single-sided deaf subjects with a cochlear implant (CI) provide the unique opportunity to compare central auditory processing of the electrical input (CI ear) and the acoustic input (normal-hearing, NH, ear) within the same individual. In these individuals, sensory processing differs between their two ears, while cognitive abilities are the same irrespective of the sensory input. To better understand perceptual-cognitive factors modulating speech intelligibility with a CI, this electroencephalography study examined the central-auditory processing of words, the cognitive abilities, and the speech intelligibility in 10 postlingually single-sided deaf CI users. We found lower hit rates and prolonged response times for word classification during an oddball task for the CI ear when compared with the NH ear. Also, event-related potentials reflecting sensory (N1) and higher-order processing (N2/N4) were prolonged for word classification (targets versus nontargets) with the CI ear compared with the NH ear. Our results suggest that speech processing via the CI ear and the NH ear differs both at sensory (N1) and cognitive (N2/N4) processing stages, thereby affecting the behavioral performance for speech discrimination. These results provide objective evidence for cognition to be a key factor for speech perception under adverse listening conditions, such as the degraded speech signal provided from the CI.
43
Bent T, Baese-Berk M, Borrie SA, McKee M. Individual differences in the perception of regional, nonnative, and disordered speech varieties. J Acoust Soc Am 2016; 140:3775. [PMID: 27908060] [DOI: 10.1121/1.4966677]
Abstract
Speech perception abilities vary substantially across listeners, particularly in adverse conditions including those stemming from environmental degradation (e.g., noise) or from talker-related challenges (e.g., nonnative or disordered speech). This study examined adult listeners' recognition of words in phrases produced by six talkers representing three speech varieties: a nonnative accent (Spanish-accented English), a regional dialect (Irish English), and a disordered variety (ataxic dysarthria). Semantically anomalous phrases from these talkers were presented in a transcription task, and intelligibility scores (percent words correct) were compared across the three speech varieties. Three cognitive-linguistic areas (receptive vocabulary, cognitive flexibility, and inhibitory control of attention) were assessed as possible predictors of individual word recognition performance. Intelligibility scores for the Spanish accent were significantly correlated with scores for the Irish English and ataxic dysarthria. Scores for the Irish English and dysarthric speech, in contrast, were not correlated. Furthermore, receptive vocabulary was the only cognitive-linguistic assessment that significantly predicted intelligibility scores. These results suggest that, rather than a global skill of perceiving speech that deviates from native dialect norms, listeners may possess specific abilities to overcome particular types of acoustic-phonetic deviation. Furthermore, vocabulary size offers performance benefits for intelligibility of speech that deviates from one's typical dialect norms.
Affiliation(s)
- Tessa Bent
- Department of Speech and Hearing Sciences, Indiana University, Bloomington, Indiana 47405, USA
- Melissa Baese-Berk
- Department of Linguistics, University of Oregon, Eugene, Oregon 97403, USA
- Stephanie A Borrie
- Department of Communicative Disorders and Deaf Education, Utah State University, Logan, Utah 84332, USA
- Megan McKee
- Department of Speech and Hearing Sciences, Indiana University, Bloomington, Indiana 47405, USA
44
Thiel CM, Özyurt J, Nogueira W, Puschmann S. Effects of Age on Long Term Memory for Degraded Speech. Front Hum Neurosci 2016; 10:473. [PMID: 27708570] [PMCID: PMC5030220] [DOI: 10.3389/fnhum.2016.00473]
Abstract
Prior research suggests that acoustical degradation impacts encoding of items into memory, especially in elderly subjects. We here aimed to investigate whether acoustically degraded items that are initially encoded into memory are more prone to forgetting as a function of age. Young and old participants were tested with a vocoded and unvocoded serial list learning task involving immediate and delayed free recall. We found that degraded auditory input increased forgetting of previously encoded items, especially in older participants. We further found that working memory capacity predicted forgetting of degraded information in young participants. In old participants, verbal IQ was the most important predictor for forgetting acoustically degraded information. Our data provide evidence that acoustically degraded information, even if encoded, is especially vulnerable to forgetting in old age.
Affiliation(s)
- Christiane M Thiel
- Biological Psychology Lab, Cluster of Excellence "Hearing4all", Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Research Center Neurosensory Science, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Jale Özyurt
- Biological Psychology Lab, Cluster of Excellence "Hearing4all", Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Waldo Nogueira
- Cluster of Excellence "Hearing4all", Department of Otolaryngology, Medical University Hannover, Hannover, Germany
- Sebastian Puschmann
- Biological Psychology Lab, Cluster of Excellence "Hearing4all", Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
45
Banai K, Lavner Y. The effects of exposure and training on the perception of time-compressed speech in native versus nonnative listeners. J Acoust Soc Am 2016; 140:1686. [PMID: 27914374] [DOI: 10.1121/1.4962499]
Abstract
The present study investigated the effects of language experience on the perceptual learning induced by either brief exposure to or more intensive training with time-compressed speech. Native (n = 30) and nonnative (n = 30) listeners were each divided into three groups with different experiences with time-compressed speech: a trained group that practiced semantic verification of time-compressed sentences for three sessions, an exposure group briefly exposed to 20 time-compressed sentences, and a group of naive listeners. Recognition was assessed with three sets of time-compressed sentences intended to evaluate exposure-induced and training-induced learning as well as across-token and across-talker generalization. Learning profiles differed between native and nonnative listeners. Exposure had a weaker effect in nonnative than in native listeners. Furthermore, native and nonnative trained listeners significantly outperformed their untrained counterparts when tested with sentences taken from the training set. However, only trained native listeners outperformed naive native listeners when tested with new sentences. These findings suggest that the perceptual learning of speech is sensitive to linguistic experience. That rapid learning is weaker in nonnative listeners is consistent with their difficulties in real-life conditions. Furthermore, nonnative listeners may require longer periods of practice to achieve native-like learning outcomes.
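Time compression itself is a simple signal operation. A naive NumPy sketch is below, purely for illustration: speech research typically uses pitch-preserving algorithms (e.g., WSOLA), whereas plain sample-dropping as shown here also raises the pitch.

```python
import numpy as np

def time_compress(signal, rate):
    """Shorten a 1-D signal by `rate` (rate=2.0 halves the duration)
    by keeping samples at uniform intervals. Naive sketch only: unlike
    the pitch-preserving methods used with speech stimuli, decimation
    shifts the pitch along with the duration."""
    idx = np.arange(0, len(signal), rate).astype(int)
    return signal[idx]
```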
Affiliation(s)
- Karen Banai
- Department of Communication Sciences and Disorders, University of Haifa, Mt. Carmel, Haifa 34988, Israel
- Yizhar Lavner
- Department of Computer Science, Tel-Hai College, Tel-Hai 12208, Israel
46
Füllgrabe C, Rosen S. On The (Un)importance of Working Memory in Speech-in-Noise Processing for Listeners with Normal Hearing Thresholds. Front Psychol 2016; 7:1268. [PMID: 27625615] [PMCID: PMC5003928] [DOI: 10.3389/fpsyg.2016.01268]
Abstract
With the advent of cognitive hearing science, increased attention has been given to individual differences in cognitive functioning and their explanatory power in accounting for inter-listener variability in the processing of speech in noise (SiN). The psychological construct that has received much interest in recent years is working memory (WM). Empirical evidence indeed confirms the association between WM capacity (WMC) and SiN identification in older hearing-impaired listeners. However, some theoretical models propose that variations in WMC are an important predictor for variations in speech processing abilities in adverse perceptual conditions for all listeners, and this notion has become widely accepted within the field. To assess whether WMC also plays a role when listeners without hearing loss process speech in adverse listening conditions, we surveyed published and unpublished studies in which the Reading-Span test (a widely used measure of WMC) was administered in conjunction with a measure of SiN identification, using sentence material routinely used in audiological and hearing research. A meta-analysis revealed that, for young listeners with audiometrically normal hearing, individual variations in WMC are estimated to account for, on average, less than 2% of the variance in SiN identification scores. This result cautions against the (intuitively appealing) assumption that individual variations in WMC are predictive of SiN identification independently of the age and hearing status of the listener.
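The "less than 2% of the variance" figure corresponds to a squared correlation; a minimal sketch (the r value below is illustrative, not the meta-analytic estimate):

```python
def variance_explained(r):
    """Share of variance one variable accounts for in another,
    given their Pearson correlation r (R^2 = r**2)."""
    return r ** 2

# a pooled correlation of r = 0.13 between WMC and SiN scores
# would account for about 1.7% of the variance
share = variance_explained(0.13)
```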
Affiliation(s)
- Christian Füllgrabe
- Medical Research Council Institute of Hearing Research, The University of Nottingham, Nottingham, UK
- Stuart Rosen
- Speech, Hearing and Phonetic Sciences, University College London, London, UK
47
Carroll R, Warzybok A, Kollmeier B, Ruigendijk E. Age-Related Differences in Lexical Access Relate to Speech Recognition in Noise. Front Psychol 2016; 7:990. [PMID: 27458400] [PMCID: PMC4930932] [DOI: 10.3389/fpsyg.2016.00990]
Abstract
Vocabulary size has been suggested as a useful measure of “verbal abilities” that correlates with speech recognition scores. Knowing more words is linked to better speech recognition. How vocabulary knowledge translates to general speech recognition mechanisms, how these mechanisms relate to offline speech recognition scores, and how they may be modulated by acoustical distortion or age, is less clear. Age-related differences in linguistic measures may predict age-related differences in speech recognition in noise performance. We hypothesized that speech recognition performance can be predicted by the efficiency of lexical access, which refers to the speed with which a given word can be searched and accessed relative to the size of the mental lexicon. We tested speech recognition in a clinical German sentence-in-noise test at two signal-to-noise ratios (SNRs), in 22 younger (18–35 years) and 22 older (60–78 years) listeners with normal hearing. We also assessed receptive vocabulary, lexical access time, verbal working memory, and hearing thresholds as measures of individual differences. Age group, SNR level, vocabulary size, and lexical access time were significant predictors of individual speech recognition scores, but working memory and hearing threshold were not. Interestingly, longer accessing times were correlated with better speech recognition scores. Hierarchical regression models for each subset of age group and SNR showed very similar patterns: the combination of vocabulary size and lexical access time contributed most to speech recognition performance; only for the younger group at the better SNR (yielding about 85% correct speech recognition) did vocabulary size alone predict performance. Our data suggest that successful speech recognition in noise is mainly modulated by the efficiency of lexical access. 
This suggests that older adults’ poorer performance in the speech recognition task may have arisen from reduced efficiency in lexical access; with an average vocabulary size similar to that of younger adults, they were still slower in lexical access.
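The regression structure described above (speech recognition score predicted by vocabulary size and lexical access time) can be sketched with synthetic data; every number below is illustrative, and only the model form follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 44                              # 22 younger + 22 older listeners
vocab = rng.normal(50, 10, n)       # receptive vocabulary (illustrative units)
access = rng.normal(700, 100, n)    # lexical access time in ms (illustrative)
# synthetic speech recognition scores in which both predictors contribute
score = 0.4 * vocab + 0.02 * access + rng.normal(0, 2.0, n)

X = np.column_stack([np.ones(n), vocab, access])   # intercept + two predictors
beta, *_ = np.linalg.lstsq(X, score, rcond=None)   # ordinary least squares
# beta[1] and beta[2] estimate the contributions of vocabulary size
# and lexical access time to the speech recognition score
```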
Affiliation(s)
- Rebecca Carroll
- Cluster of Excellence 'Hearing4all', University of Oldenburg, Oldenburg, Germany; Institute of Dutch Studies, University of Oldenburg, Oldenburg, Germany
- Anna Warzybok
- Cluster of Excellence 'Hearing4all', University of Oldenburg, Oldenburg, Germany; Medizinische Physik, University of Oldenburg, Oldenburg, Germany
- Birger Kollmeier
- Cluster of Excellence 'Hearing4all', University of Oldenburg, Oldenburg, Germany; Medizinische Physik, University of Oldenburg, Oldenburg, Germany
- Esther Ruigendijk
- Cluster of Excellence 'Hearing4all', University of Oldenburg, Oldenburg, Germany; Institute of Dutch Studies, University of Oldenburg, Oldenburg, Germany
48
Finke M, Büchner A, Ruigendijk E, Meyer M, Sandmann P. On the relationship between auditory cognition and speech intelligibility in cochlear implant users: An ERP study. Neuropsychologia 2016; 87:169-181. [DOI: 10.1016/j.neuropsychologia.2016.05.019]
49
Bent T, Atagi E. Children's perception of nonnative-accented sentences in noise and quiet. J Acoust Soc Am 2015; 138:3985-3993. [PMID: 26723352] [DOI: 10.1121/1.4938228]
Abstract
Adult listeners' word recognition is remarkably robust under a variety of adverse listening conditions. However, the combination of two simultaneous listening challenges (e.g., nonnative speaker in noise) can cause significant word recognition decrements. This study investigated how talker-related (native vs nonnative) and environment-related (noise vs quiet) adverse conditions impact children's and adults' word recognition. Five- and six-year-old children and adults identified sentences produced by one native and one nonnative talker in both quiet and noise-added conditions. Children's word recognition declined significantly more than adults' in conditions with one source of listening adversity (i.e., native speaker in noise or nonnative speaker in quiet). Children's performance when the listening challenges were combined (nonnative talker in noise) was particularly poor. Immature speech-in-noise perception may be a result of children's difficulties with signal segregation or selective attention. In contrast, the explanation for children's difficulty in the mapping of unfamiliar pronunciations to known words in quiet listening conditions must rest on children's limited cognitive or linguistic skills and experiences. These results demonstrate that children's word recognition abilities under both environmental- and talker-related adversity are still developing in the early school-age years.
Affiliation(s)
- Tessa Bent
- Department of Speech and Hearing Sciences, Indiana University, Bloomington, Indiana 47405, USA
- Eriko Atagi
- Volen National Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02453, USA
50
Adank P, Nuttall HE, Banks B, Kennedy-Higgins D. Neural bases of accented speech perception. Front Hum Neurosci 2015; 9:558. [PMID: 26500526] [PMCID: PMC4594029] [DOI: 10.3389/fnhum.2015.00558]
Abstract
The recognition of unfamiliar regional and foreign accents represents a challenging task for the speech perception system (Floccia et al., 2006; Adank et al., 2009). Despite the frequency with which we encounter such accents, the neural mechanisms supporting successful perception of accented speech are poorly understood. Nonetheless, candidate neural substrates involved in processing speech in challenging listening conditions, including accented speech, are beginning to be identified. This review will outline neural bases associated with perception of accented speech in the light of current models of speech perception, and compare these data to brain areas associated with processing other speech distortions. We will subsequently evaluate competing models of speech processing with regards to neural processing of accented speech. See Cristia et al. (2012) for an in-depth overview of behavioral aspects of accent processing.
Affiliation(s)
- Patti Adank
- Division of Psychology and Language Sciences, Department of Speech, Hearing, and Phonetic Sciences, University College London, London, UK; School of Psychological Sciences, University of Manchester, Manchester, UK
- Helen E Nuttall
- Division of Psychology and Language Sciences, Department of Speech, Hearing, and Phonetic Sciences, University College London, London, UK
- Briony Banks
- School of Psychological Sciences, University of Manchester, Manchester, UK
- Daniel Kennedy-Higgins
- Division of Psychology and Language Sciences, Department of Speech, Hearing, and Phonetic Sciences, University College London, London, UK