1. Lansford KL, Hirsch ME, Barrett TS, Borrie SA. Cognitive Predictors of Perception and Adaptation to Dysarthric Speech in Older Adults. J Speech Lang Hear Res. 2025:1-18. [PMID: 39772701; DOI: 10.1044/2024_jslhr-24-00345]
Abstract
PURPOSE In effortful listening conditions, speech perception and adaptation abilities are constrained by aging and often linked to age-related hearing loss and cognitive decline. Given that older adults are frequent communication partners of individuals with dysarthria, the current study examines cognitive-linguistic and hearing predictors of dysarthric speech perception and adaptation in older listeners. METHOD Fifty-eight older adult listeners (aged 55-80 years) completed a battery of hearing and cognitive tasks administered via the National Institutes of Health Toolbox. Participants also completed a three-phase familiarization task (pretest, training, and posttest) with one of two speakers with dysarthria. Elastic net regression models of initial intelligibility (pretest) and intelligibility improvement (posttest) were constructed for each speaker with dysarthria to identify important cognitive and hearing predictors. RESULTS Overall, the regression models indicated that intelligibility outcomes were optimized for older listeners with better words-in-noise thresholds, vocabulary knowledge, working memory capacity, and cognitive flexibility. Despite some convergence across models, unique constellations of cognitive-linguistic and hearing parameters and their two-way interactions predicted speech perception and adaptation outcomes for the two speakers with dysarthria, who varied in terms of their severity and perceptual characteristics. CONCLUSION Here, we add to an extensive body of work in related disciplines by demonstrating that age-related declines in speech perception and adaptation to dysarthric speech can be traced back to specific hearing and cognitive-linguistic factors.
Affiliation(s)
- Kaitlin L Lansford, Department of Communication Science and Disorders, Florida State University, Tallahassee
- Micah E Hirsch, Department of Communication Science and Disorders, Florida State University, Tallahassee
- Tyson S Barrett, Department of Communicative Disorders and Deaf Education, Utah State University, Logan
- Stephanie A Borrie, Department of Communicative Disorders and Deaf Education, Utah State University, Logan
2. Chui YT, Qin Z. Distributional Learning and Overnight Consolidation of Nonnative Tonal Contrasts by Tonal Language Speakers. J Speech Lang Hear Res. 2024;67:2038-2052. [PMID: 38861399; DOI: 10.1044/2024_jslhr-23-00711]
Abstract
PURPOSE Previous studies have reported the success of distributional learning for adult speakers across segmental and suprasegmental categories immediately after training. On the other hand, second language (L2) perception models posit that the ease with which learners perceive a nonnative speech contrast depends on the perceptual mapping between the contrast and learners' first language (L1) categories. This study examined whether a difference in perceptual mapping patterns for different L2-Mandarin tonal contrasts might result in a difference in distributional learning effectiveness for tonal speakers and whether an interval of sleep enhanced the knowledge through consolidation. METHOD Following a pretest-training-posttest design, 66 L1-Cantonese participants with fewer than 9 years of Mandarin training were assigned to either the bimodal or unimodal distribution conditions. The participants of each group were asked to discriminate Mandarin level-falling (T1-T4) and level-rising (T1-T2) tone pairs on novel syllables in a within-subject design. All participants were trained in the evening, tested after training, and returned after 12 hr for overnight consolidation assessment. RESULTS A significant distributional learning effect was observed for Mandarin T1-T4, but only after sleep. No significant distributional learning effect was observed for Mandarin T1-T2, either after training or after sleep. CONCLUSIONS The findings may imply that distributional learning is contingent on perceptual mapping patterns of the target contrasts and that sleep may play a role in the consolidation of knowledge in an implicit statistical learning paradigm. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.25970008.
Affiliation(s)
- Yin-To Chui, Division of Humanities, The Hong Kong University of Science and Technology, China
- Zhen Qin, Division of Humanities, The Hong Kong University of Science and Technology, China
3. Drown L, Giovannone N, Pisoni DB, Theodore RM. Validation of two measures for assessing English vocabulary knowledge on web-based testing platforms: long-form assessments. Linguist Vanguard. 2023;9:113-124. [PMID: 38173913; PMCID: PMC10758597; DOI: 10.1515/lingvan-2022-0115]
Abstract
The goal of the current work was to develop and validate web-based measures for assessing English vocabulary knowledge. Two existing paper-and-pencil assessments, the Vocabulary Size Test (VST) and the Word Familiarity Test (WordFAM), were modified for web-based administration. In Experiment 1, participants (n = 100) completed the web-based VST. In Experiment 2, participants (n = 100) completed the web-based WordFAM. Results from these experiments confirmed that both tasks (1) could be completed online, (2) showed expected sensitivity to English frequency patterns, (3) exhibited high internal consistency, and (4) showed an expected range of item discrimination scores, with low frequency items exhibiting higher item discrimination scores compared to high frequency items. This work provides open-source English vocabulary knowledge assessments with normative data that researchers can use to foster high quality data collection in web-based environments.
Affiliation(s)
- Lee Drown, Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, CT, USA
- Nikole Giovannone, Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, CT, USA
- David B. Pisoni, Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Rachel M. Theodore, Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, CT, USA
4. Drown L, Giovannone N, Pisoni DB, Theodore RM. Validation of two measures for assessing English vocabulary knowledge on web-based testing platforms: brief assessments. Linguist Vanguard. 2023;9:99-111. [PMID: 38173912; PMCID: PMC10758598; DOI: 10.1515/lingvan-2022-0116]
Abstract
Two measures for assessing English vocabulary knowledge, the Vocabulary Size Test (VST) and the Word Familiarity Test (WordFAM), were recently validated for web-based administration. An analysis of the psychometric properties of these assessments revealed high internal consistency, suggesting that stable assessment could be achieved with fewer test items. Because researchers may use these assessments in conjunction with other experimental tasks, the utility may be enhanced if they are shorter in duration. To this end, two "brief" versions of the VST and the WordFAM were developed and submitted to validation testing. Each version consisted of approximately half of the items from the full assessment, with novel items across each brief version. Participants (n = 85) completed one brief version of both the VST and the WordFAM at session one, followed by the other brief version of each assessment at session two. The results showed high test-retest reliability for both the VST (r = 0.68) and the WordFAM (r = 0.82). The assessments also showed moderate convergent validity (ranging from r = 0.38 to 0.59), indicative of assessment validity. This work provides open-source English vocabulary knowledge assessments with normative data that researchers can use to foster high quality data collection in web-based environments.
Affiliation(s)
- Lee Drown, Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, CT, USA
- Nikole Giovannone, Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, CT, USA
- David B. Pisoni, Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Rachel M. Theodore, Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, CT, USA
5. Colby SE, McMurray B. Efficiency of spoken word recognition slows across the adult lifespan. Cognition. 2023;240:105588. [PMID: 37586157; PMCID: PMC10530619; DOI: 10.1016/j.cognition.2023.105588]
Abstract
Spoken word recognition is a critical hub during language processing, linking hearing and perception to meaning and syntax. Words must be recognized quickly and efficiently as speech unfolds to be successfully integrated into conversation. This makes word recognition a computationally challenging process even for young, normal hearing adults. Older adults often experience declines in hearing and cognition, which could be linked by age-related declines in the cognitive processes specific to word recognition. However, it is unclear whether changes in word recognition across the lifespan can be accounted for by hearing or domain-general cognition. Participants (N = 107) responded to spoken words in a Visual World Paradigm task while their eyes were tracked to assess the real-time dynamics of word recognition. We examined several indices of word recognition from early adolescence through older adulthood (ages 11-78). The timing and proportion of eye fixations to target and competitor images reveals that spoken word recognition became more efficient through age 25 and began to slow in middle age, accompanied by declines in the ability to resolve competition (e.g., suppressing sandwich to recognize sandal). There was a unique effect of age even after accounting for differences in inhibitory control, processing speed, and hearing thresholds. This suggests a limited age range where listeners are peak performers.
Affiliation(s)
- Sarah E Colby, Department of Psychological and Brain Sciences, University of Iowa, Psychological and Brain Sciences Building, Iowa City, IA, 52242, USA; Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Bob McMurray, Department of Psychological and Brain Sciences, University of Iowa, Psychological and Brain Sciences Building, Iowa City, IA, 52242, USA; Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA; Department of Communication Sciences and Disorders, University of Iowa, Wendell Johnson Speech and Hearing Center, Iowa City, IA, 52242, USA; Department of Linguistics, University of Iowa, Phillips Hall, Iowa City, IA 52242, USA
6. Lansford KL, Barrett TS, Borrie SA. Cognitive Predictors of Perception and Adaptation to Dysarthric Speech in Young Adult Listeners. J Speech Lang Hear Res. 2023;66:30-47. [PMID: 36480697; PMCID: PMC10023189; DOI: 10.1044/2022_jslhr-22-00391]
Abstract
PURPOSE Although recruitment of cognitive-linguistic resources to support dysarthric speech perception and adaptation is presumed by theoretical accounts of effortful listening and supported by cross-disciplinary empirical findings, prospective relationships have received limited attention in the disordered speech literature. This study aimed to examine the predictive relationships between cognitive-linguistic parameters and intelligibility outcomes associated with familiarization with dysarthric speech in young adult listeners. METHOD A cohort of 156 listener participants between the ages of 18 and 50 years completed a three-phase perceptual training protocol (pretest, training, and posttest) with one of three speakers with dysarthria. Additionally, listeners completed the National Institutes of Health Toolbox Cognition Battery to obtain measures of the following cognitive-linguistic constructs: working memory, inhibitory control of attention, cognitive flexibility, processing speed, and vocabulary knowledge. RESULTS Elastic net regression models revealed that select cognitive-linguistic measures and their two-way interactions predicted both initial intelligibility and intelligibility improvement of dysarthric speech. While some consistency across models was shown, unique constellations of select cognitive factors and their interactions predicted initial intelligibility and intelligibility improvement of the three different speakers with dysarthria. CONCLUSIONS Current findings extend empirical support for theoretical models of speech perception in adverse listening conditions to dysarthric speech signals. Although predictive relationships were complex, vocabulary knowledge, working memory, and cognitive flexibility often emerged as important variables across the models.
Affiliation(s)
- Kaitlin L. Lansford, School of Communication Science & Disorders, Florida State University, Tallahassee
- Stephanie A. Borrie, Department of Communicative Disorders and Deaf Education, Utah State University, Logan
7. Bieber RE, Tinnemore AR, Yeni-Komshian G, Gordon-Salant S. Younger and older adults show non-linear, stimulus-dependent performance during early stages of auditory training for non-native English. J Acoust Soc Am. 2021;149:4348. [PMID: 34241442; PMCID: PMC8214469; DOI: 10.1121/10.0005279]
Abstract
Older adults often report difficulty understanding speech produced by non-native talkers. These listeners can achieve rapid adaptation to non-native speech, but few studies have assessed auditory training protocols to improve non-native speech recognition in older adults. In this study, a word-level training paradigm was employed, targeting improved recognition of Spanish-accented English. Younger and older adults were trained on Spanish-accented monosyllabic word pairs containing four phonemic contrasts (initial s/z, initial f/v, final b/p, final d/t) produced in English by multiple male native Spanish speakers. Listeners completed pre-testing, training, and post-testing over two sessions. Statistical methods, such as growth curve modeling and generalized additive mixed models, were employed to describe the patterns of rapid adaptation and how they varied between listener groups and phonemic contrasts. While the training protocol failed to elicit post-test improvements for recognition of Spanish-accented speech, examination of listeners' performance during the pre-testing period showed patterns of rapid adaptation that differed, depending on the nature of the phonemes to be learned and the listener group. Normal-hearing younger and older adults showed a faster rate of adaptation for non-native stimuli that were more nativelike in their productions, while older adults with hearing impairment did not realize this benefit.
Affiliation(s)
- Rebecca E Bieber, Department of Hearing and Speech Sciences, University of Maryland College Park, College Park, Maryland 20742, USA
- Anna R Tinnemore, Department of Hearing and Speech Sciences, University of Maryland College Park, College Park, Maryland 20742, USA
- Grace Yeni-Komshian, Department of Hearing and Speech Sciences, University of Maryland College Park, College Park, Maryland 20742, USA
- Sandra Gordon-Salant, Department of Hearing and Speech Sciences, University of Maryland College Park, College Park, Maryland 20742, USA
8. Luthra S, Magnuson JS, Myers EB. Boosting lexical support does not enhance lexically guided perceptual learning. J Exp Psychol Learn Mem Cogn. 2021;47:685-704. [PMID: 33983786; PMCID: PMC8287971; DOI: 10.1037/xlm0000945]
Abstract
A challenge for listeners is to learn the appropriate mapping between acoustics and phonetic categories for an individual talker. Lexically guided perceptual learning (LGPL) studies have shown that listeners can leverage lexical knowledge to guide this process. For instance, listeners learn to interpret ambiguous /s/-/∫/ blends as /s/ if they have previously encountered them in /s/-biased contexts like epi?ode. Here, we examined whether the degree of preceding lexical support might modulate the extent of perceptual learning. In Experiment 1, we first demonstrated that perceptual learning could be obtained in a modified LGPL paradigm where listeners were first biased to interpret ambiguous tokens as one phoneme (e.g., /s/) and then later as another (e.g., /∫/). In subsequent experiments, we tested whether the extent of learning differed depending on whether targets encountered predictive contexts or neutral contexts prior to the auditory target (e.g., epi?ode). Experiment 2 used auditory sentence contexts (e.g., "I love The Walking Dead and eagerly await every new . . ."), whereas Experiment 3 used written sentence contexts. In Experiment 4, participants did not receive sentence contexts but rather saw the written form of the target word (episode) or filler text (########) prior to hearing the critical auditory token. While we consistently observed effects of context on in-the-moment processing of critical words, the size of the learning effect was not modulated by the type of context. We hypothesize that boosting lexical support through preceding context may not strongly influence perceptual learning when ambiguous speech sounds can be identified solely from lexical information.
9. Giovannone N, Theodore RM. Individual Differences in Lexical Contributions to Speech Perception. J Speech Lang Hear Res. 2021;64:707-724. [PMID: 33606960; PMCID: PMC8608212; DOI: 10.1044/2020_jslhr-20-00283]
Abstract
Purpose The extant literature suggests that individual differences in speech perception can be linked to broad receptive language phenotype. For example, a recent study found that individuals with a smaller receptive vocabulary showed diminished lexically guided perceptual learning compared to individuals with a larger receptive vocabulary. Here, we examined (a) whether such individual differences stem from variation in reliance on lexical information or variation in perceptual learning itself and (b) whether a relationship exists between lexical recruitment and lexically guided perceptual learning more broadly, as predicted by current models of lexically guided perceptual learning. Method In Experiment 1, adult participants (n = 70) completed measures of receptive and expressive language ability, lexical recruitment, and lexically guided perceptual learning. In Experiment 2, adult participants (n = 120) completed the same lexical recruitment and lexically guided perceptual learning tasks to provide a high-powered replication of the primary findings from Experiment 1. Results In Experiment 1, individuals with weaker receptive language ability showed increased lexical recruitment relative to individuals with higher receptive language ability; however, receptive language ability did not predict the magnitude of lexically guided perceptual learning. Moreover, the results of both experiments converged to show no evidence indicating a relationship between lexical recruitment and lexically guided perceptual learning. 
Conclusion The current findings suggest that (a) individuals with weaker language ability demonstrate increased reliance on lexical information for speech perception compared to those with stronger receptive language ability; (b) individuals with weaker language ability maintain an intact perceptual learning mechanism; and, (c) to the degree that the measures used here accurately capture individual differences in lexical recruitment and lexically guided perceptual learning, there is no graded relationship between these two constructs.
Affiliation(s)
- Nikole Giovannone, Department of Speech, Language and Hearing Sciences, University of Connecticut, Storrs; Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs
- Rachel M. Theodore, Department of Speech, Language and Hearing Sciences, University of Connecticut, Storrs; Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs
10. Bieber RE, Gordon-Salant S. Improving older adults' understanding of challenging speech: Auditory training, rapid adaptation and perceptual learning. Hear Res. 2021;402:108054. [PMID: 32826108; PMCID: PMC7880302; DOI: 10.1016/j.heares.2020.108054]
Abstract
The literature surrounding auditory perceptual learning and auditory training for challenging speech signals in older adult listeners is highly varied, in terms of both study methodology and reported outcomes. In this review, we discuss some of the pertinent features of listener, stimulus, and training protocol. Literature regarding the elicitation of auditory perceptual learning for time-compressed speech, non-native speech, and noise-vocoded speech is reviewed, as are auditory training protocols designed to improve speech-in-noise recognition. The literature is synthesized to establish some over-arching findings for the aging population, including an intact capacity for auditory perceptual learning, but a limited transfer of learning to untrained stimuli.
Affiliation(s)
- Rebecca E Bieber, Department of Hearing and Speech Sciences, University of Maryland, 0100 LeFrak Hall, 7251 Preinkert Drive, College Park, MD 20742, United States
- Sandra Gordon-Salant, Department of Hearing and Speech Sciences, University of Maryland, 0100 LeFrak Hall, 7251 Preinkert Drive, College Park, MD 20742, United States
11. Rotman T, Lavie L, Banai K. Rapid Perceptual Learning: A Potential Source of Individual Differences in Speech Perception Under Adverse Conditions? Trends Hear. 2020;24:2331216520930541. [PMID: 32552477; PMCID: PMC7303778; DOI: 10.1177/2331216520930541]
Abstract
Challenging listening situations (e.g., when speech is rapid or noisy) result in substantial individual differences in speech perception. We propose that rapid auditory perceptual learning is one of the factors contributing to those individual differences. To explore this proposal, we assessed rapid perceptual learning of time-compressed speech in young adults with normal hearing and in older adults with age-related hearing loss. We also assessed the contribution of this learning as well as that of hearing and cognition (vocabulary, working memory, and selective attention) to the recognition of natural-fast speech (NFS; both groups) and speech in noise (younger adults). In young adults, rapid learning and vocabulary were significant predictors of NFS and speech in noise recognition. In older adults, hearing thresholds, vocabulary, and rapid learning were significant predictors of NFS recognition. In both groups, models that included learning fitted the speech data better than models that did not include learning. Therefore, under adverse conditions, rapid learning may be one of the skills listeners could employ to support speech recognition.
Affiliation(s)
- Tali Rotman, Department of Communication Sciences and Disorders, University of Haifa
- Limor Lavie, Department of Communication Sciences and Disorders, University of Haifa
- Karen Banai, Department of Communication Sciences and Disorders, University of Haifa
12. Theodore RM, Monto NR, Graham S. Individual Differences in Distributional Learning for Speech: What's Ideal for Ideal Observers? J Speech Lang Hear Res. 2020;63:1-13. [PMID: 31841364; PMCID: PMC7213488; DOI: 10.1044/2019_jslhr-s-19-0152]
Abstract
Purpose Speech perception is facilitated by listeners' ability to dynamically modify the mapping to speech sounds given systematic variation in speech input. For example, the degree to which listeners show categorical perception of speech input changes as a function of distributional variability in the input, with perception becoming less categorical as the input becomes more variable. Here, we test the hypothesis that higher level receptive language ability is linked to the ability to adapt to low-level distributional cues in speech input. Method Listeners (n = 58) completed a distributional learning task consisting of 2 blocks of phonetic categorization for words beginning with /g/ and /k/. In 1 block, the distributions of voice onset time values specifying /g/ and /k/ had narrow variances (i.e., minimal variability). In the other block, the distributions of voice onset times specifying /g/ and /k/ had wider variances (i.e., increased variability). In addition, all listeners completed an assessment battery for receptive language, nonverbal intelligence, and reading fluency. Results As predicted by an ideal observer computational framework, the participants in aggregate showed identification responses that were more categorical for consistent compared to inconsistent input, indicative of distributional learning. However, the magnitude of learning across participants showed wide individual variability, which was predicted by receptive language ability but not by nonverbal intelligence or by reading fluency. Conclusion The results suggest that individual differences in distributional learning for speech are linked, at least in part, to receptive language ability, reflecting a decreased ability among those with weaker receptive language to capitalize on consistent input distributions.
Affiliation(s)
- Rachel M. Theodore, Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs; Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs
- Nicholas R. Monto, Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs; Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs
- Stephen Graham, Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs; Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs
13. Colby S, Shiller DM, Clayards M, Baum S. Different Responses to Altered Auditory Feedback in Younger and Older Adults Reflect Differences in Lexical Bias. J Speech Lang Hear Res. 2019;62:1144-1151. [PMID: 31026194; DOI: 10.1044/2018_jslhr-h-ascc7-18-0124]
Abstract
Purpose Previous work has found that both young and older adults exhibit a lexical bias in categorizing speech stimuli. In young adults, this has been argued to be an automatic influence of the lexicon on perceptual category boundaries. Older adults exhibit more top-down biases than younger adults, including an increased lexical bias. We investigated the nature of the increased lexical bias using a sensorimotor adaptation task designed to evaluate whether automatic processes drive this bias in older adults. Method A group of older adults (n = 27) and younger adults (n = 35) participated in an altered auditory feedback production task. Participants produced target words and nonwords under altered feedback that affected the 1st formant of the vowel. There were 2 feedback conditions that affected the lexical status of the target, such that target words were shifted to sound more like nonwords (e.g., less-liss) and target nonwords to sound more like words (e.g., kess-kiss). Results A mixed-effects linear regression was used to investigate the magnitude of compensation to altered auditory feedback between age groups and lexical conditions. Over the course of the experiment, older adults compensated (by shifting their production of 1st formant) more to altered auditory feedback when producing words that were shifted toward nonwords (less-liss) than when producing nonwords that were shifted toward words (kess-kiss). This is in contrast to younger adults who compensated more to nonwords that were shifted toward words compared to words that were shifted toward nonwords. Conclusion We found no evidence that the increased lexical bias previously observed in older adults is driven by a greater sensitivity to top-down lexical influence on perceptual category boundaries. We suggest the increased lexical bias in older adults is driven by postperceptual processes that arise as a result of age-related cognitive and sensory changes.
Affiliation(s)
- Sarah Colby, School of Communication Sciences & Disorders, McGill University, Montreal, Québec, Canada; Centre for Research on Brain, Language, and Music, Montreal, Québec, Canada
- Douglas M Shiller, Centre for Research on Brain, Language, and Music, Montreal, Québec, Canada; École d'orthophonie et d'audiologie, Université de Montréal, Québec, Canada; CHU Sainte-Justine Hospital Research Centre, Montreal, Québec, Canada
- Meghan Clayards, School of Communication Sciences & Disorders, McGill University, Montreal, Québec, Canada; Centre for Research on Brain, Language, and Music, Montreal, Québec, Canada; Department of Linguistics, McGill University, Montreal, Québec, Canada
- Shari Baum, School of Communication Sciences & Disorders, McGill University, Montreal, Québec, Canada; Centre for Research on Brain, Language, and Music, Montreal, Québec, Canada