1. Whitley A, Naylor G, Hadley LV. Used to Be a Dime, Now It's a Dollar: Revised Speech Perception in Noise Key Word Predictability Revisited 40 Years On. Journal of Speech, Language, and Hearing Research. 2024;67:1229-1242. PMID: 38563688; PMCID: PMC11005954; DOI: 10.1044/2024_jslhr-23-00615.
Abstract
PURPOSE Almost 40 years after its development, in this article, we reexamine the relevance and validity of the ubiquitously used Revised Speech Perception in Noise (R-SPiN) sentence corpus. The R-SPiN corpus includes "high-context" and "low-context" sentences and has been widely used in the field of hearing research to examine the benefit derived from semantic context across English-speaking listeners, but research investigating age differences has yielded somewhat inconsistent findings. We assess the appropriateness of the corpus for use today in different English-language cultures (i.e., British and American) as well as for older and younger adults. METHOD Two hundred forty participants, including older (60-80 years) and younger (19-31 years) adult groups in the United Kingdom and United States, completed a cloze task consisting of R-SPiN sentences with the final word removed. Cloze, as a measure of predictability, and entropy, as a measure of response uncertainty, were compared between culture and age groups. RESULTS Most critically, of the 200 "high-context" stimuli, only around half were assessed as highly predictable for older adults (United Kingdom: 109; United States: 107), and fewer still for younger adults (United Kingdom: 75; United States: 81). We also found that dominant responses to these "high-context" stimuli varied between cultures, with U.S. responses being more likely to match the original R-SPiN target. CONCLUSIONS Our findings highlight the issue of incomplete transferability of corpus items across English-language cultures as well as diminished equivalency for older and younger adults. By identifying relevant items for each population, this work could facilitate the interpretation of inconsistent findings in the literature, particularly relating to age effects.
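The two item measures named in the abstract can be illustrated with a short sketch. This is an assumption about the computation (cloze probability as the proportion of participants producing the target word, and Shannon entropy over the distribution of distinct completions); the paper's exact scoring rules, and the function names below, are illustrative rather than taken from the study.

```python
import math
from collections import Counter

def cloze_probability(responses, target):
    """Proportion of participants whose completion matches the target word."""
    counts = Counter(r.lower() for r in responses)
    return counts[target.lower()] / len(responses)

def response_entropy(responses):
    """Shannon entropy (bits) of the response distribution:
    0 when every participant gives the same completion,
    higher as completions diverge (greater response uncertainty)."""
    counts = Counter(r.lower() for r in responses)
    n = len(responses)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical completions for "Used to be a dime, now it's a ___"
responses = ["dollar", "dollar", "dollar", "quarter"]
print(cloze_probability(responses, "dollar"))  # 0.75
print(response_entropy(responses))             # ~0.811 bits
```

Under this reading, an item counts as highly predictable when cloze is high and entropy is low; the two can dissociate when a non-target completion dominates.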
Affiliation(s)
- Alexina Whitley: Hearing Sciences – Scottish Section, University of Nottingham, United Kingdom
- Graham Naylor: Hearing Sciences – Scottish Section, University of Nottingham, United Kingdom
- Lauren V. Hadley: Hearing Sciences – Scottish Section, University of Nottingham, United Kingdom
2. van Schoonhoven J, Rhebergen KS, Dreschler WA. A context-based approach to predict intelligibility of meaningful and nonsense words in interrupted noise: Model evaluation. The Journal of the Acoustical Society of America. 2023;154:2476-2488. PMID: 37862572; DOI: 10.1121/10.0021302.
Abstract
The context-based Extended Speech Transmission Index (cESTI) by Van Schoonhoven et al. (2022) was successfully used to predict the intelligibility of meaningful, monosyllabic words in interrupted noise. However, it is not clear how the model behaves when using different degrees of context. In the current paper, intelligibility of meaningful and nonsense CVC words in stationary and interrupted noise was measured in fourteen normally hearing adults. Intelligibility of nonsense words in interrupted noise at -18 dB SNR was relatively poor, possibly because listeners did not profit from coarticulatory cues as they did in stationary noise. With 75% of the total variance explained, the cESTI model performed better than the original ESTI model (R2 = 27%), especially due to better predictions at low interruption rates. However, predictions for meaningful word scores were relatively poor (R2 = 38%), mainly due to remaining inaccuracies at interruption rates below 4 Hz and a large effect of forward masking. Adjusting parameters of the forward masking function improved the accuracy of the model to a total explained variance of 83%, while the model's predictive power for previously published cESTI data remained similar.
Affiliation(s)
- Jelmer van Schoonhoven: Department of Clinical and Experimental Audiology, Amsterdam University Medical Center, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands
- Koenraad S Rhebergen: Department of Otorhinolaryngology and Head & Neck Surgery, Rudolf Magnus Institute of Neuroscience, University Medical Center Utrecht, The Netherlands
- Wouter A Dreschler: Department of Clinical and Experimental Audiology, Amsterdam University Medical Center, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands
3. Gianakas SP, Fitzgerald MB, Winn MB. Identifying Listeners Whose Speech Intelligibility Depends on a Quiet Extra Moment After a Sentence. Journal of Speech, Language, and Hearing Research. 2022;65:4852-4865. PMID: 36472938; PMCID: PMC9934912; DOI: 10.1044/2022_jslhr-21-00622.
Abstract
PURPOSE An extra moment after a sentence is spoken may be important for listeners with hearing loss to mentally repair misperceptions during listening. The current audiologic test battery cannot distinguish between a listener who repaired a misperception versus a listener who heard the speech accurately with no need for repair. This study aims to develop a behavioral method to identify individuals who are at risk for relying on a quiet moment after a sentence. METHOD Forty-three individuals with hearing loss (32 cochlear implant users, 11 hearing aid users) heard sentences that were followed by either 2 s of silence or 2 s of babble noise. Both high- and low-context sentences were used in the task. RESULTS Some individuals showed notable benefit in accuracy scores (particularly for high-context sentences) when given an extra moment of silent time following the sentence. This benefit was highly variable across individuals and sometimes absent altogether. However, the group-level patterns of results were mainly explained by the use of context and successful perception of the words preceding sentence-final words. CONCLUSIONS These results suggest that some but not all individuals improve their speech recognition score by relying on a quiet moment after a sentence, and that this fragility of speech recognition cannot be assessed using one isolated utterance at a time. Reliance on a quiet moment to repair perceptions would potentially impede the perception of an upcoming utterance, making continuous communication in real-world scenarios difficult, especially for individuals with hearing loss. The methods used in this study, along with some simple modifications if necessary, could potentially identify patients with hearing loss who retroactively repair mistakes by using clinically feasible methods that can ultimately lead to better patient-centered hearing health care. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21644801.
4. Liu M, Chen Y, Schiller NO. Context Matters for Tone and Intonation Processing in Mandarin. Language and Speech. 2022;65:52-72. PMID: 33482696; DOI: 10.1177/0023830920986174.
Abstract
In tonal languages such as Mandarin, both lexical tone and sentence intonation are primarily signaled by F0. Their F0 encodings are sometimes in conflict and sometimes in congruency. The present study investigated how tone and intonation, with F0 encodings in conflict or in congruency, are processed and how semantic context may affect their processing. To this end, tone and intonation identification experiments were conducted in both semantically neutral and constraining contexts. Results showed that the overall performance of tone identification was better than that of intonation. Specifically, tone identification was seldom affected by intonation information irrespective of semantic contexts. However, intonation identification, particularly question intonation, was susceptible to the final lexical tone identity and affected by the semantic context. In the semantically neutral context, questions ending with a rising tone and a falling tone were equally difficult to identify. In the semantically constraining context, questions ending with a falling tone were much better identified than those ending with a rising tone. This perceptual asymmetry suggests that top-down information provided by the semantically constraining context can play a facilitating role for listeners to disentangle intonational information from tonal information, but mainly in sentences with the lexical falling tone in the final position.
Affiliation(s)
- Min Liu: College of Chinese Language and Culture & Institute of Applied Linguistics, Jinan University, China
- Niels O Schiller: Leiden University Centre for Linguistics & Leiden Institute for Brain and Cognition, Leiden University, the Netherlands
5. Winn MB, Teece KH. Slower Speaking Rate Reduces Listening Effort Among Listeners With Cochlear Implants. Ear and Hearing. 2021;42:584-595. PMID: 33002968; PMCID: PMC8005496; DOI: 10.1097/aud.0000000000000958.
Abstract
OBJECTIVES Slowed speaking rate was examined for its effects on speech intelligibility, its interaction with the benefit of contextual cues, and the impact of these factors on listening effort in adults with cochlear implants. DESIGN Participants (n = 21 cochlear implant users) heard high- and low-context sentences that were played at the original speaking rate, as well as a slowed (1.4× duration) speaking rate, using uniform pitch-synchronous time warping. In addition to intelligibility measures, changes in pupil dilation were measured as a time-varying index of processing load or listening effort. Slope of pupil size recovery to baseline after the sentence was used as an index of resolution of perceptual ambiguity. RESULTS Speech intelligibility was better for high-context compared to low-context sentences and slightly better for slower compared to original-rate speech. Speech rate did not affect magnitude and latency of peak pupil dilation relative to sentence offset. However, baseline pupil size recovered more substantially for slower-rate sentences, suggesting easier processing in the moment after the sentence was over. The effect of slowing speech rate was comparable to changing a sentence from low context to high context. The effect of context on pupil dilation was not observed until after the sentence was over, and one of two analyses suggested that context had greater beneficial effects on listening effort when the speaking rate was slower. These patterns held even at perfect sentence intelligibility, suggesting that correct speech repetition does not guarantee efficient or effortless processing. With slower speaking rates, there was less variability in pupil dilation slopes following the sentence, implying mitigation of some of the difficulties shown by individual listeners who would otherwise demonstrate prolonged effort after a sentence is heard.
CONCLUSIONS Slowed speaking rate provides release from listening effort when hearing an utterance, particularly relieving effort that would have lingered after a sentence is over. Context arguably provides even more release from listening effort when speaking rate is slower. The pattern of prolonged pupil dilation for faster speech is consistent with increased need to mentally correct errors, although that exact interpretation cannot be verified with intelligibility data alone or with pupil data alone. A pattern of needing to dwell on a sentence to disambiguate misperceptions likely contributes to difficulty in running conversation where there are few opportunities to pause and resolve recently heard utterances.
Affiliation(s)
- Matthew B. Winn: Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, USA
- Katherine H. Teece: Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, USA
6. Jaekel BN, Weinstein S, Newman RS, Goupell MJ. Access to semantic cues does not lead to perceptual restoration of interrupted speech in cochlear-implant users. The Journal of the Acoustical Society of America. 2021;149:1488. PMID: 33765790; PMCID: PMC7935498; DOI: 10.1121/10.0003573.
Abstract
Cochlear-implant (CI) users experience less success in understanding speech in noisy, real-world listening environments than normal-hearing (NH) listeners. Perceptual restoration is one method NH listeners use to repair noise-interrupted speech. Whereas previous work has reported that CI users can use perceptual restoration in certain cases, they failed to do so under listening conditions in which NH listeners can successfully restore. Providing increased opportunities to use top-down linguistic knowledge is one possible method to increase perceptual restoration use in CI users. This work tested perceptual restoration abilities in 18 CI users and varied whether a semantic cue (presented visually) was available prior to the target sentence (presented auditorily). Results showed that whereas access to a semantic cue generally improved performance with interrupted speech, CI users failed to perceptually restore speech regardless of the semantic cue availability. The lack of restoration in this population directly contradicts previous work in this field and raises questions of whether restoration is possible in CI users. One reason for speech-in-noise understanding difficulty in CI users could be that they are unable to use tools like restoration to process noise-interrupted speech effectively.
Affiliation(s)
- Brittany N Jaekel: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Sarah Weinstein: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Rochelle S Newman: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Matthew J Goupell: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
7. Patro C, Mendel LL. Semantic influences on the perception of degraded speech by individuals with cochlear implants. The Journal of the Acoustical Society of America. 2020;147:1778. PMID: 32237796; DOI: 10.1121/10.0000934.
Abstract
This study investigated whether speech intelligibility in cochlear implant (CI) users is affected by semantic context. Three groups participated in two experiments: Two groups of listeners with normal hearing (NH) listened to either full spectrum speech or vocoded speech, and one CI group listened to full spectrum speech. Experiment 1 measured participants' sentence recognition as a function of target-to-masker ratio (four-talker babble masker), and experiment 2 measured perception of interrupted speech as a function of duty cycles (long/short uninterrupted speech). Listeners were presented with both semantically congruent and incongruent targets. Results from the two experiments suggested that NH listeners benefitted more from the semantic cues as the listening conditions became more challenging (lower signal-to-noise ratios and interrupted speech with longer silent intervals). However, the CI group received minimal benefit from context, and therefore performed poorly in such conditions. In contrast, in the less challenging conditions, CI users benefitted greatly from the semantic context, and NH listeners did not rely on such cues. The results also confirmed that such differential use of semantic cues appears to originate from the spectro-temporal degradations experienced by CI users, which could be a contributing factor for their poor performance in suboptimal environments.
Affiliation(s)
- Chhayakanta Patro: Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55414, USA
- Lisa Lucks Mendel: School of Communication Sciences and Disorders, University of Memphis, Memphis, Tennessee 38152, USA
8. Winn MB. Accommodation of gender-related phonetic differences by listeners with cochlear implants and in a variety of vocoder simulations. The Journal of the Acoustical Society of America. 2020;147:174. PMID: 32006986; PMCID: PMC7341679; DOI: 10.1121/10.0000566.
Abstract
Speech perception requires accommodation of a wide range of acoustic variability across talkers. A classic example is the perception of "sh" and "s" fricative sounds, which are categorized according to spectral details of the consonant itself, and also by the context of the voice producing it. Because women's and men's voices occupy different frequency ranges, a listener is required to make a corresponding adjustment of acoustic-phonetic category space for these phonemes when hearing different talkers. This pattern is commonplace in everyday speech communication, and yet might not be captured in accuracy scores for whole words, especially when word lists are spoken by a single talker. Phonetic accommodation for fricatives "s" and "sh" was measured in 20 cochlear implant (CI) users and in a variety of vocoder simulations, including those with noise carriers with and without peak picking, simulated spread of excitation, and pulsatile carriers. CI listeners showed strong phonetic accommodation as a group. Each vocoder produced phonetic accommodation except the 8-channel noise vocoder, despite its historically good match with CI users in word intelligibility. Phonetic accommodation is largely independent of linguistic factors and thus might offer information complementary to speech intelligibility tests which are partially affected by language processing.
Affiliation(s)
- Matthew B Winn: Department of Speech & Hearing Sciences, University of Minnesota, 164 Pillsbury Drive Southeast, Minneapolis, Minnesota 55455, USA
9. Gianakas SP, Winn MB. Lexical bias in word recognition by cochlear implant listeners. The Journal of the Acoustical Society of America. 2019;146:3373. PMID: 31795696; PMCID: PMC6948217; DOI: 10.1121/1.5132938.
Abstract
When hearing an ambiguous speech sound, listeners show a tendency to perceive it as a phoneme that would complete a real word rather than a nonword. For example, a sound that could be heard as either /b/ or /ɡ/ is perceived as /b/ when followed by "_ack" but perceived as /ɡ/ when followed by "_ap." Because the target sound is acoustically identical across both environments, this effect demonstrates the influence of top-down lexical processing in speech perception. Degradations in the auditory signal were hypothesized to render speech stimuli more ambiguous, and therefore promote increased lexical bias. Stimuli included three speech continua that varied by spectral cues of varying speeds, including stop formant transitions (fast), fricative spectra (medium), and vowel formants (slow). Stimuli were presented to listeners with cochlear implants (CIs), and also to listeners with normal hearing with clear spectral quality, or with varying amounts of spectral degradation using a noise vocoder. Results indicated an increased lexical bias effect with degraded speech and for CI listeners, for whom the effect size was related to segment duration. This method can probe an individual's reliance on top-down processing even at the level of simple lexical/phonetic perception.
Affiliation(s)
- Steven P Gianakas: Department of Speech-Language-Hearing Sciences, University of Minnesota, 164 Pillsbury Drive SE, Minneapolis, Minnesota 55455, USA
- Matthew B Winn: Department of Speech-Language-Hearing Sciences, University of Minnesota, 164 Pillsbury Drive SE, Minneapolis, Minnesota 55455, USA
10. Winn MB, Moore AN. Pupillometry Reveals That Context Benefit in Speech Perception Can Be Disrupted by Later-Occurring Sounds, Especially in Listeners With Cochlear Implants. Trends in Hearing. 2019;22:2331216518808962. PMID: 30375282; PMCID: PMC6207967; DOI: 10.1177/2331216518808962.
Abstract
Contextual cues can be used to improve speech recognition, especially for people with hearing impairment. However, previous work has suggested that when the auditory signal is degraded, context might be used more slowly than when the signal is clear. This potentially puts the hearing-impaired listener in a dilemma of continuing to process the last sentence when the next sentence has already begun. This study measured the time course of the benefit of context using pupillary responses to high- and low-context sentences that were followed by silence or various auditory distractors (babble noise, ignored digits, or attended digits). Participants were listeners with cochlear implants or normal hearing using a 12-channel noise vocoder. Context-related differences in pupil dilation were greater for normal hearing than for cochlear implant listeners, even when scaled for differences in pupil reactivity. The benefit of context was systematically reduced for both groups by the presence of the later-occurring sounds, including virtually complete negation when sentences were followed by another attended utterance. These results challenge how we interpret the benefit of context in experiments that present just one utterance at a time. If a listener uses context to “repair” part of a sentence, and later-occurring auditory stimuli interfere with that repair process, the benefit of context might not survive outside the idealized laboratory or clinical environment. Elevated listening effort in hearing-impaired listeners might therefore result not just from poor auditory encoding but also inefficient use of context and prolonged processing of misperceived utterances competing with perception of incoming speech.
Affiliation(s)
- Matthew B Winn: Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Ashley N Moore: Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
11. Patro C, Mendel LL. Gated Word Recognition by Postlingually Deafened Adults With Cochlear Implants: Influence of Semantic Context. Journal of Speech, Language, and Hearing Research. 2018;61:145-158. PMID: 29242894; DOI: 10.1044/2017_jslhr-h-17-0141.
Abstract
PURPOSE The main goal of this study was to investigate the minimum amount of sensory information required to recognize spoken words (isolation points [IPs]) in listeners with cochlear implants (CIs) and investigate facilitative effects of semantic contexts on the IPs. METHOD Listeners with CIs as well as those with normal hearing (NH) participated in the study. In Experiment 1, the CI users listened to unprocessed (full-spectrum) stimuli and individuals with NH listened to full-spectrum or vocoder processed speech. IPs were determined for both groups, who listened to gated consonant-nucleus-consonant words that were selected based on lexical properties. In Experiment 2, the role of semantic context on IPs was evaluated. Target stimuli were chosen from the Revised Speech Perception in Noise corpus based on the lexical properties of the final words. RESULTS The results indicated that spectrotemporal degradations impacted IPs for gated words adversely, and CI users as well as participants with NH listening to vocoded speech had longer IPs than participants with NH who listened to full-spectrum speech. In addition, there was a clear disadvantage due to lack of semantic context in all groups regardless of the spectral composition of the target speech (full spectrum or vocoded). Finally, we showed that CI users (and listeners with NH hearing vocoded speech) can overcome such word processing difficulties with the help of semantic context and perform as well as listeners with NH. CONCLUSION Word recognition occurs even before the entire word is heard because listeners with NH associate an acoustic input with its mental representation to understand speech. The results of this study provide insight into the role of spectral degradation on the processing of spoken words in isolation and the potential benefits of semantic context. These results may also explain why CI users rely substantially on semantic context.
Affiliation(s)
- Lisa Lucks Mendel: School of Communication Sciences & Disorders, University of Memphis, TN
12. Nagaraj NK, Magimairaj BM. Role of working memory and lexical knowledge in perceptual restoration of interrupted speech. The Journal of the Acoustical Society of America. 2017;142:3756. PMID: 29289104; DOI: 10.1121/1.5018429.
Abstract
The role of working memory (WM) capacity and lexical knowledge in perceptual restoration (PR) of missing speech was investigated using the interrupted speech perception paradigm. Speech identification ability, which indexed PR, was measured using low-context sentences periodically interrupted at 1.5 Hz. PR was measured for silent gated, low-frequency speech noise filled, and low-frequency fine-structure and envelope filled interrupted conditions. WM capacity was measured using verbal and visuospatial span tasks. Lexical knowledge was assessed using both receptive vocabulary and meaning from context tests. Results showed that PR was better for the speech noise filled condition than the other conditions tested. Both receptive vocabulary and verbal WM capacity explained unique variance in PR for the speech noise filled condition, but were unrelated to performance in the silent gated condition. Only receptive vocabulary uniquely predicted PR for the fine-structure and envelope filled conditions. These findings suggest that the contribution of lexical knowledge and verbal WM during PR depends crucially on the information content that replaced the silent intervals. When perceptual continuity was partially restored by filler speech noise, both lexical knowledge and verbal WM capacity facilitated PR. Importantly, for fine-structure and envelope filled interrupted conditions, lexical knowledge was crucial for PR.
Affiliation(s)
- Naveen K Nagaraj: Cognitive Hearing Science Lab, University of Arkansas for Medical Sciences and University of Arkansas at Little Rock, Little Rock, Arkansas 72204, USA
- Beula M Magimairaj: Cognition and Language Lab, Communication Sciences and Disorders, University of Central Arkansas, Conway, Arkansas 72035, USA