1
Krason A, Vigliocco G, Mailend ML, Stoll H, Varley R, Buxbaum LJ. Benefit of visual speech information for word comprehension in post-stroke aphasia. Cortex 2023;165:86-100. [PMID: 37271014] [PMCID: PMC10850036] [DOI: 10.1016/j.cortex.2023.04.011]
Abstract
Aphasia is a language disorder that often involves speech comprehension impairments affecting communication. In face-to-face settings, speech is accompanied by mouth and facial movements, but little is known about the extent to which they benefit aphasic comprehension. This study investigated the benefit of visual information accompanying speech for word comprehension in people with aphasia (PWA) and the neuroanatomic substrates of any benefit. Thirty-six PWA and 13 matched neurotypical control participants performed a picture-word verification task in which they indicated whether a picture of an animate/inanimate object matched a subsequent word produced by an actress in a video. Stimuli were either audiovisual (with visible mouth and facial movements) or auditory-only (still picture of a silhouette), with audio that was clear (unedited) or degraded (6-band noise-vocoding). We found that visual speech information was more beneficial for neurotypical participants than for PWA, and more beneficial for both groups when speech was degraded. A multivariate lesion-symptom mapping analysis for the degraded speech condition showed that lesions to the superior temporal gyrus, underlying insula, primary and secondary somatosensory cortices, and inferior frontal gyrus were associated with reduced benefit of audiovisual compared to auditory-only speech, suggesting that the integrity of these fronto-temporo-parietal regions may facilitate cross-modal mapping. These findings provide initial insights into the impact of audiovisual information on comprehension in aphasia and the brain regions mediating any benefit.
Affiliation(s)
- Anna Krason
- Experimental Psychology, University College London, UK; Moss Rehabilitation Research Institute, Elkins Park, PA, USA.
- Gabriella Vigliocco
- Experimental Psychology, University College London, UK; Moss Rehabilitation Research Institute, Elkins Park, PA, USA
- Marja-Liisa Mailend
- Moss Rehabilitation Research Institute, Elkins Park, PA, USA; Department of Special Education, University of Tartu, Tartu Linn, Estonia
- Harrison Stoll
- Moss Rehabilitation Research Institute, Elkins Park, PA, USA; Applied Cognitive and Brain Science, Drexel University, Philadelphia, PA, USA
- Laurel J Buxbaum
- Moss Rehabilitation Research Institute, Elkins Park, PA, USA; Department of Rehabilitation Medicine, Thomas Jefferson University, Philadelphia, PA, USA
2
Fitzhugh MC, LaCroix AN, Rogalsky C. Distinct Contributions of Working Memory and Attentional Control to Sentence Comprehension in Noise in Persons With Stroke. J Speech Lang Hear Res 2021;64:3230-3241. [PMID: 34284642] [PMCID: PMC8740654] [DOI: 10.1044/2021_jslhr-20-00694]
Abstract
PURPOSE Sentence comprehension deficits are common following a left hemisphere stroke and have primarily been investigated under optimal listening conditions. However, ample work in neurotypical controls indicates that background noise affects sentence comprehension and the cognitive resources it engages. The purpose of this study was to examine how background noise affects sentence comprehension poststroke, using both energetic and informational maskers. We further sought to identify whether sentence comprehension in noise is related to poststroke cognitive abilities, specifically working memory and/or attentional control. METHOD Twenty persons with chronic left hemisphere stroke completed a sentence-picture matching task in which they listened to sentences presented in three masker conditions: multispeaker babble, broadband noise, and silence (control condition). Working memory, attentional control, and hearing thresholds were also assessed. RESULTS A repeated-measures analysis of variance showed that participants had the greatest difficulty in the multispeaker condition, followed by broadband noise and then silence. Regression analyses, after controlling for age and hearing ability, identified working memory as a significant predictor of listening engagement (i.e., mean reaction time) in the broadband noise and multispeaker conditions, and attentional control as a significant predictor of informational masking effects (computed as a reaction time difference score in which broadband noise is subtracted from multispeaker). CONCLUSIONS The results indicate that background noise impairs sentence comprehension poststroke and that these difficulties may arise from deficits in the cognitive resources supporting sentence comprehension rather than from other factors such as age or hearing. The findings also highlight a relationship between working memory abilities and sentence comprehension in background noise. We further suggest that attentional control contributes to sentence comprehension by supporting the additional demands associated with informational masking. Supplemental Material: https://doi.org/10.23641/asha.14984511
Affiliation(s)
- Megan C. Fitzhugh
- Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA
3
Talker-familiarity benefit in non-native recognition memory and word identification: The role of listening conditions and proficiency. Atten Percept Psychophys 2019;81:1675-1697. [DOI: 10.3758/s13414-018-01657-5]
4
Luthra S, Fox NP, Blumstein SE. Speaker information affects false recognition of unstudied lexical-semantic associates. Atten Percept Psychophys 2018;80:894-912. [PMID: 29473144] [PMCID: PMC6003774] [DOI: 10.3758/s13414-018-1485-z]
Abstract
Recognition of and memory for a spoken word can be facilitated by a prior presentation of that word spoken by the same talker. However, it is less clear whether this speaker congruency advantage generalizes to facilitate recognition of unheard related words. The present investigation employed a false memory paradigm to examine whether information about a speaker's identity in items heard by listeners could influence the recognition of novel items (critical intruders) that were phonologically or semantically related to the studied items. In Experiment 1, false recognition of semantically associated critical intruders was sensitive to speaker information, though only when subjects attended to talker identity during encoding. Results from Experiment 2 also provide some evidence that talker information affects the false recognition of critical intruders. Taken together, the present findings indicate that indexical information is able to contact the lexical-semantic network to affect the processing of unheard words.
Affiliation(s)
- Sahil Luthra
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, 190 Thayer St., Box 1821, Providence, RI, 02912, USA.
- Department of Psychological Sciences, University of Connecticut, 406 Babbidge Rd, Unit 1020, Storrs, CT, 06269, USA.
- Neal P Fox
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, 190 Thayer St., Box 1821, Providence, RI, 02912, USA
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, Room 535, San Francisco, CA, 94143, USA
- Sheila E Blumstein
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, 190 Thayer St., Box 1821, Providence, RI, 02912, USA
- Brown Institute for Brain Science, Brown University, 2 Stimson Ave, Providence, RI, 02912, USA
5
Zhang M, Pratt SR, Doyle PJ, McNeil MR, Durrant JD, Roxberg J, Ortmann A. Audiological Assessment of Word Recognition Skills in Persons With Aphasia. Am J Audiol 2018;27:1-18. [PMID: 29222555] [DOI: 10.1044/2017_aja-17-0041]
Abstract
PURPOSE The purpose of this study was to evaluate the ability of persons with aphasia, with and without hearing loss, to complete a commonly used open-set word recognition test that requires a verbal response. Furthermore, phonotactic probabilities and neighborhood densities of word recognition errors were assessed to explore potential underlying linguistic complexities that might differentially influence performance among groups. METHOD Four groups of adult participants were tested: participants with no brain injury and normal hearing, participants with no brain injury and hearing loss, participants with aphasia and normal hearing, and participants with aphasia and hearing loss. The Northwestern University Auditory Test No. 6 (NU-6; Tillman & Carhart, 1966) was administered. Participants who were unable to respond orally (repeating words as heard) were assessed with the Picture Identification Task (Wilson & Antablin, 1980), which permits a picture-pointing response instead. Error patterns from the NU-6 were assessed to determine whether phonotactic probability influenced performance. RESULTS All participants with no brain injury and 72.7% of the participants with aphasia (24 of 33) completed the NU-6. Furthermore, all participants who were unable to complete the NU-6 were able to complete the Picture Identification Task. There were significant group differences in NU-6 performance: the 2 groups with normal hearing scored significantly higher than the 2 groups with hearing loss, but within each hearing category the groups with and without aphasia did not differ, implying that performance was largely determined by hearing loss rather than by brain injury or aphasia. The neighborhood density, but not the phonotactic probability, of the participants' errors differed between the groups with and without aphasia. CONCLUSIONS Because the vast majority of the participants with aphasia could be tested readily with an instrument such as the NU-6, clinicians should not be reluctant to use this test with patients who are able to repeat single words, but routine use of alternative tests is encouraged for populations of people with brain injuries.
Affiliation(s)
- Min Zhang
- Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- Sheila R. Pratt
- Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- Patrick J. Doyle
- Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- Malcolm R. McNeil
- Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- John D. Durrant
- Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- Jillyn Roxberg
- Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA
- Amanda Ortmann
- Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA
- Department of Communication Science and Disorders, University of Pittsburgh, PA
6
Lee CY, Zhang Y. Processing Lexical and Speaker Information in Repetition and Semantic/Associative Priming. J Psycholinguist Res 2018;47:65-78. [PMID: 28752195] [DOI: 10.1007/s10936-017-9514-y]
Abstract
The purpose of this study was to investigate the interaction between processing lexical and speaker-specific information in spoken word recognition. The specific question was whether repetition and semantic/associative priming are reduced when the prime and target are produced by different speakers. In Experiment 1, the prime and target were repeated (e.g., queen-queen) or unrelated (e.g., bell-queen). In Experiment 2, the prime and target were semantically/associatively related (e.g., king-queen) or unrelated (e.g., bell-queen). In both experiments, the prime and target were produced either by the same male speaker or by two different male speakers. Two interstimulus intervals (ISIs) between the prime and target were used to examine the time course of processing speaker information. Participants judged the lexical status of the target (lexical decision) and then judged whether the prime and target were produced by the same speaker or different speakers (speaker discrimination). The results showed that both lexical decision and speaker discrimination were facilitated to a smaller extent when the prime and target were produced by different speakers, indicating reduced repetition priming due to speaker variability. In contrast, semantic/associative priming was not affected by speaker variability, and the ISI between the prime and target did not affect either type of priming. In conclusion, speaker variability affects access to a word's form but not its meaning, suggesting that speaker-specific information is processed at a relatively shallow level.
Affiliation(s)
- Chao-Yang Lee
- Division of Communication Sciences and Disorders, Ohio University, Athens, OH, 45701, USA.
- Yu Zhang
- Division of Communication Sciences and Disorders, Ohio University, Athens, OH, 45701, USA
7
Lee CY, Zhang Y. Processing speaker variability in repetition and semantic/associative priming. J Psycholinguist Res 2015;44:237-250. [PMID: 24989850] [DOI: 10.1007/s10936-014-9307-5]
Abstract
The effect of speaker variability on accessing the form and meaning of spoken words was evaluated in two short-term priming experiments. In the repetition priming experiment, participants listened to repeated or unrelated prime-target pairs in which the prime and target were produced by the same speaker or by different speakers. The results showed robust repetition priming but only partial evidence of a reduction of priming by speaker variability. In the semantic/associative priming experiment, participants listened to semantically/associatively related or unrelated prime-target pairs in which the prime and target were produced by the same speaker or by different speakers. The results showed robust semantic/associative priming, but the reduction of priming by speaker variability occurred only for targets produced by the female speaker. There was no evidence that the speaker variability effect varied as a function of interstimulus interval. These findings suggest that speaker variability can affect access to word form and meaning, but the impact is relatively weak.
Affiliation(s)
- Chao-Yang Lee
- Division of Communication Sciences and Disorders, Ohio University, W239 Grover Center, Athens, OH, 45701, USA
8
Schwartz K, Ringleb SI, Sandberg H, Raymer A, Watson GS. Development of Trivia Game for speech understanding in background noise. Int J Speech Lang Pathol 2014;17:357-366. [PMID: 25417843] [DOI: 10.3109/17549507.2014.979875]
Abstract
PURPOSE Listening in noise is an everyday activity and poses a challenge for many people. To improve the ability to understand speech in noise, a computerized auditory rehabilitation game was developed. In Trivia Game, players are challenged to answer trivia questions spoken aloud. As players progress through the game, the level of background noise increases. A study using Trivia Game was conducted as a proof-of-concept investigation in healthy participants. METHOD College students with normal hearing were randomly assigned to a control (n = 13) or a treatment (n = 14) group. Treatment participants played Trivia Game 12 times over a 4-week period. All participants completed objective (auditory-only and audiovisual formats) and subjective listening-in-noise measures at baseline and 4 weeks later. RESULTS There were no statistical differences between the groups at baseline. At post-test, the treatment group significantly improved their overall speech understanding in noise in the audiovisual condition and reported significant benefits in their functional listening abilities. CONCLUSION Playing Trivia Game improved speech understanding in noise in healthy listeners. The significant findings in the audiovisual condition suggest that participants improved their face-reading abilities. Trivia Game may be a platform for investigating changes in speech understanding in individuals with sensory, linguistic, and cognitive impairments.
Affiliation(s)
- Kathryn Schwartz
- Communication Disorders & Special Education, Old Dominion University, Norfolk, VA, USA