1. Svirsky MA, Neukam JD, Capach NH, Amichetti NM, Lavender A, Wingfield A. Communication Under Sharply Degraded Auditory Input and the "2-Sentence" Problem. Ear Hear 2024;45:1045-1058. PMID: 38523125. DOI: 10.1097/aud.0000000000001500.
Abstract
Objectives: Despite performing well in standard clinical assessments of speech perception, many cochlear implant (CI) users report experiencing significant difficulties when listening in real-world environments. We hypothesize that this disconnect may be related, in part, to the limited ecological validity of tests that are currently used clinically and in research laboratories. The challenges that arise from degraded auditory information provided by a CI, combined with the listener's finite cognitive resources, may lead to difficulties when processing speech material that is more demanding than the single words or single sentences that are used in clinical tests.
Design: Here, we investigate whether speech identification performance and processing effort (indexed by pupil dilation measures) are affected when CI users or normal-hearing control subjects are asked to repeat two sentences presented sequentially instead of just one sentence.
Results: Response accuracy was minimally affected in normal-hearing listeners, but CI users showed a wide range of outcomes, from no change to decrements of up to 45 percentage points. The amount of decrement was not predictable from the CI users' performance in standard clinical tests. Pupillometry measures tracked closely with task difficulty in both the CI group and the normal-hearing group, even though the latter had speech perception scores near ceiling levels for all conditions.
Conclusions: Speech identification performance is significantly degraded in many (but not all) CI users in response to input that is only slightly more challenging than standard clinical tests; specifically, when two sentences are presented sequentially before requesting a response, instead of presenting just a single sentence at a time. This potential "2-sentence problem" represents one of the simplest possible scenarios that go beyond presentation of the single words or sentences used in most clinical tests of speech perception, and it raises the possibility that even good performers in single-sentence tests may be seriously impaired by other ecologically relevant manipulations. The present findings also raise the possibility that a clinical version of a 2-sentence test may provide actionable information for counseling and rehabilitating CI users, and for people who interact with them closely.
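The two outcome measures described here reduce to simple computations. The following Python sketch uses hypothetical function names and invented numbers rather than anything from the study itself; it only illustrates how a percentage-point decrement between single-sentence and 2-sentence conditions and a baseline-corrected pupil-dilation index might be derived.

```python
# Illustrative sketch (not the authors' code): scoring a "2-sentence" decrement
# and a baseline-corrected pupil-dilation index. All names and numbers are
# hypothetical placeholders.
import numpy as np

def percent_correct(keywords_correct: int, keywords_total: int) -> float:
    """Percent of keywords repeated correctly in a condition."""
    return 100.0 * keywords_correct / keywords_total

def two_sentence_decrement(score_1sent: float, score_2sent: float) -> float:
    """Drop in percentage points when two sentences precede the response."""
    return score_1sent - score_2sent

def pupil_dilation_index(trace_mm: np.ndarray, baseline_samples: int) -> float:
    """Peak pupil dilation (one trial) relative to a pre-stimulus baseline."""
    baseline = trace_mm[:baseline_samples].mean()
    return float(trace_mm[baseline_samples:].max() - baseline)

# Example: a listener scoring 92% on single sentences but 50% on sentence
# pairs would show a 42-percentage-point decrement.
print(two_sentence_decrement(92.0, 50.0))  # 42.0
```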
Affiliation(s)
- Mario A Svirsky: Department of Otolaryngology Head and Neck Surgery, New York University Grossman School of Medicine, New York, New York, USA; Neuroscience Institute, New York University School of Medicine, New York, New York, USA
- Jonathan D Neukam: Department of Otolaryngology Head and Neck Surgery, New York University Grossman School of Medicine, New York, New York, USA
- Nicole Hope Capach: Department of Otolaryngology Head and Neck Surgery, New York University Grossman School of Medicine, New York, New York, USA
- Nicole M Amichetti: Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
- Annette Lavender: Department of Otolaryngology Head and Neck Surgery, New York University Grossman School of Medicine, New York, New York, USA; Cochlear Americas, Denver, Colorado, USA
- Arthur Wingfield: Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
2. Shen J, Sun J, Zhang Z, Sun B, Li H, Liu Y. The Effect of Hearing Loss and Working Memory Capacity on Context Use and Reliance on Context in Older Adults. Ear Hear 2024;45:787-800. PMID: 38273447. DOI: 10.1097/aud.0000000000001470.
Abstract
Objectives: Older adults often complain of difficulty in communicating in noisy environments. Contextual information is considered an important cue for identifying everyday speech. To date, it has not been clear exactly how context use (CU) and reliance on context in older adults are affected by hearing status and cognitive function. The present study examined the effects of semantic context on speech recognition, recall, perceived listening effort (LE), and noise tolerance, and further explored the impacts of hearing loss and working memory capacity on CU and reliance on context among older adults.
Design: Fifty older adults with normal hearing and 56 older adults with mild-to-moderate hearing loss between the ages of 60 and 95 years participated in this study. A median split of the backward digit span further classified the participants into high working memory (HWM) and low working memory (LWM) capacity groups. Each participant performed high- and low-context Repeat and Recall tests, including a sentence repeat and delayed recall task, subjective assessments of LE, and tolerable time under seven signal-to-noise ratios (SNRs). CU was calculated as the difference between high- and low-context sentences for each outcome measure. The proportion of context use (PCU) in high-context performance was taken as the reliance on context, that is, the degree to which participants relied on context when they repeated and recalled high-context sentences.
Results: Semantic context helps improve the performance of speech recognition and delayed recall, reduces perceived LE, and prolongs noise tolerance in older adults with and without hearing loss. In addition, the adverse effects of hearing loss on the performance of repeat tasks were more pronounced in low context than in high context, whereas the effects on recall tasks and noise tolerance time were more significant in high context than in low context. Compared with other tasks, the CU and PCU in repeat tasks were more affected by hearing status and working memory capacity. In the repeat phase, hearing loss increased older adults' reliance on context in relatively challenging listening environments: when the SNR was 0 and -5 dB, the PCU (repeat) of the hearing loss group was significantly greater than that of the normal-hearing group, whereas there was no significant difference between the two hearing groups at the remaining SNRs. In addition, older adults with LWM had significantly greater CU and PCU in repeat tasks than those with HWM, especially at SNRs with moderate task demands.
Conclusions: Taken together, semantic context not only improved speech perception intelligibility but also released cognitive resources for memory encoding in older adults. Mild-to-moderate hearing loss and LWM capacity in older adults significantly increased the use of and reliance on semantic context, which was also modulated by the level of SNR.
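Because CU and PCU are defined arithmetically in the abstract, a short sketch can make them concrete. The Python snippet below assumes PCU is simply CU divided by the high-context score, which matches the description above but is our reading rather than the authors' published formula; the example numbers are invented.

```python
# Minimal sketch of the context-use measures described above, assuming
# CU = high-context score - low-context score and PCU = CU / high-context score.
# Variable names and example values are illustrative only.

def context_use(high_context_score: float, low_context_score: float) -> float:
    """Context use (CU): benefit of semantic context for a given outcome."""
    return high_context_score - low_context_score

def proportion_of_context_use(high_context_score: float, low_context_score: float) -> float:
    """PCU: share of high-context performance attributable to context."""
    cu = context_use(high_context_score, low_context_score)
    return cu / high_context_score if high_context_score else 0.0

# Example: 80% correct with context vs. 50% without context
# -> CU = 30 percentage points, PCU = 0.375
print(context_use(80.0, 50.0), proportion_of_context_use(80.0, 50.0))
```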
Affiliation(s)
- Jiayuan Shen: School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Zhejiang, China
- Jiayu Sun: Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, China
- Zhikai Zhang: Department of Otolaryngology, Head and Neck Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Baoxuan Sun: Training Department, Widex Hearing Aid (Shanghai) Co., Ltd, Shanghai, China
- Haitao Li: Department of Neurology, Beijing Friendship Hospital, Capital Medical University, Beijing, China (contributed equally; co-corresponding author)
- Yuhe Liu: Department of Otolaryngology, Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China (contributed equally; co-corresponding author)
3. Devlin SP, Brown NL, Drollinger S, Sibley C, Alami J, Riggs SL. Scan-based eye tracking measures are predictive of workload transition performance. Appl Ergon 2022;105:103829. PMID: 35930898. DOI: 10.1016/j.apergo.2022.103829.
Abstract
Given that there is no unifying theory or design guidance for workload transitions, this work investigated how visual attention allocation patterns could inform both topics by testing whether scan-based eye tracking metrics could predict workload transition performance trends in a context-relevant domain. The eye movements of sixty Naval flight students were tracked as workload transitioned at a slow, medium, and fast pace in an unmanned aerial vehicle testbed. Four scan-based metrics were significant predictors across the different growth curve models of response time and accuracy. Stationary gaze entropy (a measure of how dispersed visual attention transitions are across tasks) was predictive across all three transition rates. The other three predictive scan-based metrics captured different aspects of visual attention, including its spread, directness, and duration. The findings specify several details that have been missing from both theory and design guidance, which is unprecedented, and they serve as a basis for future workload transition research.
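For readers unfamiliar with the headline metric, the sketch below shows one common way stationary gaze entropy is computed: the Shannon entropy of the overall distribution of fixations across areas of interest (AOIs). The AOI labels and data are hypothetical, and the authors' exact pipeline may differ.

```python
# One common way to compute stationary gaze entropy: the Shannon entropy of
# the equilibrium (overall) distribution of fixations across AOIs. This is a
# sketch of the usual textbook definition, not the study's analysis code.
from collections import Counter
from math import log2

def stationary_gaze_entropy(fixation_aois: list[str]) -> float:
    """Entropy (bits) of the distribution of fixations over AOIs.

    Higher values indicate attention spread evenly across AOIs; lower values
    indicate attention concentrated on a few AOIs.
    """
    counts = Counter(fixation_aois)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    return -sum(p * log2(p) for p in probs if p > 0)

# Example: a scan dominated by one task display yields lower entropy than a
# scan spread evenly across four displays (AOI names are made up).
print(stationary_gaze_entropy(["map"] * 8 + ["chat"] * 2))              # ~0.72 bits
print(stationary_gaze_entropy(["map", "chat", "status", "video"] * 5))  # 2.0 bits
```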
Affiliation(s)
- Shannon P Devlin: U.S. Naval Research Laboratory, Washington, DC, USA; University of Virginia, Charlottesville, VA, USA
- Ciara Sibley: U.S. Naval Research Laboratory, Washington, DC, USA
- Jawad Alami: University of Virginia, Charlottesville, VA, USA
- Sara L Riggs: University of Virginia, Charlottesville, VA, USA
4. Rönnberg J, Signoret C, Andin J, Holmer E. The cognitive hearing science perspective on perceiving, understanding, and remembering language: The ELU model. Front Psychol 2022;13:967260. PMID: 36118435. PMCID: PMC9477118. DOI: 10.3389/fpsyg.2022.967260.
Abstract
The review gives an introductory description of the successive development of data patterns based on comparisons between hearing-impaired and normal-hearing participants' speech understanding skills, which later prompted the formulation of the Ease of Language Understanding (ELU) model. The model builds on the interaction between an input buffer (RAMBPHO, Rapid Automatic Multimodal Binding of PHOnology) and three memory systems: working memory (WM), semantic long-term memory (SLTM), and episodic long-term memory (ELTM). RAMBPHO input may either match or mismatch multimodal SLTM representations. Given a match, lexical access is accomplished rapidly and implicitly within approximately 100-400 ms. Given a mismatch, the prediction is that WM is engaged explicitly to repair the meaning of the input - in interaction with SLTM and ELTM - taking seconds rather than milliseconds. The multimodal and multilevel nature of the representations held in WM and LTM is at the center of the review, these representations being integral parts of the prediction and postdiction components of language understanding. Finally, some hypotheses based on a selective use-disuse of memory systems mechanism are described in relation to mild cognitive impairment and dementia. Alternative speech perception and WM models are evaluated, and recent developments and generalisations, ELU model tests, and boundaries are discussed.
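The match/mismatch logic at the heart of the ELU model can be caricatured in a few lines of code. The toy sketch below is purely illustrative and is not the authors' computational model; the similarity threshold, feature sets, and lexicon are invented, and only the rough time scales (approximately 100-400 ms for implicit access versus seconds for explicit WM repair) come from the abstract.

```python
# Toy illustration of the ELU match/mismatch logic (not the authors' model):
# a RAMBPHO-like input either matches a stored multimodal representation well
# enough for rapid implicit lexical access, or mismatches and triggers slower,
# explicit working-memory repair. Threshold and lexicon are invented.
from dataclasses import dataclass

@dataclass
class LexicalOutcome:
    word: str | None
    route: str          # "implicit" or "explicit (WM repair)"
    approx_time: str    # rough time scale named in the review

def elu_lexical_access(input_features: set[str],
                       lexicon: dict[str, set[str]],
                       match_threshold: float = 0.8) -> LexicalOutcome:
    """Return the best-matching word and the processing route it implies."""
    best_word, best_overlap = None, 0.0
    for word, stored_features in lexicon.items():
        overlap = len(input_features & stored_features) / len(stored_features)
        if overlap > best_overlap:
            best_word, best_overlap = word, overlap
    if best_overlap >= match_threshold:
        return LexicalOutcome(best_word, "implicit", "~100-400 ms")
    return LexicalOutcome(best_word, "explicit (WM repair)", "seconds")

lexicon = {"rain": {"r", "ei", "n"}, "train": {"t", "r", "ei", "n"}}
print(elu_lexical_access({"r", "ei", "n"}, lexicon))  # full match -> implicit
print(elu_lexical_access({"r", "ei"}, lexicon))       # mismatch -> WM repair
```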
Affiliation(s)
- Jerker Rönnberg: Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
5. Failes E, Sommers MS. Using Eye-Tracking to Investigate an Activation-Based Account of False Hearing in Younger and Older Adults. Front Psychol 2022;13:821044. PMID: 35651579. PMCID: PMC9150819. DOI: 10.3389/fpsyg.2022.821044.
Abstract
Several recent studies have demonstrated context-based, high-confidence misperceptions in hearing, referred to as false hearing. These studies have unanimously found that older adults are more susceptible to false hearing than are younger adults, which the authors have attributed to an age-related decline in the ability to inhibit the activation of a contextually predicted (but incorrect) response. However, no published work has investigated this activation-based account of false hearing. In the present study, younger and older adults listened to sentences in which the semantic context provided by the sentence was either unpredictive, highly predictive and valid, or highly predictive and misleading in relation to a sentence-final word presented in noise. Participants were tasked with clicking on one of four images to indicate which image depicted the sentence-final word in noise. We used eye-tracking to investigate how activation of different response options, as revealed in patterns of fixations, changed in real time over the course of sentences. We found that both younger and older adults exhibited anticipatory activation of the target word when highly predictive contextual cues were available. When these contextual cues were misleading, younger adults were able to suppress the activation of the contextually predicted word to a greater extent than older adults. These findings are interpreted as evidence for an activation-based model of speech perception and for the role of inhibitory control in false hearing.
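The eye-tracking measure behind these conclusions is essentially a fixation-proportion time course. The sketch below, with hypothetical AOI labels and a toy data array, shows how the proportion of trials fixating each response image in successive time bins might be computed; it is not the study's analysis code.

```python
# Illustrative sketch (hypothetical labels and data, not the study's code):
# proportion of trials fixating each response image in successive time bins,
# which is how anticipatory activation and its suppression show up in the data.
import numpy as np

def fixation_proportions(fixated_aoi: np.ndarray, aois: list[str]) -> dict[str, np.ndarray]:
    """fixated_aoi: (n_trials, n_time_bins) array of AOI labels per time bin.

    Returns, for each AOI, the proportion of trials fixating it in each bin.
    """
    return {aoi: (fixated_aoi == aoi).mean(axis=0) for aoi in aois}

# Example: 3 trials x 4 time bins; anticipatory looks to the target grow over time.
trials = np.array([
    ["none", "target", "target", "target"],
    ["foil", "foil",   "target", "target"],
    ["none", "none",   "foil",   "target"],
])
props = fixation_proportions(trials, ["target", "foil"])
print(props["target"])  # proportions of 0, 1/3, 2/3, 1 across the four bins
```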
Affiliation(s)
- Eric Failes: Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO, United States
- Mitchell S Sommers: Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO, United States
6. Saryazdi R, Nuque J, Chambers CG. Pragmatic inferences in aging and human-robot communication. Cognition 2022;223:105017. PMID: 35131577. DOI: 10.1016/j.cognition.2022.105017.
Abstract
Despite the increase in research on older adults' communicative behavior, little work has explored patterns of age-related change in pragmatic inferencing and how these patterns are adapted depending on the situation-specific context. In two eye-tracking experiments, participants followed instructions like "Click on the greenhouse", which were either played over speakers or spoken live by a co-present robot partner. Implicit inferential processes were measured by exploring the extent to which listeners temporarily (mis)understood the unfolding noun to be a modified phrase referring to a competitor object in the display (green hat). This competitor was accompanied by either another member of the same category or an unrelated item (tan hat vs. dice). Experiment 1 (no robot) showed clear evidence of contrastive inferencing in both younger and older adults (more looks to the green hat when the tan hat was also present). Experiment 2 explored the ability to suppress these contrastive inferences when the robot talker was known to lack any color perception, making descriptions like "green hat" implausible. Younger but not older listeners were able to suppress contrastive inferences in this context, suggesting older adults could not keep the relevant limitations in mind and/or were more likely to spontaneously ascribe human attributes to the robot. Together, the findings enhance our understanding of pragmatic inferencing in aging.
Affiliation(s)
- Raheleh Saryazdi: Department of Psychology, University of Toronto, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Mississauga, Ontario, Canada
- Joanne Nuque: Department of Psychology, University of Toronto, Mississauga, Ontario, Canada
- Craig G Chambers: Department of Psychology, University of Toronto, Mississauga, Ontario, Canada
7. Harel-Arbeli T, Wingfield A, Palgi Y, Ben-David BM. Age-Related Differences in the Online Processing of Spoken Semantic Context and the Effect of Semantic Competition: Evidence From Eye Gaze. J Speech Lang Hear Res 2021;64:315-327. PMID: 33561353. DOI: 10.1044/2020_jslhr-20-00142.
Abstract
Purpose: The study examined age-related differences in the use of semantic context and in the effect of semantic competition in spoken sentence processing. We used offline (response latency) and online (eye gaze) measures, using the "visual world" eye-tracking paradigm.
Method: Thirty younger and 30 older adults heard sentences related to one of four images presented on a computer monitor. They were asked to touch the image corresponding to the final word of the sentence (target word). Three conditions were used: a nonpredictive sentence, a predictive sentence suggesting one of the four images on the screen (semantic context), and a predictive sentence suggesting two possible images (semantic competition).
Results: Online eye gaze data showed no age-related differences with nonpredictive sentences, but revealed slowed processing for older adults when context was presented. With the addition of semantic competition to context, older adults were slower to look at the target word after it had been heard. In contrast, offline latency analysis did not show age-related differences in the effects of context and competition. As expected, older adults were generally slower to touch the image than younger adults.
Conclusions: Traditional offline measures were not able to reveal the complex effect of aging on spoken semantic context processing. Online eye gaze measures suggest that older adults were slower than younger adults to predict an indicated object based on semantic context. Semantic competition affected online processing for older adults more than for younger adults, with no accompanying age-related differences in latency. This supports an early age-related inhibition deficit, interfering with processing, and not necessarily with response execution.
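The contrast between the offline and online measures can be illustrated in a few lines of Python. Everything in the sketch below is hypothetical (variable names, time stamps, AOI labels); it simply shows the difference between a touch-latency average and the time at which gaze first lands on the target after word onset.

```python
# Sketch contrasting the two dependent measures described above, using
# hypothetical data: an offline measure (latency to touch the image) and an
# online measure (time from target-word onset until gaze first lands on the
# target image).
from statistics import mean

def offline_latency_ms(touch_times_ms: list[float]) -> float:
    """Mean response latency: time from target-word onset to screen touch."""
    return mean(touch_times_ms)

def online_gaze_latency_ms(gaze_samples: list[tuple[float, str]], target: str) -> float | None:
    """Time (ms from word onset) of the first gaze sample on the target AOI."""
    for t, aoi in gaze_samples:
        if aoi == target:
            return t
    return None  # target never fixated on this trial

# One hypothetical trial: gaze reaches the target ~420 ms after word onset,
# while the touch response takes well over a second.
trial_gaze = [(100, "distractor"), (260, "competitor"), (420, "target"), (600, "target")]
print(online_gaze_latency_ms(trial_gaze, "target"))   # 420
print(offline_latency_ms([1600, 1750, 1480]))         # 1610
```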
Affiliation(s)
- Tami Harel-Arbeli: Department of Gerontology, University of Haifa, Israel; Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Israel
- Arthur Wingfield: Volen National Center for Complex Systems, Brandeis University, Waltham, MA
- Yuval Palgi: Department of Gerontology, University of Haifa, Israel
- Boaz M Ben-David: Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Israel; Department of Speech-Language Pathology, University of Toronto, Ontario, Canada; Toronto Rehabilitation Institute, University Health Networks, Ontario, Canada