1. Svirsky MA, Neukam JD, Capach NH, Amichetti NM, Lavender A, Wingfield A. Communication Under Sharply Degraded Auditory Input and the "2-Sentence" Problem. Ear Hear 2024;45:1045-1058. PMID: 38523125. DOI: 10.1097/aud.0000000000001500.
Abstract
OBJECTIVES: Despite performing well in standard clinical assessments of speech perception, many cochlear implant (CI) users report experiencing significant difficulties when listening in real-world environments. We hypothesize that this disconnect may be related, in part, to the limited ecological validity of tests that are currently used clinically and in research laboratories. The challenges that arise from degraded auditory information provided by a CI, combined with the listener's finite cognitive resources, may lead to difficulties when processing speech material that is more demanding than the single words or single sentences that are used in clinical tests.

DESIGN: Here, we investigate whether speech identification performance and processing effort (indexed by pupil dilation measures) are affected when CI users or normal-hearing control subjects are asked to repeat two sentences presented sequentially instead of just one sentence.

RESULTS: Response accuracy was minimally affected in normal-hearing listeners, but CI users showed a wide range of outcomes, from no change to decrements of up to 45 percentage points. The amount of decrement was not predictable from the CI users' performance in standard clinical tests. Pupillometry measures tracked closely with task difficulty in both the CI group and the normal-hearing group, even though the latter had speech perception scores near ceiling levels for all conditions.

CONCLUSIONS: Speech identification performance is significantly degraded in many (but not all) CI users in response to input that is only slightly more challenging than standard clinical tests; specifically, when two sentences are presented sequentially before requesting a response, instead of presenting just a single sentence at a time. This potential "2-sentence problem" represents one of the simplest possible scenarios that go beyond presentation of the single words or sentences used in most clinical tests of speech perception, and it raises the possibility that even good performers in single-sentence tests may be seriously impaired by other ecologically relevant manipulations. The present findings also raise the possibility that a clinical version of a 2-sentence test may provide actionable information for counseling and rehabilitating CI users, and for people who interact with them closely.
Affiliations
- Mario A Svirsky: Department of Otolaryngology-Head and Neck Surgery, New York University Grossman School of Medicine, New York, NY, USA; Neuroscience Institute, New York University School of Medicine, New York, NY, USA
- Jonathan D Neukam: Department of Otolaryngology-Head and Neck Surgery, New York University Grossman School of Medicine, New York, NY, USA
- Nicole Hope Capach: Department of Otolaryngology-Head and Neck Surgery, New York University Grossman School of Medicine, New York, NY, USA
- Nicole M Amichetti: Department of Psychology, Brandeis University, Waltham, MA, USA
- Annette Lavender: Department of Otolaryngology-Head and Neck Surgery, New York University Grossman School of Medicine, New York, NY, USA; Cochlear Americas, Denver, CO, USA
- Arthur Wingfield: Department of Psychology, Brandeis University, Waltham, MA, USA
2. Brown VA, Sewell K, Villanueva J, Strand JF. Noisy speech impairs retention of previously heard information only at short time scales. Mem Cognit 2024 (online ahead of print). PMID: 38758512. DOI: 10.3758/s13421-024-01583-y.
Abstract
When speech is presented in noise, listeners must recruit cognitive resources to resolve the mismatch between the noisy input and representations in memory. A consequence of this effortful listening is impaired memory for content presented earlier. In the first study on effortful listening, Rabbitt (1968, Experiment 2; The Quarterly Journal of Experimental Psychology, 20, 241-248) found that recall for a list of digits was poorer when subsequent digits were presented with masking noise than without. Experiment 3 of that study extended this effect to more naturalistic, passage-length materials. Although the findings of Rabbitt's Experiment 2 have been replicated multiple times, no work has assessed the robustness of Experiment 3. We conducted a replication attempt of Rabbitt's Experiment 3 at three signal-to-noise ratios (SNRs). Results at one of the SNRs (Experiment 1a of the current study) were in the opposite direction from what Rabbitt (1968) reported; that is, speech was recalled more accurately when it was followed by speech presented in noise rather than in the clear. Results at the other two SNRs showed no effect of noise (Experiments 1b and 1c). In addition, reanalysis of a replication of Rabbitt's seminal finding in his second experiment showed that the effect of effortful listening on previously presented information is transient. Thus, effortful listening caused by noise appears to impair memory only for information presented immediately before the noise, which may account for our finding that noise in the second half of a long passage did not impair recall of information presented in the first half of the passage.
Affiliations
- Violet A Brown: Department of Psychology, Carleton College, Northfield, MN, USA
- Katrina Sewell: Department of Psychology, Carleton College, Northfield, MN, USA
- Jed Villanueva: Department of Psychology, Carleton College, Northfield, MN, USA
- Julia F Strand: Department of Psychology, Carleton College, Northfield, MN, USA
3. Shen J, Sun J, Zhang Z, Sun B, Li H, Liu Y. The Effect of Hearing Loss and Working Memory Capacity on Context Use and Reliance on Context in Older Adults. Ear Hear 2024;45:787-800. PMID: 38273447. DOI: 10.1097/aud.0000000000001470.
Abstract
OBJECTIVES: Older adults often complain of difficulty communicating in noisy environments. Contextual information is considered an important cue for identifying everyday speech. To date, it has not been clear exactly how context use (CU) and reliance on context in older adults are affected by hearing status and cognitive function. The present study examined the effects of semantic context on the performance of speech recognition, recall, perceived listening effort (LE), and noise tolerance, and further explored the impacts of hearing loss and working memory capacity on CU and reliance on context among older adults.

DESIGN: Fifty older adults with normal hearing and 56 older adults with mild-to-moderate hearing loss, between the ages of 60 and 95 years, participated in this study. A median split of the backward digit span further classified the participants into high working memory (HWM) and low working memory (LWM) capacity groups. Each participant performed high- and low-context Repeat and Recall tests, including a sentence repeat and delayed recall task, subjective assessments of LE, and tolerable time under seven signal-to-noise ratios (SNRs). CU was calculated as the difference between high- and low-context sentences for each outcome measure. The proportion of context use (PCU) in high-context performance was taken as the reliance on context, that is, the degree to which participants relied on context when they repeated and recalled high-context sentences.

RESULTS: Semantic context helped improve the performance of speech recognition and delayed recall, reduced perceived LE, and prolonged noise tolerance in older adults with and without hearing loss. In addition, the adverse effects of hearing loss on the performance of repeat tasks were more pronounced in low context than in high context, whereas the effects on recall tasks and noise tolerance time were more significant in high context than in low context. Compared with other tasks, the CU and PCU in repeat tasks were more affected by hearing status and working memory capacity. In the repeat phase, hearing loss increased older adults' reliance on context in relatively challenging listening environments: when the SNR was 0 or -5 dB, the PCU (repeat) of the hearing loss group was significantly greater than that of the normal-hearing group, whereas there was no significant difference between the two hearing groups at the remaining SNRs. In addition, older adults with LWM had significantly greater CU and PCU in repeat tasks than those with HWM, especially at SNRs with moderate task demands.

CONCLUSIONS: Taken together, semantic context not only improved speech perception intelligibility but also released cognitive resources for memory encoding in older adults. Mild-to-moderate hearing loss and LWM capacity in older adults significantly increased the use of and reliance on semantic context, which was also modulated by SNR level.
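The CU and PCU definitions above are simple difference-score arithmetic. As a minimal sketch (toy numbers, not the study's data; the helper name is invented for illustration):

```python
# Hedged sketch of the CU/PCU arithmetic described in the abstract:
# CU is the high-context minus low-context difference for an outcome
# measure, and PCU expresses that difference as a proportion of the
# high-context score. Scores below are invented for illustration.

def context_use(high_context: float, low_context: float) -> tuple[float, float]:
    """Return (CU, PCU) for one outcome measure."""
    cu = high_context - low_context
    pcu = cu / high_context if high_context > 0 else 0.0
    return cu, pcu

# Example: repeat accuracy of 85% with context vs. 60% without.
cu, pcu = context_use(0.85, 0.60)
print(f"CU = {cu:.2f}, PCU = {pcu:.2f}")  # CU = 0.25, PCU = 0.29
```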
Affiliations
- Jiayuan Shen: School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Zhejiang, China
- Jiayu Sun: Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, China
- Zhikai Zhang: Department of Otolaryngology, Head and Neck Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Baoxuan Sun: Training Department, Widex Hearing Aid (Shanghai) Co., Ltd, Shanghai, China
- Haitao Li: Department of Neurology, Beijing Friendship Hospital, Capital Medical University, Beijing, China (contributed equally; co-corresponding author)
- Yuhe Liu: Department of Otolaryngology, Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China (contributed equally; co-corresponding author)
4. Hansen TA, O'Leary RM, Svirsky MA, Wingfield A. Self-pacing ameliorates recall deficit when listening to vocoded discourse: a cochlear implant simulation. Front Psychol 2023;14:1225752. PMID: 38054180. PMCID: PMC10694252. DOI: 10.3389/fpsyg.2023.1225752.
Abstract
INTRODUCTION: In spite of its apparent ease, comprehension of spoken discourse represents a complex linguistic and cognitive operation. The difficulty of such an operation can increase when the speech is degraded, as is the case with cochlear implant users. However, the additional challenges imposed by degraded speech may be mitigated to some extent by the linguistic context and pace of presentation.

METHODS: An experiment is reported in which young adults with age-normal hearing recalled discourse passages heard with clear speech or with noise-band vocoding used to simulate the sound of speech produced by a cochlear implant. Passages varied in inter-word predictability and were presented either without interruption or in a self-pacing format that allowed the listener to control the rate at which the information was delivered.

RESULTS: Discourse heard with clear speech was better recalled than discourse heard with vocoded speech, discourse with higher average inter-word predictability was better recalled than discourse with lower average inter-word predictability, and self-paced passages were recalled better than those heard without interruption. Of special interest was the semantic hierarchy effect: the tendency for listeners to show better recall for a passage's main ideas than for its mid-level information or details, taken as an index of listeners' ability to understand the passage's meaning. The data revealed a significant effect of inter-word predictability, in that passages with lower predictability had an attenuated semantic hierarchy effect relative to higher-predictability passages.

DISCUSSION: Results are discussed in terms of broadening cochlear implant outcome measures beyond current clinical measures that focus on single-word and sentence repetition.
Affiliations
- Thomas A. Hansen: Department of Psychology, Brandeis University, Waltham, MA, USA
- Ryan M. O’Leary: Department of Psychology, Brandeis University, Waltham, MA, USA
- Mario A. Svirsky: Department of Otolaryngology, NYU Langone Medical Center, New York, NY, USA
- Arthur Wingfield: Department of Psychology, Brandeis University, Waltham, MA, USA
5. Baese-Berk MM, Levi SV, Van Engen KJ. Intelligibility as a measure of speech perception: Current approaches, challenges, and recommendations. J Acoust Soc Am 2023;153:68. PMID: 36732227. DOI: 10.1121/10.0016806.
Abstract
Intelligibility measures, which assess the number of words or phonemes a listener correctly transcribes or repeats, are commonly used metrics for speech perception research. While these measures have many benefits for researchers, they also come with a number of limitations. By pointing out the strengths and limitations of this approach, including how it fails to capture aspects of perception such as listening effort, this article argues that the role of intelligibility measures must be reconsidered in fields such as linguistics, communication disorders, and psychology. Recommendations for future work in this area are presented.
Affiliations
- Susannah V Levi: Department of Communicative Sciences and Disorders, New York University, New York, NY 10012, USA
- Kristin J Van Engen: Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO 63130, USA
6. Gianakas SP, Fitzgerald MB, Winn MB. Identifying Listeners Whose Speech Intelligibility Depends on a Quiet Extra Moment After a Sentence. J Speech Lang Hear Res 2022;65:4852-4865. PMID: 36472938. PMCID: PMC9934912. DOI: 10.1044/2022_jslhr-21-00622.
Abstract
PURPOSE: An extra moment after a sentence is spoken may be important for listeners with hearing loss to mentally repair misperceptions during listening. The current audiologic test battery cannot distinguish between a listener who repaired a misperception and a listener who heard the speech accurately with no need for repair. This study aims to develop a behavioral method to identify individuals who are at risk for relying on a quiet moment after a sentence.

METHOD: Forty-three individuals with hearing loss (32 cochlear implant users, 11 hearing aid users) heard sentences that were followed by either 2 s of silence or 2 s of babble noise. Both high- and low-context sentences were used in the task.

RESULTS: Some individuals showed notable benefit in accuracy scores (particularly for high-context sentences) when given an extra moment of silent time following the sentence. This benefit was highly variable across individuals and sometimes absent altogether. However, the group-level patterns of results were mainly explained by the use of context and successful perception of the words preceding sentence-final words.

CONCLUSIONS: These results suggest that some but not all individuals improve their speech recognition score by relying on a quiet moment after a sentence, and that this fragility of speech recognition cannot be assessed using one isolated utterance at a time. Reliance on a quiet moment to repair perceptions would potentially impede the perception of an upcoming utterance, making continuous communication in real-world scenarios difficult, especially for individuals with hearing loss. The methods used in this study, with some simple modifications if necessary, could potentially identify patients with hearing loss who retroactively repair mistakes by using clinically feasible methods that can ultimately lead to better patient-centered hearing health care.

SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.21644801
7. Sherafati A, Dwyer N, Bajracharya A, Hassanpour MS, Eggebrecht AT, Firszt JB, Culver JP, Peelle JE. Prefrontal cortex supports speech perception in listeners with cochlear implants. eLife 2022;11:e75323. PMID: 35666138. PMCID: PMC9225001. DOI: 10.7554/elife.75323.
Abstract
Cochlear implants are neuroprosthetic devices that can restore hearing in people with severe to profound hearing loss by electrically stimulating the auditory nerve. Because of physical limitations on the precision of this stimulation, the acoustic information delivered by a cochlear implant does not convey the same level of acoustic detail as that conveyed by normal hearing. As a result, speech understanding in listeners with cochlear implants is typically poorer and more effortful than in listeners with normal hearing. The brain networks supporting speech understanding in listeners with cochlear implants are not well understood, partly due to difficulties obtaining functional neuroimaging data in this population. In the current study, we assessed the brain regions supporting spoken word understanding in adult listeners with right unilateral cochlear implants (n=20) and matched controls (n=18) using high-density diffuse optical tomography (HD-DOT), a quiet and non-invasive imaging modality with spatial resolution comparable to that of functional MRI. We found that while listening to spoken words in quiet, listeners with cochlear implants showed greater activity in the left prefrontal cortex than listeners with normal hearing, specifically in a region engaged in a separate spatial working memory task. These results suggest that listeners with cochlear implants require greater cognitive processing during speech understanding than listeners with normal hearing, supported by compensatory recruitment of the left prefrontal cortex.
Affiliations
- Arefeh Sherafati: Department of Radiology, Washington University in St. Louis, St. Louis, MO, USA
- Noel Dwyer: Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
- Aahana Bajracharya: Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
- Adam T Eggebrecht: Departments of Radiology, Electrical & Systems Engineering, and Biomedical Engineering, and Division of Biology and Biomedical Sciences, Washington University in St. Louis, St. Louis, MO, USA
- Jill B Firszt: Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
- Joseph P Culver: Departments of Radiology, Biomedical Engineering, and Physics, and Division of Biology and Biomedical Sciences, Washington University in St. Louis, St. Louis, MO, USA
- Jonathan E Peelle: Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
8. Pupillometry reveals cognitive demands of lexical competition during spoken word recognition in young and older adults. Psychon Bull Rev 2021;29:268-280. PMID: 34405386. DOI: 10.3758/s13423-021-01991-0.
Abstract
In most contemporary activation-competition frameworks for spoken word recognition, candidate words compete against phonological "neighbors" with similar acoustic properties (e.g., "cap" vs. "cat"). Thus, recognizing words with more competitors should come at a greater cognitive cost relative to recognizing words with fewer competitors, due to increased demands for selecting the correct item and inhibiting incorrect candidates. Importantly, these processes should operate even in the absence of differences in accuracy. In the present study, we tested this proposal by examining differences in processing costs associated with neighborhood density for highly intelligible items presented in quiet. A second goal was to examine whether the cognitive demands associated with increased neighborhood density were greater for older adults compared with young adults. Using pupillometry as an index of cognitive processing load, we compared the cognitive demands associated with spoken word recognition for words with many or fewer neighbors, presented in quiet, for young (n = 67) and older (n = 69) adult listeners. Growth curve analysis of the pupil data indicated that older adults showed a greater evoked pupil response for spoken words than did young adults, consistent with increased cognitive load during spoken word recognition. Words from dense neighborhoods were marginally more demanding to process than words from sparse neighborhoods. There was also an interaction between age and neighborhood density, indicating larger effects of density in young adult listeners. These results highlight the importance of assessing both cognitive demands and accuracy when investigating the mechanisms underlying spoken word recognition.
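The growth curve analysis mentioned above models the pupil trace over time with polynomial time terms. A deliberately simplified sketch of that idiom on synthetic data (the study's actual models were mixed-effects growth curves fit across subjects and conditions):

```python
# Hedged sketch: fit orthogonal polynomial time terms to a single
# synthetic pupil trace, the core move in growth curve analysis.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 101)  # seconds after word onset
# Synthetic evoked response: a noisy bump peaking around 1 s.
pupil = 0.4 * np.exp(-((t - 1.0) ** 2) / 0.2) + rng.normal(0, 0.02, t.size)

# Legendre polynomials on [-1, 1] give orthogonal constant, linear,
# and quadratic time terms over the analysis window.
basis = np.polynomial.legendre.legvander(2 * t / t.max() - 1, 2)
coefs, *_ = np.linalg.lstsq(basis, pupil, rcond=None)
print("constant, linear, quadratic:", np.round(coefs, 3))
```

Group and condition effects would then be tested on those time-term coefficients rather than on raw pupil size.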
9. Text Captioning Buffers Against the Effects of Background Noise and Hearing Loss on Memory for Speech. Ear Hear 2021;43:115-127. PMID: 34260436. DOI: 10.1097/aud.0000000000001079.
Abstract
OBJECTIVE: Everyday speech understanding frequently occurs in perceptually demanding environments, for example, due to background noise and normal age-related hearing loss. The resulting degraded speech signals increase listening effort, which gives rise to negative downstream effects on subsequent memory and comprehension, even when speech is intelligible. In two experiments, we explored whether the presentation of realistic assistive text captioned speech offsets the negative effects of background noise and hearing impairment on multiple measures of speech memory.

DESIGN: In Experiment 1, young normal-hearing adults (N = 48) listened to sentences for immediate recall and delayed recognition memory. Speech was presented in quiet or in two levels of background noise. Sentences were either presented as speech only or as text captioned speech. Thus, the experiment followed a 2 (caption vs. no caption) × 3 (no noise, +7 dB signal-to-noise ratio, +3 dB signal-to-noise ratio) within-subjects design. In Experiment 2, a group of older adults (age range: 61 to 80, N = 31) with varying levels of hearing acuity completed the same experimental task as in Experiment 1. For both experiments, immediate recall, recognition memory accuracy, and recognition memory confidence were analyzed via general(ized) linear mixed-effects models. In addition, we examined individual differences as a function of hearing acuity in Experiment 2.

RESULTS: In Experiment 1, we found that the presentation of realistic text-captioned speech to young normal-hearing listeners improved immediate recall and delayed recognition memory accuracy and confidence compared with speech alone. Moreover, text captions attenuated the negative effects of background noise on all speech memory outcomes. In Experiment 2, we replicated the same pattern of results in a sample of older adults with varying levels of hearing acuity, and showed that the negative effects of hearing loss on speech memory in older adulthood were attenuated by the presentation of text captions.

CONCLUSIONS: Collectively, these findings strongly suggest that the simultaneous presentation of text can offset the negative effects of effortful listening on speech memory. Critically, captioning benefits extended from immediate word recall to long-term sentence recognition memory, a benefit that was observed not only for older adults with hearing loss but also for young normal-hearing listeners. These findings suggest that the text captioning benefit to memory is robust and has potentially wide applications for supporting speech listening in acoustically challenging environments.
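As a hedged sketch of the modeling idiom for a 2 × 3 within-subjects design like this one, here is a linear mixed model fit to synthetic recall scores (the paper reports general(ized) linear mixed-effects models; nothing below reproduces the authors' analysis or data):

```python
# Hedged sketch: caption (2 levels) x noise (3 levels) within subjects,
# with a random intercept per subject. All data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj = 48
subject = np.repeat(np.arange(n_subj), 6)
caption = np.tile([0, 0, 0, 1, 1, 1], n_subj)   # 0 = no caption, 1 = caption
noise = np.tile([0, 1, 2], 2 * n_subj)          # quiet, +7 dB, +3 dB SNR
recall = (0.7 + 0.10 * caption - 0.08 * noise
          + rng.normal(0, 0.05, subject.size)).clip(0, 1)

df = pd.DataFrame(dict(subject=subject, caption=caption,
                       noise=noise, recall=recall))
model = smf.mixedlm("recall ~ caption * noise", df, groups=df["subject"])
print(model.fit().summary())
```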
10. Silcox JW, Payne BR. The costs (and benefits) of effortful listening on context processing: A simultaneous electrophysiology, pupillometry, and behavioral study. Cortex 2021;142:296-316. PMID: 34332197. DOI: 10.1016/j.cortex.2021.06.007.
Abstract
There is an apparent disparity between the fields of cognitive audiology and cognitive electrophysiology as to how linguistic context is used when listening to perceptually challenging speech. To gain a clearer picture of how listening effort impacts context use, we conducted a pre-registered study to simultaneously examine electrophysiological, pupillometric, and behavioral responses when listening to sentences varying in contextual constraint and acoustic challenge in the same sample. Participants (N = 44) listened to sentences that were highly constraining and completed with expected or unexpected sentence-final words ("The prisoners were planning their escape/party") or were low-constraint sentences with unexpected sentence-final words ("All day she thought about the party"). Sentences were presented either in quiet or with +3 dB SNR background noise. Pupillometry and EEG were simultaneously recorded and subsequent sentence recognition and word recall were measured. While the N400 expectancy effect was diminished by noise, suggesting impaired real-time context use, we simultaneously observed a beneficial effect of constraint on subsequent recognition memory for degraded speech. Importantly, analyses of trial-to-trial coupling between pupil dilation and N400 amplitude showed that when participants showed increased listening effort (i.e., greater pupil dilation), there was a subsequent recovery of the N400 effect, but at the same time, higher effort was related to poorer subsequent sentence recognition and word recall. Collectively, these findings suggest divergent effects of acoustic challenge and listening effort on context use: while noise impairs the rapid use of context to facilitate lexical semantic processing in general, this negative effect is attenuated when listeners show increased effort in response to noise. However, this effort-induced reliance on context for online word processing comes at the cost of poorer subsequent memory.
Affiliations
- Brennan R Payne: Department of Psychology, University of Utah, USA; Interdepartmental Neuroscience Program, University of Utah, USA
11. McLaughlin DJ, Braver TS, Peelle JE. Measuring the Subjective Cost of Listening Effort Using a Discounting Task. J Speech Lang Hear Res 2021;64:337-347. PMID: 33439751. PMCID: PMC8632478. DOI: 10.1044/2020_jslhr-20-00086.
Abstract
PURPOSE: Objective measures of listening effort have been gaining prominence, as they provide metrics to quantify the difficulty of understanding speech under a variety of circumstances. A key challenge has been to develop paradigms that enable the complementary measurement of subjective listening effort in a quantitatively precise manner. In this study, we introduce a novel decision-making paradigm to examine age-related and individual differences in subjective effort during listening.

METHOD: Older and younger adults were presented with spoken sentences mixed with speech-shaped noise at multiple signal-to-noise ratios (SNRs). On each trial, subjects were offered the choice between completing an easier listening trial (presented at +20 dB SNR) for a smaller monetary reward and completing a harder listening trial (presented at either +4, 0, -4, -8, or -12 dB SNR) for a greater monetary reward. By varying the amount of the reward offered for the easier option, the subjective value of performing effortful listening trials at each SNR could be assessed.

RESULTS: Older adults discounted the value of effortful listening to a greater degree than young adults, opting to accept less money in order to avoid more difficult SNRs. Additionally, older adults with poorer hearing and smaller working memory capacities were more likely to choose easier trials; however, in younger adults, no relationship with hearing or working memory was found. Self-reported measures of economic status did not affect these relationships.

CONCLUSIONS: These findings suggest that subjective listening effort depends on factors including, but not necessarily limited to, hearing and working memory. Additionally, this study demonstrates that economic decision-making paradigms can be a useful approach for assessing subjective listening effort and may prove beneficial in future research.
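The subjective-value logic of such a discounting task can be made concrete with a small sketch. The interpolation method and every number below are illustrative assumptions, not the authors' procedure:

```python
# Hedged sketch: estimate the subjective value of a hard-SNR listening
# trial from choices between a small-reward easy trial and a
# large-reward hard trial. Offers and choice rates are invented.
import numpy as np

hard_reward = 2.00                                  # dollars, hard trial
easy_offers = np.array([0.25, 0.75, 1.25, 1.75])    # dollars, easy trial
p_choose_hard = np.array([0.95, 0.70, 0.40, 0.10])  # observed choice rates

# Interpolate the easy-trial offer at which p(choose hard) = 0.5.
# np.interp needs ascending x values, hence the reversals.
indifference = np.interp(0.5, p_choose_hard[::-1], easy_offers[::-1])
subjective_value = indifference / hard_reward
print(f"indifference offer = ${indifference:.2f}; "
      f"hard trial is worth {subjective_value:.2f} of its nominal reward")
```

Greater discounting, as reported here for older adults, corresponds to a lower subjective value for the harder SNRs.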
Affiliations
- Drew J. McLaughlin: Department of Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, MO, USA
- Todd S. Braver: Department of Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, MO, USA
12. Lewis GA, Bidelman GM. Autonomic Nervous System Correlates of Speech Categorization Revealed Through Pupillometry. Front Neurosci 2020;13:1418. PMID: 31998068. PMCID: PMC6967406. DOI: 10.3389/fnins.2019.01418.
Abstract
Human perception requires the many-to-one mapping between continuous sensory elements and discrete categorical representations. This grouping operation underlies the phenomenon of categorical perception (CP): the experience of perceiving discrete categories rather than gradual variations in signal input. Speech perception requires CP because acoustic cues do not share constant relations with perceptual-phonetic representations. Beyond facilitating perception of unmasked speech, we reasoned CP might also aid the extraction of target speech percepts from interfering sound sources (i.e., noise) by generating additional perceptual constancy and reducing listening effort. Specifically, we investigated how noise interference impacts cognitive load and perceptual identification of unambiguous (i.e., categorical) vs. ambiguous stimuli. Listeners classified a speech vowel continuum (/u/-/a/) at various signal-to-noise ratios (SNRs: unmasked, 0, and -5 dB). Continuous recordings of pupil dilation measured processing effort, with larger, later dilations reflecting increased listening demand. Critical comparisons were between time-locked changes in eye data in response to unambiguous tokens (i.e., continuum endpoints) vs. ambiguous tokens (i.e., continuum midpoint). Unmasked speech elicited faster responses and sharper psychometric functions, which steadily declined in noise. Noise increased pupil dilation across stimulus conditions, but not straightforwardly. Noise-masked speech modulated peak pupil size (i.e., [0 and -5 dB] > unmasked). In contrast, peak dilation latency varied with both token and SNR. Interestingly, categorical tokens elicited earlier pupil dilation relative to ambiguous tokens. Our pupillary data suggest CP reconstructs auditory percepts under challenging listening conditions through interactions between stimulus salience and listeners' internalized effort and/or arousal.
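Operationally, a "sharper psychometric function" is a steeper slope in a logistic fit to identification proportions along the continuum. A sketch with synthetic response rates (not the study's data):

```python
# Hedged sketch: fit logistic psychometric functions to synthetic
# /u/-/a/ identification rates and compare slopes across conditions.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Proportion of /a/ responses; x0 = category boundary, k = slope."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(1, 8)  # 7-step /u/-/a/ continuum
p_clear = np.array([0.02, 0.05, 0.12, 0.50, 0.90, 0.97, 0.99])
p_noise = np.array([0.10, 0.20, 0.35, 0.50, 0.68, 0.80, 0.88])

for label, p in [("unmasked", p_clear), ("-5 dB SNR", p_noise)]:
    (x0, k), _ = curve_fit(logistic, steps, p, p0=[4.0, 1.0])
    print(f"{label}: boundary = {x0:.2f}, slope = {k:.2f}")
```

A smaller fitted k in the noise condition captures the decline in categorical sharpness the abstract describes.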
Affiliations
- Gwyneth A Lewis: Institute for Intelligent Systems, The University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, The University of Memphis, Memphis, TN, USA
- Gavin M Bidelman: Institute for Intelligent Systems, The University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, The University of Memphis, Memphis, TN, USA; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA
13. Guang C, Lefkowitz E, Dillman-Hasso N, Brown VA, Strand JF. Recall of Speech Is Impaired by Subsequent Masking Noise: A Replication of Rabbitt (1968) Experiment 2. Audit Percept Cogn 2020;3:158-167. PMID: 34240010. DOI: 10.1080/25742442.2021.1896908.
Abstract
INTRODUCTION: The presence of masking noise can impair speech intelligibility and increase the attentional and cognitive resources necessary to understand speech. The first study to demonstrate the negative cognitive effects of noisy speech found that participants had poorer recall for aurally presented digits early in a list when later digits were presented in noise relative to quiet (Rabbitt, 1968). However, despite being cited nearly 500 times and providing the foundation for a wealth of subsequent research on the topic, the original study has never been directly replicated.

METHODS: This study replicated Rabbitt (1968) with a large online sample and tested its robustness to a variety of analytical and scoring techniques.

RESULTS: We replicated Rabbitt's key finding that listening to speech in noise impairs recall for items that came earlier in the list. The results were consistent when we used the original analytical technique (an ANOVA) and a more powerful analytical technique (generalized linear mixed effects models) that was not available when the original paper was published.

DISCUSSION: These findings support the claim that effortful listening can interfere with encoding or rehearsal of previously presented information.
Affiliations
- Violet A Brown: Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO, USA
14. Loughrey DG, Mihelj E, Lawlor BA. Age-related hearing loss associated with altered response efficiency and variability on a visual sustained attention task. Aging Neuropsychol Cogn 2019;28:1-25. PMID: 31868123. DOI: 10.1080/13825585.2019.1704393.
Abstract
This study investigated the association between age-related hearing loss (ARHL) and differences in response efficiency and variability on a sustained attention task. The study population comprised 32 participants in a hearing loss group (HLG) and 34 controls without hearing loss (CG). Mean reaction time (RT) and accuracy were recorded to assess response efficiency. RT variability was decomposed to examine temporal aspects of variability associated with neural arousal and top-down executive control of vigilant attention. The HLG had a significantly longer mean RT, possibly reflecting a strategic approach to maintain accuracy. The HLG also demonstrated altered variability (indicative of greater decline in neural arousal) but maintained executive control that was significantly predictive of poorer response efficiency. Adults with ARHL may rely on higher-order attention networks to compensate for decline in both peripheral sensory function and in subcortical arousal systems which mediate lower-order automatic neurocognitive processes.
Affiliations
- David G Loughrey: Global Brain Health Institute, Trinity College Dublin, Dublin, Ireland; Global Brain Health Institute, University of California, San Francisco, CA, USA
- Ernest Mihelj: Institute of Human Movement Sciences and Sport, Eidgenössische Technische Hochschule Zürich, Zurich, Switzerland
- Brian A Lawlor: Global Brain Health Institute, Trinity College Dublin, Dublin, Ireland; Global Brain Health Institute, University of California, San Francisco, CA, USA; Mercer's Institute for Successful Ageing, St James Hospital, Dublin, Ireland
15. Loughrey DG, Pakhomov SVS, Lawlor BA. Altered verbal fluency processes in older adults with age-related hearing loss. Exp Gerontol 2019;130:110794. PMID: 31790801. DOI: 10.1016/j.exger.2019.110794.
Abstract
Epidemiological studies have linked age-related hearing loss (ARHL) with an increased risk of neurocognitive decline. Difficulties in speech perception with subsequent changes in brain morphometry, including regions important for lexical-semantic memory, are thought to be a possible mechanism for this relationship. This study investigated differences in automatic and executive lexical-semantic processes on verbal fluency tasks in individuals with acquired hearing loss. The primary outcomes were indices of automatic (clustering/word retrieval at start of task) and executive (switching/word retrieval after start of the task) processes from semantic and phonemic fluency tasks. To extract indices of clustering and switching, we used both manual and computerised methods. There were no differences between groups on indices of executive fluency processes or on any indices from the semantic fluency task. The hearing loss group demonstrated weaker automatic processes on the phonemic fluency task. Further research into differences in lexical-semantic processes with ARHL is warranted.
Affiliations
- David G Loughrey: Global Brain Health Institute, Trinity College Dublin, Dublin, Ireland; Global Brain Health Institute, University of California, San Francisco, CA, USA; Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Brian A Lawlor: Global Brain Health Institute, Trinity College Dublin, Dublin, Ireland; Global Brain Health Institute, University of California, San Francisco, CA, USA; Mercer's Institute for Successful Ageing, St James Hospital, Dublin, Ireland
16. Romero-Rivas C, Thorley C, Skelton K, Costa A. Foreign accents reduce false recognition rates in the DRM paradigm. J Cogn Psychol 2019. DOI: 10.1080/20445911.2019.1634576.
Affiliations
- Carlos Romero-Rivas: Department of Developmental and Educational Psychology, Universidad Autónoma de Madrid, Madrid, Spain; Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain
- Craig Thorley: Department of Psychology, James Cook University, Douglas, Australia
- Katie Skelton: Department of Psychological Sciences, University of Liverpool, Liverpool, UK
- Albert Costa: Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
17. Ayasse ND, Penn LR, Wingfield A. Variations Within Normal Hearing Acuity and Speech Comprehension: An Exploratory Study. Am J Audiol 2019;28:369-375. PMID: 31091111. DOI: 10.1044/2019_aja-18-0173.
Abstract
PURPOSE: Many young adults with a mild hearing loss can appear unaware of or unconcerned about their loss or its potential effects. A question that has not been raised in prior research is whether slight variability, even within the range of clinically normal hearing, may have a detrimental effect on comprehension of spoken sentences, especially when attempting to understand the meaning of sentences that offer an additional cognitive challenge. The purpose of this study was to address this question.

METHOD: An exploratory analysis was conducted on data from 3 published studies that included young adults, ages 18 to 29 years, with audiometrically normal hearing acuity (pure-tone average < 15 dB HL) tested for comprehension of sentences that conveyed the sentence meaning with simpler or more complex linguistic structures. A product-moment correlation was conducted between individuals' hearing acuity and their comprehension accuracy.

RESULTS: A significant correlation appeared between hearing acuity and comprehension accuracy for syntactically complex sentences, but not for sentences with a simpler syntactic structure. Partial correlations confirmed that this relationship held independent of participant age within this relatively narrow age range.

CONCLUSION: These findings suggest that slight elevations in hearing thresholds, even among young adults who pass a screen for normal hearing, can affect comprehension accuracy for spoken sentences when combined with cognitive demands imposed by sentences that convey their meaning with a complex linguistic structure. These findings support limited-resource models of attentional allocation and argue for routine baseline hearing evaluations of young adults with current age-normal hearing acuity.
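The two statistics named here are easy to illustrate. A sketch of a product-moment correlation plus a partial correlation controlling for age, on synthetic data (the variable ranges are loosely modeled on the abstract, nothing more):

```python
# Hedged sketch: Pearson correlation between hearing acuity and
# comprehension accuracy, then a partial correlation removing age by
# correlating residuals. All data are synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 60
age = rng.uniform(18, 29, n)                   # years
hearing = rng.normal(8, 3, n)                  # pure-tone average, dB HL
accuracy = 0.95 - 0.01 * hearing + rng.normal(0, 0.02, n)

r, p = pearsonr(hearing, accuracy)
print(f"zero-order r = {r:.2f} (p = {p:.3f})")

def residuals(y, x):
    """Residuals of y after a simple linear regression on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_part, p_part = pearsonr(residuals(hearing, age), residuals(accuracy, age))
print(f"partial r (age removed) = {r_part:.2f} (p = {p_part:.3f})")
```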
Affiliations
- Nicole D. Ayasse: Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Lana R. Penn: Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Arthur Wingfield: Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
18. Winn MB, Moore AN. Pupillometry Reveals That Context Benefit in Speech Perception Can Be Disrupted by Later-Occurring Sounds, Especially in Listeners With Cochlear Implants. Trends Hear 2018;22:2331216518808962. PMID: 30375282. PMCID: PMC6207967. DOI: 10.1177/2331216518808962.
Abstract
Contextual cues can be used to improve speech recognition, especially for people with hearing impairment. However, previous work has suggested that when the auditory signal is degraded, context might be used more slowly than when the signal is clear. This potentially puts the hearing-impaired listener in a dilemma of continuing to process the last sentence when the next sentence has already begun. This study measured the time course of the benefit of context using pupillary responses to high- and low-context sentences that were followed by silence or various auditory distractors (babble noise, ignored digits, or attended digits). Participants were listeners with cochlear implants or normal hearing using a 12-channel noise vocoder. Context-related differences in pupil dilation were greater for normal hearing than for cochlear implant listeners, even when scaled for differences in pupil reactivity. The benefit of context was systematically reduced for both groups by the presence of the later-occurring sounds, including virtually complete negation when sentences were followed by another attended utterance. These results challenge how we interpret the benefit of context in experiments that present just one utterance at a time. If a listener uses context to “repair” part of a sentence, and later-occurring auditory stimuli interfere with that repair process, the benefit of context might not survive outside the idealized laboratory or clinical environment. Elevated listening effort in hearing-impaired listeners might therefore result not just from poor auditory encoding but also inefficient use of context and prolonged processing of misperceived utterances competing with perception of incoming speech.
Affiliations
- Matthew B Winn: Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Ashley N Moore: Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
19. Peelle JE. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior. Ear Hear 2018;39:204-214. PMID: 28938250. PMCID: PMC5821557. DOI: 10.1097/aud.0000000000000494.
Abstract
Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners' abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication.
Affiliations
- Jonathan E Peelle: Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
20. Ayasse ND, Wingfield A. A Tipping Point in Listening Effort: Effects of Linguistic Complexity and Age-Related Hearing Loss on Sentence Comprehension. Trends Hear 2018;22:2331216518790907. PMID: 30235973. PMCID: PMC6154259. DOI: 10.1177/2331216518790907.
Abstract
In recent years, there has been a growing interest in the relationship between effort and performance. Early formulations implied that, as the challenge of a task increases, individuals will exert more effort, with resultant maintenance of stable performance. We report an experiment in which normal-hearing young adults, normal-hearing older adults, and older adults with age-related mild-to-moderate hearing loss were tested for comprehension of recorded sentences that varied the comprehension challenge in two ways. First, sentences were constructed that expressed their meaning either with a simpler subject-relative syntactic structure or a more computationally demanding object-relative structure. Second, for each sentence type, an adjectival phrase was inserted that created either a short or long gap in the sentence between the agent performing an action and the action being performed. The measurement of pupil dilation as an index of processing effort showed effort to increase with task difficulty until a difficulty tipping point was reached. Beyond this point, the measurement of pupil size revealed a commitment of effort by the two groups of older adults who failed to keep pace with task demands as evidenced by reduced comprehension accuracy. We take these pupillometry data as revealing a complex relationship between task difficulty, effort, and performance that might not otherwise appear from task performance alone.
Affiliations
- Nicole D Ayasse: Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Arthur Wingfield: Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
21. Guijo LM, Horiuti MB, Cardoso ACV. Measurement of listening effort using a Brazilian Portuguese dual-task paradigm: a pilot study [Mensuração do esforço auditivo com o uso de um paradigma de tarefa dupla do Português Brasileiro: estudo-piloto]. Codas 2019;31:e20180181. DOI: 10.1590/2317-1782/20192018181.
Abstract
OBJECTIVE: To measure listening effort using a dual-task paradigm involving working memory, and to analyze the clinical significance of the performance of normal-hearing individuals.

METHOD: Ten young adults between 18 and 30 years of age, of both genders, participated; all had normal hearing as classified by the four-frequency pure-tone average (500, 1000, 2000, and 4000 Hz) and similar sociocultural backgrounds. Participants underwent an audiological case history, otoscopy, and pure-tone threshold audiometry. Listening effort was measured with a dual-task paradigm combining speech perception and working memory tasks based on nonwords (logatomes), real words, and nonsense sentences. Before measurement, the dual-task paradigm was administered in quiet to train participants to perform the tasks adequately. After this training phase, the paradigm was administered in two listening conditions, at signal-to-noise ratios of +5 and -5 dB, with white noise as the masker.

RESULTS: Comparing performance by ear (right or left) across the two signal-to-noise ratios showed a significant effect for the speech perception tasks with logatomes and nonsense sentences in both ears, whereas the listening effort and working memory task showed a significant difference only for the right ear.

CONCLUSION: Listening effort could be measured with the proposed paradigm, and the instrument proved sensitive for quantifying this auditory parameter.
22. Chan KY, Chiu MM, Dailey BA, Jalil DM. Effect of Foreign Accent on Immediate Serial Recall. Exp Psychol 2019;66:40-57. DOI: 10.1027/1618-3169/a000430.
Abstract
This study disentangled two factors contributing to impaired memory for foreign-accented words: misperception and disruption of encoding. When native English and Cantonese-accented words were presented auditorily for serial recall (Experiment 1), intrusion errors for accented words were higher across all serial positions (SPs). Participants made more intrusion errors during auditory-only presentation than during combined visual and auditory presentation, and more errors for accented words than for native words. Lengthening the interstimulus intervals in Experiment 2 reduced intrusion, repetition, order, and omission errors in the middle and late SPs during accented word recall, suggesting that extra time is required for identification and encoding of accented words into memory. Analyses of the intrusions showed that a majority of them were misperceptions and sounded similar to the stimulus words. These findings suggest that effortful perceptual processing of accented speech can induce perceptual difficulty and interfere with downstream memory processes by exhausting the shared pool of working memory.
Affiliations
- Kit Ying Chan: Department of Social and Behavioural Sciences, City University of Hong Kong, Hong Kong
- Ming Ming Chiu: Department of Special Education and Counselling, The Education University of Hong Kong, Hong Kong
- Brady A. Dailey: Department of Linguistics, Boston University, Boston, MA, USA
- Daroon M. Jalil: Department of Psychology, Old Dominion University, Norfolk, VA, USA
23. Differences in Hearing Acuity among "Normal-Hearing" Young Adults Modulate the Neural Basis for Speech Comprehension. eNeuro 2018;5:ENEURO.0263-17.2018. PMID: 29911176. PMCID: PMC6001266. DOI: 10.1523/eneuro.0263-17.2018.
Abstract
In this paper, we investigate how subtle differences in hearing acuity affect the neural systems supporting speech processing in young adults. Auditory sentence comprehension requires perceiving a complex acoustic signal and performing linguistic operations to extract the correct meaning. We used functional MRI to monitor human brain activity while adults aged 18–41 years listened to spoken sentences. The sentences varied in their level of syntactic processing demands, containing either a subject-relative or object-relative center-embedded clause. All participants self-reported normal hearing, confirmed by audiometric testing, with some variation within a clinically normal range. We found that participants showed activity related to sentence processing in a left-lateralized frontotemporal network. Although accuracy was generally high, participants still made some errors, which were associated with increased activity in bilateral cingulo-opercular and frontoparietal attention networks. A whole-brain regression analysis revealed that activity in a right anterior middle frontal gyrus (aMFG) component of the frontoparietal attention network was related to individual differences in hearing acuity, such that listeners with poorer hearing showed greater recruitment of this region when successfully understanding a sentence. The activity in right aMFG for listeners with poor hearing did not differ as a function of sentence type, suggesting a general mechanism that is independent of linguistic processing demands. Our results suggest that even modest variations in hearing ability impact the systems supporting auditory speech comprehension, and that auditory sentence comprehension entails the coordination of a left perisylvian network that is sensitive to linguistic variation with an executive attention network that responds to acoustic challenge.
24
Van Engen KJ, McLaughlin DJ. Eyes and ears: Using eye tracking and pupillometry to understand challenges to speech recognition. Hear Res 2018; 369:56-66. [PMID: 29801981 DOI: 10.1016/j.heares.2018.04.013] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/03/2017] [Revised: 04/12/2018] [Accepted: 04/25/2018] [Indexed: 11/16/2022]
Abstract
Although human speech recognition is often experienced as relatively effortless, a number of common challenges can render the task more difficult. Such challenges may originate in talkers (e.g., unfamiliar accents, varying speech styles), in the environment (e.g., noise), or in listeners themselves (e.g., hearing loss, aging, different native language backgrounds). Each of these challenges can reduce the intelligibility of spoken language, but even when intelligibility remains high, they can place greater processing demands on listeners. Noisy conditions, for example, can lead to poorer recall for speech, even when it has been correctly understood. Speech intelligibility measures, memory tasks, and subjective reports of listening difficulty all provide critical information about the effects of such challenges on speech recognition. Eye tracking and pupillometry complement these methods by providing objective physiological measures of online cognitive processing during listening. Eye tracking records the moment-to-moment direction of listeners' visual attention, which is closely time-locked to the unfolding speech signal, and pupillometry measures the moment-to-moment size of listeners' pupils, which dilate in response to increased cognitive load. In this paper, we review the uses of these two methods for studying challenges to speech recognition.
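To make the pupillometry measure concrete: task-evoked pupil analyses typically express each trial's dilation relative to a pre-stimulus baseline and summarize it over a window of interest. The sketch below is a generic illustration under assumed parameters (a 60 Hz tracker and a 1 s baseline), not code from this review.

```python
import numpy as np

def pupil_dilation(trace, fs=60, baseline_s=1.0):
    """Baseline-corrected pupil dilation for one trial.

    trace: 1-D array of pupil diameters sampled at fs Hz, with the
    first baseline_s seconds recorded before stimulus onset.
    Returns peak and mean percent change from the pre-stimulus
    baseline; larger dilation is read as greater cognitive load.
    """
    n_base = int(fs * baseline_s)
    baseline = np.nanmean(trace[:n_base])         # mean pre-stimulus diameter
    evoked = trace[n_base:]                       # post-onset samples
    pct = 100.0 * (evoked - baseline) / baseline  # percent change per sample
    return pct.max(), pct.mean()

# Example: a simulated 5 s trial with a dilation peak around 2.5 s.
t = np.linspace(0, 5, 300)                        # 5 s at 60 Hz
trace = 4.0 + 0.3 * np.exp(-((t - 2.5) ** 2))     # diameter in mm
peak, mean = pupil_dilation(trace)
print(f"peak dilation {peak:.1f}%, mean {mean:.1f}%")
```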
25
Koeritzer MA, Rogers CS, Van Engen KJ, Peelle JE. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences. J Speech Lang Hear Res 2018; 61:740-751. [PMID: 29450493 PMCID: PMC5963044 DOI: 10.1044/2017_jslhr-h-17-0077] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Revised: 08/28/2017] [Accepted: 09/20/2017] [Indexed: 05/20/2023]
Abstract
PURPOSE The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. METHOD We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously. RESULTS Recognition memory (indexed by d') was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise. CONCLUSIONS Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences. SUPPLEMENTAL MATERIALS https://doi.org/10.23641/asha.5848059.
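For readers unfamiliar with the d' index used here: it is the z-transformed hit rate minus the z-transformed false-alarm rate from signal detection theory. The snippet below shows the standard textbook computation, with a common correction for rates of exactly 0 or 1; it is illustrative, not the authors' analysis code.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(FA rate).

    Applies the common 1/(2N) correction so that rates of exactly
    0 or 1 do not produce infinite z-scores.
    """
    z = NormalDist().inv_cdf
    n_old = hits + misses                       # studied ("old") test items
    n_new = false_alarms + correct_rejections   # unstudied ("new") items
    hit_rate = min(max(hits / n_old, 1 / (2 * n_old)), 1 - 1 / (2 * n_old))
    fa_rate = min(max(false_alarms / n_new, 1 / (2 * n_new)), 1 - 1 / (2 * n_new))
    return z(hit_rate) - z(fa_rate)

# Example: 40 old and 40 new sentences in the recognition test.
print(round(d_prime(hits=32, misses=8, false_alarms=6, correct_rejections=34), 2))
# z(0.80) - z(0.15) = 0.84 - (-1.04), so d' is about 1.88
```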
Affiliation(s)
- Margaret A Koeritzer, Program in Audiology and Communication Sciences, Washington University in St. Louis, MO
- Chad S Rogers, Department of Otolaryngology, Washington University in St. Louis, MO
- Kristin J Van Engen, Department of Psychological and Brain Sciences and Program in Linguistics, Washington University in St. Louis, MO
- Jonathan E Peelle, Department of Otolaryngology, Washington University in St. Louis, MO
26
Verger A, Roman S, Chaudat RM, Felician O, Ceccaldi M, Didic M, Guedj E. Changes of metabolism and functional connectivity in late-onset deafness: Evidence from cerebral 18F-FDG-PET. Hear Res 2017; 353:8-16. [PMID: 28759745 DOI: 10.1016/j.heares.2017.07.011] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/08/2017] [Revised: 07/18/2017] [Accepted: 07/24/2017] [Indexed: 10/19/2022]
Abstract
Hearing loss is known to impact brain function. The aim of this study was to characterize cerebral metabolic changes on Positron Emission Tomography (PET) in elderly patients fulfilling criteria for cochlear implantation and to investigate the impact of hearing loss on functional connectivity. Statistical Parametric Mapping T-score map comparisons were performed between the 18F-FDG-PET scans of 27 elderly patients fulfilling cochlear implant criteria for hearing loss (best-aided speech intelligibility of 50% or less) and those of 27 matched healthy subjects (p < 0.005, corrected for volume extent). Metabolic connectivity was evaluated through interregional correlation analysis. Patients showed decreased metabolism within the right associative auditory cortex and increased metabolism in prefrontal areas, pre- and post-central areas, the cingulum, and the left inferior parietal gyrus. The right associative auditory cortex was integrated into a network of increased metabolic connectivity that included pre- and post-central areas, the cingulum, the right inferior parietal gyrus, and the striatum bilaterally. Metabolic values of the right associative auditory cortex and left inferior parietal gyrus were positively correlated with performance on neuropsychological tests. These findings provide further insight into the reorganization of the connectome following sensory loss and into compensatory mechanisms in elderly patients with severe hearing loss.
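The interregional correlation analysis used to estimate metabolic connectivity amounts to correlating regional uptake values across subjects. The sketch below illustrates that idea on a subjects-by-regions matrix; the data, region names, and threshold are invented for illustration and do not reproduce the authors' pipeline.

```python
import numpy as np

# Hypothetical group data: one normalized FDG uptake value per region
# per subject (rows = subjects, columns = regions).
rng = np.random.default_rng(0)
regions = ["R_assoc_auditory", "precentral", "postcentral",
           "cingulum", "R_inf_parietal", "striatum"]
uptake = rng.normal(1.0, 0.1, size=(27, len(regions)))

# Metabolic connectivity: across-subject Pearson correlation between
# every pair of regions; high |r| means metabolism covaries.
conn = np.corrcoef(uptake, rowvar=False)      # regions x regions matrix

# Report edges above a simple illustrative threshold (real analyses
# correct for multiple comparisons rather than thresholding r directly).
i, j = np.triu_indices(len(regions), k=1)
for a, b, r in zip(i, j, conn[i, j]):
    if abs(r) > 0.4:
        print(f"{regions[a]} <-> {regions[b]}: r = {r:.2f}")
```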
Affiliation(s)
- Antoine Verger, Department of Nuclear Medicine, Assistance Publique-Hôpitaux de Marseille, Aix-Marseille Université, Timone University Hospital, France; Department of Nuclear Medicine & Nancyclotep Imaging Platform, CHRU Nancy, Lorraine University, France; IADI, INSERM, UMR 947, Lorraine University, Nancy, France
- Stéphane Roman, Department of Pediatric Otolaryngology and Neck Surgery, Assistance Publique-Hôpitaux de Marseille, Aix-Marseille Université, Timone University Hospital, France; Aix Marseille Univ, INSERM, UMR 1106, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Rose-May Chaudat, Department of Neurology and Neuropsychology, Assistance Publique-Hôpitaux de Marseille, Aix-Marseille Université, Timone University Hospital, France
- Olivier Felician, Department of Neurology and Neuropsychology, Assistance Publique-Hôpitaux de Marseille, Aix-Marseille Université, Timone University Hospital, France; Aix Marseille Univ, INSERM, UMR 1106, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Mathieu Ceccaldi, Department of Neurology and Neuropsychology, Assistance Publique-Hôpitaux de Marseille, Aix-Marseille Université, Timone University Hospital, France; Aix Marseille Univ, INSERM, UMR 1106, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Mira Didic, Department of Neurology and Neuropsychology, Assistance Publique-Hôpitaux de Marseille, Aix-Marseille Université, Timone University Hospital, France; Aix Marseille Univ, INSERM, UMR 1106, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Eric Guedj, Department of Nuclear Medicine, Assistance Publique-Hôpitaux de Marseille, Aix-Marseille Université, Timone University Hospital, France; Aix Marseille Univ, CNRS, UMR 7289, INT, Institut de Neurosciences de la Timone, Marseille, France; CERIMED, Aix-Marseille Université, Marseille, France
27
Ayasse ND, Lash A, Wingfield A. Effort Not Speed Characterizes Comprehension of Spoken Sentences by Older Adults with Mild Hearing Impairment. Front Aging Neurosci 2017; 8:329. [PMID: 28119598 PMCID: PMC5222878 DOI: 10.3389/fnagi.2016.00329] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2016] [Accepted: 12/19/2016] [Indexed: 12/13/2022] Open
Abstract
In spite of the rapidity of everyday speech, older adults tend to keep up relatively well in day-to-day listening. In laboratory settings older adults do not respond as quickly as younger adults in off-line tests of sentence comprehension, but the question is whether comprehension itself is actually slower. Two unique features of the human eye were used to address this question. First, we tracked eye-movements as 20 young adults and 20 healthy older adults listened to sentences that referred to one of four objects pictured on a computer screen. Although the older adults took longer to indicate the referenced object with a cursor-pointing response, their gaze moved to the correct object as rapidly as that of the younger adults. Second, we concurrently measured dilation of the pupil of the eye as a physiological index of effort. This measure revealed that although poorer hearing acuity did not slow processing, success came at the cost of greater processing effort.
Affiliation(s)
- Nicole D Ayasse, Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Amanda Lash, Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Arthur Wingfield, Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
28
Ward CM, Rogers CS, Van Engen KJ, Peelle JE. Effects of Age, Acoustic Challenge, and Verbal Working Memory on Recall of Narrative Speech. Exp Aging Res 2016; 42:97-111. [PMID: 26683044 DOI: 10.1080/0361073x.2016.1108785] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
BACKGROUND/STUDY CONTEXT A common goal during speech comprehension is to remember what we have heard. Encoding speech into long-term memory frequently requires processes such as verbal working memory that may also be involved in processing degraded speech. Here the authors tested whether young and older adult listeners' memory for short stories was worse when the stories were acoustically degraded, or whether the additional contextual support provided by a narrative would protect against these effects. METHODS The authors tested 30 young adults (aged 18-28 years) and 30 older adults (aged 65-79 years) with good self-reported hearing. Participants heard short stories that were presented as normal (unprocessed) speech or acoustically degraded using a noise vocoding algorithm with 24 or 16 channels. The degraded stories were still fully intelligible. Following each story, participants were asked to repeat the story in as much detail as possible. Recall was scored using a modified idea unit scoring approach, which included separately scoring hierarchical levels of narrative detail. RESULTS Memory for acoustically degraded stories was significantly worse than for normal stories at some levels of narrative detail. Older adults' memory for the stories was significantly worse overall, but there was no interaction between age and acoustic clarity or level of narrative detail. Verbal working memory (assessed by reading span) significantly correlated with recall accuracy for both young and older adults, whereas hearing ability (better ear pure tone average) did not. CONCLUSION The present findings are consistent with a framework in which the additional cognitive demands caused by a degraded acoustic signal use resources that would otherwise be available for memory encoding for both young and older adults. Verbal working memory is a likely candidate for supporting both of these processes.
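Noise vocoding, the degradation method used in this study, splits speech into frequency channels, extracts each channel's amplitude envelope, and uses the envelopes to modulate band-limited noise, so fewer channels means coarser spectral detail. A minimal sketch follows; the band spacing, filter orders, and 30 Hz envelope cutoff are assumptions, not the authors' exact algorithm.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(x, fs, n_channels=16, lo=100.0, hi=7000.0):
    """Noise-vocode a mono signal x sampled at fs Hz."""
    edges = np.geomspace(lo, hi, n_channels + 1)            # log-spaced band edges
    env_sos = butter(2, 30.0, "low", fs=fs, output="sos")   # 30 Hz envelope smoother
    noise = np.random.randn(len(x))
    out = np.zeros(len(x))
    for k in range(n_channels):
        band = butter(4, [edges[k], edges[k + 1]], "bandpass",
                      fs=fs, output="sos")
        speech_band = sosfiltfilt(band, x)
        envelope = sosfiltfilt(env_sos, np.abs(speech_band))  # rectify + smooth
        carrier = sosfiltfilt(band, noise)                    # band-limited noise
        out += np.clip(envelope, 0, None) * carrier
    return out / (np.max(np.abs(out)) + 1e-12)                # normalize peak

# Example: vocode 1 s of a synthetic amplitude-modulated tone at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
y = noise_vocode(x, fs, n_channels=16)    # fewer channels = more degraded
```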
Affiliation(s)
- Caitlin M Ward, Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
- Chad S Rogers, Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
- Kristin J Van Engen, Department of Psychology, Washington University in St. Louis, St. Louis, Missouri, USA
- Jonathan E Peelle, Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
29
Peelle JE, Wingfield A. The Neural Consequences of Age-Related Hearing Loss. Trends Neurosci 2016; 39:486-497. [PMID: 27262177 DOI: 10.1016/j.tins.2016.05.001] [Citation(s) in RCA: 152] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2016] [Revised: 05/04/2016] [Accepted: 05/09/2016] [Indexed: 01/02/2023]
Abstract
During hearing, acoustic signals travel up the ascending auditory pathway from the cochlea to auditory cortex; efferent connections provide descending feedback. Although auditory and cognitive processing in human listeners have sometimes been viewed as separate domains, a growing body of work suggests they are intimately coupled. Here, we review the effects of hearing loss on neural systems supporting spoken language comprehension, beginning with age-related physiological decline. We suggest that listeners recruit domain-general executive systems to maintain successful communication when the auditory signal is degraded, but that this compensatory processing has behavioral consequences: even relatively mild levels of hearing loss can lead to cascading cognitive effects that impact perception, comprehension, and memory, leading to increased listening effort during speech comprehension.
Affiliation(s)
- Jonathan E Peelle, Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
- Arthur Wingfield, Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
30
Wingfield A, Peelle JE. The effects of hearing loss on neural processing and plasticity. Front Syst Neurosci 2015; 9:35. [PMID: 25798095 PMCID: PMC4351590 DOI: 10.3389/fnsys.2015.00035] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2015] [Accepted: 02/19/2015] [Indexed: 11/28/2022] Open
Affiliation(s)
- Arthur Wingfield, Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Jonathan E Peelle, Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
31
Peelle JE. Methodological challenges and solutions in auditory functional magnetic resonance imaging. Front Neurosci 2014; 8:253. [PMID: 25191218 PMCID: PMC4139601 DOI: 10.3389/fnins.2014.00253] [Citation(s) in RCA: 49] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2014] [Accepted: 07/29/2014] [Indexed: 02/06/2023] Open
Abstract
Functional magnetic resonance imaging (fMRI) studies involve substantial acoustic noise. This review covers the difficulties posed by such noise for auditory neuroscience, as well as a number of possible solutions that have emerged. Acoustic noise can affect the processing of auditory stimuli by making them inaudible or unintelligible, and can result in reduced sensitivity to auditory activation in auditory cortex. Equally importantly, acoustic noise may also lead to increased listening effort, meaning that even when auditory stimuli are perceived, neural processing may differ from when the same stimuli are presented in quiet. These and other challenges have motivated a number of approaches for collecting auditory fMRI data. Although using a continuous echoplanar imaging (EPI) sequence provides high quality imaging data, these data may also be contaminated by background acoustic noise. Traditional sparse imaging has the advantage of avoiding acoustic noise during stimulus presentation, but at a cost of reduced temporal resolution. Recently, three classes of techniques have been developed to circumvent these limitations. The first is Interleaved Silent Steady State (ISSS) imaging, a variation of sparse imaging that involves collecting multiple volumes following a silent period while maintaining steady-state longitudinal magnetization. The second involves active noise control to limit the impact of acoustic scanner noise. Finally, novel MRI sequences that reduce the amount of acoustic noise produced during fMRI make the use of continuous scanning a more practical option. Together these advances provide unprecedented opportunities for researchers to collect high-quality data of hemodynamic responses to auditory stimuli using fMRI.
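To illustrate the sparse-imaging timing described above: each stimulus is presented during a silent gap and positioned so that its haemodynamic response peaks during the next volume acquisition. The schedule generator below uses assumed timings (2 s acquisition, 8 s gap, roughly 5 s to haemodynamic peak) purely as an illustration.

```python
def sparse_schedule(n_trials, t_acq=2.0, t_gap=8.0, hrf_peak=5.0):
    """Onset times for a sparse (clustered-acquisition) fMRI design.

    Each trial is one silent gap followed by one noisy volume
    acquisition. The stimulus is placed inside the gap so that its
    haemodynamic response (peaking about hrf_peak seconds after
    stimulus onset) coincides with the next acquisition.
    """
    trials = []
    t = 0.0
    for i in range(n_trials):
        acq_onset = t + t_gap                  # scanner noise starts here
        stim_onset = acq_onset - hrf_peak      # stimulus presented in silence
        trials.append({"trial": i, "stimulus_s": round(stim_onset, 1),
                       "acquisition_s": round(acq_onset, 1)})
        t = acq_onset + t_acq                  # next silent period begins
    return trials

for row in sparse_schedule(3):
    print(row)
# trial 0: stimulus at 3.0 s, acquisition at 8.0 s; trial 1: 13.0/18.0 s; ...
```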
Affiliation(s)
- Jonathan E Peelle, Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
32
Van Engen KJ, Peelle JE. Listening effort and accented speech. Front Hum Neurosci 2014; 8:577. [PMID: 25140140 PMCID: PMC4122174 DOI: 10.3389/fnhum.2014.00577] [Citation(s) in RCA: 75] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2014] [Accepted: 07/14/2014] [Indexed: 11/25/2022] Open
Affiliation(s)
- Jonathan E. Peelle, Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA