1
Valzolgher C, Federici A, Giovanelli E, Gessa E, Bottari D, Pavani F. Continuous tracking of effort and confidence while listening to speech-in-noise in young and older adults. Conscious Cogn 2024; 124:103747. PMID: 39213729; DOI: 10.1016/j.concog.2024.103747.
Abstract
Reporting discomfort when noise affects the listening experience suggests that listeners may be aware, at least to some extent, of adverse environmental conditions and their impact on listening. This involves monitoring internal states (effort and confidence). Here we quantified continuous self-report indices that track one's own internal states and investigated age-related differences in this ability. We instructed two groups of young and older adults to continuously report their confidence and effort while listening to stories in fluctuating noise. Using cross-correlation analyses between the time series of fluctuating noise and those of perceived effort or confidence, we showed that (1) participants modified their assessment of effort and confidence based on variations in the noise, with a 4 s lag; and (2) there were no differences between the groups. These findings suggest that this method could be extended to other domains, that the definition of metacognition may need to be broadened, and that this monitoring ability retains its value in older adults.
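The lag estimate reported above comes from cross-correlating two time series. As a rough illustration (not the authors' analysis code), the sketch below recovers a known 4 s lag between a simulated noise trace and a simulated effort rating; the 10 Hz sampling rate, the random traces, and all variable names are assumptions:

```python
import numpy as np

fs = 10                                   # samples per second (assumed)
rng = np.random.default_rng(0)
noise_level = rng.standard_normal(600)    # 60 s of simulated fluctuating noise
true_lag_s = 4.0                          # effort trails the noise by 4 s
effort = np.roll(noise_level, int(true_lag_s * fs))  # simulated rating trace

# Cross-correlate the zero-meaned series and locate the peak lag
a = noise_level - noise_level.mean()
b = effort - effort.mean()
xcorr = np.correlate(b, a, mode="full")
lags = np.arange(-(len(a) - 1), len(b)) / fs
best_lag = lags[np.argmax(xcorr)]
print(best_lag)  # 4.0 (seconds)
```

A positive peak lag means the rating trace follows the noise, which is the direction of interest in the study.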
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy.
- Elena Giovanelli
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Elena Gessa
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy; Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" - CIRCLeS, Trento, Italy
2
Großmann W. Listening with an Ageing Brain - a Cognitive Challenge. Laryngorhinootologie 2023; 102:S12-S34. PMID: 37130528; PMCID: PMC10184676; DOI: 10.1055/a-1973-3038.
Abstract
Hearing impairment has recently been identified as a major modifiable risk factor for cognitive decline in later life and has attracted increasing scientific interest. Sensory and cognitive decline are connected by complex bottom-up and top-down processes, so a sharp distinction between sensation, perception, and cognition is impossible. This review provides a comprehensive overview of the effects of healthy and pathological aging on the auditory and cognitive functioning underlying speech perception and comprehension, as well as of specific auditory deficits in the two most common neurodegenerative diseases of old age: Alzheimer's disease and Parkinson's syndrome. Hypotheses linking hearing loss to cognitive decline are discussed, and current knowledge on the effect of hearing rehabilitation on cognitive functioning is presented. This article provides an overview of the complex relationship between hearing and cognition in old age.
Affiliation(s)
- Wilma Großmann
- Universitätsmedizin Rostock, Klinik und Poliklinik für Hals-Nasen-Ohrenheilkunde, Kopf- und Halschirurgie "Otto Körner"
3
Milburn E, Dickey MW, Warren T, Hayes R. Increased reliance on world knowledge during language comprehension in healthy aging: evidence from verb-argument prediction. Aging Neuropsychol Cogn 2023; 30:1-33. PMID: 34353231; PMCID: PMC8818061; DOI: 10.1080/13825585.2021.1962791.
Abstract
Cognitive aging negatively impacts language comprehension performance. However, there is evidence that older adults skillfully use linguistic context and their crystallized world knowledge to offset age-related changes that negatively impact comprehension. Two visual-world paradigm experiments examined how aging changes verb-argument prediction, a comprehension process that relies on world knowledge but has rarely been examined in the cognitive-aging literature. Older adults did not differ from younger adults in their activation of an upcoming likely verb argument, particularly when cued by a semantically-rich agent+verb combination (Experiment 1). However, older adults showed elevated activation of previously-mentioned agents (Experiment 1) and of unlikely but verb-congruent referents (Experiment 2). This is novel evidence that older adults exploit semantic context and world knowledge during comprehension to successfully activate upcoming referents. However, older adults also show elevated activation of irrelevant information, consistent with previous findings demonstrating that older adults may experience greater proactive interference and competition from task-irrelevant information.
4
Van Os M, Kray J, Demberg V. Rational speech comprehension: Interaction between predictability, acoustic signal, and noise. Front Psychol 2022; 13:914239. PMID: 36591096; PMCID: PMC9802670; DOI: 10.3389/fpsyg.2022.914239.
Abstract
Introduction: During speech comprehension, multiple sources of information are available to listeners, which are combined to guide the recognition process. Models of speech comprehension posit that when the acoustic speech signal is obscured, listeners rely more on information from other sources. However, these models take into account only word frequency information and local contexts (surrounding syllables), not sentence-level information. To date, empirical studies investigating predictability effects in noise have not carefully controlled the tested speech sounds, while the literature investigating the effect of background noise on the recognition of speech sounds does not manipulate sentence predictability. Additionally, studies on the effect of background noise show conflicting results regarding which noise type affects speech comprehension most. We address this in the present experiment.
Methods: We investigate how listeners combine information from different sources when listening to sentences embedded in background noise. We manipulate top-down predictability, type of noise, and characteristics of the acoustic signal, creating conditions that differ in the extent to which a specific speech sound is masked, in a way that is grounded in prior work on the confusability of speech sounds in noise. Participants completed an online word recognition experiment.
Results and discussion: The results show that participants rely more on the provided sentence context when the acoustic signal is harder to process. This is the case even when interactions of the background noise and speech sounds lead to only small differences in intelligibility. Listeners probabilistically combine top-down predictions based on context with noisy bottom-up information from the acoustic signal, leading to a trade-off between the two types of information that depends on the combination of a specific type of background noise and speech sound.
Affiliation(s)
- Marjolein Van Os
- Department of Language Science and Technology, Saarland University, Saarbrücken, Germany
- Jutta Kray
- Department of Psychology, Saarland University, Saarbrücken, Germany
- Vera Demberg
- Department of Language Science and Technology, Saarland University, Saarbrücken, Germany; Department of Computer Science, Saarland University, Saarbrücken, Germany
5
Krueger M, Schulte M, Brand T. Assessing and Modeling Spatial Release From Listening Effort in Listeners With Normal Hearing: Reference Ranges and Effects of Noise Direction and Age. Trends Hear 2022; 26:23312165221129407. PMID: 36285532; PMCID: PMC9618758; DOI: 10.1177/23312165221129407.
Abstract
Listening to speech in noisy environments is challenging and effortful. Factors like the signal-to-noise ratio (SNR), the spatial separation between target speech and noise interferer(s), and possibly also the listener's age might influence perceived listening effort (LE). This study measured and modeled the effect of the spatial separation of target speech and interfering stationary speech-shaped noise on perceived LE and its relation to the age of the listeners. Reference ranges for the relationship between subjectively perceived LE and SNR for different noise azimuths were established. For this purpose, 70 listeners with normal hearing, drawn from three age groups, rated perceived LE using the Adaptive Categorical Listening Effort Scaling method (ACALES, Krueger et al., 2017a) with speech from the front and noise from 0°, 90°, 135°, or 180° azimuth. Based on these data, the spatial release from listening effort (SRLE) was calculated. The noise azimuth had a strong effect on SRLE, with the highest release at 135°. The binaural speech intelligibility model (BSIM2020, Hauth et al., 2020) predicted SRLE very well at negative SNRs but overestimated it at positive SNRs. No significant effect of age was found on the subjective ratings, so the reference ranges were determined independently of age. These reference ranges can be used for the classification of LE measurements. However, when the increase of perceived LE with SNR was analyzed, a significant age difference was found between the youngest and oldest groups when considering the upper range of the LE function.
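Since the conditions above are parameterized by SNR, a brief reminder of how SNR in dB is computed from signal and noise power may help. The waveforms below are simulated stand-ins, not the study's calibrated stimuli:

```python
import numpy as np

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)        # stand-in for a speech waveform
noise = 0.5 * rng.standard_normal(16000)   # masking noise at half the amplitude

# SNR in dB: ratio of mean signal power to mean noise power, log-scaled
snr_db = 10 * np.log10(np.mean(speech**2) / np.mean(noise**2))
print(round(snr_db, 1))  # ≈ 6 dB, since halving amplitude quarters the power
```

Negative SNRs, where BSIM2020 predicted well, simply mean the noise power exceeds the speech power.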
Affiliation(s)
- Melanie Krueger
- Hörzentrum Oldenburg gGmbH, Oldenburg, Germany
- Thomas Brand
- Medizinische Physik, Department für Medizinische Physik und Akustik, Fakultät VI, Carl-von-Ossietzky Universität Oldenburg, Oldenburg, Germany
6
Failes E, Sommers MS. Using Eye-Tracking to Investigate an Activation-Based Account of False Hearing in Younger and Older Adults. Front Psychol 2022; 13:821044. PMID: 35651579; PMCID: PMC9150819; DOI: 10.3389/fpsyg.2022.821044.
Abstract
Several recent studies have demonstrated context-based, high-confidence misperceptions in hearing, referred to as false hearing. These studies have unanimously found that older adults are more susceptible to false hearing than are younger adults, which the authors have attributed to an age-related decline in the ability to inhibit the activation of a contextually predicted (but incorrect) response. However, no published work has directly tested this activation-based account of false hearing. In the present study, younger and older adults listened to sentences in which the semantic context provided by the sentence was either unpredictive, highly predictive and valid, or highly predictive and misleading in relation to a sentence-final word in noise. Participants were tasked with clicking on one of four images to indicate which image depicted the sentence-final word in noise. We used eye-tracking to investigate how activation of different response options, as revealed in patterns of fixations, changed in real time over the course of sentences. We found that both younger and older adults exhibited anticipatory activation of the target word when highly predictive contextual cues were available. When these contextual cues were misleading, younger adults were able to suppress the activation of the contextually predicted word to a greater extent than older adults. These findings are interpreted as evidence for an activation-based model of speech perception and for the role of inhibitory control in false hearing.
Affiliation(s)
- Eric Failes
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO, United States
- Mitchell S Sommers
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO, United States
7
Predictive Sentence Context Reduces Listening Effort in Older Adults With and Without Hearing Loss and With High and Low Working Memory Capacity. Ear Hear 2022; 43:1164-1177. PMID: 34983897; PMCID: PMC9232842; DOI: 10.1097/aud.0000000000001192.
Abstract
OBJECTIVES: Listening effort is needed to understand speech that is degraded by hearing loss, a noisy environment, or both. This in turn reduces cognitive spare capacity, the amount of cognitive resources available for allocation to concurrent tasks. Predictive sentence context enables older listeners to perceive speech more accurately, but how does contextual information affect older adults' listening effort? The current study examines the impacts of sentence context and cognitive (memory) load on sequential dual-task behavioral performance in older adults. To assess whether effects of context and memory load differ as a function of older listeners' hearing status, baseline working memory capacity, or both, effects were compared across separate groups of participants with and without hearing loss and with high and low working memory capacity.
DESIGN: Participants were older adults (age 60-84 years; n = 63) who passed a screen for cognitive impairment. A median split classified participants into groups with high and low working memory capacity. On each trial, participants listened to spoken sentences in noise and reported sentence-final words that were either predictable or unpredictable based on sentence context, and also recalled short (low-load) or long (high-load) sequences of digits that were presented visually before each spoken sentence. Speech intelligibility was quantified as word identification accuracy, and measures of listening effort included digit recall accuracy and response time to words and digits. Correlations of context benefit in each dependent measure with working memory and vocabulary were also examined.
RESULTS: Across all participant groups, accuracy and response time for both word identification and digit recall were facilitated by predictive context, indicating that in addition to an improvement in intelligibility, listening effort was also reduced when sentence-final words were predictable. Effects of predictability on all listening effort measures were observed whether or not trials with an incorrect word identification response were excluded, indicating that the effects of predictability on listening effort did not depend on speech intelligibility. In addition, although cognitive load did not affect word identification accuracy, response time for word identification and digit recall, as well as accuracy for digit recall, were impaired under the high-load condition, indicating that cognitive load reduced the amount of cognitive resources available for speech processing. Context benefit in speech intelligibility was positively correlated with vocabulary but was not related to working memory capacity.
CONCLUSIONS: Predictive sentence context reduces listening effort in cognitively healthy older adults, resulting in greater cognitive spare capacity available for other mental tasks, irrespective of the presence or absence of hearing loss and baseline working memory capacity.
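The median-split grouping used in the design can be sketched in a few lines; the working-memory scores below are invented for illustration, and the tie-handling rule (ties go to the low group) is an assumption:

```python
import numpy as np

# Hypothetical working-memory capacity scores, one per participant
scores = np.array([12, 18, 15, 21, 9, 17, 14, 20])
median = np.median(scores)          # 16.0
high_wm = scores > median           # high-capacity group
low_wm = ~high_wm                   # low-capacity group (ties go low)
print(high_wm.sum(), low_wm.sum())  # 4 4
```

With an even number of participants and no ties at the median, this yields two equal-sized groups.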
8
Amichetti NM, Neukam J, Kinney AJ, Capach N, March SU, Svirsky MA, Wingfield A. Adults with cochlear implants can use prosody to determine the clausal structure of spoken sentences. J Acoust Soc Am 2021; 150:4315. PMID: 34972310; PMCID: PMC8674009; DOI: 10.1121/10.0008899.
Abstract
Speech prosody, including pitch contour, word stress, pauses, and vowel lengthening, can aid the detection of the clausal structure of a multi-clause sentence and this, in turn, can help listeners determine its meaning. However, for cochlear implant (CI) users, the reduced acoustic richness of the signal raises the question of whether CI users have difficulty using sentence prosody to detect syntactic clause boundaries within sentences, or whether this ability is rescued by the redundancy of the prosodic features that normally co-occur at clause boundaries. Twenty-two CI users, ranging in age from 19 to 77 years old, recalled three types of sentences: sentences in which the prosodic pattern was appropriate to the location of a clause boundary within the sentence (congruent prosody), sentences with reduced prosodic information, and sentences in which the location of the clause boundary and the prosodic marking of a clause boundary were placed in conflict. The results showed the presence of congruent prosody to be associated with superior sentence recall and reduced processing effort, as indexed by pupil dilation. Individual differences in a standard test of word recognition (consonant-nucleus-consonant score) were related to recall accuracy as well as processing effort. The outcomes are discussed in terms of the redundancy of the prosodic features that normally accompany a clause boundary, and of processing effort.
Affiliation(s)
- Nicole M Amichetti
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
- Jonathan Neukam
- Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
- Alexander J Kinney
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
- Nicole Capach
- Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
- Samantha U March
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
- Mario A Svirsky
- Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
- Arthur Wingfield
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
9
Van Os M, Kray J, Demberg V. Mishearing as a Side Effect of Rational Language Comprehension in Noise. Front Psychol 2021; 12:679278. PMID: 34552526; PMCID: PMC8450506; DOI: 10.3389/fpsyg.2021.679278.
Abstract
Language comprehension in noise can sometimes lead to mishearing, due to the noise disrupting the speech signal. Some of the difficulties in dealing with the noisy signal can be alleviated by drawing on the context – indeed, top-down predictability has been shown to facilitate speech comprehension in noise. Previous studies have furthermore shown that strong reliance on top-down predictions can lead to increased rates of mishearing, especially in older adults, an effect attributed to general deficits in cognitive control. We here propose that the observed mishearing may be a simple consequence of rational language processing in noise: rather than reflecting a failure on the part of older comprehenders, it is predicted by rational processing accounts. To test this hypothesis, we extend earlier studies by running an online listening experiment with younger and older adults, carefully controlling the target and direct competitor in our stimuli. We show that mishearing is directly related to the perceptibility of the signal. We furthermore add an analysis of wrong responses, which shows that the results are at odds with the idea that participants rely too strongly on context in this task: most false answers are close to the speech signal, not to the semantics of the context.
Affiliation(s)
- Marjolein Van Os
- Department of Language Science and Technology, Saarland University, Saarbrücken, Germany
- Jutta Kray
- Department of Psychology, Saarland University, Saarbrücken, Germany
- Vera Demberg
- Department of Language Science and Technology, Saarland University, Saarbrücken, Germany; Department of Computer Science, Saarland University, Saarbrücken, Germany
10
Silcox JW, Payne BR. The costs (and benefits) of effortful listening on context processing: A simultaneous electrophysiology, pupillometry, and behavioral study. Cortex 2021; 142:296-316. PMID: 34332197; DOI: 10.1016/j.cortex.2021.06.007.
Abstract
There is an apparent disparity between the fields of cognitive audiology and cognitive electrophysiology as to how linguistic context is used when listening to perceptually challenging speech. To gain a clearer picture of how listening effort impacts context use, we conducted a pre-registered study to simultaneously examine electrophysiological, pupillometric, and behavioral responses when listening to sentences varying in contextual constraint and acoustic challenge in the same sample. Participants (N = 44) listened to sentences that were highly constraining and completed with expected or unexpected sentence-final words ("The prisoners were planning their escape/party") or were low-constraint sentences with unexpected sentence-final words ("All day she thought about the party"). Sentences were presented either in quiet or with +3 dB SNR background noise. Pupillometry and EEG were simultaneously recorded, and subsequent sentence recognition and word recall were measured. While the N400 expectancy effect was diminished by noise, suggesting impaired real-time context use, we simultaneously observed a beneficial effect of constraint on subsequent recognition memory for degraded speech. Importantly, analyses of trial-to-trial coupling between pupil dilation and N400 amplitude showed that when participants showed increased listening effort (i.e., greater pupil dilation), there was a subsequent recovery of the N400 effect; at the same time, higher effort was related to poorer subsequent sentence recognition and word recall. Collectively, these findings suggest divergent effects of acoustic challenge and listening effort on context use: while noise impairs the rapid use of context to facilitate lexical semantic processing in general, this negative effect is attenuated when listeners show increased effort in response to noise. However, this effort-induced reliance on context for online word processing comes at the cost of poorer subsequent memory.
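A minimal sketch of trial-to-trial coupling, assuming a simple per-trial Pearson correlation between pupil dilation and N400 amplitude; the actual study likely used a more elaborate mixed-effects approach, and the simulated data, coupling strength, and variable names here are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 80

# Simulated per-trial measures: pupil dilation (z-scored) and an N400
# amplitude that is partially driven by it (0.5 is an assumed coupling)
pupil = rng.standard_normal(n_trials)
n400 = 0.5 * pupil + 0.8 * rng.standard_normal(n_trials)

# Trial-to-trial coupling as the Pearson correlation across trials
r = np.corrcoef(pupil, n400)[0, 1]
print(round(r, 2))
```

A positive r across trials would indicate that trials with greater pupil dilation tend to carry larger N400 responses, the direction of the coupling reported above.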
Affiliation(s)
- Brennan R Payne
- Department of Psychology, University of Utah, USA; Interdepartmental Neuroscience Program, University of Utah, USA
11
Luthra S, Peraza-Santiago G, Beeson K, Saltzman D, Crinnion AM, Magnuson JS. Robust Lexically Mediated Compensation for Coarticulation: Christmash Time Is Here Again. Cogn Sci 2021; 45:e12962. PMID: 33877697; PMCID: PMC8243960; DOI: 10.1111/cogs.12962.
Abstract
A long-standing question in cognitive science is how high-level knowledge is integrated with sensory input. For example, listeners can leverage lexical knowledge to interpret an ambiguous speech sound, but do such effects reflect direct top-down influences on perception or merely postperceptual biases? A critical test case in the domain of spoken word recognition is lexically mediated compensation for coarticulation (LCfC). Previous LCfC studies have shown that a lexically restored context phoneme (e.g., /s/ in Christma#) can alter the perceived place of articulation of a subsequent target phoneme (e.g., the initial phoneme of a stimulus from a tapes-capes continuum), consistent with the influence of an unambiguous context phoneme in the same position. Because this phoneme-to-phoneme compensation for coarticulation is considered sublexical, scientists agree that evidence for LCfC would constitute strong support for top-down interaction. However, results from previous LCfC studies have been inconsistent, and positive effects have often been small. Here, we conducted extensive piloting of stimuli prior to testing for LCfC. Specifically, we ensured that context items elicited robust phoneme restoration (e.g., that the final phoneme of Christma# was reliably identified as /s/) and that unambiguous context-final segments (e.g., a clear /s/ at the end of Christmas) drove reliable compensation for coarticulation for a subsequent target phoneme. We observed robust LCfC in a well-powered, preregistered experiment with these pretested items (N = 40) as well as in a direct replication study (N = 40). These results provide strong evidence in favor of computational models of spoken word recognition that include top-down feedback.
Affiliation(s)
- James S. Magnuson
- Psychological Sciences, University of Connecticut
- BCBL, Basque Center on Cognition, Brain and Language
- Ikerbasque, Basque Foundation for Science
12
Harel-Arbeli T, Wingfield A, Palgi Y, Ben-David BM. Age-Related Differences in the Online Processing of Spoken Semantic Context and the Effect of Semantic Competition: Evidence From Eye Gaze. J Speech Lang Hear Res 2021; 64:315-327. PMID: 33561353; DOI: 10.1044/2020_jslhr-20-00142.
Abstract
Purpose: The study examined age-related differences in the use of semantic context and in the effect of semantic competition in spoken sentence processing. We used offline (response latency) and online (eye gaze) measures, using the "visual world" eye-tracking paradigm.
Method: Thirty younger and 30 older adults heard sentences related to one of four images presented on a computer monitor. They were asked to touch the image corresponding to the final word of the sentence (target word). Three conditions were used: a nonpredictive sentence, a predictive sentence suggesting one of the four images on the screen (semantic context), and a predictive sentence suggesting two possible images (semantic competition).
Results: Online eye gaze data showed no age-related differences with nonpredictive sentences, but revealed slowed processing for older adults when context was presented. With the addition of semantic competition to context, older adults were slower to look at the target word after it had been heard. In contrast, offline latency analysis did not show age-related differences in the effects of context and competition. As expected, older adults were generally slower to touch the image than younger adults.
Conclusions: Traditional offline measures were not able to reveal the complex effect of aging on spoken semantic context processing. Online eye gaze measures suggest that older adults were slower than younger adults to predict an indicated object based on semantic context. Semantic competition affected online processing for older adults more than for younger adults, with no accompanying age-related differences in latency. This supports an early age-related inhibition deficit, interfering with processing, and not necessarily with response execution.
Affiliation(s)
- Tami Harel-Arbeli
- Department of Gerontology, University of Haifa, Israel
- Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Israel
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA
- Yuval Palgi
- Department of Gerontology, University of Haifa, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Ontario, Canada
- Toronto Rehabilitation Institute, University Health Networks, Ontario, Canada
13
Signoret C, Andersen LM, Dahlström Ö, Blomberg R, Lundqvist D, Rudner M, Rönnberg J. The Influence of Form- and Meaning-Based Predictions on Cortical Speech Processing Under Challenging Listening Conditions: A MEG Study. Front Neurosci 2020; 14:573254. PMID: 33100961; PMCID: PMC7546411; DOI: 10.3389/fnins.2020.573254.
Abstract
Under adverse listening conditions, prior linguistic knowledge about the form (i.e., phonology) and meaning (i.e., semantics) helps us to predict what an interlocutor is about to say. Previous research has shown that accurate predictions of incoming speech increase speech intelligibility, and that semantic predictions enhance the perceptual clarity of degraded speech even when exact phonological predictions are possible. In addition, working memory (WM) is thought to have specific influence over anticipatory mechanisms by actively maintaining and updating the relevance of predicted vs. unpredicted speech inputs. However, the relative impact on speech processing of deviations from expectations related to form and meaning is incompletely understood. Here, we use MEG to investigate the cortical temporal processing of deviations from the expected form and meaning of final words during sentence processing. Our overall aim was to observe how deviations from the expected form and meaning modulate cortical speech processing under adverse listening conditions and to investigate the degree to which this is associated with WM capacity. Results indicated that different types of deviations are processed differently in the auditory N400 and Mismatch Negativity (MMN) components. In particular, the MMN was sensitive to the type of deviation (form or meaning), whereas the N400 was sensitive to the magnitude of the deviation rather than its type. WM capacity was associated with the ability to process incoming phonological information and with semantic integration.
Collapse
Affiliation(s)
- Carine Signoret
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
| | - Lau M Andersen
- The National Research Facility for Magnetoencephalography, Department of Clinical Neuroscience, Karolinska Institutet, Solna, Sweden.,Center of Functionally Integrative Neuroscience, Institute of Clinical Medicine, Aarhus University, Aarhus, Denmark
| | - Örjan Dahlström
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
| | - Rina Blomberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
| | - Daniel Lundqvist
- The National Research Facility for Magnetoencephalography, Department of Clinical Neuroscience, Karolinska Institutet, Solna, Sweden
| | - Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
| | - Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
| |
Collapse
|
14
|
Blurring past and present: Using false memory to better understand false hearing in young and older adults. Mem Cognit 2020; 48:1403-1416. [PMID: 32671592 DOI: 10.3758/s13421-020-01068-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
A number of recent studies have shown that older adults are more susceptible to context-based misperceptions in hearing (Rogers, Jacoby, & Sommers, Psychology and Aging, 27, 33-45, 2012; Sommers, Morton, & Rogers, Remembering: Attributions, Processes, and Control in Human Memory [Essays in Honor of Larry Jacoby], pp. 269-284, 2015) than are young adults. One explanation for these age-related increases in what we term false hearing is that older adults are less able than young individuals to inhibit a prepotent response favored by context. A similar explanation has been proposed for demonstrations of age-related increases in false memory (Jacoby, Bishara, Hessels, & Toth, Journal of Experimental Psychology: General, 134, 131-148, 2005). The present study was designed to compare susceptibility to false hearing and false memory in a group of young and older adults. In Experiment 1, we replicated the findings of past studies demonstrating increased frequency of false hearing in older, relative to young, adults. In Experiment 2, we demonstrated older adults' increased susceptibility to false memory in the same sample. Importantly, we found that participants who were more prone to false hearing also tended to be more prone to false memory, supporting the idea that the two phenomena share a common mechanism. The results are discussed within the framework of a capture model, which differentiates between context-based responding resulting from failures of cognitive control and context-based guessing.
Collapse
|
15
|
Roediger HL, Tekin E. Recognition memory: Tulving's contributions and some new findings. Neuropsychologia 2020; 139:107350. [PMID: 31978402 DOI: 10.1016/j.neuropsychologia.2020.107350] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2019] [Revised: 01/13/2020] [Accepted: 01/15/2020] [Indexed: 10/25/2022]
Abstract
Endel Tulving has provided unparalleled contributions to the study of human memory. We consider here his contributions to the study of recognition memory and celebrate his first article on recognition, a nearly forgotten but (we argue) essential paper from 1968. We next consider his distinction between remembering and knowing, its relation to confidence, and the implications of high levels of false remembering in the DRM paradigm for using phenomenal experiences as measures of memory. We next pivot to newer work, the use of confidence accuracy characteristic plots in analyzing standard recognition memory experiments. We argue they are quite useful in such research, as they are in eyewitness research. For example, we report that even with hundreds of items, high confidence in a response indicates high accuracy, just as it does in one-item eyewitness research. Finally, we argue that amnesia (rapid forgetting) occurs in all people (not just amnesic patients) for some of their experiences. We provide evidence from three experiments revealing that subjects who fail to recognize recently studied items (miss responses) do so with high confidence 15-20% of the time. Such high confidence misses constitute our definition of everyday amnesia that can occur even in college student populations.
Collapse
|
16
|
Rogers CS, Jones MS, McConkey S, Spehar B, Van Engen KJ, Sommers MS, Peelle JE. Age-Related Differences in Auditory Cortex Activity During Spoken Word Recognition. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2020; 1:452-473. [PMID: 34327333 PMCID: PMC8318202 DOI: 10.1162/nol_a_00021] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/22/2023]
Abstract
Understanding spoken words requires the rapid matching of a complex acoustic stimulus with stored lexical representations. The degree to which brain networks supporting spoken word recognition are affected by adult aging remains poorly understood. In the current study we used fMRI to measure the brain responses to spoken words in two conditions: an attentive listening condition, in which no response was required, and a repetition task. Listeners were 29 young adults (aged 19-30 years) and 32 older adults (aged 65-81 years) without self-reported hearing difficulty. We found largely similar patterns of activity during word perception for both young and older adults, centered on the bilateral superior temporal gyrus. As expected, the repetition condition resulted in significantly more activity in areas related to motor planning and execution (including the premotor cortex and supplemental motor area) compared to the attentive listening condition. Importantly, however, older adults showed significantly less activity in probabilistically defined auditory cortex than young adults when listening to individual words in both the attentive listening and repetition tasks. Age differences in auditory cortex activity were seen selectively for words (no age differences were present for 1-channel vocoded speech, used as a control condition), and could not be easily explained by accuracy on the task, movement in the scanner, or hearing sensitivity (available on a subset of participants). These findings indicate largely similar patterns of brain activity for young and older adults when listening to words in quiet, but suggest less recruitment of auditory cortex by the older adults.
Collapse
Affiliation(s)
- Chad S. Rogers
- Department of Psychology, Union College, Schenectady, NY, USA
| | - Michael S. Jones
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
| | - Sarah McConkey
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
| | - Brent Spehar
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
| | - Kristin J. Van Engen
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO, USA
| | - Mitchell S. Sommers
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO, USA
| | - Jonathan E. Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
| |
Collapse
|
17
|
What's "left"? Hemispheric sensitivity to predictability and congruity during sentence reading by older adults. Neuropsychologia 2019; 133:107173. [PMID: 31430444 DOI: 10.1016/j.neuropsychologia.2019.107173] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2019] [Revised: 08/15/2019] [Accepted: 08/16/2019] [Indexed: 11/23/2022]
Abstract
A number of studies have found that older adults' sentence processing tends not to be characterized by the prediction-related effects attested for young adults. Here, we further probed older adults' sensitivity to predictability and congruity by recording event-related brain potentials (ERPs) as adults over age 60 read pairs of sentences, which ended with either the expected word, an unexpected word from the same semantic category, or an unexpected word from a different category. Half of the contexts were highly constraining. Consistent with patterns attested when older adults listened to these same materials (Federmeier et al., 2002), N400s, on average, were smaller to expected than to unexpected words, but did not show constraint-related reductions for unexpected words that shared features with the most predictable completion (an effect well-attested in young adults). This pattern resembles that seen in young adults for right-hemisphere-biased processing. To assess whether older adults retain young-like hemispheric asymmetries but recruit right hemisphere mechanisms more, we examined responses to the target words using visual half-field presentation. Whereas young adults show an asymmetric pattern, with prediction-related N400 amplitude reductions for left- but not right-hemisphere-initiated processing (Federmeier and Kutas, 1999b), older adults showed no reliable processing asymmetries and no evidence for prediction with left hemisphere-initiated presentation. The results suggest that left hemisphere mechanisms important for prediction during language processing are less efficacious in older adulthood.
Collapse
|
18
|
Listening back in time: Does attention to memory facilitate word-in-noise identification? Atten Percept Psychophys 2019; 81:253-269. [PMID: 30187397 DOI: 10.3758/s13414-018-1586-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The ephemeral nature of spoken words creates a challenge for oral communications where incoming speech sounds must be processed in relation to representations of just-perceived sounds stored in short-term memory. This can be particularly taxing in noisy environments where perception of speech is often impaired or initially incorrect. Usage of prior contextual information (e.g., a semantically related word) has been shown to improve speech-in-noise identification. In three experiments, we demonstrate a comparable effect of a semantically related cue word placed after an energetically masked target word in improving accuracy of target-word identification. This effect persisted irrespective of cue modality (visual or auditory cue word) and, in the case of cues after the target, lasted even when the cue word was presented up to 4 seconds after the target. The results are framed in the context of an attention to memory model that seeks to explain the cognitive and neural mechanisms behind processing of items in auditory memory.
Collapse
|
19
|
Winn MB, Moore AN. Pupillometry Reveals That Context Benefit in Speech Perception Can Be Disrupted by Later-Occurring Sounds, Especially in Listeners With Cochlear Implants. Trends Hear 2019; 22:2331216518808962. [PMID: 30375282 PMCID: PMC6207967 DOI: 10.1177/2331216518808962] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
Contextual cues can be used to improve speech recognition, especially for people with hearing impairment. However, previous work has suggested that when the auditory signal is degraded, context might be used more slowly than when the signal is clear. This potentially puts the hearing-impaired listener in a dilemma of continuing to process the last sentence when the next sentence has already begun. This study measured the time course of the benefit of context using pupillary responses to high- and low-context sentences that were followed by silence or various auditory distractors (babble noise, ignored digits, or attended digits). Participants were listeners with cochlear implants or normal hearing using a 12-channel noise vocoder. Context-related differences in pupil dilation were greater for normal hearing than for cochlear implant listeners, even when scaled for differences in pupil reactivity. The benefit of context was systematically reduced for both groups by the presence of the later-occurring sounds, including virtually complete negation when sentences were followed by another attended utterance. These results challenge how we interpret the benefit of context in experiments that present just one utterance at a time. If a listener uses context to “repair” part of a sentence, and later-occurring auditory stimuli interfere with that repair process, the benefit of context might not survive outside the idealized laboratory or clinical environment. Elevated listening effort in hearing-impaired listeners might therefore result not just from poor auditory encoding but also inefficient use of context and prolonged processing of misperceived utterances competing with perception of incoming speech.
Collapse
Affiliation(s)
- Matthew B Winn
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
| | - Ashley N Moore
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
| |
Collapse
|
20
|
Extrinsic Cognitive Load Impairs Spoken Word Recognition in High- and Low-Predictability Sentences. Ear Hear 2019; 39:378-389. [PMID: 28945658 DOI: 10.1097/aud.0000000000000493] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
OBJECTIVES Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences. Downstream effects of speech degradation and sentence predictability on the cognitive load task were also examined. DESIGN One hundred twenty young adults identified sentence-final spoken words in high- and low-predictability Speech Perception in Noise sentences. Cognitive load consisted of a preload of short (low-load) or long (high-load) sequences of digits, presented visually before each spoken sentence and reported either before or after identification of the sentence-final word. LE was varied by spectrally degrading sentences with four-, six-, or eight-channel noise vocoding. Level of spectral degradation and order of report (digits first or words first) were between-participants variables. Effects of cognitive load, sentence predictability, and speech degradation on accuracy of sentence-final word identification as well as recall of preload digit sequences were examined. RESULTS In addition to anticipated main effects of sentence predictability and spectral degradation on word recognition, we found an effect of cognitive load, such that words were identified more accurately under low load than high load. However, load differentially affected word identification in high- and low-predictability sentences depending on the level of sentence degradation. Under severe spectral degradation (four-channel vocoding), the effect of cognitive load on word identification was present for high-predictability sentences but not for low-predictability sentences. Under mild spectral degradation (eight-channel vocoding), the effect of load was present for low-predictability sentences but not for high-predictability sentences. There were also reliable downstream effects of speech degradation and sentence predictability on recall of the preload digit sequences. Long digit sequences were more easily recalled following spoken sentences that were less spectrally degraded. When digits were reported after identification of sentence-final words, short digit sequences were recalled more accurately when the spoken sentences were predictable. CONCLUSIONS Extrinsic cognitive load can impair recognition of spectrally degraded spoken words in a sentence recognition task. Cognitive load affected word identification in both high- and low-predictability sentences, suggesting that load may impact both context use and lower-level perceptual processes. Consistent with prior work, LE also had downstream effects on memory for visual digit sequences. Results support the proposal that extrinsic cognitive load and LE induced by signal degradation both draw on a central, limited pool of cognitive resources that is used to recognize spoken words in sentences under adverse listening conditions.
Collapse
|
21
|
Toscano JC, Lansing CR. Age-Related Changes in Temporal and Spectral Cue Weights in Speech. LANGUAGE AND SPEECH 2019; 62:61-79. [PMID: 29103359 DOI: 10.1177/0023830917737112] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Listeners weight acoustic cues in speech according to their reliability, but few studies have examined how cue weights change across the lifespan. Previous work has suggested that older adults have deficits in auditory temporal discrimination, which could affect the reliability of temporal phonetic cues, such as voice onset time (VOT), and in turn, impact speech perception in real-world listening environments. We addressed this by examining younger and older adults' use of VOT and onset F0 (a secondary phonetic cue) for voicing judgments (e.g., /b/ vs. /p/), using both synthetic and naturally produced speech. We found age-related differences in listeners' use of the two voicing cues, such that older adults relied more heavily on onset F0 than younger adults, even though this cue is less reliable in American English. These results suggest that phonetic cue weights continue to change across the lifespan.
Collapse
|
22
|
Amichetti NM, Atagi E, Kong YY, Wingfield A. Linguistic Context Versus Semantic Competition in Word Recognition by Younger and Older Adults With Cochlear Implants. Ear Hear 2019; 39:101-109. [PMID: 28700448 PMCID: PMC5741484 DOI: 10.1097/aud.0000000000000469] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
OBJECTIVES The increasing numbers of older adults now receiving cochlear implants raises the question of how the novel signal produced by cochlear implants may interact with cognitive aging in the recognition of words heard spoken within a linguistic context. The objective of this study was to pit the facilitative effects of a constraining linguistic context against a potential age-sensitive negative effect of response competition on effectiveness of word recognition. DESIGN Younger (n = 8; mean age = 22.5 years) and older (n = 8; mean age = 67.5 years) adult implant recipients heard 20 target words as the final words in sentences that manipulated the target word's probability of occurrence within the sentence context. Data from published norms were also used to measure response entropy, calculated as the total number of different responses and the probability distribution of the responses suggested by the sentence context. Sentence-final words were presented to participants using a word-onset gating paradigm, in which a target word was presented with increasing amounts of its onset duration in 50 msec increments until the word was correctly identified. RESULTS Results showed that for both younger and older adult implant users, the amount of word-onset information needed for correct recognition of sentence-final words was inversely proportional to their likelihood of occurrence within the sentence context, with older adults gaining differential advantage from the contextual constraints offered by a sentence context. On the negative side, older adults' word recognition was differentially hampered by high response entropy, with this effect being driven primarily by the number of competing responses that might also fit the sentence context. CONCLUSIONS Consistent with previous research with normal-hearing younger and older adults, the present results showed older adult implant users' recognition of spoken words to be highly sensitive to linguistic context. This sensitivity, however, also resulted in a greater degree of interference from other words that might also be activated by the context, with negative effects on ease of word recognition. These results are consistent with an age-related inhibition deficit extending to the domain of semantic constraints on word recognition.
Collapse
Affiliation(s)
- Nicole M. Amichetti
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
| | - Eriko Atagi
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
| | - Ying-Yee Kong
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
| | - Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
| |
Collapse
|
23
|
Payne BR, Silcox JW. Aging, context processing, and comprehension. PSYCHOLOGY OF LEARNING AND MOTIVATION 2019. [DOI: 10.1016/bs.plm.2019.07.001] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
|
24
|
|
25
|
Abstract
Contextual and sensory information are combined in speech perception. Conflict between the two can lead to false hearing, defined as a high-confidence misidentification of a spoken word. Rogers, Jacoby, and Sommers (Psychology and Aging, 27(1), 33-45, 2012) found that older adults are more susceptible to false hearing than are young adults, using a combination of semantic priming and repetition priming to create context. In this study, the type of context (repetition vs. semantic priming) responsible for false hearing was examined. Older and young adult participants read and listened to a list of paired associates (e.g., ROW-BOAT) and were told to remember the pairs for a later memory test. Following the memory test, participants identified words masked in noise that were preceded by a cue word in the clear. Targets were semantically associated to the cue (e.g., ROW-BOAT), unrelated to the cue (e.g., JAW-PASS), or phonologically related to a semantic associate of the cue (e.g., ROW-GOAT). How often each cue word and its paired associate were presented prior to the memory test was manipulated (0, 3, or 5 times) to test effects of repetition priming. Results showed repetitions had no effect on rates of context-based listening or false hearing. However, repetition did significantly increase sensory information as a basis for metacognitive judgments in young and older adults. This pattern suggests that semantic priming dominates as the basis for false hearing and highlights context and sensory information operating as qualitatively different bases for listening and metacognition.
Collapse
|
26
|
Spankovich C, Gonzalez VB, Su D, Bishop CE. Self reported hearing difficulty, tinnitus, and normal audiometric thresholds, the National Health and Nutrition Examination Survey 1999–2002. Hear Res 2018; 358:30-36. [DOI: 10.1016/j.heares.2017.12.001] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/01/2017] [Revised: 11/13/2017] [Accepted: 12/04/2017] [Indexed: 11/29/2022]
|
27
|
Lam BPW, Xie Z, Tessmer R, Chandrasekaran B. The Downside of Greater Lexical Influences: Selectively Poorer Speech Perception in Noise. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2017; 60:1662-1673. [PMID: 28586824 PMCID: PMC5544416 DOI: 10.1044/2017_jslhr-h-16-0133] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/02/2016] [Revised: 09/15/2016] [Accepted: 12/15/2016] [Indexed: 06/01/2023]
Abstract
PURPOSE Although lexical information influences phoneme perception, the extent to which reliance on lexical information enhances speech processing in challenging listening environments is unclear. We examined the extent to which individual differences in lexical influences on phonemic processing impact speech processing in maskers containing varying degrees of linguistic information (2-talker babble or pink noise). METHOD Twenty-nine monolingual English speakers were instructed to ignore the lexical status of spoken syllables (e.g., gift vs. kift) and to only categorize the initial phonemes (/g/ vs. /k/). The same participants then performed speech recognition tasks in the presence of 2-talker babble or pink noise in audio-only and audiovisual conditions. RESULTS Individuals who demonstrated greater lexical influences on phonemic processing experienced greater speech processing difficulties in 2-talker babble than in pink noise. These selective difficulties were present across audio-only and audiovisual conditions. CONCLUSION Individuals with greater reliance on lexical processes during speech perception exhibit impaired speech recognition in listening conditions in which competing talkers introduce audible linguistic interferences. Future studies should examine the locus of lexical influences/interferences on phonemic processing and speech-in-speech processing.
Collapse
Affiliation(s)
- Boji P. W. Lam
- Department of Communication Sciences & Disorders, Moody College of Communication, The University of Texas at Austin
| | - Zilong Xie
- Department of Communication Sciences & Disorders, Moody College of Communication, The University of Texas at Austin
| | - Rachel Tessmer
- Department of Communication Sciences & Disorders, Moody College of Communication, The University of Texas at Austin
| | - Bharath Chandrasekaran
- Department of Communication Sciences & Disorders, Moody College of Communication, The University of Texas at Austin
- Department of Psychology, College of Liberal Arts, The University of Texas at Austin
- Institute for Mental Health Research, College of Liberal Arts, The University of Texas at Austin
- Department of Linguistics, College of Liberal Arts, The University of Texas at Austin
- Institute for Neuroscience, The University of Texas at Austin
| |
Collapse
|
28
|
Selmeczy D, Dobbins IG. Ignoring memory hints: The stubborn influence of environmental cues on recognition memory. J Exp Psychol Learn Mem Cogn 2017; 43:1448-1469. [PMID: 28252990 DOI: 10.1037/xlm0000383] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Recognition judgments can benefit from the use of environmental cues that signal the general likelihood of encountering familiar versus unfamiliar stimuli. While incorporating such cues is often adaptive, there are circumstances (e.g., eyewitness testimony) in which observers should fully ignore environmental cues in order to preserve memory report fidelity. The current studies used the explicit memory cueing paradigm to examine whether participants could intentionally ignore reliable environmental cues when instructed. Three experiments demonstrated that participants could volitionally dampen the directional influence of environmental cues on their recognition judgments (i.e., whether influenced to respond "old" or "new") but did not fully eliminate their influence. Although monetary incentives diminished the mean influence of cues on response rates, finer-grained individual differences analyses, as well as confidence and RT analyses, demonstrated that participants were still systematically influenced. These results demonstrate that environmental cues presented at test remain a potent influence on recognition decisions and subjective confidence even when ostensibly ignored.
Collapse
Affiliation(s)
- Diana Selmeczy
- Center for Mind and Brain, University of California, Davis
| | - Ian G Dobbins
- Department of Psychology, Washington University in St. Louis
| |
Collapse
|
29
|
|
30
|
Ward CM, Rogers CS, Van Engen KJ, Peelle JE. Effects of Age, Acoustic Challenge, and Verbal Working Memory on Recall of Narrative Speech. Exp Aging Res 2016; 42:97-111. [PMID: 26683044 DOI: 10.1080/0361073x.2016.1108785] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
BACKGROUND/STUDY CONTEXT A common goal during speech comprehension is to remember what we have heard. Encoding speech into long-term memory frequently requires processes such as verbal working memory that may also be involved in processing degraded speech. Here the authors tested whether young and older adult listeners' memory for short stories was worse when the stories were acoustically degraded, or whether the additional contextual support provided by a narrative would protect against these effects. METHODS The authors tested 30 young adults (aged 18-28 years) and 30 older adults (aged 65-79 years) with good self-reported hearing. Participants heard short stories that were presented as normal (unprocessed) speech or acoustically degraded using a noise vocoding algorithm with 24 or 16 channels. The degraded stories were still fully intelligible. Following each story, participants were asked to repeat the story in as much detail as possible. Recall was scored using a modified idea unit scoring approach, which included separately scoring hierarchical levels of narrative detail. RESULTS Memory for acoustically degraded stories was significantly worse than for normal stories at some levels of narrative detail. Older adults' memory for the stories was significantly worse overall, but there was no interaction between age and acoustic clarity or level of narrative detail. Verbal working memory (assessed by reading span) significantly correlated with recall accuracy for both young and older adults, whereas hearing ability (better ear pure tone average) did not. CONCLUSION The present findings are consistent with a framework in which the additional cognitive demands caused by a degraded acoustic signal use resources that would otherwise be available for memory encoding for both young and older adults. Verbal working memory is a likely candidate for supporting both of these processes.
Collapse
Affiliation(s)
- Caitlin M Ward
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
| | - Chad S Rogers
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
| | - Kristin J Van Engen
- Department of Psychology, Washington University in St. Louis, St. Louis, Missouri, USA
| | - Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
| |
Collapse
|
31
|
Presacco A, Simon JZ, Anderson S. Effect of informational content of noise on speech representation in the aging midbrain and cortex. J Neurophysiol 2016; 116:2356-2367. [PMID: 27605531 PMCID: PMC5110638 DOI: 10.1152/jn.00373.2016] [Citation(s) in RCA: 57] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2016] [Accepted: 09/07/2016] [Indexed: 11/22/2022] Open
Abstract
The ability to understand speech is significantly degraded by aging, particularly in noisy environments. One way that older adults cope with this hearing difficulty is through the use of contextual cues. Several behavioral studies have shown that older adults are better at following a conversation when the target speech signal has high contextual content or when the background distractor is not meaningful. Specifically, older adults gain significant benefit in focusing on and understanding speech if the background is spoken by a talker in a language that is not comprehensible to them (i.e., a foreign language). To better understand the neural mechanisms underlying this benefit in older adults, we investigated aging effects on midbrain and cortical encoding of speech in the presence of a single competing talker speaking in a language that is meaningful or meaningless to the listener (i.e., English vs. Dutch). Our results suggest that neural processing is strongly affected by the informational content of noise. Specifically, older listeners' cortical responses to the attended speech signal deteriorate less when the competing speech signal is in an incomprehensible language than when it is in their native language. Conversely, temporal processing in the midbrain is affected by different backgrounds only during rapid changes in speech and only in younger listeners. Additionally, we found that cognitive decline is associated with an increase in cortical envelope tracking, suggesting an age-related overuse (or inefficient use) of cognitive resources that may explain older adults' difficulty in processing speech targets while trying to ignore interfering noise.
Collapse
Affiliation(s)
- Alessandro Presacco
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland;
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland
| | - Jonathan Z Simon
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland
- Department of Biology, University of Maryland, College Park, Maryland; and
- Institute for Systems Research, University of Maryland, College Park, Maryland
| | - Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland
| |
Collapse
|
32
|
Wöstmann M, Obleser J. Acoustic Detail But Not Predictability of Task-Irrelevant Speech Disrupts Working Memory. Front Hum Neurosci 2016; 10:538. [PMID: 27826235 PMCID: PMC5078496 DOI: 10.3389/fnhum.2016.00538] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2016] [Accepted: 10/11/2016] [Indexed: 11/29/2022] Open
Abstract
Attended speech is comprehended better not only if more acoustic detail is available, but also if it is semantically highly predictable. But can more acoustic detail or higher predictability turn into disadvantages and distract a listener if the speech signal is to be ignored? Also, does the degree of distraction increase for older listeners who typically show a decline in attentional control ability? Adopting the irrelevant-speech paradigm, we tested whether younger (age 23–33 years) and older (60–78 years) listeners’ working memory for the serial order of spoken digits would be disrupted by the presentation of task-irrelevant speech varying in its acoustic detail (using noise-vocoding) and its semantic predictability (of sentence endings). More acoustic detail, but not higher predictability, of task-irrelevant speech aggravated memory interference. This pattern of results did not differ between younger and older listeners, despite generally lower performance in older listeners. Our findings suggest that the focus of attention determines how acoustics and predictability affect the processing of speech: first, as more acoustic detail is known to enhance speech comprehension and memory for speech, we here demonstrate that more acoustic detail of ignored speech enhances the degree of distraction. Second, while higher predictability of attended speech is known to also enhance speech comprehension under acoustically adverse conditions, higher predictability of ignored speech is unable to exert any distracting effect upon working memory performance in younger or older listeners. These findings suggest that features that make attended speech easier to comprehend do not necessarily enhance distraction by ignored speech.
Collapse
Affiliation(s)
- Malte Wöstmann
- Department of Psychology, University of Lübeck, Lübeck, Germany
| | - Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany
| |
Collapse
|
33
|
Hunter CR. Is the time course of lexical activation and competition in spoken word recognition affected by adult aging? An event-related potential (ERP) study. Neuropsychologia 2016; 91:451-464. [DOI: 10.1016/j.neuropsychologia.2016.09.007] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2016] [Revised: 09/05/2016] [Accepted: 09/07/2016] [Indexed: 10/21/2022]
|
34
|
Taitelbaum-Swead R, Fostick L. The Effect of Age and Type of Noise on Speech Perception under Conditions of Changing Context and Noise Levels. Folia Phoniatr Logop 2016; 68:16-21. [PMID: 27362521 DOI: 10.1159/000444749] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
Abstract
OBJECTIVE Everyday life includes fluctuating noise levels, resulting in continuously changing speech intelligibility. The study aims were: (1) to quantify the age-related decrease in speech perception as a result of increasing noise level, and (2) to test the effect of age on context usage at the word level, where fewer contextual cues are available. PATIENTS AND METHODS A total of 24 young adults (age 20-30 years) and 20 older adults (age 60-75 years) were tested. Meaningful and nonsense one-syllable consonant-vowel-consonant words were presented with the background noise types of speech noise (SpN), babble noise (BN), and white noise (WN), with a signal-to-noise ratio (SNR) of 0 and -5 dB. RESULTS Older adults had lower accuracy at SNR = 0, with WN being the most difficult condition for all participants. Measuring the change in speech perception when SNR decreased showed a reduction of 18.6-61.5% in intelligibility, with an age effect only for BN. Both young and older adults used less phonemic context with WN, as compared to other conditions. CONCLUSION Older adults are more affected by an increasing level of fluctuating informational noise than by steady-state noise. They also use fewer contextual cues when perceiving monosyllabic words. Further studies should take into consideration that when the stimulus is presented differently (change in noise level, fewer contextual cues), other perceptual and cognitive processes are involved.
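The SNR conditions above (0 and -5 dB) correspond to scaling the noise relative to the speech before mixing. A minimal sketch, assuming an RMS-based SNR definition (the signal contents and sampling rate are illustrative, not the study's stimuli):

```python
import math
import random

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio of the mixture
    equals `snr_db` (RMS-based), then add it to `speech` sample-wise."""
    def rms(s):
        return math.sqrt(sum(v * v for v in s) / len(s))
    # gain that brings the noise RMS to speech_rms / 10**(snr_db/20)
    gain = rms(speech) / (10 ** (snr_db / 20) * rms(noise))
    return [s + gain * n for s, n in zip(speech, noise)]

# Example: a 300 Hz tone standing in for speech, mixed with white noise
# at the study's two SNRs (0 dB and -5 dB; at -5 dB the noise RMS
# exceeds the speech RMS).
rng = random.Random(1)
speech = [math.sin(2 * math.pi * 300 * i / 8000) for i in range(8000)]
noise = [rng.uniform(-1.0, 1.0) for _ in range(8000)]
mix0 = mix_at_snr(speech, noise, 0)
mix5 = mix_at_snr(speech, noise, -5)
```

Lowering the SNR by 5 dB raises the noise level by a factor of 10^(5/20) ≈ 1.78 relative to the speech, which is what makes the -5 dB condition markedly harder.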
Collapse
|
35
|
Moradi S, Lidestam B, Rönnberg J. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli. Trends Hear 2016; 20:2331216516653355. [PMID: 27317667 PMCID: PMC5562342 DOI: 10.1177/2331216516653355] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022] Open
Abstract
The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context.
Collapse
Affiliation(s)
- Shahram Moradi
- Linnaeus Centre HEAD, Department of Behavioral Sciences and Learning, Linköping University, Sweden
| | - Björn Lidestam
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
| | - Jerker Rönnberg
- Linnaeus Centre HEAD, Department of Behavioral Sciences and Learning, Linköping University, Sweden
| |
Collapse
|
36
|
Impact of peripheral hearing loss on top-down auditory processing. Hear Res 2016; 343:4-13. [PMID: 27260270 DOI: 10.1016/j.heares.2016.05.018] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/18/2016] [Revised: 05/26/2016] [Accepted: 05/28/2016] [Indexed: 01/17/2023]
Abstract
The auditory system consists of an intricate set of connections interposed between hierarchically arranged nuclei. The ascending pathways carrying sound information from the cochlea to the auditory cortex are, predictably, altered in instances of hearing loss resulting from blockage or damage to peripheral auditory structures. However, hearing loss-induced changes in descending connections that emanate from higher auditory centers and project back toward the periphery are still poorly understood. These pathways, which are the hypothesized substrate of high-level contextual and plasticity cues, are intimately linked to the ascending stream, and are thereby also likely to be influenced by auditory deprivation. In the current report, we review both the human and animal literature regarding changes in top-down modulation after peripheral hearing loss. Both aged humans and cochlear implant users are able to harness the power of top-down cues to disambiguate corrupted sounds and, in the case of aged listeners, may rely more heavily on these cues than non-aged listeners. The animal literature also reveals a plethora of structural and functional changes occurring in multiple descending projection systems after peripheral deafferentation. These data suggest that peripheral deafferentation induces a rebalancing of bottom-up and top-down controls, and that it will be necessary to understand the mechanisms underlying this rebalancing to develop better rehabilitation strategies for individuals with peripheral hearing loss.
Collapse
|
37
|
Maxcey AM, Bostic J, Maldonado T. Recognition Practice Results in a Generalizable Skill in Older Adults: Decreased Intrusion Errors to Novel Objects Belonging to Practiced Categories. APPLIED COGNITIVE PSYCHOLOGY 2016. [DOI: 10.1002/acp.3236] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Affiliation(s)
| | - Jessica Bostic
- Department of Psychology, Ball State University, Muncie, USA
| | - Ted Maldonado
- Department of Psychology, Montana State University, Bozeman, USA
| |
Collapse
|
38
|
Smayda KE, Van Engen KJ, Maddox WT, Chandrasekaran B. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults. PLoS One 2016; 11:e0152773. [PMID: 27031343 PMCID: PMC4816421 DOI: 10.1371/journal.pone.0152773] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2015] [Accepted: 03/18/2016] [Indexed: 11/19/2022] Open
Abstract
Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. 
These results suggest that older adults can perceive speech as well as younger adults when both semantic and visual cues are available to the listener.
Collapse
Affiliation(s)
- Kirsten E. Smayda
- Department of Psychology, The University of Texas at Austin, Austin, Texas, United States of America
| | - Kristin J. Van Engen
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, Missouri, United States of America
| | - W. Todd Maddox
- Department of Psychology, The University of Texas at Austin, Austin, Texas, United States of America
| | - Bharath Chandrasekaran
- Department of Psychology, The University of Texas at Austin, Austin, Texas, United States of America
- Communication Sciences and Disorders Department, The University of Texas at Austin, Austin, Texas, United States of America
| |
Collapse
|
39
|
Moulin A, Richard C. Lexical Influences on Spoken Spondaic Word Recognition in Hearing-Impaired Patients. Front Neurosci 2015; 9:476. [PMID: 26778945 PMCID: PMC4688363 DOI: 10.3389/fnins.2015.00476] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2015] [Accepted: 11/26/2015] [Indexed: 11/13/2022] Open
Abstract
Top-down contextual influences play a major part in speech understanding, especially in hearing-impaired patients with deteriorated auditory input. Those influences are most obvious in difficult listening situations, such as listening to sentences in noise but can also be observed at the word level under more favorable conditions, as in one of the most commonly used tasks in audiology, i.e., repeating isolated words in silence. This study aimed to explore the role of top-down contextual influences and their dependence on lexical factors and patient-specific factors using standard clinical linguistic material. Spondaic word perception was tested in 160 hearing-impaired patients aged 23-88 years with a four-frequency average pure-tone threshold ranging from 21 to 88 dB HL. Sixty spondaic words were randomly presented at a level adjusted to correspond to a speech perception score ranging between 40 and 70% of the performance intensity function obtained using monosyllabic words. Phoneme and whole-word recognition scores were used to calculate two context-influence indices (the j factor and the ratio of word scores to phonemic scores) and were correlated with linguistic factors, such as the phonological neighborhood density and several indices of word occurrence frequencies. Contextual influence was greater for spondaic words than in similar studies using monosyllabic words, with an overall j factor of 2.07 (SD = 0.5). For both indices, context use decreased with increasing hearing loss once the average hearing loss exceeded 55 dB HL. In right-handed patients, significantly greater context influence was observed for words presented in the right ears than for words presented in the left, especially in patients with many years of education. 
The correlations between raw word scores (and context influence indices) and word occurrence frequencies showed a significant age-dependent effect, with a stronger correlation between perception scores and word occurrence frequencies when the occurrence frequencies were based on the years corresponding to the patients' youth, showing a "historic" word frequency effect. This effect was still observed for patients with few years of formal education, but recent occurrence frequencies based on current word exposure had a stronger influence for those patients, especially for younger ones.
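The j factor used above quantifies contextual influence by relating whole-word to phoneme recognition probabilities, conventionally as p_word = p_phoneme^j. A minimal sketch (the example scores are illustrative, not the study's data; the study reports an overall j of 2.07 for spondaic words):

```python
import math

def j_factor(word_score, phoneme_score):
    """j factor: the effective number of independently perceived units
    per word, assuming p_word = p_phoneme ** j. A j below the actual
    phoneme count indicates that context helps the listener, because
    phonemes are not being recognized independently."""
    if not (0.0 < word_score < 1.0 and 0.0 < phoneme_score < 1.0):
        raise ValueError("scores must be proportions strictly between 0 and 1")
    return math.log(word_score) / math.log(phoneme_score)

# If phonemes were perceived independently, a phoneme score of 0.70
# and a word score of 0.49 (= 0.70 ** 2) give j = 2.0 effective units;
# a higher word score at the same phoneme score yields a smaller j,
# i.e., stronger contextual influence.
j_independent = j_factor(0.49, 0.70)  # ≈ 2.0
j_with_context = j_factor(0.60, 0.70)
```

The second index mentioned in the abstract, the ratio of word scores to phonemic scores, is simply `word_score / phoneme_score` and moves in the opposite direction: higher values indicate more context use.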
Collapse
Affiliation(s)
- Annie Moulin
- INSERM, U1028, Lyon Neuroscience Research Center, Brain Dynamics and Cognition Team, Lyon, France
- CNRS, UMR5292, Lyon Neuroscience Research Center, Brain Dynamics and Cognition Team, Lyon, France
- University of Lyon, Lyon, France
| | - Céline Richard
- Otorhinolaryngology Department, Vaudois University Hospital Center and University of Lausanne, Lausanne, Switzerland
- The Laboratory for Investigative Neurophysiology, Department of Radiology and Department of Clinical Neurosciences, Vaudois University Hospital Center and University of Lausanne, Lausanne, Switzerland
| |
Collapse
|
40
|
Ellis RJ, Rönnberg J. How does susceptibility to proactive interference relate to speech recognition in aided and unaided conditions? Front Psychol 2015; 6:1017. [PMID: 26283981 PMCID: PMC4522515 DOI: 10.3389/fpsyg.2015.01017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2015] [Accepted: 07/06/2015] [Indexed: 12/04/2022] Open
Abstract
Proactive interference (PI) occurs when information stored in long-term memory interferes with the acquisition of new memories. Previous research has shown that PI correlates significantly with the speech-in-noise recognition scores of younger adults with normal hearing. In this study, we report the results of an experiment designed to investigate the extent to which tests of visual PI relate to the speech-in-noise recognition scores of older adults with hearing loss, in aided and unaided conditions. The results suggest that measures of PI correlate significantly with speech-in-noise recognition only in the unaided condition. Furthermore, the relation between PI and speech-in-noise recognition differs from that observed in younger listeners without hearing loss. The findings suggest that the relation between PI tests and the speech-in-noise recognition scores of older adults with hearing loss reflects the capacity of the tests to index cognitive flexibility.
Collapse
Affiliation(s)
- Rachel J Ellis
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| |
Collapse
|
41
|
Heffner CC, Newman RS, Dilley LC, Idsardi WJ. Age-Related Differences in Speech Rate Perception Do Not Necessarily Entail Age-Related Differences in Speech Rate Use. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2015; 58:1341-1349. [PMID: 25860652 DOI: 10.1044/2015_jslhr-h-14-0239] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/27/2014] [Accepted: 04/01/2015] [Indexed: 06/04/2023]
Abstract
PURPOSE A growing literature suggests that speech rate can strongly influence how words are parsed from the speech stream. The purpose of this study was to investigate differences between younger adults and older adults in the use of context speech rate in word segmentation, given that older adults perceive timing information differently from younger ones. METHOD Younger (18-25 years) and older (55-65 years) adults performed a sentence transcription task for sentences that varied in speech rate context (i.e., distal speech rate) and a syntactic cue to the presence of a word boundary. RESULTS There were no differences between younger and older adults in their use of the distal speech rate cue to word segmentation. CONCLUSIONS The differences previously documented between younger and older adults in their perception of speech rate cues do not necessarily translate to older adults' use of those cues. Older adults' difficulties with compressed speech may arise from problems broader than speech rate alone.
Collapse
|
42
|
Rogers CS, Wingfield A. Stimulus-independent semantic bias misdirects word recognition in older adults. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2015; 138:EL26-30. [PMID: 26233056 PMCID: PMC4499053 DOI: 10.1121/1.4922363] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Older adults' normally adaptive use of semantic context to aid in word recognition can have the negative consequence of causing misrecognitions, especially when the word actually spoken sounds similar to a word that more closely fits the context. Word pairs were presented to young and older adults, with the second word of the pair masked by multi-talker babble varying in signal-to-noise ratio. Results confirmed older adults' greater tendency to misidentify words based on their semantic context compared with young adults, and to do so with a higher level of confidence. This age difference was unaffected by differences in the relative level of acoustic masking.
Collapse
Affiliation(s)
- Chad S Rogers
- Volen National Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02454-9110, USA
| | - Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02454-9110, USA
| |
Collapse
|
43
|
Helfer KS, Jesse A. Lexical influences on competing speech perception in younger, middle-aged, and older adults. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2015; 138:363-76. [PMID: 26233036 PMCID: PMC4506307 DOI: 10.1121/1.4923155] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/14/2014] [Revised: 04/09/2015] [Accepted: 06/16/2015] [Indexed: 05/20/2023]
Abstract
The influence of lexical characteristics of words in to-be-attended and to-be-ignored speech streams was examined in a competing speech task. Older, middle-aged, and younger adults heard pairs of low-cloze probability sentences in which the frequency or neighborhood density of words was manipulated in either the target speech stream or the masking speech stream. All participants also completed a battery of cognitive measures. As expected, for all groups, target words that occur frequently or that are from sparse lexical neighborhoods were easier to recognize than words that are infrequent or from dense neighborhoods. Compared to other groups, these neighborhood density effects were largest for older adults; the frequency effect was largest for middle-aged adults. Lexical characteristics of words in the to-be-ignored speech stream also affected recognition of to-be-attended words, but only when overall performance was relatively good (that is, when younger participants listened to the speech streams at a more advantageous signal-to-noise ratio). For these listeners, to-be-ignored masker words from sparse neighborhoods interfered with recognition of target speech more than masker words from dense neighborhoods. Amount of hearing loss and cognitive abilities relating to attentional control modulated overall performance as well as the strength of lexical influences.
Collapse
Affiliation(s)
- Karen S Helfer
- Department of Communication Disorders, University of Massachusetts Amherst, 358 North Pleasant Street, Amherst, Massachusetts 01003, USA
| | - Alexandra Jesse
- Department of Psychological and Brain Sciences, University of Massachusetts Amherst, 135 Hicks Way, Amherst, Massachusetts 01003, USA
| |
Collapse
|
44
|
Wingfield A, Amichetti NM, Lash A. Cognitive aging and hearing acuity: modeling spoken language comprehension. Front Psychol 2015; 6:684. [PMID: 26124724 PMCID: PMC4462993 DOI: 10.3389/fpsyg.2015.00684] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2015] [Accepted: 05/10/2015] [Indexed: 12/30/2022] Open
Abstract
The comprehension of spoken language has been characterized by a number of "local" theories that have focused on specific aspects of the task: models of word recognition, models of selective attention, accounts of thematic role assignment at the sentence level, and so forth. The ease of language understanding (ELU) model (Rönnberg et al., 2013) stands as one of the few attempts to offer a fully encompassing framework for language understanding. In this paper we discuss interactions between perceptual, linguistic, and cognitive factors in spoken language understanding. Central to our presentation is an examination of aspects of the ELU model that apply especially to spoken language comprehension in adult aging, where speed of processing, working memory capacity, and hearing acuity are often compromised. We discuss, in relation to the ELU model, conceptions of working memory and its capacity limitations, the use of linguistic context to aid in speech recognition and the importance of inhibitory control, and language comprehension at the sentence level. Throughout this paper we offer a constructive look at the ELU model: where it is strong and where there are gaps to be filled.
Collapse
Affiliation(s)
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
| | | | | |
Collapse
|
45
|
When Confidence Is Not a Signal of Knowing: How Students’ Experiences and Beliefs About Processing Fluency Can Lead to Miscalibrated Confidence. EDUCATIONAL PSYCHOLOGY REVIEW 2015. [DOI: 10.1007/s10648-015-9313-7] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
46
|
Rönnberg J, Hygge S, Keidser G, Rudner M. The effect of functional hearing loss and age on long- and short-term visuospatial memory: evidence from the UK biobank resource. Front Aging Neurosci 2014; 6:326. [PMID: 25538617 PMCID: PMC4260513 DOI: 10.3389/fnagi.2014.00326] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2014] [Accepted: 11/07/2014] [Indexed: 11/15/2022] Open
Abstract
The UK Biobank offers cross-sectional epidemiological data collected on >500,000 individuals in the UK between 40 and 70 years of age. Using the UK Biobank data, the aim of this study was to investigate the effects of functional hearing loss and hearing aid usage on visuospatial memory function. This selection of variables resulted in a sub-sample of 138,098 participants after discarding extreme values. A digit triplets functional hearing test was used to divide the participants into three groups: poor, insufficient and normal hearers. We found negative relationships between functional hearing loss and both visuospatial working memory (i.e., a card pair matching task) and visuospatial, episodic long-term memory (i.e., a prospective memory task), with the strongest association for episodic long-term memory. The use of hearing aids showed a small positive effect for working memory performance for the poor hearers, but did not have any influence on episodic long-term memory. Age also showed strong main effects for both memory tasks and interacted with gender and education for the long-term memory task. Broader theoretical implications based on a memory systems approach will be discussed and compared to theoretical alternatives.
Collapse
Affiliation(s)
- Jerker Rönnberg
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Staffan Hygge
- Environmental Psychology, Faculty of Engineering and Sustainable Development, University of Gävle, Gävle, Sweden
| | | | - Mary Rudner
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| |
Collapse
|
47
|
Affiliation(s)
- Daniel J Levitin
- Department of Psychology, McGill University, Montreal, QC, Canada H3A 1B1; and College of Arts and Humanities, Minerva Schools at Keck Graduate Institute, San Francisco, CA 94103
| |
Collapse
|
48
|
McArthur AD, Sears CR, Scialfa CT, Sulsky LM. Aging and the inhibition of competing hypotheses during visual word identification: evidence from the progressive demasking task. Aging Neuropsychol Cogn 2014; 22:220-43. [DOI: 10.1080/13825585.2014.911240] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
49
|
Goy H, Pelletier M, Coletta M, Pichora-Fuller MK. The effects of semantic context and the type and amount of acoustic distortion on lexical decision by younger and older adults. J Speech Lang Hear Res 2013; 56:1715-1732. [PMID: 23882006 DOI: 10.1044/1092-4388(2013/12-0053)] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
PURPOSE: In this study, the authors investigated how acoustic distortion affected younger and older adults' use of context in a lexical decision task. METHOD: The authors measured lexical decision reaction times (RTs) when intact target words followed acoustically distorted sentence contexts. Contexts were semantically congruent, neutral, or incongruent. Younger adults (n = 216) were tested on three distortion types: low-pass filtering, time compression, and masking by multitalker babble, using two amounts of distortion selected to control for word recognition accuracy. Older adults (n = 108) were tested on two amounts of time compression and one low-pass filtering condition. RESULTS: For both age groups, there was robust facilitation by congruent contexts but minimal inhibition by incongruent contexts. Facilitation decreased as distortion increased. Older listeners had slower RTs than younger listeners, but this difference was smaller in congruent than in neutral or incongruent conditions. After controlling for word recognition accuracy, older listeners' RTs were slower in time-compressed than in low-pass filtering conditions, but younger listeners performed similarly in both conditions. CONCLUSIONS: These RT results highlight the interdependence between bottom-up sensory and top-down semantic processing. Consistent with previous findings based on accuracy measures, compared with younger adults, older adults were disproportionately slowed when speech was time compressed but more facilitated by congruent contexts.
Collapse
|
50
|
Lash A, Rogers CS, Zoller A, Wingfield A. Expectation and entropy in spoken word recognition: effects of age and hearing acuity. Exp Aging Res 2013; 39:235-53. [PMID: 23607396 DOI: 10.1080/0361073x.2013.779175] [Citation(s) in RCA: 56] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
BACKGROUND/STUDY CONTEXT: Older adults, especially those with reduced hearing acuity, can make good use of linguistic context in word recognition. Less is known about the effects of the weighted distribution of probable target and nontarget words that fit the sentence context (response entropy). The present study examined the effects of age, hearing acuity, linguistic context, and response entropy on spoken word recognition. METHODS: Participants were 18 older adults with good hearing acuity (M age = 74.3 years), 18 older adults with mild-to-moderate hearing loss (M age = 76.1 years), and 18 young adults with age-normal hearing (M age = 19.6 years). Participants heard sentence-final words using a word-onset gating paradigm, in which words were heard with increasing amounts of onset information until they could be correctly identified. Degrees of context varied from a neutral context to a high context condition. RESULTS: Older adults with poor hearing acuity required a greater amount of word onset information for recognition of words when heard in a neutral context compared with older adults with good hearing acuity and young adults. This difference progressively decreased with an increase in words' contextual probability. Unlike the young adults, both older adult groups' word recognition thresholds were sensitive to response entropy. Response entropy was not affected by hearing acuity. CONCLUSION: Increasing linguistic context mitigates the negative effect of age and hearing loss on word recognition. The effect of response entropy on older adults' word recognition is discussed in terms of an age-related inhibition deficit.
Collapse
Affiliation(s)
- Amanda Lash
- Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02454-9110, USA
| | | | | | | |
Collapse
|