1. Adornetti I, Chiera A, Altavilla D, Deriu V, Marini A, Gobbo M, Valeri G, Magni R, Ferretti F. Defining the Characteristics of Story Production of Autistic Children: A Multilevel Analysis. J Autism Dev Disord 2024; 54:3759-3776. PMID: 37653117; PMCID: PMC11461702; DOI: 10.1007/s10803-023-06096-2
Abstract
Several studies suggest that narrative discourse assessment offers a valuable tool for examining linguistic skills in communication disorders. Following this line of research, we present an exploratory study aimed at investigating the storytelling abilities of autistic children to better define the characteristics of their story production. Participants included 41 autistic children and 41 children with typical development aged between 7.02 and 11.03 years, matched on age, gender, level of formal education, intelligence quotient, working memory, attention skills, theory of mind, and phonological short-term memory. Narrative production was assessed by analysing the language samples obtained through the "Nest Story" description task. A multilevel analysis including micro- and macro-linguistic variables was adopted for narrative assessment. Group differences emerged on both micro- and macro-linguistic dimensions: autistic children produced narratives with more phonological errors and semantic paraphasias (microlinguistic variables) as well as more errors of global coherence and fewer visible and inferred events (macrolinguistic variables) than the control group. This study shows that even autistic children with adequate cognitive skills display several limitations in their narrative competence and that such weaknesses affect both micro- and macrolinguistic aspects of story production.
Affiliation(s)
- Ines Adornetti: Cosmic Lab, Department of Philosophy, Communication and Performing Arts, Roma Tre University, Via Ostiense 234-236, 00146, Rome, Italy
- Alessandra Chiera: Cosmic Lab, Department of Philosophy, Communication and Performing Arts, Roma Tre University, Via Ostiense 234-236, 00146, Rome, Italy
- Daniela Altavilla: Cosmic Lab, Department of Philosophy, Communication and Performing Arts, Roma Tre University, Via Ostiense 234-236, 00146, Rome, Italy
- Valentina Deriu: Cosmic Lab, Department of Philosophy, Communication and Performing Arts, Roma Tre University, Via Ostiense 234-236, 00146, Rome, Italy
- Andrea Marini: Department of Languages and Literatures, Communication, Education and Society, University of Udine, Via Margreth 3, 33100, Udine, Italy; Claudiana - Landesfachhochschule für Gesundheitsberufe, Bozen, Italy
- Marika Gobbo: Department of Languages and Literatures, Communication, Education and Society, University of Udine, Via Margreth 3, 33100, Udine, Italy; Department of Life Sciences, University of Trieste, 34127, Trieste, Italy
- Giovanni Valeri: Child and Adolescent Neuropsychiatry Unit, Department of Neuroscience, The Bambino Gesù Children's Hospital, IRCCS, Piazza di Sant'Onofrio 4, 00165, Rome, Italy
- Rita Magni: Studio Polispecialistico Evò, Viale Pier Luigi Nervi 164, 04100, Latina, Italy
- Francesco Ferretti: Cosmic Lab, Department of Philosophy, Communication and Performing Arts, Roma Tre University, Via Ostiense 234-236, 00146, Rome, Italy
2. Brown VA, Sewell K, Villanueva J, Strand JF. Noisy speech impairs retention of previously heard information only at short time scales. Mem Cognit 2024. PMID: 38758512; DOI: 10.3758/s13421-024-01583-y
Abstract
When speech is presented in noise, listeners must recruit cognitive resources to resolve the mismatch between the noisy input and representations in memory. A consequence of this effortful listening is impaired memory for content presented earlier. In the first study on effortful listening, Rabbitt (The Quarterly Journal of Experimental Psychology, 20, 241-248, 1968; Experiment 2) found that recall for a list of digits was poorer when subsequent digits were presented with masking noise than without. Experiment 3 of that study extended this effect to more naturalistic, passage-length materials. Although the findings of Rabbitt's Experiment 2 have been replicated multiple times, no work has assessed the robustness of Experiment 3. We conducted a replication attempt of Rabbitt's Experiment 3 at three signal-to-noise ratios (SNRs). Results at one of the SNRs (Experiment 1a of the current study) were in the opposite direction from what Rabbitt reported: speech was recalled more accurately when it was followed by speech presented in noise rather than in the clear. Results at the other two SNRs showed no effect of noise (Experiments 1b and 1c). In addition, reanalysis of a replication of Rabbitt's seminal finding in his second experiment showed that the effect of effortful listening on previously presented information is transient. Thus, effortful listening caused by noise appears to impair memory only for information presented immediately before the noise, which may account for our finding that noise in the second half of a long passage did not impair recall of information presented in the first half of the passage.
Affiliation(s)
- Violet A Brown: Department of Psychology, Carleton College, Northfield, MN, USA
- Katrina Sewell: Department of Psychology, Carleton College, Northfield, MN, USA
- Jed Villanueva: Department of Psychology, Carleton College, Northfield, MN, USA
- Julia F Strand: Department of Psychology, Carleton College, Northfield, MN, USA
3. Lander DM, Liu S, Roup CM. Associations Between Auditory Working Memory, Self-Perceived Listening Effort, and Hearing Difficulty in Adults With Mild Traumatic Brain Injury. Ear Hear 2024; 45:695-709. PMID: 38229218; DOI: 10.1097/aud.0000000000001462
Abstract
OBJECTIVES Mild traumatic brain injury (TBI) can have persistent effects in the auditory domain (e.g., difficulty listening in noise), despite individuals having normal pure-tone auditory sensitivity. Individuals with a history of mild TBI often perceive hearing difficulty and greater listening effort in complex listening situations. The purpose of the present study was to examine self-perceived hearing difficulty, listening effort, and performance on an auditory processing test battery in adults with a history of mild TBI compared with a control group. DESIGN Twenty adults aged 20 to 53 years participated, divided into a mild TBI group (n = 10) and a control group (n = 10). Perceived hearing difficulties were measured using the Adult Auditory Processing Scale and the Hearing Handicap Inventory for Adults. Listening effort was measured using the National Aeronautics and Space Administration-Task Load Index. Listening effort ratings were obtained at baseline, after each auditory processing test, and at the completion of the test battery. The auditory processing test battery included (1) dichotic word recognition, (2) the 500-Hz masking level difference (MLD), (3) the Listening in Spatialized Noise-Sentences test, and (4) the Word Auditory Recognition and Recall Measure (WARRM). RESULTS Results indicated that individuals with a history of mild TBI perceived significantly greater degrees of hearing difficulty and listening effort than the control group. There were no significant group differences on two of the auditory processing tasks (dichotic word recognition or Listening in Spatialized Noise-Sentences). The mild TBI group exhibited significantly poorer performance on the 500-Hz MLD and the WARRM, a measure of auditory working memory, than the control group. Greater degrees of self-perceived hearing difficulty were significantly associated with greater listening effort and poorer auditory working memory. Greater listening effort was also significantly associated with poorer auditory working memory. CONCLUSIONS Results demonstrate that adults with a history of mild TBI may experience subjective hearing difficulty and listening effort when listening in challenging acoustic environments. Poorer auditory working memory on the WARRM task was observed for the adults with mild TBI and was associated with greater hearing difficulty and listening effort. Taken together, the present study suggests that the conventional clinical audiometric battery alone may not provide enough information about auditory processing deficits in individuals with a history of mild TBI. The results support the use of a multifaceted battery of auditory processing tasks and subjective measures when evaluating individuals with a history of mild TBI.
Affiliation(s)
- Devan M Lander: Department of Speech & Hearing Science, The Ohio State University, Columbus, Ohio, USA
- Shuang Liu: Independent Statistical Consultant, Columbus, Ohio, USA
- Christina M Roup: Department of Speech & Hearing Science, The Ohio State University, Columbus, Ohio, USA
4. Hansen TA, O’Leary RM, Svirsky MA, Wingfield A. Self-pacing ameliorates recall deficit when listening to vocoded discourse: a cochlear implant simulation. Front Psychol 2023; 14:1225752. PMID: 38054180; PMCID: PMC10694252; DOI: 10.3389/fpsyg.2023.1225752
Abstract
Introduction In spite of its apparent ease, comprehension of spoken discourse represents a complex linguistic and cognitive operation. The difficulty of such an operation can increase when the speech is degraded, as is the case with cochlear implant users. However, the additional challenges imposed by degraded speech may be mitigated to some extent by the linguistic context and pace of presentation. Methods An experiment is reported in which young adults with age-normal hearing recalled discourse passages heard with clear speech or with noise-band vocoding used to simulate the sound of speech produced by a cochlear implant. Passages were varied in inter-word predictability and presented either without interruption or in a self-pacing format that allowed the listener to control the rate at which the information was delivered. Results Results showed that discourse heard with clear speech was better recalled than discourse heard with vocoded speech, discourse with a higher average inter-word predictability was better recalled than discourse with a lower average inter-word predictability, and self-paced passages were better recalled than those heard without interruption. Of special interest was the semantic hierarchy effect: the tendency for listeners to show better recall for main ideas than for mid-level information or details, which serves as an index of listeners' ability to understand the meaning of a passage. The data revealed a significant effect of inter-word predictability, in that passages with lower predictability had an attenuated semantic hierarchy effect relative to higher-predictability passages. Discussion Results are discussed in terms of broadening cochlear implant outcome measures beyond current clinical measures that focus on single-word and sentence repetition.
Affiliation(s)
- Thomas A. Hansen: Department of Psychology, Brandeis University, Waltham, MA, United States
- Ryan M. O’Leary: Department of Psychology, Brandeis University, Waltham, MA, United States
- Mario A. Svirsky: Department of Otolaryngology, NYU Langone Medical Center, New York, NY, United States
- Arthur Wingfield: Department of Psychology, Brandeis University, Waltham, MA, United States
5. Yasmin S, Irsik VC, Johnsrude IS, Herrmann B. The effects of speech masking on neural tracking of acoustic and semantic features of natural speech. Neuropsychologia 2023; 186:108584. PMID: 37169066; DOI: 10.1016/j.neuropsychologia.2023.108584
Abstract
Listening environments contain background sounds that mask speech and lead to communication challenges. Sensitivity to slow acoustic fluctuations in speech can help segregate speech from background noise. Semantic context can also facilitate speech perception in noise, for example, by enabling prediction of upcoming words. However, not much is known about how different degrees of background masking affect the neural processing of acoustic and semantic features during naturalistic speech listening. In the current electroencephalography (EEG) study, participants listened to engaging, spoken stories masked at different levels of multi-talker babble to investigate how neural activity in response to acoustic and semantic features changes with acoustic challenges, and how such effects relate to speech intelligibility. The pattern of neural response amplitudes associated with both acoustic and semantic speech features across masking levels was U-shaped, such that amplitudes were largest for moderate masking levels. This U-shape may be due to increased attentional focus when speech comprehension is challenging, but manageable. The latency of the neural responses increased linearly with increasing background masking, and neural latency change associated with acoustic processing most closely mirrored the changes in speech intelligibility. Finally, tracking responses related to semantic dissimilarity remained robust until severe speech masking (-3 dB SNR). The current study reveals that neural responses to acoustic features are highly sensitive to background masking and decreasing speech intelligibility, whereas neural responses to semantic features are relatively robust, suggesting that individuals track the meaning of the story well even in moderate background sound.
Affiliation(s)
- Sonia Yasmin: Department of Psychology & the Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada
- Vanessa C Irsik: Department of Psychology & the Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada
- Ingrid S Johnsrude: Department of Psychology & the Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada; School of Communication and Speech Disorders, The University of Western Ontario, London, ON, N6A 5B7, Canada
- Björn Herrmann: Rotman Research Institute, Baycrest, Toronto, ON, M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON, M5S 1A1, Canada
6. Sherafati A, Dwyer N, Bajracharya A, Hassanpour MS, Eggebrecht AT, Firszt JB, Culver JP, Peelle JE. Prefrontal cortex supports speech perception in listeners with cochlear implants. eLife 2022; 11:e75323. PMID: 35666138; PMCID: PMC9225001; DOI: 10.7554/elife.75323
Abstract
Cochlear implants are neuroprosthetic devices that can restore hearing in people with severe to profound hearing loss by electrically stimulating the auditory nerve. Because of physical limitations on the precision of this stimulation, the acoustic information delivered by a cochlear implant does not convey the same level of acoustic detail as that conveyed by normal hearing. As a result, speech understanding in listeners with cochlear implants is typically poorer and more effortful than in listeners with normal hearing. The brain networks supporting speech understanding in listeners with cochlear implants are not well understood, partly due to difficulties obtaining functional neuroimaging data in this population. In the current study, we assessed the brain regions supporting spoken word understanding in adult listeners with right unilateral cochlear implants (n=20) and matched controls (n=18) using high-density diffuse optical tomography (HD-DOT), a quiet and non-invasive imaging modality with spatial resolution comparable to that of functional MRI. We found that while listening to spoken words in quiet, listeners with cochlear implants showed greater activity in the left prefrontal cortex than listeners with normal hearing, specifically in a region engaged in a separate spatial working memory task. These results suggest that listeners with cochlear implants require greater cognitive processing during speech understanding than listeners with normal hearing, supported by compensatory recruitment of the left prefrontal cortex.
Affiliation(s)
- Arefeh Sherafati: Department of Radiology, Washington University in St. Louis, St. Louis, United States
- Noel Dwyer: Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
- Aahana Bajracharya: Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
- Adam T Eggebrecht: Department of Radiology, Washington University in St. Louis, St. Louis, United States; Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, United States; Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, United States; Division of Biology and Biomedical Sciences, Washington University in St. Louis, St. Louis, United States
- Jill B Firszt: Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
- Joseph P Culver: Department of Radiology, Washington University in St. Louis, St. Louis, United States; Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, United States; Division of Biology and Biomedical Sciences, Washington University in St. Louis, St. Louis, United States; Department of Physics, Washington University in St. Louis, St. Louis, United States
- Jonathan E Peelle: Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
7. Sun PW, Hines A. Listening Effort Informed Quality of Experience Evaluation. Front Psychol 2022; 12:767840. PMID: 35069342; PMCID: PMC8766726; DOI: 10.3389/fpsyg.2021.767840
Abstract
Perceived quality of experience for speech listening is influenced by cognitive processing and can affect a listener's comprehension, engagement, and responsiveness. Quality of Experience (QoE) is a paradigm used within the media technology community to assess media quality by linking quantifiable media parameters to perceived quality. The established QoE framework provides a general definition of QoE, categories of possible quality influencing factors, and an identified QoE formation pathway. These assist researchers in implementing experiments and evaluating perceived quality across a wide range of applications. The QoE formation pathways in the current framework do not attempt to capture cognitive effort effects, and the standard experimental assessments of QoE minimize the influence of cognitive processes. The impact of cognitive processes and how they can be captured within the QoE framework have not been systematically studied by the QoE research community. This article reviews research from the fields of audiology and cognitive science regarding how cognitive processes influence the quality of listening experience. The cognitive listening mechanism theories are compared with the QoE formation mechanism in terms of the quality contributing factors, experience formation pathways, and measures for experience. The review prompts a proposal to integrate mechanisms from audiology and cognitive science into the existing QoE framework in order to properly account for cognitive load in speech listening. The article concludes with a discussion regarding how an extended framework could facilitate measurement of QoE in broader and more realistic application scenarios where cognitive effort is a material consideration.
Affiliation(s)
- Pheobe Wenyi Sun: QxLab, School of Computer Science, University College Dublin, Dublin, Ireland
- Andrew Hines: QxLab, School of Computer Science, University College Dublin, Dublin, Ireland
8. McClannahan KS, Mainardi A, Luor A, Chiu YF, Sommers MS, Peelle JE. Spoken Word Recognition in Listeners with Mild Dementia Symptoms. J Alzheimers Dis 2022; 90:749-759. PMID: 36189586; PMCID: PMC9885492; DOI: 10.3233/jad-215606
Abstract
BACKGROUND Difficulty understanding speech is a common complaint of older adults. In quiet, speech perception is often assumed to be relatively automatic. However, higher-level cognitive processes play a key role in successful communication in noise. Limited cognitive resources in adults with dementia may therefore hamper word recognition. OBJECTIVE The goal of this study was to determine the impact of mild dementia on spoken word recognition in quiet and noise. METHODS Participants were aged 53-86 years with (n = 16) or without (n = 32) dementia symptoms as classified by the Clinical Dementia Rating scale. Participants performed a word identification task with two levels of word difficulty (few and many similar sounding words) in quiet and in noise at two signal-to-noise ratios, +6 and +3 dB. Our hypothesis was that listeners with mild dementia symptoms would have more difficulty with speech perception in noise under conditions that tax cognitive resources. RESULTS Listeners with mild dementia symptoms had poorer task accuracy in both quiet and noise, which held after accounting for differences in age and hearing level. Notably, even in quiet, adults with dementia symptoms correctly identified words only about 80% of the time. However, word difficulty was not a factor in task performance for either group. CONCLUSION These results affirm the difficulty that listeners with mild dementia may have with spoken word recognition, both in quiet and in background noise, consistent with a role of cognitive resources in spoken word identification.
Affiliation(s)
| | - Amelia Mainardi
- Department of Otolaryngology, Washington University in St. Louis
| | - Austin Luor
- Department of Otolaryngology, Washington University in St. Louis
| | - Yi-Fang Chiu
- Department of Speech, Language and Hearing Sciences, Saint Louis University
| | - Mitchell S. Sommers
- Department of Psychological and Brain Sciences, Washington University in St. Louis
| | | |
9. Lewis JH, Castellanos I, Moberly AC. The Impact of Neurocognitive Skills on Recognition of Spectrally Degraded Sentences. J Am Acad Audiol 2021; 32:528-536. PMID: 34965599; DOI: 10.1055/s-0041-1732438
Abstract
BACKGROUND Recent models theorize that neurocognitive resources are deployed differently during speech recognition depending on task demands, such as the severity of degradation of the signal or modality (auditory vs. audiovisual [AV]). This concept is particularly relevant to the adult cochlear implant (CI) population, considering the large amount of variability among CI users in their spectro-temporal processing abilities. However, disentangling the effects of individual differences in spectro-temporal processing and neurocognitive skills on speech recognition in clinical populations of adult CI users is challenging. Thus, this study investigated the relationship between neurocognitive functions and recognition of spectrally degraded speech in a group of young adult normal-hearing (NH) listeners. PURPOSE The aim of this study was to manipulate the degree of spectral degradation and modality of speech presented to young adult NH listeners to determine whether deployment of neurocognitive skills would be affected. RESEARCH DESIGN Correlational study design. STUDY SAMPLE Twenty-one NH college students. DATA COLLECTION AND ANALYSIS Participants listened to sentences in three spectral-degradation conditions: no degradation (clear sentences); moderate degradation (8-channel noise-vocoded); and high degradation (4-channel noise-vocoded). Thirty sentences were presented in auditory-only (A-only) and AV modalities. Visual assessments from The National Institute of Health Toolbox Cognitive Battery were completed to evaluate working memory, inhibition-concentration, cognitive flexibility, and processing speed. Analyses of variance compared speech recognition performance across spectral-degradation conditions and modalities. Bivariate correlation analyses were performed between speech recognition performance and the neurocognitive skills in the various test conditions. RESULTS Main effects on sentence recognition were found for degree of degradation (p < 0.001) and modality (p < 0.001). Inhibition-concentration skills moderately correlated (r = 0.45, p = 0.02) with recognition scores for sentences that were moderately degraded in the A-only condition. No correlations were found among neurocognitive scores and AV speech recognition scores. CONCLUSIONS Inhibition-concentration skills are deployed differentially during sentence recognition, depending on the level of signal degradation. Additional studies will be required to study these relations in actual clinical populations such as adult CI users.
Affiliation(s)
- Jessica H Lewis: Department of Otolaryngology - Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio; Department of Speech and Hearing Science, The Ohio State University, Columbus, Ohio
- Irina Castellanos: Department of Otolaryngology - Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Aaron C Moberly: Department of Otolaryngology - Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio
10. Patro C, Kreft HA, Wojtczak M. The search for correlates of age-related cochlear synaptopathy: Measures of temporal envelope processing and spatial release from speech-on-speech masking. Hear Res 2021; 409:108333. PMID: 34425347; PMCID: PMC8424701; DOI: 10.1016/j.heares.2021.108333
Abstract
Older adults often experience difficulties understanding speech in adverse listening conditions. It has been suggested that for listeners with normal and near-normal audiograms, these difficulties may, at least in part, arise from age-related cochlear synaptopathy. The aim of this study was to assess if performance on auditory tasks relying on temporal envelope processing reveal age-related deficits consistent with those expected from cochlear synaptopathy. Listeners aged 20 to 66 years were tested using a series of psychophysical, electrophysiological, and speech-perception measures using stimulus configurations that promote coding by medium- and low-spontaneous-rate auditory-nerve fibers. Cognitive measures of executive function were obtained to control for age-related cognitive decline. Results from the different tests were not significantly correlated with each other despite a presumed reliance on common mechanisms involved in temporal envelope processing. Only gap-detection thresholds for a tone in noise and spatial release from speech-on-speech masking were significantly correlated with age. Increasing age was related to impaired cognitive executive function. Multivariate regression analyses showed that individual differences in hearing sensitivity, envelope-based measures, and scores from nonauditory cognitive tests did not significantly contribute to the variability in spatial release from speech-on-speech masking for small target/masker spatial separation, while age was a significant contributor.
Affiliation(s)
- Chhayakanta Patro: Department of Psychology, University of Minnesota, N640 Elliott Hall, 75 East River Parkway, Minneapolis, MN 55455, USA
- Heather A Kreft: Department of Psychology, University of Minnesota, N640 Elliott Hall, 75 East River Parkway, Minneapolis, MN 55455, USA
- Magdalena Wojtczak: Department of Psychology, University of Minnesota, N640 Elliott Hall, 75 East River Parkway, Minneapolis, MN 55455, USA
11. Patel R, Srivastava S, Kumar P, Chauhan S, Govindu MD, Jean Simon D. Socio-economic inequality in functional disability and impairments with focus on instrumental activity of daily living: a study on older adults in India. BMC Public Health 2021; 21:1541. PMID: 34384409; PMCID: PMC8359266; DOI: 10.1186/s12889-021-11591-1
Abstract
Background Studies have examined functional disability among older adults by combining Activities of Daily Living (ADL) and Instrumental Activities of Daily Living (IADL). This study adds another dimension to ADL and IADL by also considering impairments such as hearing, vision, walking, chewing, speaking, and memory loss among older adults. It examines functional disability among older adults in India as measured by ADL and IADL, along with these impairments. Methods This study used data from Building a Knowledge Base on Population Aging in India (BKPAI), a national-level survey conducted across seven states of India. The study used three outcome variables, namely ADL, IADL, and impairments. Descriptive and bivariate analyses were used along with multivariate analysis to fulfil the objectives of the study. The concentration index was calculated for ADL, IADL, and impairments, and a decomposition analysis was carried out for IADL. Results Nearly 7.5% of older adults were not fully independent in ADL, more than half (56.8%) were not fully independent in IADL, and nearly three-fourths (72.6%) reported impairments. Overall, ADL limitations, IADL limitations, and impairments were more common among older adults aged 80+ years, those with poor self-rated health, and those suffering from chronic diseases. The odds of ADL limitation (AOR = 6.42, 95% CI: 5.1–8.08), IADL limitation (AOR = 5.08, 95% CI: 4.16–6.21), and impairment (AOR = 3.50, 95% CI: 2.73–4.48) were significantly higher among older adults aged 80+ years than among those aged 60–69 years. Furthermore, older adults with poor self-rated health and those suffering from chronic diseases were more likely to report ADL limitation (AOR = 2.95, 95% CI: 2.37–3.67 and AOR = 2.70, 95% CI: 2.13–3.43), IADL limitation (AOR = 1.74, 95% CI: 1.57–1.92 and AOR = 1.15, 95% CI: 1.04–1.15), and impairment (AOR = 2.36, 95% CI: 2.11–2.63 and AOR = 2.95, 95% CI: 2.65–3.30), respectively, than their counterparts.
Educational status and wealth explained most of the socio-economic inequality in the prevalence of IADL limitation among older adults. Conclusion It is recommended that the government encourage older adults to adopt health-promoting practices. Further, there is a pressing need to deliver quality care to older adults suffering from chronic conditions. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-021-11591-1.
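For readers unfamiliar with the inequality measure named in the Methods: the concentration index is conventionally computed as twice the covariance between the health outcome and each individual's fractional socio-economic rank, divided by the outcome mean (the Wagstaff formulation). A minimal sketch with hypothetical data and function names, not the authors' code:

```python
import numpy as np

def concentration_index(outcome, wealth):
    """Concentration index: 2 * cov(y, r) / mean(y), where r is the
    fractional rank of individuals ordered from poorest to richest."""
    order = np.argsort(wealth, kind="stable")   # poorest first
    y = np.asarray(outcome, dtype=float)[order]
    n = len(y)
    r = (np.arange(1, n + 1) - 0.5) / n         # fractional ranks in (0, 1)
    return 2.0 * np.cov(y, r, bias=True)[0, 1] / y.mean()

# Disability concentrated among the poorest two of five people:
ci = concentration_index([1, 1, 0, 0, 0], wealth=[1, 2, 3, 4, 5])
```

A negative value indicates the outcome is concentrated among poorer individuals, a positive value among richer ones; in the toy example above `ci` works out to -0.6.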
Affiliation(s)
- Ratna Patel
- Department of Public Health and Mortality Studies, International Institute for Population Sciences, Mumbai, India
- Shobhit Srivastava
- Department of Mathematical Demography and Statistics, International Institute for Population Sciences, Mumbai, India
- Pradeep Kumar
- Department of Mathematical Demography and Statistics, International Institute for Population Sciences, Mumbai, India
- Shekhar Chauhan
- Department of Population Policies and Programmes, International Institute for Population Sciences, Mumbai, India
12
Berl RE, Samarasinghe AN, Roberts SG, Jordan FM, Gavin MC. Prestige and content biases together shape the cultural transmission of narratives. EVOLUTIONARY HUMAN SCIENCES 2021; 3:e42. [PMID: 37588523 PMCID: PMC10427335 DOI: 10.1017/ehs.2021.37] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022] Open
Abstract
Cultural transmission biases such as prestige are thought to have been a primary driver in shaping the dynamics of human cultural evolution. However, few empirical studies have measured the importance of prestige relative to other effects, such as content biases present within the information being transmitted. Here, we report the findings of an experimental transmission study designed to compare the simultaneous effects of a model using a high- or low-prestige regional accent with the presence of narrative content containing social, survival, emotional, moral, rational, or counterintuitive information in the form of a creation story. Results from multimodel inference reveal that prestige is a significant factor in determining the salience and recall of information, but that several content biases, specifically social, survival, negative emotional, and biological counterintuitive information, are significantly more influential. Further, we find evidence that reliance on prestige cues may serve as a conditional learning strategy when no content cues are available. Our results demonstrate that content biases serve a vital and underappreciated role in cultural transmission and cultural evolution. Social media summary: Storyteller and tale are both key to memorability, but some content is more important than the storyteller's prestige.
Affiliation(s)
- Richard E.W. Berl
- Department of Human Dimensions of Natural Resources, Colorado State University, Fort Collins, CO 80523-1480, USA
- Alarna N. Samarasinghe
- Department of Anthropology and Archaeology, University of Bristol, Bristol, United Kingdom
- Seán G. Roberts
- Department of Anthropology and Archaeology, University of Bristol, Bristol, United Kingdom
- School of English, Communication and Philosophy, Cardiff University, Cardiff, United Kingdom
- Fiona M. Jordan
- Department of Anthropology and Archaeology, University of Bristol, Bristol, United Kingdom
- Max Planck Institute for the Science of Human History, Jena, Germany
- Michael C. Gavin
- Department of Human Dimensions of Natural Resources, Colorado State University, Fort Collins, CO 80523-1480, USA
- Max Planck Institute for the Science of Human History, Jena, Germany
13
Brown VA, Van Engen KJ, Peelle JE. Face mask type affects audiovisual speech intelligibility and subjective listening effort in young and older adults. COGNITIVE RESEARCH-PRINCIPLES AND IMPLICATIONS 2021; 6:49. [PMID: 34275022 PMCID: PMC8286438 DOI: 10.1186/s41235-021-00314-0] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/17/2021] [Accepted: 06/28/2021] [Indexed: 01/25/2023]
Abstract
Identifying speech requires that listeners make rapid use of fine-grained acoustic cues—a process that is facilitated by being able to see the talker’s face. Face masks present a challenge to this process because they can both alter acoustic information and conceal the talker’s mouth. Here, we investigated the degree to which different types of face masks and noise levels affect speech intelligibility and subjective listening effort for young (N = 180) and older (N = 180) adult listeners. We found that in quiet, mask type had little influence on speech intelligibility relative to speech produced without a mask for both young and older adults. However, with the addition of moderate (− 5 dB SNR) and high (− 9 dB SNR) levels of background noise, intelligibility dropped substantially for all types of face masks in both age groups. Across noise levels, transparent face masks and cloth face masks with filters impaired performance the most, and surgical face masks had the smallest influence on intelligibility. Participants also rated speech produced with a face mask as more effortful than unmasked speech, particularly in background noise. Although young and older adults were similarly affected by face masks and noise in terms of intelligibility and subjective listening effort, older adults showed poorer intelligibility overall and rated the speech as more effortful to process relative to young adults. This research will help individuals make more informed decisions about which types of masks to wear in various communicative settings.
Affiliation(s)
- Violet A Brown
- Department of Psychological & Brain Sciences, Washington University in Saint Louis, St. Louis, USA
- Kristin J Van Engen
- Department of Psychological & Brain Sciences, Washington University in Saint Louis, St. Louis, USA
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in Saint Louis, St. Louis, USA
14
Zendel BR, Power BV, DiDonato RM, Hutchings VMM. Memory Deficits for Health Information Provided Through a Telehealth Video Conferencing System. Front Psychol 2021; 12:604074. [PMID: 33841239 PMCID: PMC8024525 DOI: 10.3389/fpsyg.2021.604074] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Accepted: 02/26/2021] [Indexed: 11/13/2022] Open
Abstract
It is critical to remember details about meetings with healthcare providers. Forgetting could result in inadequate knowledge about one's health, non-adherence to treatments, and poorer health outcomes. Hearing the health care provider plays a crucial role in consolidating information for recall. The recent COVID-19 pandemic has meant a rapid transition to videoconference-based medicine, here described as telehealth. When using telehealth, speech must be filtered and compressed, and research has shown that degraded speech is more challenging to remember. Here we present preliminary results from a study that compared memory for health information provided in person with information provided through telehealth. Data collection for this study was stopped due to the pandemic, but the preliminary results are interesting because the pandemic forced a rapid transition to telehealth. To examine a potential memory deficit for health information provided through telehealth, we presented older and younger adults with instructions on how to use two medical devices. One set of instructions was presented in person, and the other through telehealth. Participants were asked to recall the instructions immediately after the session, and again after a 1-week delay. Overall, the number of details recalled was significantly lower when instructions were provided by telehealth, both immediately after the session and after a 1-week delay. It is likely that a mix of technological and communication strategies by the healthcare provider could reduce this telehealth memory deficit. Given the rapid transition to telehealth due to COVID-19, highlighting this deficit and providing potential solutions is timely and of utmost importance.
Affiliation(s)
- Benjamin Rich Zendel
- Faculty of Medicine, Memorial University of Newfoundland, St. John's, NL, Canada
- Aging Research Centre-Newfoundland and Labrador, Memorial University, Corner Brook, NL, Canada
- Roberta Maria DiDonato
- Faculty of Medicine, Memorial University of Newfoundland, St. John's, NL, Canada
- Aging Research Centre-Newfoundland and Labrador, Memorial University, Corner Brook, NL, Canada
- Veronica Margaret Moore Hutchings
- Faculty of Medicine, Memorial University of Newfoundland, St. John's, NL, Canada
- Aging Research Centre-Newfoundland and Labrador, Memorial University, Corner Brook, NL, Canada
15
McLaughlin DJ, Braver TS, Peelle JE. Measuring the Subjective Cost of Listening Effort Using a Discounting Task. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:337-347. [PMID: 33439751 PMCID: PMC8632478 DOI: 10.1044/2020_jslhr-20-00086] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
Purpose Objective measures of listening effort have been gaining prominence, as they provide metrics to quantify the difficulty of understanding speech under a variety of circumstances. A key challenge has been to develop paradigms that enable the complementary measurement of subjective listening effort in a quantitatively precise manner. In this study, we introduce a novel decision-making paradigm to examine age-related and individual differences in subjective effort during listening. Method Older and younger adults were presented with spoken sentences mixed with speech-shaped noise at multiple signal-to-noise ratios (SNRs). On each trial, subjects were offered the choice between completing an easier listening trial (presented at +20 dB SNR) for a smaller monetary reward and completing a harder listening trial (presented at either +4, 0, -4, -8, or -12 dB SNR) for a greater monetary reward. By varying the amount of the reward offered for the easier option, the subjective value of performing effortful listening trials at each SNR could be assessed. Results Older adults discounted the value of effortful listening to a greater degree than young adults, opting to accept less money in order to avoid more difficult SNRs. Additionally, older adults with poorer hearing and smaller working memory capacities were more likely to choose easier trials; however, in younger adults, no relationship with hearing or working memory was found. Self-reported measures of economic status did not affect these relationships. Conclusions These findings suggest that subjective listening effort depends on factors including, but not necessarily limited to, hearing and working memory. Additionally, this study demonstrates that economic decision-making paradigms can be a useful approach for assessing subjective listening effort and may prove beneficial in future research.
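The paradigm's key quantity is the indifference point: the easy-option reward at which a listener switches from the harder, better-paid trial to the easier one. A crude sketch of one way to estimate it, with hypothetical names (the authors' analysis, e.g. a fitted choice model, may well differ):

```python
import numpy as np

def indifference_point(easy_offers, chose_easier):
    """Midpoint between the largest offer still declined (hard trial
    chosen) and the smallest offer accepted (easy trial chosen).
    Assumes the choice data contain at least one of each."""
    offers = np.asarray(easy_offers, dtype=float)
    chose = np.asarray(chose_easier, dtype=bool)
    highest_declined = offers[~chose].max()
    lowest_accepted = offers[chose].min()
    return (highest_declined + lowest_accepted) / 2.0

# Choices switch between offers of 2 and 3, so the hard trial's
# subjective value is estimated near 2.5 reward units:
sv = indifference_point([1, 2, 3, 4], [False, False, True, True])
```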
Affiliation(s)
- Drew J. McLaughlin
- Department of Psychological & Brain Sciences, Washington University in St. Louis, MO
- Todd S. Braver
- Department of Psychological & Brain Sciences, Washington University in St. Louis, MO
16
Wasiuk PA, Radvansky GA, Greene RL, Calandruccio L. Spoken narrative comprehension for young adult listeners: effects of competing voices and noise. Int J Audiol 2021; 60:711-722. [PMID: 33586551 DOI: 10.1080/14992027.2021.1878397] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
OBJECTIVE To examine the influence of competing voices or noise on the comprehension of spoken narratives for young adults. DESIGN First, an intelligibility assessment of the target narratives was conducted to establish a signal-to-noise ratio ensuring accurate initial speech recognition. Then, narrative comprehension for two target types (fixed and varied target talker) was measured in four listening conditions (quiet, one-talker speech, speech babble, speech-shaped noise). After hearing target narratives in each listening condition, participants completed a visual recognition memory task that assessed the comprehension of the narrative materials at three levels of representation (surface form, propositional, event model). STUDY SAMPLE Seventy adults (18-32 years of age). RESULTS Narrative comprehension results revealed a main effect of listening condition at the event model level, indicating poorer narrative memory of described situations for all noise conditions compared to quiet. Increased positive responses to thematically consistent but situationally "wrong" memory probes drove this effect. No other significant effects were observed. CONCLUSION Despite near-perfect speech recognition, background noise negatively influenced aspects of spoken narrative comprehension and memory. Specifically, noise did not disrupt memory for what was said (surface form and propositional memory), but only memory for what was talked about (event model memory).
Affiliation(s)
- Peter A Wasiuk
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH, USA
- Robert L Greene
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH, USA
- Lauren Calandruccio
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH, USA
17
Zan P, Presacco A, Anderson S, Simon JZ. Exaggerated cortical representation of speech in older listeners: mutual information analysis. J Neurophysiol 2020; 124:1152-1164. [PMID: 32877288 DOI: 10.1152/jn.00002.2020] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
Aging is associated with an exaggerated representation of the speech envelope in auditory cortex. The relationship between this age-related exaggerated response and a listener's ability to understand speech in noise remains an open question. Here, information-theory-based analysis methods are applied to magnetoencephalography recordings of human listeners, investigating their cortical responses to continuous speech using the novel nonlinear measure of phase-locked mutual information between the speech stimuli and cortical responses. The cortex of older listeners shows an exaggerated level of mutual information, compared with younger listeners, for both attended and unattended speakers. The mutual information peaks at several distinct latencies: early (∼50 ms), middle (∼100 ms), and late (∼200 ms). For the late component, the neural enhancement of attended over unattended speech is affected by stimulus signal-to-noise ratio, but the direction of this dependency is reversed by aging. Critically, in older listeners and for the same late component, greater cortical exaggeration is correlated with decreased behavioral inhibitory control. This negative correlation also carries over to speech intelligibility in noise, where greater cortical exaggeration in older listeners is correlated with worse speech intelligibility scores. Finally, an age-related lateralization difference is also seen for the ∼100 ms latency peaks, where older listeners show a bilateral response compared with younger listeners' right lateralization. Thus, this information-theory-based analysis provides new and finer-grained results regarding age-related change in auditory cortical speech processing, and its correlation with cognitive measures, compared with related linear measures. NEW & NOTEWORTHY Cortical representations of natural speech are investigated using a novel nonlinear approach based on mutual information. Cortical responses, phase-locked to the speech envelope, show an exaggerated level of mutual information associated with aging, appearing at several distinct latencies (∼50, ∼100, and ∼200 ms). Critically, for older listeners only, the ∼200 ms latency response components are correlated with specific behavioral measures, including behavioral inhibition and speech comprehension.
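For orientation only: mutual information quantifies how much observing one signal reduces uncertainty about another. The study's phase-locked MEG analysis is considerably more elaborate; the generic plug-in (histogram) estimator below merely illustrates the quantity and is not the authors' method:

```python
import numpy as np

def binned_mutual_information(x, y, bins=8):
    """Plug-in estimate of I(X; Y) in bits from a 2-D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
stimulus = rng.normal(size=5000)
# A perfectly locked "response" shares maximal information with the
# stimulus; an independent one shares almost none (up to estimator bias):
mi_locked = binned_mutual_information(stimulus, stimulus)
mi_none = binned_mutual_information(stimulus, rng.normal(size=5000))
```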
Affiliation(s)
- Peng Zan
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland
- Alessandro Presacco
- Institute for Systems Research, University of Maryland, College Park, Maryland
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland
- Jonathan Z Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland
- Institute for Systems Research, University of Maryland, College Park, Maryland
- Department of Biology, University of Maryland, College Park, Maryland
18
Cortical Tracking of Speech in Delta Band Relates to Individual Differences in Speech in Noise Comprehension in Older Adults. Ear Hear 2020; 42:343-354. [PMID: 32826508 DOI: 10.1097/aud.0000000000000923] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
OBJECTIVES Understanding speech in adverse listening environments is challenging for older adults. Individual differences in pure tone averages and working memory are known to be critical indicators of speech in noise comprehension. Recent studies have suggested that tracking of the speech envelope in cortical oscillations <8 Hz may be an important mechanism related to speech comprehension by segmenting speech into words and phrases (delta, 1 to 4 Hz) or phonemes and syllables (theta, 4 to 8 Hz). The purpose of this study was to investigate the extent to which individual differences in pure tone averages, working memory, and cortical tracking of the speech envelope relate to speech in noise comprehension in older adults. DESIGN Cortical tracking of continuous speech was assessed using electroencephalography in older adults (60 to 80 years). Participants listened to speech in quiet and in the presence of noise (time-reversed speech) and answered comprehension questions. Participants completed Forward Digit Span and Backward Digit Span as measures of working memory, and pure tone averages were collected. An index of reduction in noise (RIN) was calculated by normalizing the difference between raw cortical tracking in quiet and in noise. RESULTS Comprehension question performance was greater for speech in quiet than for speech in noise. The relationship between RIN and speech in noise comprehension was assessed while controlling for the effects of individual differences in pure tone averages and working memory. Delta band RIN correlated with speech in noise comprehension, while theta band RIN did not. CONCLUSIONS Cortical tracking by delta oscillations is robust to the effects of noise. These findings demonstrate that the magnitude of delta band RIN relates to individual differences in speech in noise comprehension in older adults. Delta band RIN may serve as a neural metric of speech in noise comprehension beyond the effects of pure tone averages and working memory.
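The abstract states only that RIN "normalizes the difference" between raw cortical tracking in quiet and in noise; the contrast-ratio form below is one common such normalization and is an assumption for illustration, not necessarily the paper's exact formula:

```python
def reduction_in_noise(tracking_quiet, tracking_noise):
    """Assumed normalization: (quiet - noise) / (quiet + noise).
    Larger positive values mean tracking is reduced more by noise;
    values near zero mean tracking is robust to noise."""
    return (tracking_quiet - tracking_noise) / (tracking_quiet + tracking_noise)

rin = reduction_in_noise(0.8, 0.4)   # tracking halved by noise
```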
19
Seifi Ala T, Graversen C, Wendt D, Alickovic E, Whitmer WM, Lunner T. An Exploratory Study of EEG Alpha Oscillation and Pupil Dilation in Hearing-Aid Users During Effortful Listening to Continuous Speech. PLoS One 2020; 15:e0235782. [PMID: 32649733 PMCID: PMC7351195 DOI: 10.1371/journal.pone.0235782] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2020] [Accepted: 06/17/2020] [Indexed: 01/13/2023] Open
Abstract
Individuals with hearing loss allocate cognitive resources to comprehend noisy speech in everyday life scenarios. One such scenario is being exposed to ongoing speech and needing to sustain attention for a rather long period of time, which requires listening effort. Two well-established physiological methods that have been found sensitive for identifying changes in listening effort are pupillometry and electroencephalography (EEG). However, these measurements have been used mainly for momentary, evoked, or episodic effort. The aim of this study was to investigate how sustained effort manifests in pupillometry and EEG, using continuous speech with varying signal-to-noise ratio (SNR). Eight hearing-aid users participated in this exploratory study and performed a continuous speech-in-noise task. The speech material consisted of 30-second continuous streams that were presented from loudspeakers to the right and left side of the listener (±30° azimuth) in the presence of 4-talker background noise (+180° azimuth). The participants were instructed to attend either to the right or left speaker and ignore the other, in a randomized order, with two different SNR conditions: 0 dB and -5 dB (the difference between the target and the competing talker). The effects of SNR on listening effort were explored objectively using pupillometry and EEG. The results showed larger mean pupil dilation and decreased EEG alpha power in the parietal lobe during the more effortful condition. This study demonstrates that both measures are sensitive to changes in SNR during continuous speech.
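For context, "alpha power" refers to EEG power in roughly the 8–12 Hz band. A minimal periodogram-based band-power sketch (illustrative only; the study's pipeline, with parietal channel selection and time-resolved estimates, is more involved):

```python
import numpy as np

def band_power(signal, fs, lo=8.0, hi=12.0):
    """Mean periodogram power in the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    in_band = (freqs >= lo) & (freqs < hi)
    return psd[in_band].mean()

fs = 256                                  # Hz; one second of synthetic data
t = np.arange(fs) / fs
alpha_like = np.sin(2 * np.pi * 10 * t)   # 10 Hz: inside the alpha band
beta_like = np.sin(2 * np.pi * 30 * t)    # 30 Hz: outside the alpha band
```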
Affiliation(s)
- Tirdad Seifi Ala
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Hearing Sciences–Scottish Section, Division of Clinical Neuroscience, University of Nottingham, Glasgow, Scotland, United Kingdom
- Dorothea Wendt
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
- Emina Alickovic
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Department of Electrical Engineering, Linköping University, Linköping, Sweden
- William M. Whitmer
- Hearing Sciences–Scottish Section, Division of Clinical Neuroscience, University of Nottingham, Glasgow, Scotland, United Kingdom
- Thomas Lunner
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
20
Veraksa A, Bukhalenkova D, Kartushina N, Oshchepkova E. The Relationship between Executive Functions and Language Production in 5-6-Year-Old Children: Insights from Working Memory and Storytelling. Behav Sci (Basel) 2020; 10:bs10020052. [PMID: 32033457 PMCID: PMC7071471 DOI: 10.3390/bs10020052] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2019] [Revised: 01/30/2020] [Accepted: 01/31/2020] [Indexed: 11/30/2022] Open
Abstract
This study examined the relationship between working memory capacity and narrative abilities in 5–6-year-old children. A total of 269 children were assessed on visual and verbal working memory and performed a story-retelling task and story-creation tasks (based on a single picture and on a series of pictures). The stories were evaluated on their macrostructure and microstructure. The results revealed a significant relationship between both components (verbal and visual) of working memory and the global indicators of a story's macrostructure (semantic completeness, semantic adequacy, programming, and narrative structure), as well as with indicators of a story's microstructure (grammatical accuracy and number of syntagmas). Yet this relationship was systematically stronger for verbal working memory than for visual working memory, suggesting that a well-developed verbal working memory leads to lexically and grammatically more accurate language production in preschool children.
Affiliation(s)
- Daria Bukhalenkova
- Lomonosov MSU, Moscow 125009, Russia
- Correspondence: Tel.: +7-916-321-1372
21
Yusof Y, Mukari SZMS, Dzulkifli MA, Chellapan K, Ahmad K, Ishak I, Maamor N, Ishak WS. Efficacy of a newly developed auditory-cognitive training system on speech recognition, central auditory processing and cognitive ability among older adults with normal cognition and with neurocognitive impairment. Geriatr Gerontol Int 2019; 19:768-773. [PMID: 31237107 DOI: 10.1111/ggi.13710] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Revised: 04/25/2019] [Accepted: 05/06/2019] [Indexed: 11/30/2022]
Abstract
AIM To evaluate the efficacy of a newly developed auditory-cognitive training system on speech recognition, central auditory processing and cognition among older adults with normal cognition (NC) and with neurocognitive impairment (NCI). METHODS A double-blind quasi-experiment was carried out on NC (n = 43) and NCI (n = 33) groups. Participants in each group were randomly assigned to treatment and control programs. The treatment group underwent auditory-cognitive training, whereas the control group watched documentary videos, three times per week for 8 consecutive weeks. Study outcomes, which included the Montreal Cognitive Assessment, Malay Hearing in Noise Test, Dichotic Digit Test, Gaps in Noise Test and Pitch Pattern Sequence Test, were measured at 4-week intervals: at baseline and at weeks 4, 8 and 12. RESULTS A mixed-design ANOVA showed significant training effects on the total Montreal Cognitive Assessment and the Dichotic Digit Test in both groups, NC (P < 0.001) and NCI (P < 0.01). The NC group also showed significant training effects on the Malay Hearing in Noise Test (quiet) (P < 0.01), Gaps in Noise Test (P < 0.001) and Pitch Pattern Sequence Test (humming) (P < 0.05). All training effects were sustained up to 4 weeks after the training ended. CONCLUSIONS The present study suggests that the newly developed auditory-cognitive training system has the potential to improve general cognition and some auditory processing abilities in both the NC and NCI groups. Because of the short test-retest intervals used in the present study, it is possible that the training effects were influenced by a learning effect and should therefore be interpreted cautiously. Geriatr Gerontol Int 2019; 19: 768-773.
Affiliation(s)
- Yusmeera Yusof
- The Ministry of Health, Putrajaya, Malaysia
- Faculty of Health Sciences, The National University of Malaysia, Kuala Lumpur, Malaysia
- Kalaivani Chellapan
- Faculty of Engineering and Built Environment, The National University of Malaysia, Kuala Lumpur, Malaysia
- Kartini Ahmad
- Faculty of Health Sciences, The National University of Malaysia, Kuala Lumpur, Malaysia
- Ismarulyusda Ishak
- Faculty of Health Sciences, The National University of Malaysia, Kuala Lumpur, Malaysia
- Nashrah Maamor
- Faculty of Health Sciences, The National University of Malaysia, Kuala Lumpur, Malaysia
- Wan Syafira Ishak
- Faculty of Health Sciences, The National University of Malaysia, Kuala Lumpur, Malaysia
22
Presacco A, Simon JZ, Anderson S. Speech-in-noise representation in the aging midbrain and cortex: Effects of hearing loss. PLoS One 2019; 14:e0213899. [PMID: 30865718 PMCID: PMC6415857 DOI: 10.1371/journal.pone.0213899] [Citation(s) in RCA: 62] [Impact Index Per Article: 12.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2018] [Accepted: 03/04/2019] [Indexed: 01/24/2023] Open
Abstract
Age-related deficits in speech-in-noise understanding pose a significant problem for older adults. Despite the vast number of studies conducted to investigate the neural mechanisms responsible for these communication difficulties, the role of central auditory deficits, beyond peripheral hearing loss, remains unclear. The current study builds upon our previous work that investigated the effect of aging on normal-hearing individuals and aims to estimate the effect of peripheral hearing loss on the representation of speech in noise in two critical regions of the aging auditory pathway: the midbrain and cortex. Data from 14 hearing-impaired older adults were added to a previously published dataset of 17 normal-hearing younger adults and 15 normal-hearing older adults. The midbrain response, measured by the frequency-following response (FFR), and the cortical response, measured with magnetoencephalography (MEG), were recorded from subjects listening to speech in quiet and noise conditions at four signal-to-noise ratios (SNRs): +3, 0, -3, and -6 dB. Both groups of older listeners showed weaker midbrain response amplitudes and overrepresentation of cortical responses compared to younger listeners. No significant differences were found between the two older groups when the midbrain and cortical measurements were analyzed independently. However, significant differences between the older groups were found when investigating the midbrain-cortex relationships; that is, only hearing-impaired older adults showed significant correlations between midbrain and cortical measurements, suggesting that hearing loss may alter reciprocal connections between lower and higher levels of the auditory pathway. The overall paucity of differences in midbrain or cortical responses between the two older groups suggests that age-related temporal processing deficits may contribute to older adults' communication difficulties beyond what might be predicted from peripheral hearing loss alone; however, hearing loss does seem to alter the connectivity between midbrain and cortex. These results may have important ramifications for the field of audiology, as they indicate that algorithms in clinical devices, such as hearing aids, should account for age-related temporal processing deficits to maximize user benefit.
Affiliation(s)
- Alessandro Presacco
- Department of Otolaryngology, University of California, Irvine, CA, United States of America
- Center for Hearing Research, University of California, Irvine, CA, United States of America
- Jonathan Z. Simon
- Department of Electrical & Computer Engineering, University of Maryland, College Park, MD, United States of America
- Department of Biology, University of Maryland, College Park, MD, United States of America
- Institute for Systems Research, University of Maryland, College Park, MD, United States of America
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD, United States of America
- Samira Anderson
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD, United States of America
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, United States of America
23
Peelle JE. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior. Ear Hear 2019; 39:204-214. [PMID: 28938250 PMCID: PMC5821557 DOI: 10.1097/aud.0000000000000494] [Citation(s) in RCA: 315] [Impact Index Per Article: 63.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2017] [Accepted: 07/28/2017] [Indexed: 02/04/2023]
Abstract
Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. 
The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners' abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication.
Affiliation(s)
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in Saint Louis, Saint Louis, Missouri, USA
24
Ayasse ND, Wingfield A. A Tipping Point in Listening Effort: Effects of Linguistic Complexity and Age-Related Hearing Loss on Sentence Comprehension. Trends Hear 2019; 22:2331216518790907. [PMID: 30235973 PMCID: PMC6154259 DOI: 10.1177/2331216518790907] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
In recent years, there has been a growing interest in the relationship between effort and performance. Early formulations implied that, as the challenge of a task increases, individuals will exert more effort, with resultant maintenance of stable performance. We report an experiment in which normal-hearing young adults, normal-hearing older adults, and older adults with age-related mild-to-moderate hearing loss were tested for comprehension of recorded sentences that varied the comprehension challenge in two ways. First, sentences were constructed that expressed their meaning either with a simpler subject-relative syntactic structure or a more computationally demanding object-relative structure. Second, for each sentence type, an adjectival phrase was inserted that created either a short or long gap in the sentence between the agent performing an action and the action being performed. The measurement of pupil dilation as an index of processing effort showed effort to increase with task difficulty until a difficulty tipping point was reached. Beyond this point, pupil size revealed a commitment of effort by the two older adult groups, who failed to keep pace with task demands, as evidenced by reduced comprehension accuracy. We take these pupillometry data as revealing a complex relationship between task difficulty, effort, and performance that might not otherwise appear from task performance alone.
Affiliation(s)
- Nicole D Ayasse
- Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Arthur Wingfield
- Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
25
Rovetti J, Goy H, Pichora-Fuller MK, Russo FA. Functional Near-Infrared Spectroscopy as a Measure of Listening Effort in Older Adults Who Use Hearing Aids. Trends Hear 2019; 23:2331216519886722. [PMID: 31722613 PMCID: PMC6856975 DOI: 10.1177/2331216519886722] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2019] [Revised: 09/25/2019] [Accepted: 10/01/2019] [Indexed: 02/06/2023] Open
Abstract
Listening effort may be reduced when hearing aids improve access to the acoustic signal. However, this possibility is difficult to evaluate because many neuroimaging methods used to measure listening effort are incompatible with hearing aid use. Functional near-infrared spectroscopy (fNIRS), which can be used to measure the concentration of oxygen in the prefrontal cortex (PFC), appears to be well-suited to this application. The first aim of this study was to establish whether fNIRS could measure cognitive effort during listening in older adults who use hearing aids. The second aim was to use fNIRS to determine if listening effort, a form of cognitive effort, differed depending on whether or not hearing aids were used when listening to sound presented at 35 dB SL (flat gain). Sixteen older adults who were experienced hearing aid users completed an auditory n-back task and a visual n-back task; both tasks were completed with and without hearing aids. We found that PFC oxygenation increased with n-back working memory demand in both modalities, supporting the use of fNIRS to measure cognitive effort during listening in this population. PFC oxygenation was weakly and nonsignificantly correlated with both self-reported listening effort and reaction time, suggesting that PFC oxygenation assesses a dimension of listening effort that differs from these other measures. Furthermore, the extent to which hearing aids reduced PFC oxygenation in the left lateral PFC was positively correlated with age and pure-tone average thresholds. The implications of these findings as well as future directions are discussed.
Affiliation(s)
- Joseph Rovetti
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Huiwen Goy
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Frank A. Russo
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Toronto Rehabilitation Institute, ON, Canada
26
Payne BR, Silcox JW. Aging, context processing, and comprehension. PSYCHOLOGY OF LEARNING AND MOTIVATION 2019. [DOI: 10.1016/bs.plm.2019.07.001] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
27
Brodbeck C, Presacco A, Anderson S, Simon JZ. Over-representation of speech in older adults originates from early response in higher order auditory cortex. Acta Acust United Ac 2018; 104:774-777. [PMID: 30686956 DOI: 10.3813/aaa.919221] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
Previous research has found that, paradoxically, while older adults have more difficulty comprehending speech in challenging circumstances than younger adults, their brain responses track the envelope of the acoustic signal more robustly. Here we investigate this puzzle by using magnetoencephalography (MEG) source localization to determine the anatomical origin of this difference. Our results indicate that this robust tracking in older adults does not arise merely from having the same responses as younger adults but with larger amplitudes; instead, they recruit additional regions, inferior to core auditory cortex, with a short latency of ~30 ms relative to the acoustic signal.
Affiliation(s)
- Christian Brodbeck
- Institute for Systems Research, University of Maryland, College Park, Maryland
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland
- Jonathan Z Simon
- Institute for Systems Research, University of Maryland, College Park, Maryland
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland
- Department of Biology, University of Maryland, College Park, Maryland
28
Differences in Hearing Acuity among "Normal-Hearing" Young Adults Modulate the Neural Basis for Speech Comprehension. eNeuro 2018; 5:eN-NWR-0263-17. [PMID: 29911176 PMCID: PMC6001266 DOI: 10.1523/eneuro.0263-17.2018] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2017] [Revised: 04/17/2018] [Accepted: 04/18/2018] [Indexed: 12/11/2022] Open
Abstract
In this paper, we investigate how subtle differences in hearing acuity affect the neural systems supporting speech processing in young adults. Auditory sentence comprehension requires perceiving a complex acoustic signal and performing linguistic operations to extract the correct meaning. We used functional MRI to monitor human brain activity while adults aged 18–41 years listened to spoken sentences. The sentences varied in their level of syntactic processing demands, containing either a subject-relative or object-relative center-embedded clause. All participants self-reported normal hearing, confirmed by audiometric testing, with some variation within a clinically normal range. We found that participants showed activity related to sentence processing in a left-lateralized frontotemporal network. Although accuracy was generally high, participants still made some errors, which were associated with increased activity in bilateral cingulo-opercular and frontoparietal attention networks. A whole-brain regression analysis revealed that activity in a right anterior middle frontal gyrus (aMFG) component of the frontoparietal attention network was related to individual differences in hearing acuity, such that listeners with poorer hearing showed greater recruitment of this region when successfully understanding a sentence. The activity in right aMFG for listeners with poor hearing did not differ as a function of sentence type, suggesting a general mechanism that is independent of linguistic processing demands. Our results suggest that even modest variations in hearing ability impact the systems supporting auditory speech comprehension, and that auditory sentence comprehension entails the coordination of a left perisylvian network that is sensitive to linguistic variation with an executive attention network that responds to acoustic challenge.
29
Koeritzer MA, Rogers CS, Van Engen KJ, Peelle JE. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences. J Speech Lang Hear Res 2018; 61:740-751. [PMID: 29450493 PMCID: PMC5963044 DOI: 10.1044/2017_jslhr-h-17-0077] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Revised: 08/28/2017] [Accepted: 09/20/2017] [Indexed: 05/20/2023]
Abstract
Purpose: The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension.
Method: We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously.
Results: Recognition memory (indexed by d') was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise.
Conclusions: Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences.
Supplemental materials: https://doi.org/10.23641/asha.5848059
Affiliation(s)
- Margaret A Koeritzer
- Program in Audiology and Communication Sciences, Washington University in St. Louis, MO
- Chad S Rogers
- Department of Otolaryngology, Washington University in St. Louis, MO
- Kristin J Van Engen
- Department of Psychological and Brain Sciences and Program in Linguistics, Washington University in St. Louis, MO
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, MO
30
Wijayasiri P, Hartley DE, Wiggins IM. Brain activity underlying the recovery of meaning from degraded speech: A functional near-infrared spectroscopy (fNIRS) study. Hear Res 2017; 351:55-67. [DOI: 10.1016/j.heares.2017.05.010] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/20/2016] [Revised: 05/11/2017] [Accepted: 05/23/2017] [Indexed: 11/30/2022]
31
Vaden KI, Teubner-Rhodes S, Ahlstrom JB, Dubno JR, Eckert MA. Cingulo-opercular activity affects incidental memory encoding for speech in noise. Neuroimage 2017. [PMID: 28624645 DOI: 10.1016/j.neuroimage.2017.06.028] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Correctly understood speech in difficult listening conditions is often difficult to remember. A long-standing hypothesis for this observation is that the engagement of cognitive resources to aid speech understanding can limit resources available for memory encoding. This hypothesis is consistent with evidence that speech presented in difficult conditions typically elicits greater activity throughout cingulo-opercular regions of frontal cortex that are proposed to optimize task performance through adaptive control of behavior and tonic attention. However, successful memory encoding of items for delayed recognition memory tasks is consistently associated with increased cingulo-opercular activity when perceptual difficulty is minimized. The current study used a delayed recognition memory task to test competing predictions that memory encoding for words is enhanced or limited by the engagement of cingulo-opercular activity during challenging listening conditions. An fMRI experiment was conducted with twenty healthy adult participants who performed a word identification in noise task that was immediately followed by a delayed recognition memory task. Consistent with previous findings, word identification trials in the poorer signal-to-noise ratio condition were associated with increased cingulo-opercular activity and poorer recognition memory scores on average. However, cingulo-opercular activity decreased for correctly identified words in noise that were not recognized in the delayed memory test. These results suggest that memory encoding in difficult listening conditions is poorer when elevated cingulo-opercular activity is not sustained. Although increased attention to speech when presented in difficult conditions may detract from more active forms of memory maintenance (e.g., sub-vocal rehearsal), we conclude that task performance monitoring and/or elevated tonic attention supports incidental memory encoding in challenging listening conditions.
Affiliation(s)
- Kenneth I Vaden
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, United States.
- Susan Teubner-Rhodes
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, United States
- Jayne B Ahlstrom
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, United States
- Judy R Dubno
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, United States
- Mark A Eckert
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, United States.
32
Goossens T, Vercammen C, Wouters J, van Wieringen A. Masked speech perception across the adult lifespan: Impact of age and hearing impairment. Hear Res 2017; 344:109-124. [DOI: 10.1016/j.heares.2016.11.004] [Citation(s) in RCA: 41] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/06/2016] [Revised: 10/24/2016] [Accepted: 11/07/2016] [Indexed: 10/20/2022]
33
Peelle JE. Introduction to Special Issue on Age, Hearing, and Speech Comprehension. Exp Aging Res 2016; 42:1-2. [PMID: 26683037 DOI: 10.1080/0361073x.2016.1108714] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Affiliation(s)
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
34
Presacco A, Simon JZ, Anderson S. Effect of informational content of noise on speech representation in the aging midbrain and cortex. J Neurophysiol 2016; 116:2356-2367. [PMID: 27605531 PMCID: PMC5110638 DOI: 10.1152/jn.00373.2016] [Citation(s) in RCA: 57] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2016] [Accepted: 09/07/2016] [Indexed: 11/22/2022] Open
Abstract
The ability to understand speech is significantly degraded by aging, particularly in noisy environments. One way that older adults cope with this hearing difficulty is through the use of contextual cues. Several behavioral studies have shown that older adults are better at following a conversation when the target speech signal has high contextual content or when the background distractor is not meaningful. Specifically, older adults gain significant benefit in focusing on and understanding speech if the background is spoken by a talker in a language that is not comprehensible to them (i.e., a foreign language). To understand better the neural mechanisms underlying this benefit in older adults, we investigated aging effects on midbrain and cortical encoding of speech when in the presence of a single competing talker speaking in a language that is meaningful or meaningless to the listener (i.e., English vs. Dutch). Our results suggest that neural processing is strongly affected by the informational content of noise. Specifically, older listeners' cortical responses to the attended speech signal are less deteriorated when the competing speech signal is an incomprehensible language rather than when it is their native language. Conversely, temporal processing in the midbrain is affected by different backgrounds only during rapid changes in speech and only in younger listeners. Additionally, we found that cognitive decline is associated with an increase in cortical envelope tracking, suggesting an age-related overuse (or inefficient use) of cognitive resources that may explain older adults' difficulty in processing speech targets while trying to ignore interfering noise.
Affiliation(s)
- Alessandro Presacco
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland
- Jonathan Z Simon
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland
- Department of Biology, University of Maryland, College Park, Maryland
- Institute for Systems Research, University of Maryland, College Park, Maryland
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland
35
Presacco A, Simon JZ, Anderson S. Evidence of degraded representation of speech in noise, in the aging midbrain and cortex. J Neurophysiol 2016; 116:2346-2355. [PMID: 27535374 DOI: 10.1152/jn.00372.2016] [Citation(s) in RCA: 122] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2016] [Accepted: 08/12/2016] [Indexed: 01/28/2023] Open
Abstract
Humans have a remarkable ability to track and understand speech in unfavorable conditions, such as in background noise, but speech understanding in noise does deteriorate with age. Results from several studies have shown that in younger adults, low-frequency auditory cortical activity reliably synchronizes to the speech envelope, even when the background noise is considerably louder than the speech signal. However, cortical speech processing may be limited by age-related decreases in the precision of neural synchronization in the midbrain. To understand better the neural mechanisms contributing to impaired speech perception in older adults, we investigated how aging affects midbrain and cortical encoding of speech when presented in quiet and in the presence of a single competing talker. Our results suggest that central auditory temporal processing deficits in older adults manifest in both the midbrain and in the cortex. Specifically, midbrain frequency following responses to a speech syllable are more degraded in noise in older adults than in younger adults. This suggests a failure of the midbrain auditory mechanisms needed to compensate for the presence of a competing talker. Similarly, in cortical responses, older adults show larger reductions than younger adults in their ability to encode the speech envelope when a competing talker is added. Interestingly, older adults showed an exaggerated cortical representation of speech in both quiet and noise conditions, suggesting a possible imbalance between inhibitory and excitatory processes, or diminished network connectivity that may impair their ability to encode speech efficiently.
Affiliation(s)
- Alessandro Presacco
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland
- Jonathan Z Simon
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland
- Department of Biology, University of Maryland, College Park, Maryland
- Institute for Systems Research, University of Maryland, College Park, Maryland
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland
36
Peelle JE, Wingfield A. The Neural Consequences of Age-Related Hearing Loss. Trends Neurosci 2016; 39:486-497. [PMID: 27262177 DOI: 10.1016/j.tins.2016.05.001] [Citation(s) in RCA: 152] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2016] [Revised: 05/04/2016] [Accepted: 05/09/2016] [Indexed: 01/02/2023]
Abstract
During hearing, acoustic signals travel up the ascending auditory pathway from the cochlea to auditory cortex; efferent connections provide descending feedback. In human listeners, although auditory and cognitive processing have sometimes been viewed as separate domains, a growing body of work suggests they are intimately coupled. Here, we review the effects of hearing loss on neural systems supporting spoken language comprehension, beginning with age-related physiological decline. We suggest that listeners recruit domain general executive systems to maintain successful communication when the auditory signal is degraded, but that this compensatory processing has behavioral consequences: even relatively mild levels of hearing loss can lead to cascading cognitive effects that impact perception, comprehension, and memory, leading to increased listening effort during speech comprehension.
Affiliation(s)
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St Louis, St Louis, MO, USA.
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA.