1
Ashrafi M, Baghban AA. Dynamic Spatial Auditory Processing in the Elderly. Indian J Otolaryngol Head Neck Surg 2024;76:3031-3036. [PMID: 39130326] [PMCID: PMC11306474] [DOI: 10.1007/s12070-024-04581-3]
Abstract
Purpose Among the most evident functional effects of aging on the cognitive and perceptual processes of spatial hearing are impaired sound localization and disordered speech perception in noise. The purpose of the present study was to investigate dynamic spatial auditory processing performance in the elderly. Methods This descriptive-analytical study was conducted on 60 young participants aged 18 to 25 years and 60 elderly participants aged 60 to 75 years, using the Speech, Spatial and Qualities of Hearing Scale (SSQ) questionnaire, the binaural masking level difference (BMLD) test, and the dynamic quick speech-in-noise (DS-QSIN) test. Results Comparison of the mean test and questionnaire scores using the independent t test showed a significant difference between the two groups (p < 0.001). Gender had no effect on the results (p > 0.05). Conclusions Aging is accompanied by structural and functional changes in the central auditory nervous system that reduce speech perception in challenging listening environments and degrade sound localization abilities, owing to the loss of temporal and spectral information. These deficits impair identification of sound sources and spatial cognition in the elderly and disturb awareness of the auditory environment. Auditory rehabilitation programs may therefore improve spatial auditory processing performance and speech perception in noise in the elderly.
Affiliation(s)
- Majid Ashrafi
- Department of Audiology, Faculty of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Alireza Akbarzadeh Baghban
- Proteomics Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Department of Biostatistics, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
2
Uhrig S, Perkis A, Möller S, Svensson UP, Behne DM. Effects of Spatial Speech Presentation on Listener Response Strategy for Talker-Identification. Front Neurosci 2022;15:730744. [PMID: 35153653] [PMCID: PMC8831717] [DOI: 10.3389/fnins.2021.730744]
Abstract
This study investigates effects of spatial auditory cues on human listeners' response strategy for identifying two alternately active talkers (“turn-taking” listening scenario). Previous research has demonstrated subjective benefits of audio spatialization with regard to speech intelligibility and talker-identification effort. So far, the deliberate activation of specific perceptual and cognitive processes by listeners to optimize their task performance remained largely unexamined. Spoken sentences selected as stimuli were either clean or degraded due to background noise or bandpass filtering. Stimuli were presented via three horizontally positioned loudspeakers: In a non-spatial mode, both talkers were presented through a central loudspeaker; in a spatial mode, each talker was presented through the central or a talker-specific lateral loudspeaker. Participants identified talkers via speeded keypresses and afterwards provided subjective ratings (speech quality, speech intelligibility, voice similarity, talker-identification effort). In the spatial mode, presentations at lateral loudspeaker locations entailed quicker behavioral responses, which were significantly slower in comparison to a talker-localization task. Under clean speech, response times globally increased in the spatial vs. non-spatial mode (across all locations); these “response time switch costs,” presumably being caused by repeated switching of spatial auditory attention between different locations, diminished under degraded speech. No significant effects of spatialization on subjective ratings were found. The results suggested that when listeners could utilize task-relevant auditory cues about talker location, they continued to rely on voice recognition instead of localization of talker sound sources as primary response strategy. Besides, the presence of speech degradations may have led to increased cognitive control, which in turn compensated for incurring response time switch costs.
Affiliation(s)
- Stefan Uhrig (correspondence)
- Department of Electronic Systems, Norwegian University of Science and Technology, Trondheim, Norway
- Quality and Usability Lab, Technische Universität Berlin, Berlin, Germany
- Andrew Perkis
- Department of Electronic Systems, Norwegian University of Science and Technology, Trondheim, Norway
- Sebastian Möller
- Quality and Usability Lab, Technische Universität Berlin, Berlin, Germany
- Speech and Language Technology, German Research Center for Artificial Intelligence, Berlin, Germany
- U. Peter Svensson
- Department of Electronic Systems, Norwegian University of Science and Technology, Trondheim, Norway
- Dawn M. Behne
- Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
3
Patro C, Kreft HA, Wojtczak M. The search for correlates of age-related cochlear synaptopathy: Measures of temporal envelope processing and spatial release from speech-on-speech masking. Hear Res 2021;409:108333. [PMID: 34425347] [PMCID: PMC8424701] [DOI: 10.1016/j.heares.2021.108333]
Abstract
Older adults often experience difficulties understanding speech in adverse listening conditions. It has been suggested that for listeners with normal and near-normal audiograms, these difficulties may, at least in part, arise from age-related cochlear synaptopathy. The aim of this study was to assess whether performance on auditory tasks relying on temporal envelope processing reveals age-related deficits consistent with those expected from cochlear synaptopathy. Listeners aged 20 to 66 years were tested with a series of psychophysical, electrophysiological, and speech-perception measures, using stimulus configurations that promote coding by medium- and low-spontaneous-rate auditory-nerve fibers. Cognitive measures of executive function were obtained to control for age-related cognitive decline. Results from the different tests were not significantly correlated with each other, despite a presumed reliance on common mechanisms involved in temporal envelope processing. Only gap-detection thresholds for a tone in noise and spatial release from speech-on-speech masking were significantly correlated with age. Increasing age was also related to impaired cognitive executive function. Multivariate regression analyses showed that individual differences in hearing sensitivity, envelope-based measures, and scores on nonauditory cognitive tests did not significantly contribute to the variability in spatial release from speech-on-speech masking for small target/masker spatial separations, whereas age was a significant contributor.
Affiliation(s)
- Chhayakanta Patro
- Department of Psychology, University of Minnesota, N640 Elliott Hall, 75 East River Parkway, Minneapolis, MN 55455, USA
- Heather A Kreft
- Department of Psychology, University of Minnesota, N640 Elliott Hall, 75 East River Parkway, Minneapolis, MN 55455, USA
- Magdalena Wojtczak
- Department of Psychology, University of Minnesota, N640 Elliott Hall, 75 East River Parkway, Minneapolis, MN 55455, USA
4
Derleth P, Georganti E, Latzel M, Courtois G, Hofbauer M, Raether J, Kuehnel V. Binaural Signal Processing in Hearing Aids. Semin Hear 2021;42:206-223. [PMID: 34594085] [PMCID: PMC8463127] [DOI: 10.1055/s-0041-1735176]
Abstract
For many years, clinicians have understood the advantages of listening with two ears compared with one. In addition to improved speech intelligibility in quiet, noisy, and reverberant environments, binaural versus monaural listening improves perceived sound quality and decreases the effort listeners must expend to understand a target voice of interest or to monitor a multitude of potential target voices. For most individuals with bilateral hearing impairment, the body of evidence collected across decades of research has also found that the provision of two compared with one hearing aid yields significant benefit for the user. This article briefly summarizes the major advantages of binaural compared with monaural hearing, followed by a detailed description of the related technological advances in modern hearing aids. Aspects related to the communication and exchange of data between the left and right hearing aids are discussed together with typical algorithmic approaches implemented in modern hearing aids.
5
Wang X, Xu L. Speech perception in noise: Masking and unmasking. J Otol 2021;16:109-119. [PMID: 33777124] [PMCID: PMC7985001] [DOI: 10.1016/j.joto.2020.12.001]
Abstract
Speech perception is essential for daily communication, yet background noise or concurrent talkers can make it challenging for listeners to track the target speech (the "cocktail party" problem). The present study reviews and compares existing findings on speech perception and unmasking in cocktail-party listening environments in English and Mandarin Chinese. The review opens with an introduction followed by related concepts of auditory masking. The next two sections review factors that release speech perception from masking in English and in Mandarin Chinese, respectively. The final section summarizes the findings with comparisons between the two languages and discusses future research directions suggested by the differences in the literature between them.
Affiliation(s)
- Xianhui Wang
- Communication Sciences and Disorders, Ohio University, Athens, OH 45701, USA
- Li Xu
- Communication Sciences and Disorders, Ohio University, Athens, OH 45701, USA
6
EEG correlates of spatial shifts of attention in a dynamic multi-talker speech perception scenario in younger and older adults. Hear Res 2020;398:108077. [PMID: 32987238] [DOI: 10.1016/j.heares.2020.108077]
Abstract
Speech perception under "cocktail-party" conditions critically depends on the focusing of attention toward the talker of interest. In dynamic auditory scenes, changes in talker settings require rapid shifts of attention, which is especially relevant when the position of a target talker switches from one location to another. Here, we explored electrophysiological correlates of shifts in spatial auditory attention, using a free-field speech perception task, in which sequences of short words (a company name, followed by a numeric value, e.g., "Bosch-6") were presented in the participants' left and right horizontal plane. Younger and older participants responded to the value of a pre-defined target company, while ignoring three simultaneously presented pairs of concurrent company names and values from different locations. All four stimulus pairs were spoken by different talkers, alternating from trial-to-trial. The location of the target company was within either the left or right hemisphere for a variable number of consecutive trials (between 3 and 42 trials) and then changed, switching from the left to the right hemispace or vice versa. Thus, when a switch occurred, the participants had to search for the new position of the target company among the concurrent streams of auditory information and re-focus their attention on the relevant location. As correlates of lateralized spatial auditory attention, the anterior contralateral N2 subcomponent (N2ac) and the posterior alpha power lateralization were analyzed in trials immediately before and after switches of the target location. Both measures were increased after switches, while only the increase in N2ac was related to better speech perception performance (i.e., a reduced post-switch decline in accuracy). 
While both age groups showed a similar pattern of switch-related attentional modulations, N2ac and alpha lateralization to the task-relevant stimulus (the target company's value) was overall greater in the younger, than older, group. The results suggest that N2ac and alpha lateralization reflect different attentional processes in multi-talker speech perception, the first being primarily associated with auditory search and the focusing of attention, and the second with the in-depth attentional processing of task-relevant information. Especially the second process appears to be prone to age-related cognitive decline.
7
Static and dynamic cocktail party listening in younger and older adults. Hear Res 2020;395:108020. [DOI: 10.1016/j.heares.2020.108020]
8
Bidelman GM, Yoo J. Musicians Show Improved Speech Segregation in Competitive, Multi-Talker Cocktail Party Scenarios. Front Psychol 2020;11:1927. [PMID: 32973610] [PMCID: PMC7461890] [DOI: 10.3389/fpsyg.2020.01927]
Abstract
Studies suggest that long-term music experience enhances the brain's ability to segregate speech from noise. However, musicians' "speech-in-noise (SIN) benefit" is based largely on evidence from simple figure-ground tasks rather than competitive, multi-talker scenarios that offer realistic spatial cues for segregation and engage binaural processing. We aimed to investigate whether musicians show perceptual advantages in cocktail party speech segregation in a competitive, multi-talker environment. We used the coordinate response measure (CRM) paradigm to measure speech recognition and localization performance in musicians vs. non-musicians in a simulated 3D cocktail party environment in an anechoic chamber. Speech was delivered through a 16-channel speaker array distributed around the horizontal soundfield surrounding the listener. Participants recalled the color, number, and perceived location of target callsign sentences. We manipulated task difficulty by varying the number of additional maskers presented at other spatial locations in the horizontal soundfield (0-1-2-3-4-6-8 multi-talkers). Musicians obtained faster and better speech recognition amid up to eight simultaneous talkers and showed less noise-related decline in performance with increasing interferers than their non-musician peers. Correlations revealed associations between listeners' years of musical training and CRM recognition and working memory. However, better working memory correlated with better speech streaming. Basic (QuickSIN) but not more complex (speech streaming) SIN processing was still predicted by music training after controlling for working memory. Our findings confirm a relationship between musicianship and naturalistic cocktail party speech streaming but also suggest that cognitive factors at least partially drive musicians' SIN advantage.
Affiliation(s)
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States
- Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, United States
- Jessica Yoo
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States
9
Moore BCJ. Effects of hearing loss and age on the binaural processing of temporal envelope and temporal fine structure information. Hear Res 2020;402:107991. [PMID: 32418682] [DOI: 10.1016/j.heares.2020.107991]
Abstract
Within the cochlea, broadband sounds like speech and music are filtered into a series of narrowband signals, each with a relatively slowly varying envelope (ENV) imposed on a rapidly oscillating carrier (the temporal fine structure, TFS). Information about ENV is conveyed by the timing and short-term rate of action potentials in the auditory nerve while information about TFS is conveyed by synchronization of action potentials to a specific phase of the waveform in the cochlea (phase locking). This paper describes the effects of age and hearing loss on the binaural processing of ENV and TFS information, i.e. on the processing of differences in ENV and TFS at the two ears. The binaural processing of TFS information is adversely affected by both hearing loss and increasing age. The binaural processing of ENV information deteriorates somewhat with increasing age but is only slightly affected by hearing loss. The reduced TFS processing abilities found for older/hearing-impaired subjects may partially account for the difficulties that such subjects experience in complex listening situations when the target speech and interfering sounds come from different directions in space.
Affiliation(s)
- Brian C J Moore
- Department of Experimental Psychology, University of Cambridge, Downing Street, Cambridge, CB2 3EB, UK
10
Dias JW, McClaskey CM, Eckert MA, Jensen JH, Harris KC. Intra- and interhemispheric white matter tract associations with auditory spatial processing: Distinct normative and aging effects. Neuroimage 2020;215:116792. [PMID: 32278895] [PMCID: PMC7292771] [DOI: 10.1016/j.neuroimage.2020.116792]
Abstract
Declining auditory spatial processing is hypothesized to contribute to the difficulty older adults have detecting, locating, and selecting a talker from among others in noisy listening environments. Though auditory spatial processing has been associated with several cortical structures, little is known regarding the underlying white matter architecture or how age-related changes in white matter microstructure may affect it. The arcuate fasciculus is a target for understanding age-related differences in auditory spatial attention based on normative spatial attention findings in humans. Similarly, animal and human clinical studies suggest that the corpus callosum plays a role in the cross-hemispheric integration of auditory spatial information important for spatial localization and attention. The current investigation used diffusion imaging to examine the extent to which age-group differences in the identification of spatially cued speech were accounted for by individual differences in the white matter microstructure of the right arcuate fasciculus and the corpus callosum. Higher right arcuate and callosal fractional anisotropy (FA) predicted better segregation and identification of spatially cued speech across younger and older listeners. Further, individual differences in callosal microstructure mediated age-group differences in auditory spatial processing. Follow-up analyses suggested that callosal tracts connecting left and right pre-frontal and posterior parietal cortex are particularly important for auditory spatial processing. The results are consistent with previous work in animals and clinical human samples and provide a cortical mechanism to account for age-related deficits in auditory spatial processing. Further, the results suggest that both intrahemispheric and interhemispheric mechanisms are involved in auditory spatial processing.
11
Carr S, Pichora-Fuller MK, Li KZ, Phillips N, Campos JL. Multisensory, Multi-Tasking Performance of Older Adults With and Without Subjective Cognitive Decline. Multisens Res 2019;32:797-829. [DOI: 10.1163/22134808-20191426]
Abstract
As the population ages, it is increasingly important to detect non-normative cognitive declines as early as possible. Measures of combined sensory–motor–cognitive functioning may be early markers for identifying individuals who are at increased risk of developing dementia. Further, older adults experiencing subjective cognitive decline (SCD) may have elevated risk of dementia compared to those without SCD. Tasks involving complex, multisensory interactions reflective of everyday challenges may be particularly sensitive to subjectively perceived, pre-clinical declines. In the current study, older adults with and without SCD were asked to simultaneously perform a standing balance task and a listening task under increasingly challenging sensory/cognitive/motor conditions using a dual-task paradigm in a realistic, immersive virtual environment. It was hypothesized that, compared to older adults without SCD, those with SCD would exhibit greater decrements in postural control and listening response accuracy as sensory/motor/cognitive loads increased. However, counter to predictions, older adults with SCD demonstrated greater reductions in postural sway under more challenging dual-task conditions than those without SCD. Across both groups, poorer postural task performance was associated with poorer cognitive function and speech-in-noise thresholds measured with standard baseline tests. Poorer listening task performance was associated with poorer global cognitive function, poorer mobility, and poorer speech-in-noise detection. Overall, the results provide additional support for the growing evidence demonstrating associations between sensory, motor, and cognitive functioning and contribute to an evolving consideration of how best to categorize and characterize SCD in a way that guides strategies for screening, assessment, and intervention.
Affiliation(s)
- Sophie Carr
- KITE—Toronto Rehabilitation Institute, University Health Network, Canada
- Department of Psychology, University of Toronto, Canada
- M. Kathleen Pichora-Fuller
- Department of Psychology, University of Toronto, Canada
- Centre for Research in Human Development, Concordia University, Canada
- Karen Z. H. Li
- Department of Psychology, Concordia University, Canada
- Centre for Research in Human Development, Concordia University, Canada
- Natalie Phillips
- Department of Psychology, Concordia University, Canada
- Centre for Research in Human Development, Concordia University, Canada
- Jennifer L. Campos
- KITE—Toronto Rehabilitation Institute, University Health Network, Canada
- Department of Psychology, University of Toronto, Canada
- Centre for Research in Human Development, Concordia University, Canada
12
Domingo Y, Holmes E, Macpherson E, Johnsrude IS. Using spatial release from masking to estimate the magnitude of the familiar-voice intelligibility benefit. J Acoust Soc Am 2019;146:3487. [PMID: 31795686] [DOI: 10.1121/1.5133628]
Abstract
The ability to segregate simultaneous speech streams is crucial for successful communication. Recent studies have demonstrated that participants can report 10%-20% more words spoken by naturally familiar (e.g., friends or spouses) than unfamiliar talkers in two-voice mixtures. This benefit is commensurate with one of the largest benefits to speech intelligibility currently known: that gained by spatially separating two talkers. However, because of methodological differences across these previous studies, the relative benefits of spatial separation and voice familiarity are unclear. Here, the familiar-voice benefit and spatial release from masking are directly compared, and whether and how these two cues interact with one another is examined. Talkers were recorded while speaking sentences from a published closed-set "matrix" task, and listeners were then presented with three different sentences played simultaneously. Each target sentence was played at 0° azimuth, with two masker sentences symmetrically separated about the target. On average, participants reported 10%-30% more words correctly when the target sentence was spoken in a familiar than an unfamiliar voice (collapsed over spatial separation conditions); participants gained a similar benefit from a familiar target as from separating an unfamiliar voice from two symmetrical maskers by approximately 15° azimuth.
Affiliation(s)
- Ysabel Domingo
- Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada
- Emma Holmes
- Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada
- Ewan Macpherson
- School of Communication Sciences and Disorders, University of Western Ontario, London, Ontario, Canada
- Ingrid S Johnsrude
- Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada
13
Zobel BH, Wagner A, Sanders LD, Başkent D. Spatial release from informational masking declines with age: Evidence from a detection task in a virtual separation paradigm. J Acoust Soc Am 2019;146:548. [PMID: 31370625] [DOI: 10.1121/1.5118240]
Abstract
Declines in spatial release from informational masking may contribute to the speech-processing difficulties that older adults often experience within complex listening environments. The present study sought to answer two fundamental questions: (1) Does spatial release from informational masking decline with age and, if so, (2) does age predict this decline independently of age-typical hearing loss? Younger (18-34 years) and older (60-80 years) adults with age-typical hearing completed a yes/no target-detection task with low-pass filtered noise-vocoded speech designed to reduce non-spatial segregation cues and control for hearing loss. Participants detected a target voice among two-talker masking babble while a virtual spatial separation paradigm [Freyman, Helfer, McCall, and Clifton, J. Acoust. Soc. Am. 106(6), 3578-3588 (1999)] was used to isolate informational masking release. The younger and older adults both exhibited spatial release from informational masking, but masking release was reduced among the older adults. Furthermore, age predicted this decline controlling for hearing loss, while there was no indication that hearing loss played a role. These findings provide evidence that declines specific to aging limit spatial release from informational masking under challenging listening conditions.
Affiliation(s)
- Benjamin H Zobel
- Department of Psychological and Brain Sciences, University of Massachusetts, Amherst, Massachusetts 01003, USA
- Anita Wagner
- Department of Otorhinolaryngology-Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Lisa D Sanders
- Department of Psychological and Brain Sciences, University of Massachusetts, Amherst, Massachusetts 01003, USA
- Deniz Başkent
- Department of Otorhinolaryngology-Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
14
Profant O, Jilek M, Bures Z, Vencovsky V, Kucharova D, Svobodova V, Korynta J, Syka J. Functional Age-Related Changes Within the Human Auditory System Studied by Audiometric Examination. Front Aging Neurosci 2019;11:26. [PMID: 30863300] [PMCID: PMC6399208] [DOI: 10.3389/fnagi.2019.00026]
Abstract
Age-related hearing loss (presbycusis) is one of the most common sensory deficits in the aging population. The main subjective complaint in the elderly is deterioration of speech understanding, especially in a noisy environment, which cannot be explained solely by increased hearing thresholds. The examination methods used in presbycusis focus primarily on peripheral pathologies (e.g., hearing sensitivity measured by hearing thresholds), with only limited capacity to detect central lesions. In our study, auditory tests focused on central auditory abilities were used in addition to classical examination tests, with the aim of comparing auditory abilities between an elderly group (mean age 70.4 years) and young controls (mean age 24.4 years) with clinically normal auditory thresholds, and of clarifying the interactions between peripheral and central auditory impairments. Although the elderly group was selected to show only natural age-related deterioration of hearing (auditory thresholds not exceeding 20 dB HL at the main speech frequencies) and clinically normal speech reception thresholds (SRTs), detailed examination of their auditory functions revealed deteriorated processing of temporal parameters [gap detection threshold (GDT), interaural time difference (ITD) detection], which was partially responsible for altered perception of distorted speech (speech in babble noise, gated speech). An analysis of interactions between peripheral and central auditory abilities showed a stronger influence of peripheral function than of temporal processing ability on speech perception in silence in elderly listeners with normal cognitive function. However, in a more natural environment mimicked by the addition of background noise, the role of temporal processing increased rapidly.
Affiliation(s)
- Oliver Profant
- Department of Auditory Neuroscience, Institute of Experimental Medicine of the Czech Academy of Sciences, Prague, Czechia
- Department of Otorhinolaryngology of Faculty Hospital Královské Vinohrady and 3rd Faculty of Medicine, Charles University, Prague, Czechia
- Milan Jilek
- Department of Auditory Neuroscience, Institute of Experimental Medicine of the Czech Academy of Sciences, Prague, Czechia
- Zbynek Bures
- Department of Auditory Neuroscience, Institute of Experimental Medicine of the Czech Academy of Sciences, Prague, Czechia
- Department of Technical Studies, College of Polytechnics, Jihlava, Czechia
- Vaclav Vencovsky
- Department of Auditory Neuroscience, Institute of Experimental Medicine of the Czech Academy of Sciences, Prague, Czechia
- Diana Kucharova
- Department of Auditory Neuroscience, Institute of Experimental Medicine of the Czech Academy of Sciences, Prague, Czechia
- Department of Otorhinolaryngology and Head and Neck Surgery, 1st Faculty of Medicine, Charles University in Prague, University Hospital Motol, Prague, Czechia
- Veronika Svobodova
- Department of Auditory Neuroscience, Institute of Experimental Medicine of the Czech Academy of Sciences, Prague, Czechia
- Department of Otorhinolaryngology and Head and Neck Surgery, 1st Faculty of Medicine, Charles University in Prague, University Hospital Motol, Prague, Czechia
- Josef Syka
- Department of Auditory Neuroscience, Institute of Experimental Medicine of the Czech Academy of Sciences, Prague, Czechia
|
15
|
Zhang C, Tao R, Zhao H. Auditory spatial attention modulates the unmasking effect of perceptual separation in a "cocktail party" environment. Neuropsychologia 2019; 124:108-116. [PMID: 30659864 DOI: 10.1016/j.neuropsychologia.2019.01.009]
Abstract
The perceptual separation between a signal speech and a competing speech (masker), induced by the precedence effect, plays an important role in releasing the signal speech from the masker, especially in a reverberant environment. The perceptual-separation-induced unmasking effect has been suggested to involve multiple cognitive processes, such as selective attention. However, whether listeners' spatial attention modulates the perceptual-separation-induced unmasking effect remains unclear. The present study investigated how perceptual separation and auditory spatial attention interact to facilitate speech perception in a simulated noisy and reverberant environment by analyzing the cortical auditory evoked potentials to the signal speech. The results showed that the N1 wave was significantly enhanced by perceptual separation between the signal and masker regardless of whether the participants' spatial attention was directed to the signal. However, the P2 wave was significantly enhanced by perceptual separation only when the participants attended to the signal speech. This indicates that the perceptual-separation-induced facilitation of P2 requires more attentional resources than that of N1. The results also showed that the signal speech evoked an enhanced N1 in the contralateral hemisphere regardless of whether the participants' attention was directed to the signal, whereas it evoked an enhanced P2 in the contralateral hemisphere only when the participants attended to it. These findings indicate that the hemispheric distribution of N1 is mainly affected by the perceptual features of the acoustic stimuli, while that of P2 is affected by the listeners' attentional status.
Affiliation(s)
- Changxin Zhang
- Faculty of Education, East China Normal University, Shanghai, China
- Key Laboratory of Speech and Hearing Science, East China Normal University, Shanghai, China
- Renxia Tao
- Faculty of Education, East China Normal University, Shanghai, China
- Key Laboratory of Speech and Hearing Science, East China Normal University, Shanghai, China
- Hang Zhao
- Faculty of Education, East China Normal University, Shanghai, China
- Key Laboratory of Speech and Hearing Science, East China Normal University, Shanghai, China
|
16
|
Jakien KM, Gallun FJ. Normative Data for a Rapid, Automated Test of Spatial Release From Masking. Am J Audiol 2018; 27:529-538. [PMID: 30458523 PMCID: PMC6436452 DOI: 10.1044/2018_aja-17-0069]
Abstract
Purpose The purpose of this study is to report normative data and predict thresholds for a rapid test of spatial release from masking for speech perception. The test is easily administered and has good repeatability, with the potential to be used in clinics and laboratories. Normative functions were generated for adults varying in age and amounts of hearing loss. Method The test of spatial release presents a virtual auditory scene over headphones with 2 conditions: colocated (with target and maskers at 0°) and spatially separated (with target at 0° and maskers at ± 45°). Listener thresholds are determined as target-to-masker ratios, and spatial release from masking (SRM) is determined as the difference between the colocated condition and spatially separated condition. Multiple linear regression was used to fit the data from 82 adults 18–80 years of age with normal to moderate hearing loss (0–40 dB HL pure-tone average [PTA]). The regression equations were then used to generate normative functions that relate age (in years) and hearing thresholds (as PTA) to target-to-masker ratios and SRM. Results Normative functions were able to predict thresholds with an error of less than 3.5 dB in all conditions. In the colocated condition, the function included only age as a predictive parameter, whereas in the spatially separated condition, both age and PTA were included as parameters. For SRM, PTA was the only significant predictor. Different functions were generated for the 1st run, the 2nd run, and the average of the 2 runs. All 3 functions were largely similar in form, with the smallest error being associated with the function on the basis of the average of 2 runs. Conclusion With the normative functions generated from this data set, it would be possible for a researcher or clinician to interpret data from a small number of participants or even a single patient without having to first collect data from a control group, substantially reducing the time and resources needed. 
Supplemental Material https://doi.org/10.23641/asha.7080878
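The scoring logic described above is simple arithmetic: SRM is the colocated threshold minus the spatially separated threshold, and the normative functions are linear in age and pure-tone average. A minimal sketch of that structure (the regression coefficients below are placeholders for illustration, not the fitted values reported in the paper):

```python
def spatial_release(colocated_tmr_db, separated_tmr_db):
    """SRM (dB) = colocated threshold minus spatially separated threshold.
    Lower (better) separated thresholds yield a larger positive release."""
    return colocated_tmr_db - separated_tmr_db

def predict_separated_threshold(age_years, pta_db_hl,
                                b0=-6.0, b_age=0.05, b_pta=0.10):
    """Normative-style linear prediction of the separated-condition
    target-to-masker ratio. The coefficients here are hypothetical; the
    published functions report their own fitted values (error < 3.5 dB)."""
    return b0 + b_age * age_years + b_pta * pta_db_hl
```

A clinician-style use would compare a patient's measured SRM against `spatial_release` of the two predicted thresholds for that patient's age and PTA.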
Affiliation(s)
- Kasey M. Jakien
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Department of Veterans Affairs, OR
- Department of Otolaryngology–Head & Neck Surgery, Oregon Health and Science University, Portland
- Frederick J. Gallun
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Department of Veterans Affairs, OR
- Department of Otolaryngology–Head & Neck Surgery, Oregon Health and Science University, Portland
|
17
|
Rana B, Buchholz JM. Effect of improving audibility on better-ear glimpsing using non-linear amplification. J Acoust Soc Am 2018; 144:3465. [PMID: 30599669 DOI: 10.1121/1.5083823]
Abstract
Better-ear glimpsing (BEG) utilizes interaural level differences (ILDs) to improve speech intelligibility in noise. This spatial benefit is reduced in most hearing-impaired (HI) listeners due to their increased hearing loss at high frequencies. Even though this benefit can be improved by providing increased amplification, the improvement is limited by loudness discomfort. An alternative solution therefore extends ILDs to low frequencies, which has been shown to provide a substantial benefit from BEG. In contrast to previous studies, which only applied linear stimulus manipulations, wide dynamic range compression was applied here to improve the audibility of soft sounds while ensuring loudness comfort for loud sounds. Performance in both speech intelligibility and BEG was measured in 13 HI listeners at three different masker levels and for different interaural stimulus manipulations. The results revealed that at low signal levels, performance substantially improved with increasing masker level, but this improvement was reduced by the compressive behaviour at higher levels. Moreover, artificially extending ILDs by applying infinite (broadband) ILDs provided an extra spatial benefit in speech reception thresholds of up to 5 dB on top of that already provided by natural ILDs and interaural time differences, which increased with increasing signal level.
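Conceptually, imposing a broadband ILD on headphone stimuli is a per-channel gain manipulation, with an "infinite" ILD reducing to presentation at one ear only. A sketch of that idea (a hypothetical helper for illustration, not the study's actual signal chain, which additionally applied wide dynamic range compression):

```python
import numpy as np

def apply_broadband_ild(left, right, ild_db):
    """Impose a broadband interaural level difference by attenuating one
    channel (here: the left) relative to the other. ild_db = np.inf
    approximates the 'infinite ILD' condition, i.e., signal at one ear only."""
    gain = 0.0 if np.isinf(ild_db) else 10 ** (-ild_db / 20)
    return left * gain, right
```

For example, a 20 dB broadband ILD scales the attenuated channel's amplitude by a factor of 0.1, while `np.inf` silences it entirely.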
Affiliation(s)
- Baljeet Rana
- Department of Linguistics, 16 University Avenue, Macquarie University, NSW 2109, Australia
- Jörg M Buchholz
- Department of Linguistics, 16 University Avenue, Macquarie University, NSW 2109, Australia
|
18
|
Nieborowska V, Lau ST, Campos J, Pichora-Fuller MK, Novak A, Li KZH. Effects of Age on Dual-Task Walking While Listening. J Mot Behav 2018; 51:416-427. [PMID: 30239280 DOI: 10.1080/00222895.2018.1498318]
Abstract
This study examined the effects of age on single- and dual-task listening and walking during virtual street crossing. Seventeen younger and twelve older adults participated. In each listening trial, three sentences were presented simultaneously from separate locations, and participants were instructed to report the target sentence. The predictability of the target sentence's location was varied. Treadmill walking was measured using motion analysis. Measures included word recognition accuracy, head and trunk angles, and spatiotemporal gait parameters. Older adults exhibited a more upright head alignment and less variability in stride time during dual-tasking, particularly when the target sentence's location was less certain. Younger adults' walking was unaffected by dual-task demands. Together, the results indicate greater postural prioritization in older adults than in younger adults.
Affiliation(s)
- Victoria Nieborowska
- Department of Psychology, Concordia University, Montreal, Quebec, Canada
- Centre for Research in Human Development, Montreal, Quebec, Canada
- PERFORM Centre, Concordia University, Montreal, Quebec, Canada
- Sin-Tung Lau
- Department of Kinesiology and Physical Education, Wilfrid Laurier University, Waterloo, Ontario, Canada
- Toronto Rehabilitation Institute, University Health Network, Toronto, Ontario, Canada
- Jennifer Campos
- Centre for Research in Human Development, Montreal, Quebec, Canada
- Toronto Rehabilitation Institute, University Health Network, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- M Kathleen Pichora-Fuller
- Centre for Research in Human Development, Montreal, Quebec, Canada
- Toronto Rehabilitation Institute, University Health Network, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Alison Novak
- Toronto Rehabilitation Institute, University Health Network, Toronto, Ontario, Canada
- Department of Occupational Science and Occupational Therapy, University of Toronto, Toronto, Ontario, Canada
- Karen Z H Li
- Department of Psychology, Concordia University, Montreal, Quebec, Canada
- Centre for Research in Human Development, Montreal, Quebec, Canada
- PERFORM Centre, Concordia University, Montreal, Quebec, Canada
|
19
|
Rana B, Buchholz JM. Effect of audibility on better-ear glimpsing as a function of frequency in normal-hearing and hearing-impaired listeners. J Acoust Soc Am 2018; 143:2195. [PMID: 29716302 DOI: 10.1121/1.5031007]
Abstract
Better-ear glimpsing (BEG) is an auditory phenomenon that helps listeners understand speech in noise by utilizing interaural level differences (ILDs). The benefit provided by BEG is limited in hearing-impaired (HI) listeners by reduced audibility at high frequencies. Rana and Buchholz [(2016). J. Acoust. Soc. Am. 140(2), 1192-1205] have shown that artificially enhancing ILDs at low and mid frequencies can help HI listeners understand speech in noise, but the achieved benefit is smaller than in normal-hearing (NH) listeners. To understand how far this difference is explained by differences in audibility, audibility was carefully controlled here in ten NH and ten HI listeners, and speech reception thresholds (SRTs) in noise were measured in a spatially separated and a co-located condition as a function of frequency and sensation level. Maskers were realized by noise-vocoded speech, and signals were spatialized using artificially generated broadband ILDs. The spatial benefit provided by BEG and the SRTs improved consistently with increasing sensation level but were limited in the HI listeners by loudness discomfort. Further, the HI listeners performed similarly to the NH listeners when differences in audibility were compensated. The results help to understand the hearing aid gain required to maximize the spatial benefit provided by ILDs as a function of frequency.
Affiliation(s)
- Baljeet Rana
- National Acoustic Laboratories, 16 University Avenue, Macquarie University, Sydney, New South Wales 2109, Australia
- Jörg M Buchholz
- National Acoustic Laboratories, 16 University Avenue, Macquarie University, Sydney, New South Wales 2109, Australia
|
20
|
Jakien KM, Kampel SD, Stansell MM, Gallun FJ. Validating a Rapid, Automated Test of Spatial Release From Masking. Am J Audiol 2017; 26:507-518. [PMID: 28973106 PMCID: PMC5968328 DOI: 10.1044/2017_aja-17-0013]
Abstract
PURPOSE To evaluate the test-retest reliability of a headphone-based spatial-release-from-masking task with two maskers (referred to here as the SR2) and to describe its relationship to the same test administered over loudspeakers in an anechoic chamber (the SR2A). We explore what thresholds tell us about certain populations (such as older individuals or individuals with hearing impairment) and discuss how the SR2 might be useful in the clinic. METHOD Fifty-four participants completed speech intelligibility tests in which a target phrase and two masking phrases from the Coordinate Response Measure corpus (Bolia, Nelson, Ericson, & Simpson, 2000) were presented either via earphones using a virtual spatial array or via loudspeakers in an anechoic chamber. For the SR2, the target sentence was always at 0° azimuth, and the maskers were either colocated at 0° or positioned at ±45°. For the SR2A, the target was located at 0°, and the maskers were colocated or located at ±15°, ±30°, ±45°, ±90°, or ±135°. Spatial release from masking was determined as the difference between thresholds in the colocated condition and each spatially separated condition. All participants completed the SR2 at least twice, and 29 of those individuals also participated in the SR2A. In a second experiment, 40 participants completed the SR2 eight times, and changes in performance were evaluated as a function of test repetition. RESULTS Mean thresholds were slightly better on the SR2 after the first repetition but were consistent across the eight subsequent testing sessions. Performance was consistent on the SR2A regardless of the number of times testing was repeated. The SR2, which simulates 45° separations of target and maskers, produced spatially separated thresholds similar to those obtained with 30° of separation in the anechoic chamber. Over headphones and in the anechoic chamber, pure-tone average was a strong predictor of spatial release, whereas age reached significance only for colocated conditions. CONCLUSIONS The SR2 is a reliable and effective method of testing spatial release from masking, suitable for screening abnormal listening abilities and for tracking rehabilitation over time. Future work should focus on developing and validating rapid, automated testing to identify the ability of listeners to benefit from high-frequency amplification, smaller spatial separations, and larger spectral differences among talkers.
Affiliation(s)
- Kasey M. Jakien
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, U.S. Department of Veterans Affairs, OR
- Department of Otolaryngology/Head & Neck Surgery, Oregon Health & Science University, Portland
- Sean D. Kampel
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, U.S. Department of Veterans Affairs, OR
- Department of Otolaryngology/Head & Neck Surgery, Oregon Health & Science University, Portland
- Meghan M. Stansell
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, U.S. Department of Veterans Affairs, OR
- Frederick J. Gallun
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, U.S. Department of Veterans Affairs, OR
- Department of Otolaryngology/Head & Neck Surgery, Oregon Health & Science University, Portland
|
21
|
Shinn-Cunningham B. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds. J Speech Lang Hear Res 2017; 60:2976-2988. [PMID: 29049598 PMCID: PMC5945067 DOI: 10.1044/2017_jslhr-h-17-0080]
Abstract
PURPOSE This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. METHOD The results from neuroscience and psychoacoustics are reviewed. RESULTS In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with "normal hearing." CONCLUSIONS How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. PRESENTATION VIDEO http://cred.pubs.asha.org/article.aspx?articleid=2601617.
Affiliation(s)
- Barbara Shinn-Cunningham
- Center for Research in Sensory Communication and Emerging Neural Technology, Boston University, MA
|
22
|
|
23
|
Helfer KS, Merchant GR, Freyman RL. Aging and the effect of target-masker alignment. J Acoust Soc Am 2016; 140:3844. [PMID: 27908027 PMCID: PMC5392104 DOI: 10.1121/1.4967297]
Abstract
Similarity between target and competing speech messages plays a large role in how easy or difficult it is to understand messages of interest. Much research on informational masking has used highly aligned target and masking utterances that are very similar semantically and syntactically. However, listeners rarely encounter situations in real life where they must understand one sentence in the presence of another (or more than one) highly aligned, syntactically similar competing sentence(s). The purpose of the present study was to examine the effect of syntactic/semantic similarity of target and masking speech in different spatial conditions among younger, middle-aged, and older adults. The results of this experiment indicate that differences in speech recognition between older and younger participants were largest when the masker surrounded the target and was more similar to the target, especially at more adverse signal-to-noise ratios. Differences among listeners and the effect of similarity were much less robust, and all listeners were relatively resistant to masking, when maskers were located on one side of the target message. The present results suggest that previous studies using highly aligned stimuli may have overestimated age-related speech recognition problems.
Affiliation(s)
- Karen S Helfer
- Department of Communication Disorders, University of Massachusetts Amherst, 358 North Pleasant Street, Amherst, Massachusetts 01003, USA
- Gabrielle R Merchant
- Department of Communication Disorders, University of Massachusetts Amherst, 358 North Pleasant Street, Amherst, Massachusetts 01003, USA
- Richard L Freyman
- Department of Communication Disorders, University of Massachusetts Amherst, 358 North Pleasant Street, Amherst, Massachusetts 01003, USA
|
24
|
Dai L, Shinn-Cunningham BG. Contributions of Sensory Coding and Attentional Control to Individual Differences in Performance in Spatial Auditory Selective Attention Tasks. Front Hum Neurosci 2016; 10:530. [PMID: 27812330 PMCID: PMC5071360 DOI: 10.3389/fnhum.2016.00530]
Abstract
Listeners with normal hearing thresholds (NHTs) differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in the cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to the individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials (ERPs) from the scalp (reflecting cortical responses to sound) and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited due to fine stimulus details vs. due to control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs. These results support the hypothesis that behavioral abilities amongst listeners with NHTs can arise due to both subcortical coding differences and differences in attentional control, depending on stimulus characteristics and task demands.
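The "subcortical coding strength" index above rests on comparing EFR magnitude at the stimulus envelope frequency across masking conditions. A rough sketch of that comparison (the function names and the dB formulation are illustrative assumptions, not the paper's exact derivation):

```python
import numpy as np

def efr_amplitude(signal, fs, f0):
    """Magnitude of the EFR at the envelope frequency f0 (Hz) from a scalp
    recording sampled at fs, read off the DFT bin nearest f0."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - f0))]

def coding_strength(efr_partial, efr_full):
    """One plausible coding-strength index: how little the EFR degrades
    from the partially masked to the fully masked condition, in dB."""
    return 20 * np.log10(efr_partial / efr_full)
```

For a pure 100 Hz modulation of unit amplitude, `efr_amplitude` returns 0.5 (half the peak amplitude, the usual single-sided DFT scaling).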
Affiliation(s)
- Lengshi Dai
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
|
26
|
The Implications of Cognitive Aging for Listening and the Framework for Understanding Effortful Listening (FUEL). Ear Hear 2016; 37 Suppl 1:44S-51S. [DOI: 10.1097/aud.0000000000000309]
|
27
|
Attentional modulation of informational masking on early cortical representations of speech signals. Hear Res 2016; 331:119-30. [DOI: 10.1016/j.heares.2015.11.002]
|
28
|
Glyde H, Buchholz JM, Nielsen L, Best V, Dillon H, Cameron S, Hickson L. Effect of audibility on spatial release from speech-on-speech masking. J Acoust Soc Am 2015; 138:3311-9. [PMID: 26627803 PMCID: PMC5392063 DOI: 10.1121/1.4934732]
Abstract
This study investigated to what extent spatial release from masking (SRM) deficits in hearing-impaired adults may be related to reduced audibility of the test stimuli. Sixteen adults with sensorineural hearing loss and 28 adults with normal hearing were assessed on the Listening in Spatialized Noise-Sentences test, which measures SRM using a symmetric speech-on-speech masking task. Stimuli for the hearing-impaired listeners were delivered at three amplification levels (the National Acoustic Laboratories-Revised, Profound prescription (NAL-RP), NAL-RP +25%, and NAL-RP +50%), while stimuli for the normal-hearing group were filtered to achieve matched audibility. SRM increased as audibility increased for all participants. Thus, it is concluded that reduced audibility of stimuli may be a significant factor in hearing-impaired adults' reduced SRM even when hearing loss is compensated for with linear gain. However, the SRM achieved by the normal hearers with simulated audibility loss was still significantly greater than that achieved by the hearing-impaired listeners, suggesting that factors besides audibility may also play a role.
Affiliation(s)
- Helen Glyde
- The HEARing Cooperative Research Centre, 550 Swanston Street, Carlton, Victoria 3010, Australia
- Jörg M Buchholz
- The HEARing Cooperative Research Centre, 550 Swanston Street, Carlton, Victoria 3010, Australia
- Lillian Nielsen
- National Acoustic Laboratories, 16 University Avenue, Macquarie University, Sydney, New South Wales 2067, Australia
- Virginia Best
- National Acoustic Laboratories, 16 University Avenue, Macquarie University, Sydney, New South Wales 2067, Australia
- Harvey Dillon
- National Acoustic Laboratories, 16 University Avenue, Macquarie University, Sydney, New South Wales 2067, Australia
- Sharon Cameron
- National Acoustic Laboratories, 16 University Avenue, Macquarie University, Sydney, New South Wales 2067, Australia
- Louise Hickson
- School of Health and Rehabilitation Sciences, The University of Queensland, St Lucia, Queensland 4072, Australia
|
29
|
Getzmann S, Hanenberg C, Lewald J, Falkenstein M, Wascher E. Effects of age on electrophysiological correlates of speech processing in a dynamic "cocktail-party" situation. Front Neurosci 2015; 9:341. [PMID: 26483623 PMCID: PMC4586946 DOI: 10.3389/fnins.2015.00341]
Abstract
Successful speech perception in multi-speaker environments depends on auditory scene analysis, comprising auditory object segregation and grouping, and on focusing attention toward the speaker of interest. Changes in speaker settings (e.g., in speaker position) require object re-selection and attention re-focusing. Here, we tested the processing of changes in a realistic multi-speaker scenario in younger and older adults, employing a speech-perception task and event-related potential (ERP) measures. Sequences of short words (combinations of company names and values) were simultaneously presented via four loudspeakers at different locations, and the participants responded to the value of a target company. Voice and position of the speaker of the target information were kept constant for a variable number of trials and then changed. Relative to the pre-change level, changes caused higher error rates, more so in older than in younger adults. The ERP analysis revealed stronger fronto-central N2 and N400 components in younger adults, suggesting a more effective inhibition of concurrent speech stimuli and enhanced language processing. The difference ERPs (post-change minus pre-change) indicated a change-related N400 and late positive complex (LPC) over parietal areas in both groups. Only the older adults showed an additional frontal LPC, suggesting increased allocation of attentional resources after changes in speaker settings. In sum, changes in speaker settings are critical events for speech perception in multi-speaker environments. Older persons in particular show deficits that may reflect less flexible inhibitory control and increased distractibility.
Affiliation(s)
- Stephan Getzmann
- Aging Research Group, Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Christina Hanenberg
- Aging Research Group, Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Jörg Lewald
- Aging Research Group, Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Michael Falkenstein
- Aging Research Group, Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Edmund Wascher
- Aging Research Group, Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
30
Lin G, Carlile S. Costs of switching auditory spatial attention in following conversational turn-taking. Front Neurosci 2015; 9:124. [PMID: 25941466] [PMCID: PMC4403343] [DOI: 10.3389/fnins.2015.00124]
Abstract
Following a multi-talker conversation relies on the ability to rapidly and efficiently shift the focus of spatial attention from one talker to another. The current study investigated the listening costs associated with shifts in spatial attention during conversational turn-taking in 16 normal-hearing listeners using a novel sentence recall task. Three pairs of syntactically fixed but semantically unpredictable matrix sentences, recorded from a single male talker, were presented concurrently through an array of three loudspeakers (directly ahead and ±30° azimuth). Subjects attended to one spatial location, cued by a tone, and followed the target conversation from one sentence to the next using the call-sign at the beginning of each sentence. Subjects were required to report the last three words of each sentence (speech recall task) or answer multiple choice questions related to the target material (speech comprehension task). The reading span test, attention network test, and trail making test were also administered to assess working memory, attentional control, and executive function. There was a 10.7 ± 1.3% decrease in word recall, a pronounced primacy effect, and a rise in masker confusion errors and word omissions when the target switched location between sentences. Switching costs were independent of the location, direction, and angular size of the spatial shift but did appear to be load dependent and only significant for complex questions requiring multiple cognitive operations. Reading span scores were positively correlated with total words recalled, and negatively correlated with switching costs and word omissions. Task switching speed (Trail-B time) was also significantly correlated with recall accuracy. Overall, this study highlights (i) the listening costs associated with shifts in spatial attention and (ii) the important role of working memory in maintaining goal-relevant information and extracting meaning from dynamic multi-talker conversations.
Affiliation(s)
- Gaven Lin
- Auditory Neuroscience Laboratory, Department of Physiology, School of Medical Sciences, University of Sydney, Sydney, NSW, Australia
- Simon Carlile
- Auditory Neuroscience Laboratory, Department of Physiology, School of Medical Sciences, University of Sydney, Sydney, NSW, Australia
31
Füllgrabe C, Moore BCJ, Stone MA. Age-group differences in speech identification despite matched audiometrically normal hearing: contributions from auditory temporal processing and cognition. Front Aging Neurosci 2015; 6:347. [PMID: 25628563] [PMCID: PMC4292733] [DOI: 10.3389/fnagi.2014.00347]
Abstract
Hearing loss with increasing age adversely affects the ability to understand speech, an effect that results partly from reduced audibility. The aims of this study were to establish whether aging reduces speech intelligibility for listeners with normal audiograms, and, if so, to assess the relative contributions of auditory temporal and cognitive processing. Twenty-one older normal-hearing (ONH; 60-79 years) participants with bilateral audiometric thresholds ≤ 20 dB HL at 0.125-6 kHz were matched to nine young (YNH; 18-27 years) participants in terms of mean audiograms, years of education, and performance IQ. Measures included: (1) identification of consonants in quiet and in noise that was unmodulated or modulated at 5 or 80 Hz; (2) identification of sentences in quiet and in co-located or spatially separated two-talker babble; (3) detection of modulation of the temporal envelope (TE) at frequencies 5-180 Hz; (4) monaural and binaural sensitivity to temporal fine structure (TFS); (5) various cognitive tests. Speech identification was worse for ONH than YNH participants in all types of background. This deficit was not reflected in self-ratings of hearing ability. Modulation masking release (the improvement in speech identification obtained by amplitude modulating a noise background) and spatial masking release (the benefit obtained from spatially separating masker and target speech) were not affected by age. Sensitivity to TE and TFS was lower for ONH than YNH participants, and was correlated positively with speech-in-noise (SiN) identification. Many cognitive abilities were lower for ONH than YNH participants, and generally were correlated positively with SiN identification scores. The best predictors of the intelligibility of SiN were composite measures of cognition and TFS sensitivity. 
These results suggest that declines in speech perception in older persons are partly caused by cognitive and perceptual changes separate from age-related changes in audiometric sensitivity.
Affiliation(s)
- Michael A. Stone
- School of Psychological Sciences, University of Manchester, Manchester, UK
- Central Manchester NHS Hospitals Foundation Trust, Manchester, UK
32
Getzmann S, Lewald J, Falkenstein M. Using auditory pre-information to solve the cocktail-party problem: electrophysiological evidence for age-specific differences. Front Neurosci 2014; 8:413. [PMID: 25540608] [PMCID: PMC4261705] [DOI: 10.3389/fnins.2014.00413]
Abstract
Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called “cocktail-party” problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments.
Affiliation(s)
- Stephan Getzmann
- Aging Research Group, Leibniz Research Centre for Working Environment and Human Factors, Technical University of Dortmund (IfADo), Dortmund, Germany
- Jörg Lewald
- Aging Research Group, Leibniz Research Centre for Working Environment and Human Factors, Technical University of Dortmund (IfADo), Dortmund, Germany; Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
- Michael Falkenstein
- Aging Research Group, Leibniz Research Centre for Working Environment and Human Factors, Technical University of Dortmund (IfADo), Dortmund, Germany
33
Schurman J, Brungart D, Gordon-Salant S. Effects of masker type, sentence context, and listener age on speech recognition performance in 1-back listening tasks. J Acoust Soc Am 2014; 136:3337. [PMID: 25480078] [DOI: 10.1121/1.4901708]
Abstract
Studies have shown that older listeners with normal hearing have greater difficulty understanding speech in noisy environments than younger listeners even during simple assessments where listeners respond to auditory stimuli immediately after presentation. Older listeners may have increased difficulty understanding speech in challenging listening situations that require the recall of prior sentences during the presentation of new auditory stimuli. This study compared the performance of older and younger normal-hearing listeners in 0-back trials, which required listeners to respond to the most recent sentence, and 1-back trials, which required the recall of the sentence preceding the most recent. Speech stimuli were high-context and anomalous sentences with four types of maskers. The results show that older listeners have greater difficulty in the 1-back task than younger listeners with all masker types, even when SNR was adjusted to produce 80% correct performance in the 0-back task for both groups. The differences between the groups in the 1-back task may be explained by differences in working memory for the noise and spatially separated speech maskers but not in the conditions with co-located speech maskers, suggesting that older listeners have increased difficulty in memory-intensive speech perception tasks involving high levels of informational masking.
Affiliation(s)
- Jaclyn Schurman
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742
- Douglas Brungart
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889
- Sandra Gordon-Salant
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742
34
Zhang C, Lu L, Wu X, Li L. Attentional modulation of the early cortical representation of speech signals in informational or energetic masking. Brain Lang 2014; 135:85-95. [PMID: 24992572] [DOI: 10.1016/j.bandl.2014.06.002]
Abstract
It is easier to recognize masked speech when the speech and its masker are perceived as spatially segregated. Using event-related potentials, this study examined how the early cortical representation of speech is affected by different masker types and perceptual locations, when the listener is either passively or actively listening to the target speech syllable. The results showed that the two-talker-speech masker induced a much larger masking effect on the N1/P2 complex than either the steady-state-noise masker or the amplitude-modulated speech-spectrum-noise masker did. Also, a switch from the passive- to the active-listening condition enhanced the N1/P2 complex only when the masker was speech. Moreover, under the active-listening condition, perceived separation between target and masker enhanced the N1/P2 complex only when the masker was speech. Thus, when a masker is present, the effect of selective attention to the target-speech signal on the early cortical representation of the speech signal is masker-type dependent.
Affiliation(s)
- Changxin Zhang
- Department of Psychology, Speech and Hearing Research Center, McGovern Institute for Brain Research at PKU, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing 100871, China
- Lingxi Lu
- Department of Psychology, Speech and Hearing Research Center, McGovern Institute for Brain Research at PKU, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing 100871, China
- Xihong Wu
- Department of Psychology, Speech and Hearing Research Center, McGovern Institute for Brain Research at PKU, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing 100871, China
- Liang Li
- Department of Psychology, Speech and Hearing Research Center, McGovern Institute for Brain Research at PKU, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing 100871, China
35
Cousins KAQ, Dar H, Wingfield A, Miller P. Acoustic masking disrupts time-dependent mechanisms of memory encoding in word-list recall. Mem Cognit 2014; 42:622-38. [PMID: 24838269] [PMCID: PMC4030694] [DOI: 10.3758/s13421-013-0377-7]
Abstract
Recall of recently heard words is affected by the clarity of presentation: Even if all words are presented with sufficient clarity for successful recognition, those that are more difficult to hear are less likely to be recalled. Such a result demonstrates that memory processing depends on more than whether a word is simply "recognized" versus "not recognized." More surprising is that, when a single item in a list of spoken words is acoustically masked, prior words that were heard with full clarity are also less likely to be recalled. To account for such a phenomenon, we developed the linking-by-active-maintenance model (LAMM). This computational model of perception and encoding predicts that these effects will be time dependent. Here we challenged our model by investigating whether and how the impact of acoustic masking on memory depends on presentation rate. We found that a slower presentation rate causes a more disruptive impact of stimulus degradation on prior, clearly heard words than does a fast rate. These results are unexpected according to prior theories of effortful listening, but we demonstrated that they can be accounted for by LAMM.
Affiliation(s)
- Katheryn A Q Cousins
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, 02454-9110, USA
36
Ibrahim I, Parsa V, Macpherson E, Cheesman M. Evaluation of Speech Intelligibility and Sound Localization Abilities with Hearing Aids Using Binaural Wireless Technology. Audiol Res 2012; 3:e1. [PMID: 26557339] [PMCID: PMC4627128] [DOI: 10.4081/audiores.2013.e1]
Abstract
Wireless synchronization of the digital signal processing (DSP) features between two hearing aids in a bilateral hearing aid fitting is a fairly new technology. This technology is expected to preserve the differences in time and intensity between the two ears by co-ordinating the bilateral DSP features such as multichannel compression, noise reduction, and adaptive directionality. The purpose of this study was to evaluate the benefits of wireless communication as implemented in two commercially available hearing aids. More specifically, this study measured speech intelligibility and sound localization abilities of normal hearing and hearing impaired listeners using bilateral hearing aids with wireless synchronization of multichannel Wide Dynamic Range Compression (WDRC). Twenty subjects participated; 8 had normal hearing and 12 had bilaterally symmetrical sensorineural hearing loss. Each individual completed the Hearing in Noise Test (HINT) and a sound localization test with two types of stimuli. No specific benefit from wireless WDRC synchronization was observed for the HINT; however, hearing impaired listeners had better localization with the wireless synchronization. Binaural wireless technology in hearing aids may improve localization abilities although the possible effect appears to be small at the initial fitting. With adaptation, the hearing aids with synchronized signal processing may lead to an improvement in localization and speech intelligibility. Further research is required to demonstrate the effect of adaptation to the hearing aids with synchronized signal processing on different aspects of auditory performance.
Affiliation(s)
- Iman Ibrahim
- Faculty of Health Sciences, Western University, London, Canada
- Vijay Parsa
- National Centre for Audiology, Western University, London, Canada
- Ewan Macpherson
- National Centre for Audiology, Western University, London, Canada
37
Ezzatian P, Li L, Pichora-Fuller MK, Schneider BA. The effect of energetic and informational masking on the time-course of stream segregation: Evidence that streaming depends on vocal fine structure cues. Lang Cogn Process 2012. [DOI: 10.1080/01690965.2011.591934]
38
Ben-David BM, Tse VY, Schneider BA. Does it take older adults longer than younger adults to perceptually segregate a speech target from a background masker? Hear Res 2012; 290:55-63. [DOI: 10.1016/j.heares.2012.04.022]
39
Abstract
Older individuals often find it hard to communicate under difficult listening conditions, for example, in the presence of background noise or competing speakers. However, there is increasing evidence that this age-related decline in speech perception can be – at least in part – compensated by an increased recruitment of more general cognitive functions. The interplay of age-related declines and compensatory mechanisms in spoken language understanding under naturalistic and demanding listening conditions was tested here in a word detection task. Pairs of different coherent stories were presented dichotically to 14 younger and 14 older listeners (age ranges 19–25 and 54–64 years, respectively). The listeners had to respond to target words in one story, while suppressing distracting information in the other. In addition, the listeners had to pay attention to the content of the attended story. Older listeners outperformed the younger listeners in target detection and produced less missing responses. However, high performance in target detection came along with low performance in text recall. The analyses of event-related potentials indicated a reduction in parietal P3b of older, relative to younger, listeners. In turn, older listeners showed a prominent frontal P3a that was absent in younger listeners. In line with the so-called decline-compensation hypothesis, these results support the idea that, in order to perform well, older listeners compensate a potential decline by extra allocation of mental resources. One potential mechanism of compensation could be a selective attentional orientation to the target stimuli.
Affiliation(s)
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
40
Understanding of spoken language under challenging listening conditions in younger and older listeners: A combined behavioral and electrophysiological study. Brain Res 2011; 1415:8-22. [DOI: 10.1016/j.brainres.2011.08.001]
41
Normal hearing is not enough to guarantee robust encoding of suprathreshold features important in everyday communication. Proc Natl Acad Sci U S A 2011; 108:15516-21. [PMID: 21844339] [DOI: 10.1073/pnas.1108912108]
Abstract
"Normal hearing" is typically defined by threshold audibility, even though everyday communication relies on extracting key features of easily audible sound, not on sound detection. Anecdotally, many normal-hearing listeners report difficulty communicating in settings where there are competing sound sources, but the reasons for such difficulties are debated: Do these difficulties originate from deficits in cognitive processing, or differences in peripheral, sensory encoding? Here we show that listeners with clinically normal thresholds exhibit very large individual differences on a task requiring them to focus spatial selective auditory attention to understand one speech stream when there are similar, competing speech streams coming from other directions. These individual differences in selective auditory attention ability are unrelated to age, reading span (a measure of cognitive function), and minor differences in absolute hearing threshold; however, selective attention ability correlates with the ability to detect simple frequency modulation in a clearly audible tone. Importantly, we also find that selective attention performance correlates with physiological measures of how well the periodic, temporal structure of sounds above the threshold of audibility are encoded in early, subcortical portions of the auditory pathway. These results suggest that the fidelity of early sensory encoding of the temporal structure in suprathreshold sounds influences the ability to communicate in challenging settings. Tests like these may help tease apart how peripheral and central deficits contribute to communication impairments, ultimately leading to new approaches to combat the social isolation that often ensues.
42
Abstract
To participate effectively in multi-talker conversations, listeners need to do more than simply recognize and repeat speech. They have to keep track of who said what, extract the meaning of each utterance, store it in memory for future use, integrate the incoming information with what each conversational participant has said in the past, and draw on the listener’s own knowledge of the topic under consideration to extract general themes and formulate responses. In other words, to acquire and use the information contained in spoken language requires the smooth and rapid functioning of an integrated system of perceptual and cognitive processes. Here we review evidence indicating that the operation of this integrated system of perceptual and cognitive processes is more easily disrupted in older than in younger adults, especially when there are competing sounds in the auditory scene.
Affiliation(s)
- B A Schneider
- Department of Psychology, University of Toronto Mississauga, Ontario, Canada
43
The Effect of Priming on Release From Informational Masking Is Equivalent for Younger and Older Adults. Ear Hear 2011; 32:84-96. [DOI: 10.1097/aud.0b013e3181ee6b8a]
44
Spatial selective auditory attention in the presence of reverberant energy: individual differences in normal-hearing listeners. J Assoc Res Otolaryngol 2010; 12:395-405. [PMID: 21128091] [DOI: 10.1007/s10162-010-0254-z]
Abstract
Listeners can selectively attend to a desired target by directing attention to known target source features, such as location or pitch. Reverberation, however, reduces the reliability of the cues that allow a target source to be segregated and selected from a sound mixture. Given this, it is likely that reverberant energy interferes with selective auditory attention. Anecdotal reports suggest that the ability to focus spatial auditory attention degrades even with early aging, yet there is little evidence that middle-aged listeners have behavioral deficits on tasks requiring selective auditory attention. The current study was designed to look for individual differences in selective attention ability and to see if any such differences correlate with age. Normal-hearing adults, ranging in age from 18 to 55 years, were asked to report a stream of digits located directly ahead in a simulated rectangular room. Simultaneous, competing masker digit streams were simulated at locations 15° left and right of center. The level of reverberation was varied to alter task difficulty by interfering with localization cues (increasing localization blur). Overall, performance was best in the anechoic condition and worst in the high-reverberation condition. Listeners nearly always reported a digit from one of the three competing streams, showing that reverberation did not render the digits unintelligible. Importantly, inter-subject differences were extremely large. These differences, however, were not significantly correlated with age, memory span, or hearing status. These results show that listeners with audiometrically normal pure tone thresholds differ in their ability to selectively attend to a desired source, a task important in everyday communication. Further work is necessary to determine if these differences arise from differences in peripheral auditory function or in more central function.
45
Helfer KS, Chevalier J, Freyman RL. Aging, spatial cues, and single- versus dual-task performance in competing speech perception. J Acoust Soc Am 2010; 128:3625-3633. [PMID: 21218894] [PMCID: PMC3037770] [DOI: 10.1121/1.3502462]
Abstract
Older individuals often report difficulty coping in situations with multiple conversations in which they at times need to "tune out" the background speech and at other times seek to monitor competing messages. The present study was designed to simulate this type of interaction by examining the cost of requiring listeners to perform a secondary task in conjunction with understanding a target talker in the presence of competing speech. The ability of younger and older adults to understand a target utterance was measured with and without requiring the listener to also determine how many masking voices were presented time-reversed. Also of interest was how spatial separation affected the ability to perform these two tasks. Older adults demonstrated slightly reduced overall speech recognition and obtained less spatial release from masking, as compared to younger listeners. For both younger and older listeners, spatial separation increased the costs associated with performing both tasks together. The meaningfulness of the masker had a greater detrimental effect on speech understanding for older participants than for younger participants. However, the results suggest that the problems experienced by older adults in complex listening situations are not necessarily due to a deficit in the ability to switch and/or divide attention among talkers.
Affiliation(s)
- Karen S Helfer
- Department of Communication Disorders, University of Massachusetts, 358 North Pleasant Street, Amherst, Massachusetts 01003, USA.
46
Abstract
Objectives: To examine the impact of hearing impairment on a listener's ability to process simultaneous spoken messages.
Design: Nine young listeners with bilateral sensorineural hearing loss and nine young listeners with normal hearing participated in this study. Two messages of equal level were presented separately to the two ears. The messages were systematically degraded by adding speech-shaped noise. Listeners performed a single task in which report of one message was required and a dual task in which report of both messages was required.
Results: As the level of the added noise was increased, performance on both single and dual tasks declined. In the dual task, performance on the message reported second was poorer and more sensitive to the noise level than performance on the message reported first. When compared to listeners with normal hearing, listeners with hearing loss showed a larger deficit in recall of the second message than the first. This difference disappeared when performance of the hearing loss group was compared to that of the normal-hearing group at a poorer signal-to-noise ratio.
Conclusions: A listener's ability to process a secondary message is more sensitive to noise and hearing impairment than the ability to process a primary message. Tasks involving the processing of simultaneous messages may be useful for assessing hearing handicap and the benefits of rehabilitation in realistic listening scenarios.
47
Arlinger S, Lunner T, Lyxell B, Pichora-Fuller MK. The emergence of cognitive hearing science. Scand J Psychol 2009; 50:371-84. [PMID: 19778385 DOI: 10.1111/j.1467-9450.2009.00753.x] [Citation(s) in RCA: 133] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
Cognitive Hearing Science or Auditory Cognitive Science is an emerging field of interdisciplinary research concerning the interactions between hearing and cognition. It follows a trend over the last half century for interdisciplinary fields to develop, beginning with Neuroscience, then Cognitive Science, then Cognitive Neuroscience, and then Cognitive Vision Science. A common theme is that an interdisciplinary approach is necessary to understand complex human behaviors, to develop technologies incorporating knowledge of these behaviors, and to find solutions for individuals with impairments that undermine typical behaviors. Accordingly, researchers in traditional academic disciplines, such as Psychology, Physiology, Linguistics, Philosophy, Anthropology, and Sociology benefit from collaborations with each other, and with researchers in Computer Science and Engineering working on the design of technologies, and with health professionals working with individuals who have impairments. The factors that triggered the emergence of Cognitive Hearing Science include the maturation of the component disciplines of Hearing Science and Cognitive Science, new opportunities to use complex digital signal-processing to design technologies suited to performance in challenging everyday environments, and increasing social imperatives to help people whose communication problems span hearing and cognition. Cognitive Hearing Science is illustrated in research on three general topics: (1) language processing in challenging listening conditions; (2) use of auditory communication technologies or the visual modality to boost performance; (3) changes in performance with development, aging, and rehabilitative training. Future directions for modeling and the translation of research into practice are suggested.
Affiliation(s)
- Stig Arlinger
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Sweden
48
Schneider BA, Pichora-Fuller K, Daneman M. Effects of Senescent Changes in Audition and Cognition on Spoken Language Comprehension. The Aging Auditory System 2010. [DOI: 10.1007/978-1-4419-0993-0_7] [Citation(s) in RCA: 92] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
49
50
Harris KC, Eckert MA, Ahlstrom JB, Dubno JR. Age-related differences in gap detection: effects of task difficulty and cognitive ability. Hear Res 2009; 264:21-9. [PMID: 19800958 DOI: 10.1016/j.heares.2009.09.017] [Citation(s) in RCA: 68] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/02/2009] [Revised: 09/28/2009] [Accepted: 09/30/2009] [Indexed: 10/20/2022]
Abstract
Differences in gap detection for younger and older adults have been shown to vary with the complexity of the task or stimuli, but the factors that contribute to these differences remain unknown. To address this question, we examined the extent to which age-related differences in processing speed and workload predicted age-related differences in gap detection. Gap detection thresholds were measured for 10 younger and 11 older adults in two conditions that varied in task complexity but used identical stimuli: (1) gap location fixed at the beginning, middle, or end of a noise burst and (2) gap location varied randomly from trial to trial among the beginning, middle, or end of the noise. We hypothesized that gap location uncertainty would place increased demands on cognitive and attentional resources and result in significantly higher gap detection thresholds for older but not younger adults. Overall, gap detection thresholds were lower for the middle location as compared to beginning and end locations and were lower for the fixed than the random condition. In general, larger age-related differences in gap detection were observed for more challenging conditions. That is, gap detection thresholds for older adults were significantly larger for the random condition than for the fixed condition when the gap was at the beginning and end locations but not the middle. In contrast, gap detection thresholds for younger adults were not significantly different for the random and fixed conditions at any location. Subjective ratings of workload indicated that older adults found the gap detection task more mentally demanding than younger adults did. Consistent with these findings, results of the Purdue Pegboard and Connections tests revealed age-related slowing of processing speed. Moreover, age group differences in workload and processing speed predicted gap detection in younger and older adults when gap location varied from trial to trial; these associations were not observed when gap location remained constant across trials. Taken together, these results suggest that age-related differences in complex measures of auditory temporal processing may be explained, in part, by age-related deficits in processing speed and attention.
Affiliation(s)
- Kelly C Harris
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave., MSC 550, Charleston, SC 29425-5500, USA.