1. Creff G, Lambert C, Coudert P, Pean V, Laurent S, Godey B. Comparison of Tonotopic and Default Frequency Fitting for Speech Understanding in Noise in New Cochlear Implantees: A Prospective, Randomized, Double-Blind, Cross-Over Study. Ear Hear 2024;45:35-52. [PMID: 37823850] [DOI: 10.1097/aud.0000000000001423]
Abstract
OBJECTIVES While cochlear implants (CIs) have provided benefits for speech recognition in quiet for subjects with severe-to-profound hearing loss, speech recognition in noise remains challenging. A body of evidence suggests that reducing frequency-to-place mismatch may positively affect speech perception. Thus, a fitting method based on a tonotopic map may improve speech perception results in quiet and noise. The aim of our study was to assess the impact of a tonotopic map on speech perception in noise and quiet in new CI users. DESIGN A prospective, randomized, double-blind, two-period cross-over study in 26 new CI users was performed over a 6-month period. New CI users older than 18 years with bilateral severe-to-profound sensorineural hearing loss or complete hearing loss for less than 5 years were selected at the University Hospital Centre of Rennes in France. An anatomical tonotopic map was created using postoperative flat-panel computed tomography and reconstruction software based on the Greenwood function. Each participant was randomized to receive a conventional map followed by a tonotopic map or vice versa. Each setting was maintained for 6 weeks, at the end of which participants performed speech perception tasks. The primary outcome measure was speech recognition in noise. Participants were allocated to sequences by block randomization of size two with a 1:1 ratio (CONSORT Guidelines). Participants and those assessing the outcomes were blinded to the intervention. RESULTS Thirteen participants were randomized to each sequence. Two of the 26 participants recruited (one in each sequence) had to be excluded due to the COVID-19 pandemic. Twenty-four participants were analyzed. Speech recognition in noise was significantly better with the tonotopic fitting at all signal-to-noise ratio (SNR) levels tested [SNR = +9 dB, p = 0.002, mean effect (ME) = 12.1%, 95% confidence interval (95% CI) = 4.9 to 19.2, standardized effect size (SES) = 0.71; SNR = +6 dB, p < 0.001, ME = 16.3%, 95% CI = 9.8 to 22.7, SES = 1.07; SNR = +3 dB, p < 0.001, ME = 13.8%, 95% CI = 6.9 to 20.6, SES = 0.84; SNR = 0 dB, p = 0.003, ME = 10.8%, 95% CI = 4.1 to 17.6, SES = 0.68]. Neither period nor interaction effects were observed for any signal level. Speech recognition in quiet (p = 0.66) and tonal audiometry (p = 0.203) did not significantly differ between the two settings. Ninety-two percent of the participants kept the tonotopy-based map after the study period. No correlation was found between speech-in-noise perception and age, duration of hearing deprivation, angular insertion depth, or position or width of the frequency filters allocated to the electrodes. CONCLUSION For new CI users, tonotopic fitting appears to be more efficient than default frequency fitting because it allows for better speech recognition in noise without compromising understanding in quiet.
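The anatomy-based map in this study relies on the Greenwood function, which relates position along the cochlear partition to characteristic frequency. A minimal sketch of how such a map can be computed, using the published Greenwood (1990) constants for the human cochlea; the electrode positions below are hypothetical illustrations, not the study's CT-derived values:

```python
import numpy as np

# Greenwood (1990) parameters for the human cochlea:
# f = A * (10**(a * x) - k), with x the relative distance
# from the apex (x = 0) to the base (x = 1).
A, a, k = 165.4, 2.1, 0.88

def greenwood_hz(x):
    """Characteristic frequency (Hz) at relative cochlear position x."""
    return A * (10.0 ** (a * np.asarray(x, dtype=float)) - k)

# Hypothetical electrode positions along the array (relative distance
# from the apex), standing in for positions estimated from imaging.
electrode_pos = np.linspace(0.25, 0.75, 12)
tonotopic_map = greenwood_hz(electrode_pos)
```

A tonotopic fitting would then assign each electrode's analysis filter to the frequency predicted at its position, rather than the processor's default allocation.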
Affiliation(s)
- Gwenaelle Creff
- Department of Otolaryngology-Head and Neck Surgery (HNS), University Hospital, Rennes, France
- MediCIS, LTSI (Image and Signal Processing Laboratory), INSERM, U1099, Rennes, France
- Cassandre Lambert
- Department of Otolaryngology-Head and Neck Surgery (HNS), University Hospital, Rennes, France
- Paul Coudert
- Department of Otolaryngology-Head and Neck Surgery (HNS), University Hospital, Rennes, France
- Benoit Godey
- Department of Otolaryngology-Head and Neck Surgery (HNS), University Hospital, Rennes, France
- MediCIS, LTSI (Image and Signal Processing Laboratory), INSERM, U1099, Rennes, France
- Hearing Aid Academy, Javene, France
2. Chai X, Liu M, Huang T, Wu M, Li J, Zhao X, Yan T, Song Y, Zhang YX. Neurophysiological evidence for goal-oriented modulation of speech perception. Cereb Cortex 2022;33:3910-3921. [PMID: 35972410] [DOI: 10.1093/cercor/bhac315]
Abstract
Speech perception depends on the dynamic interplay of bottom-up and top-down information along a hierarchically organized cortical network. Here, we test, for the first time in the human brain, whether neural processing of attended speech is dynamically modulated by task demand using a context-free discrimination paradigm. Electroencephalographic signals were recorded during 3 parallel experiments that differed only in the phonological feature of discrimination (word, vowel, and lexical tone, respectively). The event-related potentials (ERPs) revealed the task modulation of speech processing at approximately 200 ms (P2) after stimulus onset, probably influencing what phonological information to retain in memory. For the phonological comparison of sequential words, task modulation occurred later at approximately 300 ms (N3 and P3), reflecting the engagement of task-specific cognitive processes. The ERP results were consistent with the changes in delta-theta neural oscillations, suggesting the involvement of cortical tracking of speech envelopes. The study thus provides neurophysiological evidence for goal-oriented modulation of attended speech and calls for speech perception models incorporating limited memory capacity and goal-oriented optimization mechanisms.
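Cortical tracking of speech envelopes, invoked above to interpret the delta-theta oscillation results, is commonly quantified by extracting the amplitude envelope of the speech signal and restricting it to the delta-theta band before comparing it with neural activity. A minimal sketch of that preprocessing, using a synthetic amplitude-modulated tone as a stand-in for speech (not any stimulus from this study):

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

fs = 1000                      # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)  # 2 s of signal
# Synthetic stand-in for speech: a 200 Hz carrier modulated
# at a syllable-like 4 Hz rate
speech = (1 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 200 * t)

# Amplitude envelope via the analytic signal (Hilbert transform)
envelope = np.abs(hilbert(speech))

# Keep only the delta-theta range (~1-8 Hz), the band in which
# cortical activity is thought to track the speech envelope
sos = butter(4, [1, 8], btype="bandpass", fs=fs, output="sos")
envelope_dt = sosfiltfilt(sos, envelope)
```

In an actual tracking analysis, `envelope_dt` would then be correlated with, or regressed against, band-limited EEG recorded while the listener heard the speech.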
Affiliation(s)
- Xiaoke Chai
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Min Liu
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Ting Huang
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Meiyun Wu
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Jinhong Li
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Xue Zhao
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Tingting Yan
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Yan Song
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Yu-Xuan Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
3. Effects of temporal order and intentionality on reflective attention to words in noise. Psychol Res 2021;86:544-557. [PMID: 33683449] [DOI: 10.1007/s00426-021-01494-6]
Abstract
Speech perception in noise is a cognitively demanding process that challenges not only the auditory sensory system, but also cognitive networks involved in attention. The predictive coding theory has been influential in characterizing the influence of prior context on processing incoming auditory stimuli, with comparatively less research dedicated to "postdictive" processes and subsequent context effects on speech perception. Effects of subsequent semantic context were evaluated while manipulating both the relatedness of three target words presented in noise and the temporal position of the targets relative to the subsequent contextual cue. Subsequent context benefits were present regardless of whether the targets were related to each other and did not depend on the position of the target. However, participants instructed to focus on the relation between target and cue performed worse than those who did not receive this instruction, suggesting a disruption of a natural process of continuous speech recognition. We discuss these findings in relation to lexical commitment and stimulus-driven attention to short-term memory as mechanisms of subsequent context integration.
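Studies of this kind present target words mixed with noise at controlled signal-to-noise ratios. A minimal sketch of SNR-controlled mixing; the signals below are synthetic stand-ins, not the study's stimuli:

```python
import numpy as np

def mix_at_snr(target, noise, snr_db):
    """Scale `noise` so that target power / noise power equals the
    requested SNR in dB, then return the mixture."""
    p_target = np.mean(target ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_target / (p_noise * 10 ** (snr_db / 10)))
    return target + scale * noise

fs = 16000
rng = np.random.default_rng(0)
target = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s tone as a word stand-in
noise = rng.standard_normal(fs)
mixture = mix_at_snr(target, noise, snr_db=0.0)  # equal target and noise power
```

Varying `snr_db` trades off intelligibility against masking, which is how the difficulty of a word-in-noise task is typically parameterized.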
4. Chan TMV, Alain C. Brain indices associated with semantic cues prior to and after a word in noise. Brain Res 2020;1751:147206. [PMID: 33189693] [DOI: 10.1016/j.brainres.2020.147206]
Abstract
It is well established that identification of words in noise improves when it is preceded by a semantically related word, but comparatively little is known about the effect of subsequent context in guiding word in noise identification. We build on the findings of a previous behavioural study (Chan & Alain, 2019) by measuring neuro-electric brain activity while manipulating the semantic content of a cue that either preceded or followed a word in noise. Participants were more accurate in identifying the word in noise when it was preceded or followed by a cue that was semantically related. This gain in accuracy coincided with a late positive component, which was time-locked to the word in noise when preceded by a cue and time-locked to the cue when it followed the word in noise. Distributed source analyses of this positive component revealed different patterns in source activity between the two temporal conditions. The effects of relatedness also generated an event-related potential modulation around 400 ms (N400) that was present at cue presentation when it followed the word in noise, but not for the word in noise when preceded by the cue, consistent with findings regarding its sensitivity to signal degradation. Exploratory analyses examined a subset of data based on participants' subjective perceived clarity, which revealed a posterior deflection over the left hemisphere that showed a relatedness effect. We discuss these findings in light of research on prediction as well as a reflective attention framework.
Affiliation(s)
- T M Vanessa Chan
- Department of Psychology, University of Toronto, Sidney Smith Building, 100 St. George St., Toronto, Ontario M5S 3G3, Canada; Rotman Research Institute, Baycrest, 3560 Bathurst Street, Toronto, Ontario M6A 2E1, Canada
- Claude Alain
- Department of Psychology, University of Toronto, Sidney Smith Building, 100 St. George St., Toronto, Ontario M5S 3G3, Canada; Rotman Research Institute, Baycrest, 3560 Bathurst Street, Toronto, Ontario M6A 2E1, Canada; Institute of Medical Sciences, University of Toronto, Toronto, Ontario, Canada; Faculty of Music, University of Toronto, Toronto, Ontario, Canada
5. Castiglione A, Casa M, Gallo S, Sorrentino F, Dhima S, Cilia D, Lovo E, Gambin M, Previato M, Colombo S, Caserta E, Gheller F, Giacomelli C, Montino S, Limongi F, Brotto D, Gabelli C, Trevisi P, Bovo R, Martini A. Correspondence Between Cognitive and Audiological Evaluations Among the Elderly: A Preliminary Report of an Audiological Screening Model of Subjects at Risk of Cognitive Decline With Slight to Moderate Hearing Loss. Front Neurosci 2019;13:1279. [PMID: 31920475] [PMCID: PMC6915032] [DOI: 10.3389/fnins.2019.01279]
Abstract
Epidemiological studies show increasing prevalence rates of cognitive decline and hearing loss with age, particularly after the age of 65 years. These conditions are reported to be associated, although conclusive evidence of causality and implications is lacking. Nevertheless, audiological and cognitive assessment among elderly people is a key target for comprehensive and multidisciplinary evaluation of the subject's frailty status. To evaluate the use of tools for identifying older adults at risk of hearing loss and cognitive decline, and to compare hearing and cognitive performance between older adults and young subjects, we performed a prospective cross-sectional study using supraliminal auditory tests. The relationship between cognitive assessment results and audiometric results was investigated, and reference ranges for different ages or stages of disease were determined. Patients older than 65 years with different degrees of hearing function were enrolled. Each subject underwent an extensive audiological assessment, including tonal and speech audiometry, the Italian Matrix Sentence Test, and speech audiometry with logatomes in quiet. Cognitive function was screened and then verified by experienced clinicians using the Montreal Cognitive Assessment Score, the Geriatric Depression Scale, and further investigations in some cases. One hundred twenty-three subjects were enrolled during 2016–2019: 103 were older than 65 years and 20 were younger controls. Cognitive functions correlated with the audiological results in post-lingual hearing-impaired patients, particularly in those with slight to moderate hearing loss who were older than 70 years. Audiological testing can thus be useful in clinical assessment and identification of patients at risk of cognitive impairment. The study was limited by its sample size (CI 95%; CL 10%) and its strict dependence on language and hearing threshold. Further investigations should be conducted to confirm the reported results and to verify similar screening models.
Affiliation(s)
- Alessandro Castiglione
- Department of Neurosciences, University of Padua, Padua, Italy; Complex Operative Unit of Otolaryngology, Hospital of Padua, Padua, Italy
- Mariella Casa
- Regional Center for the Study and Treatment of the Aging Brain, Department of Internal Medicine, Padua, Italy
- Samanta Gallo
- Complex Operative Unit of Otolaryngology, Hospital of Padua, Padua, Italy
- Flavia Sorrentino
- Complex Operative Unit of Otolaryngology, Hospital of Padua, Padua, Italy
- Sonila Dhima
- Complex Operative Unit of Otolaryngology, Hospital of Padua, Padua, Italy
- Dalila Cilia
- Department of Neurosciences, University of Padua, Padua, Italy
- Elisa Lovo
- Department of Neurosciences, University of Padua, Padua, Italy
- Marta Gambin
- Department of Neurosciences, University of Padua, Padua, Italy
- Maela Previato
- Department of Neurosciences, University of Padua, Padua, Italy
- Simone Colombo
- Department of Neurosciences, University of Padua, Padua, Italy
- Ezio Caserta
- Complex Operative Unit of Otolaryngology, Hospital of Padua, Padua, Italy
- Flavia Gheller
- Department of Neurosciences, University of Padua, Padua, Italy
- Silvia Montino
- Department of Neurosciences, University of Padua, Padua, Italy
- Federica Limongi
- Institute of Neuroscience, National Research Council, Padua, Italy
- Davide Brotto
- Complex Operative Unit of Otolaryngology, Hospital of Padua, Padua, Italy
- Carlo Gabelli
- Regional Center for the Study and Treatment of the Aging Brain, Department of Internal Medicine, Padua, Italy
- Patrizia Trevisi
- Department of Neurosciences, University of Padua, Padua, Italy; Complex Operative Unit of Otolaryngology, Hospital of Padua, Padua, Italy
- Roberto Bovo
- Department of Neurosciences, University of Padua, Padua, Italy; Complex Operative Unit of Otolaryngology, Hospital of Padua, Padua, Italy
- Alessandro Martini
- Department of Neurosciences, University of Padua, Padua, Italy; Complex Operative Unit of Otolaryngology, Hospital of Padua, Padua, Italy
6. Illusory sound texture reveals multi-second statistical completion in auditory scene analysis. Nat Commun 2019;10:5096. [PMID: 31704913] [PMCID: PMC6841952] [DOI: 10.1038/s41467-019-12893-0]
Abstract
Sound sources in the world are experienced as stable even when intermittently obscured, implying perceptual completion mechanisms that “fill in” missing sensory information. We demonstrate a filling-in phenomenon in which the brain extrapolates the statistics of background sounds (textures) over periods of several seconds when they are interrupted by another sound, producing vivid percepts of illusory texture. The effect differs from previously described completion effects in that (1) the extrapolated sound must be defined statistically, given the stochastic nature of texture, and (2) the effect lasts much longer, enabling introspection and facilitating assessment of the underlying representation. Illusory texture biases subsequent texture statistic estimates indistinguishably from actual texture, suggesting that it is represented similarly to actual texture. The illusion appears to reflect an inference about whether the background is likely to continue during concurrent sounds, providing a stable statistical representation of the ongoing environment despite unstable sensory evidence. Auditory textures are sounds defined by a particular statistical distribution, such as that produced by rain or a swarm of insects. Here, the authors describe a striking perceptual illusion in which sound textures are heard to continue even though they have in fact been replaced by white noise.
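Because a texture is defined by its statistics rather than its exact waveform, texture models typically summarize a sound by statistics of its subband envelopes. A toy sketch of that idea; the band edges and the white-noise input are illustrative assumptions, not the paper's model:

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt
from scipy.stats import skew

fs = 16000
rng = np.random.default_rng(1)
texture = rng.standard_normal(fs)  # 1 s of white noise as a stand-in texture

def subband_envelope_stats(x, bands, fs):
    """Marginal statistics (mean, variance, skewness) of subband
    envelopes -- a toy version of texture summary statistics."""
    rows = []
    for lo, hi in bands:
        sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, x)))
        rows.append((env.mean(), env.var(), skew(env)))
    return np.array(rows)

# Illustrative band edges in Hz, not the paper's filter bank
bands = [(100, 400), (400, 1600), (1600, 6400)]
stats = subband_envelope_stats(texture, bands, fs)
```

Two sounds with very different waveforms but matching statistics of this kind can be perceptually interchangeable, which is what makes a purely statistical filling-in mechanism plausible.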