1. Mai G, Jiang Z, Wang X, Tachtsidis I, Howell P. Neuroplasticity of Speech-in-Noise Processing in Older Adults Assessed by Functional Near-Infrared Spectroscopy (fNIRS). Brain Topogr 2024; 37:1139-1157. PMID: 39042322; PMCID: PMC11408581; DOI: 10.1007/s10548-024-01070-2.
Abstract
Functional near-infrared spectroscopy (fNIRS), a non-invasive optical neuroimaging technique that is portable and acoustically silent, has become a promising tool for evaluating auditory brain functions in hearing-vulnerable individuals. This study used fNIRS for the first time to evaluate neuroplasticity of speech-in-noise processing in older adults. Ten older adults, most of whom had moderate-to-mild hearing loss, participated in a 4-week speech-in-noise training programme. Their speech-in-noise performance and fNIRS brain responses to speech (auditory sentences in noise), non-speech (spectrally-rotated speech in noise) and visual (flashing chequerboards) stimuli were evaluated pre-training (T0) and post-training (immediately after training, T1; and after a 4-week retention period, T2). Behaviourally, speech-in-noise performance improved after retention (T2 vs. T0) but not immediately after training (T1 vs. T0). Neurally, we intriguingly found that brain responses to speech vs. non-speech decreased significantly in the left auditory cortex after retention (T2 vs. T0 and T2 vs. T1), which we interpret as suppressed processing of background noise during speech listening alongside the significant behavioural improvements. Meanwhile, functional connectivity within and between multiple regions of the temporal, parietal and frontal lobes was significantly enhanced in the speech condition after retention (T2 vs. T0). We also found neural changes before the emergence of significant behavioural improvements. Compared to pre-training, responses to speech vs. non-speech in the left frontal/prefrontal cortex decreased significantly both immediately after training (T1 vs. T0) and after retention (T2 vs. T0), reflecting possible alleviation of listening effort. Finally, connectivity between auditory and higher-level non-auditory (parietal and frontal) cortices in response to visual stimuli decreased significantly immediately after training (T1 vs. T0), indicating reduced cross-modal takeover of speech-related regions during visual processing. The results thus show that neuroplasticity can be observed not only alongside, but also before, behavioural changes in speech-in-noise perception. To our knowledge, this is the first fNIRS study to evaluate speech-based auditory neuroplasticity in older adults. It thus provides important implications for current research by illustrating the promise of detecting neuroplasticity using fNIRS in hearing-vulnerable individuals.
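For readers wanting to prototype the kind of channel-wise functional connectivity analysis described above, the sketch below computes a Pearson-correlation connectivity matrix from HbO time courses and compares two sessions. It is a generic illustration under assumed data shapes and sampling rates, not the authors' pipeline; all variable names and values are hypothetical.

```python
import numpy as np

def channel_connectivity(hbo):
    """Pearson-correlation connectivity between fNIRS channels.

    hbo : array of shape (n_channels, n_samples) holding HbO concentration-change
          time courses for one condition. Returns an (n_channels, n_channels)
          correlation matrix.
    """
    # Z-score each channel so the dot product becomes a Pearson correlation.
    z = (hbo - hbo.mean(axis=1, keepdims=True)) / hbo.std(axis=1, keepdims=True)
    return (z @ z.T) / hbo.shape[1]

# Hypothetical example: 8 channels, 3 minutes of data at 10 Hz, two sessions.
rng = np.random.default_rng(0)
pre = channel_connectivity(rng.standard_normal((8, 1800)))
post = channel_connectivity(rng.standard_normal((8, 1800)))
training_related_change = post - pre  # illustrative only
```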
Affiliation(s)
- Guangting Mai
  - National Institute for Health and Care Research Nottingham Biomedical Research Centre, Nottingham, UK
  - Academic Unit of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK
  - Division of Psychology and Language Sciences, University College London, London, UK
  - Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Zhizhao Jiang
  - Division of Psychology and Language Sciences, University College London, London, UK
  - Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Xinran Wang
  - Division of Psychology and Language Sciences, University College London, London, UK
- Ilias Tachtsidis
  - Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Peter Howell
  - Division of Psychology and Language Sciences, University College London, London, UK
2. Meng Y, Liang C, Chen W, Liu Z, Yang C, Hu J, Gao Z, Gao S. Neural basis of language familiarity effects on voice recognition: An fNIRS study. Cortex 2024; 176:1-10. PMID: 38723449; DOI: 10.1016/j.cortex.2024.04.007.
Abstract
Recognizing talkers' identities from speech is an important social skill in interpersonal interaction. Behavioral evidence has shown that listeners identify voices more accurately in their native language than in a non-native language, a phenomenon known as the language familiarity effect (LFE). However, its underlying neural mechanisms remain unclear. This study therefore investigated how the LFE arises at the neural level using functional near-infrared spectroscopy (fNIRS). Late unbalanced bilinguals first learned to associate strangers' voices with their identities and were then tested on recognizing the talkers from their voices speaking a language that was either highly familiar (their native language, Chinese), moderately familiar (their second language, English), or completely unfamiliar (Ewe). Participants identified talkers most accurately in Chinese and least accurately in Ewe. Talker identification was also faster in Chinese than in English or Ewe, whereas reaction times did not differ between the two non-native languages. At the neural level, recognizing voices speaking Chinese relative to English or Ewe produced less activity in the inferior frontal gyrus, precentral/postcentral gyrus, supramarginal gyrus, and superior temporal sulcus/gyrus, while no difference was found between English and Ewe, indicating that voice identification is facilitated by automatic phonological encoding in the native language. These findings shed new light on the interrelations between language ability and voice recognition, revealing that the brain activation pattern underlying the LFE depends on the automaticity of language processing.
Affiliation(s)
- Yuan Meng
  - School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China
- Chunyan Liang
  - School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China
  - Zhuojin Branch of Yandaojie Primary School, Chengdu, China
- Wenjing Chen
  - School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China
- Zhaoning Liu
  - School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China
- Chaoqing Yang
  - School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China
- Jiehui Hu
  - School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China
  - The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu, China
- Zhao Gao
  - School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China
  - The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu, China
- Shan Gao
  - School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China
  - The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu, China
3. Shatzer HE, Russo FA. Brightening the Study of Listening Effort with Functional Near-Infrared Spectroscopy: A Scoping Review. Semin Hear 2023; 44:188-210. PMID: 37122884; PMCID: PMC10147513; DOI: 10.1055/s-0043-1766105.
Abstract
Listening effort is a long-standing area of interest in auditory cognitive neuroscience. Prior research has used multiple techniques to shed light on the neurophysiological mechanisms underlying listening in challenging conditions. Functional near-infrared spectroscopy (fNIRS) is growing in popularity as a tool for cognitive neuroscience research, and its recent advances offer many potential advantages over other neuroimaging modalities for research on listening effort. This review introduces the basic science of fNIRS and its uses in auditory cognitive neuroscience. We also discuss its application in recently published studies on listening effort and consider future opportunities for studying effortful listening with fNIRS. After reading this article, the learner will know how fNIRS works, be able to summarize its uses for listening effort research, and be able to apply this knowledge toward generating future research in this area.
Affiliation(s)
- Hannah E. Shatzer
  - Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Frank A. Russo
  - Department of Psychology, Toronto Metropolitan University, Toronto, Canada
4. McLinden J, Borgheai B, Hosni S, Kumar C, Rahimi N, Shao M, Spencer KM, Shahriari Y. Individual-Specific Characterization of Event-Related Hemodynamic Responses during an Auditory Task: An Exploratory Study. Behav Brain Res 2022; 436:114074. PMID: 36028001; DOI: 10.1016/j.bbr.2022.114074.
Abstract
Functional near-infrared spectroscopy (fNIRS) has been established as an informative modality for understanding the hemodynamic-metabolic correlates of cortical auditory processing. To date, such knowledge has found broad clinical application in diagnosis, treatment, and intervention procedures for disorders affecting auditory processing; however, exploration of the hemodynamic response to auditory tasks remains incomplete. This holds particularly true for auditory event-related fNIRS experiments, where preliminary work has shown the presence of valid responses but more comprehensive characterization of the hemodynamic correlates of event-related auditory processing is still needed. In this study, we apply an individual-specific approach to characterize fNIRS-based hemodynamic changes during an auditory task in healthy adults. Oxygenated hemoglobin (HbO2) concentration-change time courses were acquired from eight participants. Independent component analysis (ICA) was then applied to isolate individual-specific, class-discriminative spatial filters, which were applied to the HbO2 time courses to extract auditory-related hemodynamic features. While six of eight participants produced significant class-discriminative features before ICA-based spatial filtering, the proposed method identified significant auditory hemodynamic features in all participants. Furthermore, ICA-based filtering improved the correlation between trial labels and extracted features in every participant. For the first time, this study demonstrates hemodynamic features important in experiments exploring auditory processing, as well as the utility of individual-specific ICA-based spatial filtering for fNIRS feature extraction in auditory experiments. These outcomes provide insights for future studies exploring auditory hemodynamic characteristics and may eventually provide a baseline framework for better understanding auditory response dysfunction in clinical populations.
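The ICA-based spatial filtering described above can be sketched as follows. This is a minimal, generic illustration assuming FastICA over channels and selection of the component whose trial-averaged score correlates best with the trial labels; the published pipeline may differ in its preprocessing, component selection, and statistics, and all data shapes and names here are hypothetical.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_spatial_filter(hbo_trials, labels, n_components=4):
    """Derive a class-discriminative ICA spatial filter from HbO2 trial data.

    hbo_trials : array of shape (n_trials, n_channels, n_samples)
    labels     : array of shape (n_trials,), e.g. 1 = auditory stimulus, 0 = rest
    Returns the selected unmixing row (spatial filter) and its per-trial feature.
    """
    n_trials, n_ch, n_samp = hbo_trials.shape
    # Fit ICA on channel-by-time data concatenated across trials (time x channels).
    X = hbo_trials.transpose(1, 0, 2).reshape(n_ch, -1).T
    ica = FastICA(n_components=n_components, random_state=0)
    ica.fit(X)
    W = ica.components_  # (n_components, n_channels) unmixing matrix
    # Project every trial onto each component and summarise by the temporal mean.
    feats = np.einsum('kc,tcs->tks', W, hbo_trials).mean(axis=2)
    # Keep the component whose feature correlates most strongly with the labels.
    r = [abs(np.corrcoef(feats[:, k], labels)[0, 1]) for k in range(n_components)]
    best = int(np.argmax(r))
    return W[best], feats[:, best]
```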
Affiliation(s)
- J McLinden
  - Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island, Kingston, RI, USA
- B Borgheai
  - Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island, Kingston, RI, USA
- S Hosni
  - Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island, Kingston, RI, USA
- C Kumar
  - Department of Computer and Information Science, University of Massachusetts Dartmouth, MA, USA
- N Rahimi
  - Department of Computer and Information Science, University of Massachusetts Dartmouth, MA, USA
- M Shao
  - Department of Computer and Information Science, University of Massachusetts Dartmouth, MA, USA
- K M Spencer
  - Department of Psychiatry, VA Boston Healthcare System and Harvard Medical School, Jamaica Plain, Boston, MA, USA
- Y Shahriari
  - Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island, Kingston, RI, USA
5. Butera IM, Larson ED, DeFreese AJ, Lee AKC, Gifford RH, Wallace MT. Functional localization of audiovisual speech using near infrared spectroscopy. Brain Topogr 2022; 35:416-430. PMID: 35821542; PMCID: PMC9334437; DOI: 10.1007/s10548-022-00904-1.
Abstract
Visual cues are especially vital for hearing-impaired individuals, such as cochlear implant (CI) users, to understand speech in noise. Functional near-infrared spectroscopy (fNIRS) is a light-based imaging technology that is ideally suited for measuring the brain activity of CI users because it is compatible with both the ferromagnetic and electrical components of these implants. In a preliminary step toward better elucidating the behavioral and neural correlates of audiovisual (AV) speech integration in CI users, we designed a speech-in-noise task and measured the extent to which 24 normal-hearing individuals could integrate the audio of spoken monosyllabic words with the corresponding visual signals of a female speaker. In our behavioral task, audiovisual pairings provided average improvements of 103% and 197% over auditory-alone listening at -6 and -9 dB signal-to-noise ratios in multi-talker background noise. In an fNIRS task using similar stimuli, we measured activity during auditory-only listening, visual-only lipreading, and AV listening conditions. We identified cortical activity in all three conditions over regions of middle and superior temporal cortex typically associated with speech processing and audiovisual integration. In addition, three channels active during the lipreading condition showed uncorrected correlations with behavioral measures of audiovisual gain as well as with the McGurk effect. Further work focusing on the regions of interest identified in this study could test how AV speech integration may differ for CI users who rely on this mechanism for daily communication.
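The reported audiovisual gains (103% and 197%) are percentage improvements of audiovisual over auditory-alone word recognition. The snippet below shows that arithmetic with hypothetical proportion-correct values chosen only to yield gains of a similar size; they are not the study's data.

```python
def av_gain(auditory_only, audiovisual):
    """Percent improvement of audiovisual over auditory-alone accuracy."""
    return 100.0 * (audiovisual - auditory_only) / auditory_only

# Hypothetical proportion-correct scores, chosen only to give gains of the same
# order as those reported (~103% and ~197%); they are not the study's data.
print(av_gain(0.30, 0.61))    # ~103% gain (e.g. at -6 dB SNR, illustrative)
print(av_gain(0.15, 0.445))   # ~197% gain (e.g. at -9 dB SNR, illustrative)
```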
Affiliation(s)
- Iliza M Butera
  - Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Eric D Larson
  - Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA
- Andrea J DeFreese
  - Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Adrian KC Lee
  - Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA
  - Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- René H Gifford
  - Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Mark T Wallace
  - Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
  - Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
  - Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
6. Zhou X, Sobczak GS, McKay CM, Litovsky RY. Effects of degraded speech processing and binaural unmasking investigated using functional near-infrared spectroscopy (fNIRS). PLoS One 2022; 17:e0267588. PMID: 35468160; PMCID: PMC9037936; DOI: 10.1371/journal.pone.0267588.
Abstract
The present study aimed to investigate the effects of degraded speech perception and binaural unmasking using functional near-infrared spectroscopy (fNIRS). Normal-hearing listeners were tested while attending to unprocessed or vocoded speech, presented to the left ear at two speech-to-noise ratios (SNRs). Additionally, we measured binaural unmasking by comparing monaural versus diotic masker noise. Our primary research question was whether the prefrontal cortex and temporal cortex responded differently to the varying listening configurations. Our a priori regions of interest (ROIs) were located at the left dorsolateral prefrontal cortex (DLPFC) and the auditory cortex (AC). The left DLPFC has been reported to be involved in attentional processes when listening to degraded speech and in spatial hearing processing, while the AC has been reported to be sensitive to speech intelligibility. Comparisons of cortical activity between these two ROIs revealed significantly different fNIRS response patterns. Further, we observed a significant positive correlation between self-reported task difficulty and fNIRS responses in the DLPFC, with a negative but non-significant correlation for the left AC, suggesting that the two ROIs played different roles in effortful speech perception. Our secondary question was whether activity within three sub-regions of the lateral PFC (LPFC), including the DLPFC, was differentially affected by the varying speech-noise configurations. We found significant effects of spectral degradation and SNR, and significant differences in fNIRS response amplitudes between the three regions, but no significant interaction between ROI and speech type, or between ROI and SNR. When attending to speech with monaural versus diotic noise, participants reported the diotic conditions to be easier; however, no significant main effect of masker condition on cortical activity was observed. For cortical responses in the LPFC, a significant interaction between SNR and masker condition was observed. These findings suggest that binaural unmasking affects cortical activity by improving the speech reception threshold in noise, rather than by reducing the effort exerted.
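The vocoded condition refers to noise-vocoded speech. Below is a minimal sketch of a generic channel noise vocoder (band-pass analysis, envelope extraction, envelope-modulated noise carriers). The study's actual vocoder parameters (number of bands, filter design, envelope cutoff) are not given in the abstract, so every parameter and signal below is an assumption for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_bands=8, f_lo=100.0, f_hi=7000.0):
    """Generic channel noise vocoder; every parameter here is illustrative."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # log-spaced band edges
    noise = rng.standard_normal(len(x))
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        envelope = np.abs(hilbert(band))            # slowly varying band envelope
        carrier = sosfiltfilt(sos, noise)           # band-limited noise carrier
        out += envelope * carrier
    # Match the overall RMS level of the input.
    return out * np.sqrt(np.mean(x ** 2) / np.mean(out ** 2))

# Example with a synthetic 1-s "speech-like" signal sampled at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(speech_like, fs)
```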
Affiliation(s)
- Xin Zhou
  - Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Gabriel S. Sobczak
  - School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI, USA
- Colette M. McKay
  - The Bionics Institute of Australia, Melbourne, VIC, Australia
  - Department of Medical Bionics, University of Melbourne, Melbourne, VIC, Australia
- Ruth Y. Litovsky
  - Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
  - Department of Communication Science and Disorders, University of Wisconsin-Madison, Madison, WI, USA
  - Division of Otolaryngology, Department of Surgery, University of Wisconsin-Madison, Madison, WI, USA
7. Hakim U, Pinti P, Noah AJ, Zhang X, Burgess P, Hamilton A, Hirsch J, Tachtsidis I. Investigation of functional near-infrared spectroscopy signal quality and development of the hemodynamic phase correlation signal. Neurophotonics 2022; 9:025001. PMID: 35599691; PMCID: PMC9116886; DOI: 10.1117/1.nph.9.2.025001.
Abstract
Significance: There is a longstanding recommendation within the fNIRS field to use both oxygenated (HbO2) and deoxygenated (HHb) hemoglobin when analyzing and interpreting results. Despite this, many fNIRS studies focus on HbO2 only. Previous work has shown that HbO2 on its own is susceptible to systemic interference, so results may largely reflect that interference rather than functional activation. Studies that use both HbO2 and HHb to draw their conclusions do so with varying methods, which can lead to discrepancies between studies. Combining HbO2 and HHb has been recommended as a way to utilize both signals in analysis. Aim: We present the development of the hemodynamic phase correlation (HPC) signal, which combines HbO2 and HHb as recommended. We use synthetic and experimental data to evaluate how the HPC compares with the signals currently used for fNIRS analysis. Approach: Eighteen synthetic datasets were formed using resting-state fNIRS data acquired from 16 channels over the frontal lobe. To simulate fNIRS data for a block-design task, we superimposed a synthetic task-related hemodynamic response onto the resting-state data. These data were used to develop an HPC general linear model (GLM) framework. Experiments were conducted to investigate the performance of each signal at different SNRs and to investigate the effect of false positives on the data. Performance was based on each signal's mean T-value across channels. Experimental data recorded from 128 participants across 134 channels during a finger-tapping task were used to investigate the performance of multiple signals (HbO2, HHb, HbT, HbD, correlation-based signal improvement (CBSI), and HPC) on real data. Signal performance was evaluated on its ability to localize activation to a specific region of interest. Results: Varying the SNR showed that the HPC signal has the highest performance at high SNRs, while the CBSI performed best at medium-to-low SNRs. The analyses evaluating the effect of false positives showed that the HPC and CBSI signals reflect the effect of false positives on HbO2 and HHb. The analysis of real experimental data revealed that the HPC and HHb signals localize activation to the primary motor cortex with the highest accuracy. Conclusions: We developed a new hemodynamic signal (HPC) with the potential to overcome the current limitations of using HbO2 and HHb separately. Our results suggest that the HPC signal provides accuracy comparable to HHb in localizing functional activation while being more robust against false positives.
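The abstract does not spell out how the HPC signal is computed. As a rough illustration of the general idea of phase-based coupling between HbO2 and HHb, the sketch below band-passes both signals, extracts Hilbert phases, and computes a windowed phase-locking value. This is one plausible construction offered for intuition only and should not be taken as the published HPC definition; the band limits, window length, and simulated data are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def phase_coupling(hbo2, hhb, fs, band=(0.01, 0.1), win_s=30.0):
    """Sliding-window phase-locking value between HbO2 and HHb (illustrative;
    not necessarily the published HPC computation)."""
    sos = butter(2, band, btype="bandpass", fs=fs, output="sos")
    phase1 = np.angle(hilbert(sosfiltfilt(sos, hbo2)))
    phase2 = np.angle(hilbert(sosfiltfilt(sos, hhb)))
    dphi = phase1 - phase2
    w = int(win_s * fs)
    n_win = len(dphi) // w
    # One phase-locking value per window: |mean of exp(i * phase difference)|.
    return np.array([np.abs(np.mean(np.exp(1j * dphi[k * w:(k + 1) * w])))
                     for k in range(n_win)])

# Simulated anti-phase HbO2/HHb at a 10 Hz fNIRS sampling rate (5 min of data).
fs = 10.0
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(0)
hbo2 = np.sin(2 * np.pi * 0.05 * t) + 0.3 * rng.standard_normal(len(t))
hhb = -np.sin(2 * np.pi * 0.05 * t) + 0.3 * rng.standard_normal(len(t))
plv_per_window = phase_coupling(hbo2, hhb, fs)
```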
Affiliation(s)
- Uzair Hakim
  - Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Paola Pinti
  - Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
  - Centre for Brain and Cognitive Development, Birkbeck College, University of London, London, United Kingdom
- Adam J. Noah
  - Department of Neuroscience and Comparative Medicine, Yale School of Medicine, Yale University, United States
- Xian Zhang
  - Department of Neuroscience and Comparative Medicine, Yale School of Medicine, Yale University, United States
- Paul Burgess
  - Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Antonia Hamilton
  - Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Joy Hirsch
  - Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
  - Department of Neuroscience and Comparative Medicine, Yale School of Medicine, Yale University, United States
- Ilias Tachtsidis
  - Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
8. Defenderfer J, Forbes S, Wijeakumar S, Hedrick M, Plyler P, Buss AT. Frontotemporal activation differs between perception of simulated cochlear implant speech and speech in background noise: An image-based fNIRS study. Neuroimage 2021; 240:118385. PMID: 34256138; PMCID: PMC8503862; DOI: 10.1016/j.neuroimage.2021.118385.
Abstract
In this study we used functional near-infrared spectroscopy (fNIRS) to investigate neural responses in normal-hearing adults as a function of speech recognition accuracy, intelligibility of the speech stimulus, and the manner in which the speech was distorted. Participants listened to sentences and reported aloud what they heard. Speech quality was distorted artificially by vocoding (simulated cochlear implant speech) or naturally by adding background noise. Each type of distortion included high- and low-intelligibility conditions. Sentences in quiet were used as a baseline comparison. fNIRS data were analyzed using a newly developed image reconstruction approach. First, elevated cortical responses in the middle temporal gyrus (MTG) and middle frontal gyrus (MFG) were associated with speech recognition during the low-intelligibility conditions. Second, activation in the MTG was associated with recognition of vocoded speech with low intelligibility, whereas MFG activity was largely driven by recognition of speech in background noise, suggesting that the cortical response varies as a function of distortion type. Lastly, an accuracy effect in the MFG demonstrated significantly higher activation during correct relative to incorrect perception of speech. These results suggest that normal-hearing adults (i.e., listeners untrained on vocoded stimuli) do not exploit the same attentional mechanisms of the frontal cortex used to resolve naturally degraded speech and may instead rely on segmental and phonetic analyses in the temporal lobe to discriminate vocoded speech.
Affiliation(s)
- Jessica Defenderfer
  - Speech and Hearing Science, University of Tennessee Health Science Center, Knoxville, TN, United States
- Samuel Forbes
  - Psychology, University of East Anglia, Norwich, England
- Mark Hedrick
  - Speech and Hearing Science, University of Tennessee Health Science Center, Knoxville, TN, United States
- Patrick Plyler
  - Speech and Hearing Science, University of Tennessee Health Science Center, Knoxville, TN, United States
- Aaron T Buss
  - Psychology, University of Tennessee, Knoxville, TN, United States
9. Bell L, Peng ZE, Pausch F, Reindl V, Neuschaefer-Rube C, Fels J, Konrad K. fNIRS Assessment of Speech Comprehension in Children with Normal Hearing and Children with Hearing Aids in Virtual Acoustic Environments: Pilot Data and Practical Recommendations. Children (Basel) 2020; 7:219. PMID: 33171753; PMCID: PMC7695031; DOI: 10.3390/children7110219.
Abstract
The integration of virtual acoustic environments (VAEs) with functional near-infrared spectroscopy (fNIRS) offers novel avenues for investigating the behavioral and neural processes of speech-in-noise (SIN) comprehension in complex auditory scenes. Particularly in children with hearing aids (HAs), the combined application might offer new insights into the neural mechanisms of SIN perception in simulated real-life acoustic scenarios. Here, we present first pilot data from six children with normal hearing (NH) and three children with bilateral HAs to explore the potential applicability of this novel approach. Children with NH received a speech recognition benefit from low room reverberation and from spatial separation of the target and distractors, particularly when the pitch of the target and the distractors was similar. At the neural level, the left inferior frontal gyrus appeared to support SIN comprehension during effortful listening. Children with HAs showed decreased SIN perception across conditions. The VAE-fNIRS approach is critically compared to traditional SIN assessments. Although the current study shows that feasibility still needs to be improved, the combined application potentially offers a promising tool for investigating novel research questions in simulated real-life listening. Future, modified VAE-fNIRS applications are warranted to replicate the current findings and to validate the approach in research and clinical settings.
Affiliation(s)
- Laura Bell
  - Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany
- Z. Ellen Peng
  - Teaching and Research Area of Medical Acoustics, Institute of Technical Acoustics, RWTH Aachen University, 52074 Aachen, Germany
  - Waisman Center, University of Wisconsin-Madison, Madison, WI 53705, USA
- Florian Pausch
  - Teaching and Research Area of Medical Acoustics, Institute of Technical Acoustics, RWTH Aachen University, 52074 Aachen, Germany
- Vanessa Reindl
  - Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany
  - JARA-Brain Institute II, Molecular Neuroscience and Neuroimaging, RWTH Aachen & Research Centre Juelich, 52428 Juelich, Germany
- Christiane Neuschaefer-Rube
  - Clinic of Phoniatrics, Pedaudiology, and Communication Disorders, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany
- Janina Fels
  - Teaching and Research Area of Medical Acoustics, Institute of Technical Acoustics, RWTH Aachen University, 52074 Aachen, Germany
- Kerstin Konrad
  - Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany
  - JARA-Brain Institute II, Molecular Neuroscience and Neuroimaging, RWTH Aachen & Research Centre Juelich, 52428 Juelich, Germany
10. Mushtaq F, Wiggins IM, Kitterick PT, Anderson CA, Hartley DEH. Evaluating time-reversed speech and signal-correlated noise as auditory baselines for isolating speech-specific processing using fNIRS. PLoS One 2019; 14:e0219927. PMID: 31314802; PMCID: PMC6636749; DOI: 10.1371/journal.pone.0219927.
Abstract
Evidence from well-established imaging techniques, such as functional magnetic resonance imaging and electrocorticography, suggests that speech-specific cortical responses can be functionally localised by contrasting responses to speech with responses to an auditory baseline stimulus, such as time-reversed (TR) speech or signal-correlated noise (SCN). Furthermore, these studies suggest that SCN is a more effective baseline than TR speech. Functional near-infrared spectroscopy (fNIRS) is a relatively novel, optically-based imaging technique with features that make it ideal for investigating speech and language function in paediatric populations. However, it is not known which baseline is best at isolating speech activation when imaging with fNIRS. We presented normal speech, TR speech and SCN in an event-related format to 25 normally-hearing children aged 6-12 years. Brain activity was measured across frontal and temporal brain areas in both cerebral hemispheres whilst children passively listened to the auditory stimuli. In all three conditions, significant activation was observed bilaterally in channels targeting superior temporal regions when stimuli were contrasted against silence. Unlike previous findings in infants, we found no significant activation in the region of interest over superior temporal cortex in school-age children when normal speech was contrasted against either TR speech or SCN. Although no statistically significant lateralisation effects were observed in the region of interest, a left-sided channel targeting posterior temporal regions showed significant activity in response to normal speech only and was investigated further. Significantly greater activation was observed in this left posterior channel than in the corresponding channel on the right side, but only under the normal speech vs. SCN contrast. Our findings suggest that neither TR speech nor SCN is a suitable auditory baseline for functionally isolating speech-specific processing in an fNIRS experimental setup with 6-12-year-old children.
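For context, the two baseline stimuli can be generated with very simple operations: time-reversed speech is the waveform played backwards, and signal-correlated noise is commonly produced by randomly flipping the sign of each sample, which preserves the speech's intensity envelope while destroying intelligibility. The sketch below implements these generic recipes; the study's exact stimulus construction may have differed.

```python
import numpy as np

def time_reversed(x):
    """Time-reversed speech: the waveform played backwards."""
    return x[::-1].copy()

def signal_correlated_noise(x, seed=0):
    """A common recipe for signal-correlated noise: multiply each sample by a
    random +/-1, preserving the speech's intensity envelope while removing
    intelligible spectral structure. (Generic recipe; the study's exact
    stimulus construction is not described in the abstract.)"""
    rng = np.random.default_rng(seed)
    return x * rng.choice([-1.0, 1.0], size=len(x))
```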
Affiliation(s)
- Faizah Mushtaq
  - National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, United Kingdom
  - Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Ian M. Wiggins
  - National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, United Kingdom
  - Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Pádraig T. Kitterick
  - National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, United Kingdom
  - Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Carly A. Anderson
  - National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, United Kingdom
  - Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Douglas E. H. Hartley
  - National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, United Kingdom
  - Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
  - Nottingham University Hospitals NHS Trust, Queens Medical Centre, Nottingham, United Kingdom
11. Lawrence RJ, Wiggins IM, Anderson CA, Davies-Thompson J, Hartley DE. Cortical correlates of speech intelligibility measured using functional near-infrared spectroscopy (fNIRS). Hear Res 2018; 370:53-64. DOI: 10.1016/j.heares.2018.09.005.