1. Bálint A, Wimmer W, Caversaccio M, Rummel C, Weder S. Brain activation patterns in normal hearing adults: An fNIRS Study using an adapted clinical speech comprehension task. Hear Res 2025;455:109155. PMID: 39637600. DOI: 10.1016/j.heares.2024.109155.
Abstract
OBJECTIVES Understanding brain processing of auditory and visual speech is essential for advancing speech perception research and improving clinical interventions for individuals with hearing impairment. Functional near-infrared spectroscopy (fNIRS) is deemed highly suitable for measuring brain activity during language tasks. However, accurate data interpretation also requires validated stimuli and behavioral measures. DESIGN Twenty-six adults with normal hearing listened to sentences from the Oldenburg Sentence Test (OLSA) while brain activation in the temporal, occipital, and prefrontal areas was measured with fNIRS. The sentences were presented in one of four modalities: speech-in-quiet, speech-in-noise, audiovisual speech, or visual speech (i.e., lipreading). To support the interpretation of our fNIRS data and to obtain a more comprehensive understanding of the study population, we performed hearing tests (pure tone and speech audiometry) and collected behavioral data using validated questionnaires, in-task comprehension questions, and listening effort ratings. RESULTS In the auditory conditions (i.e., speech-in-quiet and speech-in-noise), we observed cortical activity bilaterally in the temporal regions. During the visual speech condition, we measured significant activation in the occipital area. In the audiovisual condition, cortical activation was observed in both regions. Furthermore, we established a baseline for how individuals with normal hearing process visual cues during lipreading, and we found higher activity in the prefrontal cortex in noise conditions compared with quiet conditions, consistent with higher listening effort. CONCLUSIONS We demonstrated the applicability of a clinically inspired audiovisual speech comprehension task in participants with normal hearing. The measured brain activation patterns were supported and complemented by objective and behavioral parameters.
Affiliation(s)
- András Bálint
- Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, 3008 Bern, Switzerland; Department of ENT - Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, 3010 Bern, Switzerland
- Wilhelm Wimmer
- Department of ENT - Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, 3010 Bern, Switzerland; Department of Otorhinolaryngology, Klinikum rechts der Isar, Technical University of Munich, Germany
- Marco Caversaccio
- Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, 3008 Bern, Switzerland; Department of ENT - Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, 3010 Bern, Switzerland
- Christian Rummel
- Support Center for Advanced Neuroimaging (SCAN), University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, University of Bern, 3010 Bern, Switzerland
- Stefan Weder
- Department of ENT - Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, 3010 Bern, Switzerland
2. Lewis AV, Fang Q. Revisiting equivalent optical properties for cerebrospinal fluid to improve diffusion-based modeling accuracy in the brain. bioRxiv [Preprint] 2024:2024.08.20.608859. PMID: 39229084. PMCID: PMC11370459. DOI: 10.1101/2024.08.20.608859.
Abstract
Significance The diffusion approximation (DA) is used in functional near-infrared spectroscopy (fNIRS) studies despite its known limitations due to the presence of cerebrospinal fluid (CSF). Many of these studies rely on a set of empirical CSF optical properties, recommended by a previous simulation study, that were not selected for the purpose of minimizing DA modeling errors. Aim We aim to directly quantify the accuracy of DA solutions in brain models by comparing them with the gold-standard solutions produced by mesh-based Monte Carlo (MMC), and to derive updated recommendations on this basis. Approach For both a 5-layer head model and the Colin27 atlas model, we obtain DA solutions by independently sweeping the CSF absorption (μa) and reduced scattering (μs') coefficients. Using an MMC solution with literature CSF optical properties as reference, we compute the errors for surface fluence, total brain sensitivity, and brain energy deposition, and identify the optimized settings where such error is minimized. Results Our results suggest that the previously recommended CSF properties can cause significant errors (8.7% to 52%) in multiple tested metrics. By simultaneously sweeping μa and μs', we can identify an infinite number of solutions that exactly match DA with MMC solutions for any single tested metric. Furthermore, it is also possible to simultaneously minimize multiple metrics at multiple source-detector separations, leading to our new recommendation of setting μs' = 0.15 mm⁻¹ while maintaining a physiological μa for CSF in DA simulations. Conclusion Our new recommendation for CSF-equivalent optical properties can greatly reduce the model mismatches between DA and MMC solutions across multiple metrics without sacrificing computational speed. We also show that it is possible to eliminate such a mismatch for a single metric or a pair of metrics of interest.
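To make the sweep-and-minimize idea concrete, the toy sketch below evaluates an analytic diffusion-approximation fluence for a homogeneous infinite medium, scans the reduced scattering coefficient against a fixed reference, and reports the error-minimizing value. It is only an illustration under strong simplifications (homogeneous medium, analytic DA solution, hard-coded "reference" values standing in for a Monte Carlo result); it is not the layered-head MMC comparison performed in the paper.

```python
import numpy as np

def da_fluence(r_mm, mua, musp):
    """Diffusion-approximation fluence of a point source in an infinite
    homogeneous medium, at distance r_mm (mm); mua, musp in 1/mm."""
    D = 1.0 / (3.0 * (mua + musp))              # diffusion coefficient [mm]
    mu_eff = np.sqrt(3.0 * mua * (mua + musp))  # effective attenuation [1/mm]
    return np.exp(-mu_eff * r_mm) / (4.0 * np.pi * D * r_mm)

# Source-detector separations (mm) and hypothetical "reference" fluence values
# standing in for the gold-standard Monte Carlo solution used in the paper.
seps = np.array([20.0, 25.0, 30.0])
reference = np.array([1.2e-3, 6.6e-4, 3.8e-4])

# Sweep the CSF reduced scattering coefficient while keeping mua physiological,
# and pick the value that minimizes the mean relative error across separations.
mua_csf = 0.004                                 # assumed absorption [1/mm]
musp_grid = np.linspace(0.05, 1.0, 400)
errors = [np.mean(np.abs(da_fluence(seps, mua_csf, m) - reference) / reference)
          for m in musp_grid]
best = musp_grid[int(np.argmin(errors))]
print(f"error-minimizing mu_s' ~ {best:.3f} 1/mm for this toy setup")
```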
Affiliation(s)
- Aiden Vincent Lewis
- Northeastern University, Department of Bioengineering, 360 Huntington Avenue, Boston, USA, 02115
- Qianqian Fang
- Northeastern University, Department of Bioengineering, 360 Huntington Avenue, Boston, USA, 02115
- Northeastern University, Department of EECS, 360 Huntington Avenue, Boston, USA, 02115
3. Zheng Y, Zhang B. 25-year neuroimaging research on spoken language processing: a bibliometric analysis. Front Hum Neurosci 2024;18:1461505. PMID: 39668910. PMCID: PMC11635769. DOI: 10.3389/fnhum.2024.1461505.
Abstract
Introduction Spoken language processing is of great interest to cognitive and neural scientists, as it is the dominant channel for everyday verbal communication. The aim of this study is to depict the dynamics of publications in the field of neuroimaging research on spoken language processing between 2000 and 2024. Methods A bibliometric analysis was conducted to probe this subject matter based on data retrieved from Web of Science. A total of 8,085 articles were found, which were analyzed together with their authors, journals of publication, citations, and countries of origin. Results Results showed a steady increase in publication volume and a relatively high academic visibility of this research field, indexed by total citations, over the first 25 years of the 21st century. Maps of frequent keywords and the institutional collaboration network show that cooperation mainly occurs between institutions in the United States, the United Kingdom, and Germany. Future trends based on burst detection predict that classification, Alzheimer's disease, and oscillations are potential hot topics. Discussion Possible reasons for these results include the aging of the population in developed countries and the rapid growth of artificial intelligence in the past decade. Finally, specific research avenues were proposed that might benefit future studies.
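As a rough sketch of the counting behind such a bibliometric analysis, the snippet below tallies publications per year and the most frequent author keywords from a Web of Science tab-delimited export. The file name `wos_export.txt` and the reliance on the `PY` (publication year) and `DE` (author keywords) field tags are assumptions about the export format; this is not the specific toolchain used in the study.

```python
import csv
from collections import Counter

per_year = Counter()
keywords = Counter()

# Web of Science tab-delimited exports use two-letter field tags as headers,
# e.g. PY = publication year, DE = author keywords (separated by ';').
with open("wos_export.txt", encoding="utf-8-sig", newline="") as fh:
    for record in csv.DictReader(fh, delimiter="\t"):
        if record.get("PY"):
            per_year[record["PY"]] += 1
        for kw in (record.get("DE") or "").split(";"):
            kw = kw.strip().lower()
            if kw:
                keywords[kw] += 1

print("publications per year:", dict(sorted(per_year.items())))
print("top keywords:", keywords.most_common(10))
```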
Affiliation(s)
- Yuxuan Zheng
- School of Interpreting and Translation, Beijing International Studies University, Beijing, China
- AI and Cognition Laboratory, Beijing International Studies University, Beijing, China
- Boning Zhang
- School of English Studies, Beijing International Studies University, Beijing, China
4. Farrar R, Ashjaei S, Arjmandi MK. Speech-evoked cortical activities and speech recognition in adult cochlear implant listeners: a review of functional near-infrared spectroscopy studies. Exp Brain Res 2024;242:2509-2530. PMID: 39305309. PMCID: PMC11527908. DOI: 10.1007/s00221-024-06921-9.
Abstract
Cochlear implants (CIs) are the most successful neural prostheses, enabling individuals with severe to profound hearing loss to access sounds and understand speech. While CIs have demonstrated success, speech perception outcomes vary widely among CI listeners, with significantly reduced performance in noise. This review paper summarizes prior findings on speech-evoked cortical activities in adult CI listeners using functional near-infrared spectroscopy (fNIRS) to understand (a) speech-evoked cortical processing in CI listeners compared to normal-hearing (NH) individuals, (b) the relationship between these activities and behavioral speech recognition scores, (c) the extent to which current fNIRS-measured speech-evoked cortical activities in CI listeners account for their differences in speech perception, and (d) challenges in using fNIRS for CI research. Compared to NH listeners, CI listeners had diminished speech-evoked activation in the middle temporal gyrus (MTG) and in the superior temporal gyrus (STG), except for one study that reported the opposite pattern for the STG. NH listeners exhibited higher inferior frontal gyrus (IFG) activity when listening to CI-simulated speech compared to natural speech. Among CI listeners, higher speech recognition scores correlated with lower speech-evoked activation in the STG and higher activation in the left IFG and left fusiform gyrus, with mixed findings in the MTG. fNIRS shows promise for enhancing our understanding of cortical processing of speech in CI listeners, though findings are mixed. Challenges include test-retest reliability, managing noise, replicating natural conditions, optimizing montage design, and standardizing methods to establish a strong predictive relationship between fNIRS-based cortical activities and speech perception in CI listeners.
Affiliation(s)
- Reed Farrar
- Department of Psychology, University of South Carolina, 1512 Pendleton Street, Columbia, SC, 29208, USA
- Samin Ashjaei
- Department of Communication Sciences and Disorders, University of South Carolina, 1705 College Street, Columbia, SC, 29208, USA
- Meisam K Arjmandi
- Department of Communication Sciences and Disorders, University of South Carolina, 1705 College Street, Columbia, SC, 29208, USA.
- Institute for Mind and Brain, University of South Carolina, Barnwell Street, Columbia, SC, 29208, USA.
5. Ashjaei S, Behroozmand R, Fozdar S, Farrar R, Arjmandi M. Vocal control and speech production in cochlear implant listeners: A review within auditory-motor processing framework. Hear Res 2024;453:109132. PMID: 39447319. DOI: 10.1016/j.heares.2024.109132.
Abstract
A comprehensive literature review is conducted to summarize and discuss prior findings on how cochlear implants (CIs) affect users' abilities to produce and control vocal and articulatory movements within the auditory-motor integration framework of speech. Patterns of speech production pre- versus post-implantation, post-implantation adjustments, deviations from the typical ranges of speakers with normal hearing (NH), the effects of switching the CI on and off, and the impact of altered auditory feedback on vocal and articulatory speech control are discussed. Overall, findings indicate that CIs enhance the vocal and articulatory control aspects of speech production at both segmental and suprasegmental levels. While many CI users achieve speech quality comparable to that of NH individuals, some features still deviate in a subset of CI users even years post-implantation. More specifically, contracted vowel space, increased vocal jitter and shimmer, longer phoneme and utterance durations, shorter voice onset time, decreased contrast in fricative production, limited prosodic patterns, and reduced intelligibility have been reported in subgroups of CI users compared to NH individuals. Significant individual variation among CI users has been observed in both the pace of speech production adjustments and long-term speech outcomes. Few controlled studies have explored how implantation age and duration of CI use influence speech features, leaving substantial gaps in our understanding of the effects of spectral resolution, auditory rehabilitation, and individual auditory-motor processing abilities on vocal and articulatory speech outcomes in CI users. Future studies under the auditory-motor integration framework are warranted to determine how suboptimal CI auditory feedback impacts auditory-motor processing and precise vocal and articulatory control in CI users.
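Two of the acoustic measures listed above, vocal jitter and shimmer, quantify cycle-to-cycle instability of voicing. The sketch below computes their simplest "local" (relative) variants from a sequence of glottal period durations and peak amplitudes; the input numbers are invented for illustration, and real analyses typically rely on dedicated tools such as Praat with more robust period extraction.

```python
import numpy as np

def local_jitter(periods_s):
    """Mean absolute difference between consecutive periods, relative to the mean period."""
    periods = np.asarray(periods_s, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(amplitudes):
    """Mean absolute difference between consecutive peak amplitudes, relative to the mean amplitude."""
    amps = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(amps))) / np.mean(amps)

# Invented example: ~200 Hz phonation with slight cycle-to-cycle variation.
periods = [0.0050, 0.0051, 0.0049, 0.0052, 0.0050, 0.0051]
amps = [0.80, 0.78, 0.82, 0.79, 0.81, 0.80]
print(f"local jitter:  {local_jitter(periods):.3%}")
print(f"local shimmer: {local_shimmer(amps):.3%}")
```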
Affiliation(s)
- Samin Ashjaei
- Translational Auditory Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 1705 College Street, Columbia, SC 29208, USA
- Roozbeh Behroozmand
- Speech Neuroscience Lab, Department of Speech, Language, and Hearing, Callier Center for Communication Disorders, School of Behavioral and Brain Sciences, The University of Texas at Dallas, 2811 North Floyd Road, Richardson, TX 75080, USA
- Shaivee Fozdar
- Translational Auditory Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 1705 College Street, Columbia, SC 29208, USA
- Reed Farrar
- Translational Auditory Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 1705 College Street, Columbia, SC 29208, USA
- Meisam Arjmandi
- Translational Auditory Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 1705 College Street, Columbia, SC 29208, USA; Institute for Mind and Brain, University of South Carolina, Barnwell Street, Columbia, SC 29208, USA.
6. Mai G, Jiang Z, Wang X, Tachtsidis I, Howell P. Neuroplasticity of Speech-in-Noise Processing in Older Adults Assessed by Functional Near-Infrared Spectroscopy (fNIRS). Brain Topogr 2024;37:1139-1157. PMID: 39042322. PMCID: PMC11408581. DOI: 10.1007/s10548-024-01070-2.
Abstract
Functional near-infrared spectroscopy (fNIRS), a non-invasive optical neuroimaging technique that is portable and acoustically silent, has become a promising tool for evaluating auditory brain functions in hearing-vulnerable individuals. This study, for the first time, used fNIRS to evaluate neuroplasticity of speech-in-noise processing in older adults. Ten older adults, most of whom had moderate-to-mild hearing loss, participated in a 4-week speech-in-noise training. Their speech-in-noise performance and fNIRS brain responses to speech (auditory sentences in noise), non-speech (spectrally-rotated speech in noise) and visual (flashing chequerboards) stimuli were evaluated pre-training (T0) and post-training (immediately after training, T1; and after a 4-week retention period, T2). Behaviourally, speech-in-noise performance was improved after retention (T2 vs. T0) but not immediately after training (T1 vs. T0). Neurally, we found that brain responses to speech vs. non-speech decreased significantly in the left auditory cortex after retention (T2 vs. T0 and T2 vs. T1), which we interpret as suppressed processing of background noise during speech listening, alongside the significant behavioural improvements. Meanwhile, functional connectivity within and between multiple regions of the temporal, parietal and frontal lobes was significantly enhanced in the speech condition after retention (T2 vs. T0). We also found neural changes before the emergence of significant behavioural improvements. Compared to pre-training, responses to speech vs. non-speech in the left frontal/prefrontal cortex were decreased significantly both immediately after training (T1 vs. T0) and after retention (T2 vs. T0), reflecting possible alleviation of listening effort. Finally, connectivity was significantly decreased between auditory and higher-level non-auditory (parietal and frontal) cortices in response to visual stimuli immediately after training (T1 vs. T0), indicating decreased cross-modal takeover of speech-related regions during visual processing. The results thus showed that neuroplasticity can be observed not only alongside, but also before, behavioural changes in speech-in-noise perception. To our knowledge, this is the first fNIRS study to evaluate speech-based auditory neuroplasticity in older adults. It thus provides important implications for current research by illustrating the promise of detecting neuroplasticity using fNIRS in hearing-vulnerable individuals.
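As a minimal illustration of the functional-connectivity comparison described above, the sketch below builds channel-by-channel Pearson correlation matrices from HbO time series at two sessions and compares mean connectivity strength after a Fisher z-transform. The simulated data, channel count, and absence of statistics or multiple-comparison control are simplifications; this is not the study's analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def connectivity(hbo):
    """Pearson correlation between fNIRS channels; hbo has shape (channels, samples)."""
    return np.corrcoef(hbo)

# Placeholder HbO time series: 12 channels x 600 samples at two sessions (T0, T2).
hbo_t0 = rng.standard_normal((12, 600))
# Add a weak shared component so connectivity is higher at T2 in this toy example.
hbo_t2 = rng.standard_normal((12, 600)) + 0.3 * rng.standard_normal((1, 600))

r_t0, r_t2 = connectivity(hbo_t0), connectivity(hbo_t2)

# Fisher z-transform the off-diagonal correlations and compare mean connectivity strength.
mask = ~np.eye(12, dtype=bool)
z_t0, z_t2 = np.arctanh(r_t0[mask]), np.arctanh(r_t2[mask])
print(f"mean connectivity (z): T0 = {z_t0.mean():.3f}, T2 = {z_t2.mean():.3f}")
```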
Affiliation(s)
- Guangting Mai
- National Institute for Health and Care Research Nottingham Biomedical Research Centre, Nottingham, UK.
- Academic Unit of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK.
- Division of Psychology and Language Sciences, University College London, London, UK.
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK.
- Zhizhao Jiang
- Division of Psychology and Language Sciences, University College London, London, UK
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Xinran Wang
- Division of Psychology and Language Sciences, University College London, London, UK
- Ilias Tachtsidis
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Peter Howell
- Division of Psychology and Language Sciences, University College London, London, UK
7. Han JH, Kim JH, Park GK, Lee HJ. Preserved Gray Matter Volume in the Left Superior Temporal Gyrus Underpins Speech-in-Noise Processing in Middle-Aged Adults. J Int Adv Otol 2024;20:62-68. PMID: 38454291. PMCID: PMC10895841. DOI: 10.5152/iao.2024.231241.
Abstract
BACKGROUND Neuroanatomical evidence suggests that behavioral speech-in-noise (SiN) perception and the underlying cortical structural network are altered by aging, and these aging-induced changes could be initiated during middle age. However, the mechanism behind the relationship between auditory performance and the neural substrates of speech perception in middle-aged individuals remains unclear. In this study, we measured the structural volumes of selected neuroanatomical regions involved in speech and hearing processing to establish their association with speech perception ability in middle-aged adults. METHODS Sentence perception in quiet and noisy conditions was behaviorally measured in 2 age groups: young (20-39 years old) and middle-aged (40-59 years old) adults. Anatomical magnetic resonance images were taken to assess the gray matter volume of specific parcellated brain areas associated with speech perception. The relationships between these volumes, behavioral auditory performance, and age were determined. RESULTS The middle-aged adults showed poorer speech perception in both quiet and noisy conditions than the young adults. Neuroanatomical data revealed that the normalized gray matter volume in the left superior temporal gyrus, which is closely related to acoustic and phonological processing, is associated with behavioral SiN perception in the middle-aged group. In addition, the normalized gray matter volumes in multiple cortical areas seem to decrease with age. CONCLUSION The results indicate that SiN perception in middle-aged adults is closely related to the brain region responsible for lower-level speech processing, which involves the detection and phonemic representation of speech. Nonetheless, the higher-order cortex may also contribute to age-induced changes in auditory performance.
Affiliation(s)
- Ji-Hye Han
- Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University, College of Medicine, Anyang, Republic of Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, South Korea
- Ja-Hee Kim
- Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University, College of Medicine, Chuncheon, Republic of Korea
- Gin-Kyeong Park
- Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University, College of Medicine, Anyang, Republic of Korea
- Hyo-Jeong Lee
- Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University, College of Medicine, Anyang, Republic of Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, South Korea
- Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University, College of Medicine, Chuncheon, Republic of Korea
8. Xie X, Jaeger TF, Kurumada C. What we do (not) know about the mechanisms underlying adaptive speech perception: A computational framework and review. Cortex 2023;166:377-424. PMID: 37506665. DOI: 10.1016/j.cortex.2023.05.003.
Abstract
Speech from unfamiliar talkers can be difficult to comprehend initially. These difficulties tend to dissipate with exposure, sometimes within minutes or less. Adaptivity in response to unfamiliar input is now considered a fundamental property of speech perception, and research over the past two decades has made substantial progress in identifying its characteristics. The mechanisms underlying adaptive speech perception, however, remain unknown. Past work has attributed facilitatory effects of exposure to any one of three qualitatively different hypothesized mechanisms: (1) low-level, pre-linguistic, signal normalization, (2) changes in/selection of linguistic representations, or (3) changes in post-perceptual decision-making. Direct comparisons of these hypotheses, or combinations thereof, have been lacking. We describe a general computational framework for adaptive speech perception (ASP) that, for the first time, implements all three mechanisms. We demonstrate how the framework can be used to derive predictions for experiments on perception from the acoustic properties of the stimuli. Using this approach, we find that, at the level of data analysis presently employed by most studies in the field, the signature results of influential experimental paradigms do not distinguish between the three mechanisms. This highlights the need for a change in research practices, so that future experiments provide more informative results. We recommend specific changes to experimental paradigms and data analysis. All data and code for this study are shared via OSF, including the R markdown document that this article is generated from, and an R library that implements the models we present.
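The review's central claim, that the three mechanisms can mimic one another at the level of data analysis typically employed, can be illustrated with a toy equal-variance Gaussian categorization model. In the sketch below, shifting the input by a talker-specific offset (normalization), shifting both category means (representational change), or shifting the decision criterion by a matched constant (post-perceptual bias) yield identical response probabilities. This is a deliberately minimal illustration of the ambiguity, not the authors' ASP framework, which is implemented in R and shared via OSF.

```python
import numpy as np

# Toy /b/-/p/ categorization by voice onset time (VOT, ms), equal-variance Gaussians.
MU_B, MU_P, SIGMA = 10.0, 50.0, 12.0

def p_response_p(vot, shift_input=0.0, shift_means=0.0, shift_criterion=0.0):
    """Probability of a /p/ response under three interchangeable 'adaptation' knobs."""
    x = np.asarray(vot, dtype=float) - shift_input        # (1) pre-linguistic normalization
    mu_b, mu_p = MU_B + shift_means, MU_P + shift_means   # (2) change of representations
    # Log-likelihood ratio for /p/ vs /b/, plus (3) a post-perceptual decision bias.
    llr = ((x - mu_b) ** 2 - (x - mu_p) ** 2) / (2 * SIGMA ** 2) + shift_criterion
    return 1.0 / (1.0 + np.exp(-llr))

vots = np.linspace(0.0, 60.0, 7)
a = p_response_p(vots, shift_input=8.0)     # normalize the input by 8 ms
b = p_response_p(vots, shift_means=8.0)     # shift both category means by 8 ms
c = p_response_p(vots, shift_criterion=-8.0 * (MU_P - MU_B) / SIGMA ** 2)  # matched bias
print(np.allclose(a, b), np.allclose(a, c))  # True True: indistinguishable responses
```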
Affiliation(s)
- Xin Xie
- Language Science, University of California, Irvine, USA.
- T Florian Jaeger
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA; Computer Science, University of Rochester, Rochester, NY, USA
- Chigusa Kurumada
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
9. Zhou XQ, Zhang QL, Xi X, Leng MR, Liu H, Liu S, Zhang T, Yuan W. Cortical responses correlate with speech performance in pre-lingually deaf cochlear implant children. Front Neurosci 2023;17:1126813. PMID: 37332858. PMCID: PMC10272438. DOI: 10.3389/fnins.2023.1126813.
Abstract
Introduction Cochlear implantation is currently the most successful intervention for severe-to-profound sensorineural hearing loss, particularly in deaf infants and children. Nonetheless, there remains a significant degree of variability in outcomes after implantation. The purpose of this study was to understand the cortical correlates of the variability in speech outcomes with a cochlear implant (CI) in pre-lingually deaf children using functional near-infrared spectroscopy (fNIRS), an emerging brain-imaging technique. Methods In this experiment, cortical activity when processing visual speech and two levels of auditory speech, namely auditory speech in quiet and in noise at a signal-to-noise ratio of 10 dB, was examined in 38 CI recipients with pre-lingual deafness and 36 normally hearing (NH) children matched to the CI users in age and sex. The HOPE corpus (a corpus of Mandarin sentences) was used to generate the speech stimuli. The regions of interest (ROIs) for the fNIRS measurements were fronto-temporal-parietal networks involved in language processing, including the bilateral superior temporal gyrus, left inferior frontal gyrus, and bilateral inferior parietal lobes. Results The fNIRS results confirmed and extended findings previously reported in the neuroimaging literature. Firstly, cortical responses of the superior temporal gyrus to both auditory and visual speech in CI users were directly correlated with auditory speech perception scores, with the strongest positive association found between the level of cross-modal reorganization and CI outcome. Secondly, compared to NH controls, CI users, particularly those with good speech perception, showed larger cortical activation in the left inferior frontal gyrus in response to all speech stimuli used in the experiment. Discussion In conclusion, cross-modal activation to visual speech in the auditory cortex of pre-lingually deaf CI children may be at least one of the neural bases of highly variable CI performance, given its beneficial effects on speech understanding, thus supporting the prediction and assessment of CI outcomes in the clinic. Additionally, cortical activation of the left inferior frontal gyrus may be a cortical marker of effortful listening.
Affiliation(s)
- Xiao-Qing Zhou
- Department of Otolaryngology, Chongqing Medical University, Chongqing, China
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing, China
- Department of Otolaryngology, Chongqing General Hospital, Chongqing, China
- Qing-Ling Zhang
- Department of Otolaryngology, Chongqing Medical University, Chongqing, China
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing, China
- Department of Otolaryngology, Chongqing General Hospital, Chongqing, China
- Xin Xi
- Department of Otolaryngology Head and Neck Surgery, Chinese PLA General Hospital, Beijing, China
- Ming-Rong Leng
- Chongqing Integrated Service Center for Disabled Persons, Chongqing, China
- Hao Liu
- Chongqing Integrated Service Center for Disabled Persons, Chongqing, China
- Shu Liu
- Chongqing Integrated Service Center for Disabled Persons, Chongqing, China
- Ting Zhang
- Chongqing Integrated Service Center for Disabled Persons, Chongqing, China
- Wei Yuan
- Department of Otolaryngology, Chongqing Medical University, Chongqing, China
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing, China
- Department of Otolaryngology, Chongqing General Hospital, Chongqing, China
10. Shatzer HE, Russo FA. Brightening the Study of Listening Effort with Functional Near-Infrared Spectroscopy: A Scoping Review. Semin Hear 2023;44:188-210. PMID: 37122884. PMCID: PMC10147513. DOI: 10.1055/s-0043-1766105.
Abstract
Listening effort is a long-standing area of interest in auditory cognitive neuroscience. Prior research has used multiple techniques to shed light on the neurophysiological mechanisms underlying listening during challenging conditions. Functional near-infrared spectroscopy (fNIRS) is growing in popularity as a tool for cognitive neuroscience research, and its recent advances offer many potential advantages over other neuroimaging modalities for research related to listening effort. This review introduces the basic science of fNIRS and its uses for auditory cognitive neuroscience. We also discuss its application in recently published studies on listening effort and consider future opportunities for studying effortful listening with fNIRS. After reading this article, the learner will know how fNIRS works and be able to summarize its uses for listening effort research. The learner will also be able to apply this knowledge toward the generation of future research in this area.
Affiliation(s)
- Hannah E. Shatzer
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Frank A. Russo
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
11. You Y, Liu J, Wang D, Fu Y, Liu R, Ma X. Cognitive Performance in Short Sleep Young Adults with Different Physical Activity Levels: A Cross-Sectional fNIRS Study. Brain Sci 2023;13:171. PMID: 36831714. PMCID: PMC9954673. DOI: 10.3390/brainsci13020171.
Abstract
Short sleep is a common issue nowadays. The purpose of this study was to investigate prefrontal cortical hemodynamics by evaluating changes in concentrations of oxygenated hemoglobin (HbO) during cognitive tests among short-sleep young adults and to explore the relationship between sleep duration, physical activity level, and cognitive function in this specific population. A total of 46 participants (25 males and 21 females) were included in our study, and among them, the average sleep duration was 358 min/day. Stroop performance in the short-sleep population was linked to higher levels of cortical activation in distinct parts of the left middle frontal gyrus. This study found that moderate-to-vigorous physical activity (MVPA) was significantly associated with lower accuracy on the incongruent Stroop test. The dose-response relationship between sleep duration and Stroop performance under different levels of light-intensity physical activity (LPA) and MVPA was further explored, and increasing sleep time was associated with better Stroop performance at different PA levels. In summary, the present study provided neurobehavioral evidence linking cortical hemodynamics and cognitive function in the short-sleep population. Furthermore, our findings indicated that, in young adults with short sleep, more MVPA was associated with worse cognitive performance. Young adults with short sleep should increase sleep time, rather than MVPA, to achieve better cognitive function.
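The HbO concentration changes mentioned above are usually obtained from raw light-intensity changes via the modified Beer-Lambert law, which at each wavelength reads ΔOD(λ) = (ε_HbO(λ)·Δ[HbO] + ε_HbR(λ)·Δ[HbR])·d·DPF(λ) and becomes a small linear system across two wavelengths. The sketch below solves that system; the extinction coefficients, source-detector distance, differential pathlength factors, and ΔOD values are illustrative stand-ins, not the calibration used in this study.

```python
import numpy as np

# Modified Beer-Lambert law at two wavelengths:
#   dOD(lambda) = (eps_HbO * dHbO + eps_HbR * dHbR) * d * DPF(lambda)
# Solving the 2x2 system yields the concentration changes dHbO and dHbR.

# Illustrative extinction coefficients [1/(mM*mm)] at ~760 nm and ~850 nm.
E = np.array([[0.1486, 0.3843],   # 760 nm: [eps_HbO, eps_HbR]
              [0.2526, 0.1798]])  # 850 nm
d = 30.0                          # source-detector distance in mm (assumed)
dpf = np.array([6.0, 5.0])        # differential pathlength factors (assumed)

def mbll(delta_od):
    """Convert optical-density changes at the two wavelengths into dHbO, dHbR (mM)."""
    lhs = E * (d * dpf)[:, None]              # effective pathlength per wavelength
    return np.linalg.solve(lhs, np.asarray(delta_od, dtype=float))

d_hbo, d_hbr = mbll([0.012, 0.015])           # invented example dOD values
print(f"dHbO = {d_hbo:.2e} mM, dHbR = {d_hbr:.2e} mM")
```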
Affiliation(s)
- Yanwei You
- Division of Sports Science & Physical Education, Tsinghua University, Beijing 100084, China
- School of Social Sciences, Tsinghua University, Beijing 100084, China
- Jianxiu Liu
- Division of Sports Science & Physical Education, Tsinghua University, Beijing 100084, China
- Vanke School of Public Health, Tsinghua University, Beijing 100084, China
- Dizhi Wang
- Division of Sports Science & Physical Education, Tsinghua University, Beijing 100084, China
- School of Social Sciences, Tsinghua University, Beijing 100084, China
- Yingyao Fu
- Beijing Jianhua Experimental Etown School, Beijing 100176, China
- Ruidong Liu
- Sports Coaching College, Beijing Sport University, Beijing 100091, China
- Correspondence: (R.L.); (X.M.)
- Xindong Ma
- Division of Sports Science & Physical Education, Tsinghua University, Beijing 100084, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- Correspondence: (R.L.); (X.M.)
12. Bálint A, Wimmer W, Caversaccio M, Weder S. Neural Activity During Audiovisual Speech Processing: Protocol for a Functional Neuroimaging Study. JMIR Res Protoc 2022;11:e38407. PMID: 35727624. PMCID: PMC9239541. DOI: 10.2196/38407.
Abstract
BACKGROUND Functional near-infrared spectroscopy (fNIRS) studies have demonstrated associations between hearing outcomes after cochlear implantation and plastic brain changes. However, inconsistent results make it difficult to draw conclusions. A major problem is that many variables need to be controlled. To gain further understanding, careful preparation and planning of such a functional neuroimaging task is key. OBJECTIVE Using fNIRS, our main objective is to develop a well-controlled audiovisual speech comprehension task to study brain activation in individuals with normal hearing and hearing impairment (including cochlear implant users). The task should be derivable from clinically established tests, induce maximal cortical activation, provide optimal coverage of relevant brain regions, and be reproducible by other research groups. METHODS The protocol will consist of a 5-minute resting state and 2 stimulation periods of 12 minutes each. During the stimulation periods, 13-second video recordings of the clinically established Oldenburg Sentence Test (OLSA) will be presented. Stimuli will be presented in 4 different modalities: (1) speech in quiet, (2) speech in noise, (3) visual only (ie, lipreading), and (4) audiovisual speech. Each stimulus type will be repeated 10 times in a counterbalanced block design. Interactive question windows will monitor speech comprehension during the task. After the measurement, we will perform a 3D scan to digitize optode positions and verify the covered anatomical locations. RESULTS This paper reports the study protocol. Enrollment for the study started in August 2021. We expect to publish our first results by the end of 2022. CONCLUSIONS The proposed audiovisual speech comprehension task will help elucidate neural correlates of speech understanding. The comprehensive study will have the potential to provide additional information, beyond conventional clinical standards, about the underlying plastic brain changes in a hearing-impaired person. It will facilitate more precise indication criteria for cochlear implantation and better planning of rehabilitation. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) DERR1-10.2196/38407.
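To make the counterbalanced block design concrete, the sketch below generates one possible trial order: the 4 stimulus modalities, each repeated 10 times, split across the 2 stimulation periods, with the simple constraint that the same modality never appears twice in a row. The constraint, the even split, and the fixed random seed are illustrative choices, not the exact randomization scheme specified in the published protocol.

```python
import random

MODALITIES = ["speech_in_quiet", "speech_in_noise", "visual_only", "audiovisual"]
REPS_PER_MODALITY = 10          # 40 stimuli in total
N_RUNS = 2                      # two 12-minute stimulation periods

def make_run(reps_per_modality, rng):
    """Shuffle one run so that no modality is presented twice in a row."""
    trials = MODALITIES * reps_per_modality
    while True:
        rng.shuffle(trials)
        if all(a != b for a, b in zip(trials, trials[1:])):
            return list(trials)

rng = random.Random(42)          # fixed seed for a reproducible example
runs = [make_run(REPS_PER_MODALITY // N_RUNS, rng) for _ in range(N_RUNS)]
for i, run in enumerate(runs, start=1):
    print(f"run {i}: {run}")
```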
Affiliation(s)
- András Bálint
- Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Wilhelm Wimmer
- Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Marco Caversaccio
- Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Stefan Weder
- Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
13. Sherafati A, Dwyer N, Bajracharya A, Hassanpour MS, Eggebrecht AT, Firszt JB, Culver JP, Peelle JE. Prefrontal cortex supports speech perception in listeners with cochlear implants. eLife 2022;11:e75323. PMID: 35666138. PMCID: PMC9225001. DOI: 10.7554/elife.75323.
Abstract
Cochlear implants are neuroprosthetic devices that can restore hearing in people with severe to profound hearing loss by electrically stimulating the auditory nerve. Because of physical limitations on the precision of this stimulation, the acoustic information delivered by a cochlear implant does not convey the same level of acoustic detail as that conveyed by normal hearing. As a result, speech understanding in listeners with cochlear implants is typically poorer and more effortful than in listeners with normal hearing. The brain networks supporting speech understanding in listeners with cochlear implants are not well understood, partly due to difficulties obtaining functional neuroimaging data in this population. In the current study, we assessed the brain regions supporting spoken word understanding in adult listeners with right unilateral cochlear implants (n=20) and matched controls (n=18) using high-density diffuse optical tomography (HD-DOT), a quiet and non-invasive imaging modality with spatial resolution comparable to that of functional MRI. We found that while listening to spoken words in quiet, listeners with cochlear implants showed greater activity in the left prefrontal cortex than listeners with normal hearing, specifically in a region engaged in a separate spatial working memory task. These results suggest that listeners with cochlear implants require greater cognitive processing during speech understanding than listeners with normal hearing, supported by compensatory recruitment of the left prefrontal cortex.
Affiliation(s)
- Arefeh Sherafati
- Department of Radiology, Washington University in St. Louis, St. Louis, United States
- Noel Dwyer
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
- Aahana Bajracharya
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
- Adam T Eggebrecht
- Department of Radiology, Washington University in St. Louis, St. Louis, United States
- Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, United States
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, United States
- Division of Biology and Biomedical Sciences, Washington University in St. Louis, St. Louis, United States
- Jill B Firszt
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
- Joseph P Culver
- Department of Radiology, Washington University in St. Louis, St. Louis, United States
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, United States
- Division of Biology and Biomedical Sciences, Washington University in St. Louis, St. Louis, United States
- Department of Physics, Washington University in St. Louis, St. Louis, United States
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States