1. Farrar R, Ashjaei S, Arjmandi MK. Speech-evoked cortical activities and speech recognition in adult cochlear implant listeners: a review of functional near-infrared spectroscopy studies. Exp Brain Res 2024;242:2509-2530. [PMID: 39305309; PMCID: PMC11527908; DOI: 10.1007/s00221-024-06921-9]
Abstract
Cochlear implants (CIs) are the most successful neural prostheses, enabling individuals with severe to profound hearing loss to access sound and understand speech. While CIs have demonstrated success, speech perception outcomes vary widely among CI listeners, with significantly reduced performance in noise. This review summarizes prior findings on speech-evoked cortical activities in adult CI listeners measured with functional near-infrared spectroscopy (fNIRS), addressing (a) how speech-evoked cortical processing in CI listeners compares with that of normal-hearing (NH) individuals, (b) the relationship between these activities and behavioral speech recognition scores, (c) the extent to which current fNIRS-measured speech-evoked cortical activities in CI listeners account for their differences in speech perception, and (d) challenges in using fNIRS for CI research. Compared to NH listeners, CI listeners showed diminished speech-evoked activation in the middle temporal gyrus (MTG) and in the superior temporal gyrus (STG), except for one study that reported the opposite pattern for STG. NH listeners exhibited higher inferior frontal gyrus (IFG) activity when listening to CI-simulated speech than to natural speech. Among CI listeners, higher speech recognition scores correlated with lower speech-evoked activation in the STG and higher activation in the left IFG and left fusiform gyrus, with mixed findings in the MTG. fNIRS shows promise for enhancing our understanding of cortical speech processing in CI listeners, though findings are mixed. Challenges include test-retest reliability, managing noise, replicating natural listening conditions, optimizing montage design, and standardizing methods to establish a strong predictive relationship between fNIRS-based cortical activities and speech perception in CI listeners.
Affiliations
- Reed Farrar: Department of Psychology, University of South Carolina, 1512 Pendleton Street, Columbia, SC 29208, USA
- Samin Ashjaei: Department of Communication Sciences and Disorders, University of South Carolina, 1705 College Street, Columbia, SC 29208, USA
- Meisam K Arjmandi: Department of Communication Sciences and Disorders, University of South Carolina, 1705 College Street, Columbia, SC 29208, USA; Institute for Mind and Brain, University of South Carolina, Barnwell Street, Columbia, SC 29208, USA

2. Lee JJ, Scott TL, Perrachione TK. Efficient functional localization of language regions in the brain. Neuroimage 2024;285:120489. [PMID: 38065277; PMCID: PMC10999251; DOI: 10.1016/j.neuroimage.2023.120489]
Abstract
Important recent advances in the cognitive neuroscience of language have been made using functional localizers to demarcate language-selective regions in individual brains. Although single-subject localizers offer insights that are unavailable in classic group analyses, they require additional scan time that imposes costs on investigators and participants. In particular, the unique practical challenges of scanning children and other special populations have led to less adoption of localizers for neuroimaging research with these theoretically and clinically important groups. Here, we examined how measurements of the spatial extent and functional response profiles of language regions are affected by the duration of an auditory language localizer. We compared how parametrically smaller amounts of data collected from one scanning session affected (i) consistency of group-level whole-brain parcellations, (ii) functional selectivity of subject-level activation in individually defined functional regions of interest (fROIs), (iii) sensitivity and specificity of subject-level whole-brain and fROI activation, and (iv) test-retest reliability of subject-level whole-brain and fROI activation. For many of these metrics, the localizer duration could be reduced by 50-75% while preserving the stability and reliability of both the spatial extent and functional response profiles of language areas. These results indicate that, for most measures relevant to cognitive neuroimaging studies, the brain's language network can be localized just as effectively with 3.5 min of scan time as with 12 min. Minimizing the time required to reliably localize the brain's language network allows more effective localizer use in situations where each minute of scan time is particularly precious.
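The duration-reduction analysis summarized here lends itself to a simple illustration: if shorter acquisitions mainly add noise to voxelwise statistics, the spatial agreement between suprathreshold maps can be scored with a Dice coefficient. The sketch below is a minimal simulation of that idea; the array names, noise model, and threshold are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: how activation-map overlap might degrade as localizer data shrink.
import numpy as np

def dice_overlap(t_a, t_b, threshold):
    """Dice coefficient between the suprathreshold voxel sets of two maps."""
    a, b = t_a > threshold, t_b > threshold
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else float("nan")

rng = np.random.default_rng(0)
true_effect = rng.normal(0.0, 1.0, 10_000)                    # latent voxelwise effects
t_full = true_effect + rng.normal(0.0, 0.5, 10_000)           # full-duration localizer
t_half = true_effect + rng.normal(0.0, 0.5 * 2**0.5, 10_000)  # half the data: noisier

print(f"Dice(full, half) at t > 1.65: {dice_overlap(t_full, t_half, 1.65):.2f}")
```
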
Affiliations
- Jayden J Lee: Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Ave., Boston, MA 02215, United States
- Terri L Scott: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, United States
- Tyler K Perrachione: Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Ave., Boston, MA 02215, United States

3. Olson HA, Chen EM, Lydic KO, Saxe RR. Left-Hemisphere Cortical Language Regions Respond Equally to Observed Dialogue and Monologue. Neurobiol Lang (Camb) 2023;4:575-610. [PMID: 38144236; PMCID: PMC10745132; DOI: 10.1162/nol_a_00123]
Abstract
Much of the language we encounter in our everyday lives comes in the form of conversation, yet most research on the neural basis of language comprehension has used input from only one speaker at a time. Using functional magnetic resonance imaging, we scanned twenty adults while they passively observed audiovisual conversations. In a block-design task, participants watched 20 s videos of puppets speaking either to another puppet (the dialogue condition) or directly to the viewer (the monologue condition), while the audio was either comprehensible (played forward) or incomprehensible (played backward). Individually functionally localized left-hemisphere language regions responded more to comprehensible than incomprehensible speech but did not respond differently to dialogue than monologue. In a second task, participants watched videos (1-3 min each) of two puppets conversing with each other, in which one puppet was comprehensible while the other's speech was reversed. All participants saw the same visual input but were randomly assigned which character's speech was comprehensible. In left-hemisphere cortical language regions, the time course of activity was correlated only among participants who heard the same character speaking comprehensibly, despite identical visual input across all participants. For comparison, some individually localized theory of mind regions and right-hemisphere homologues of language regions responded more to dialogue than monologue in the first task, and in the second task, activity in some regions was correlated across all participants regardless of which character was speaking comprehensibly. Together, these results suggest that canonical left-hemisphere cortical language regions are not sensitive to differences between observed dialogue and monologue.
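The second task described here rests on inter-subject correlation (ISC) of regional time courses: participants who heard the same comprehensible character should show correlated activity. A minimal sketch of that computation, with hypothetical inputs (a subjects-by-timepoints array from one language region; all names and the simulated data are assumptions):

```python
# Sketch: pairwise inter-subject correlation of a region's time course.
import numpy as np

def mean_pairwise_isc(timecourses):
    """Mean Pearson correlation over all subject pairs (rows = subjects)."""
    r = np.corrcoef(timecourses)               # subjects x subjects matrix
    n = timecourses.shape[0]
    return r[np.triu_indices(n, k=1)].mean()   # unique off-diagonal pairs

rng = np.random.default_rng(1)
stimulus_drive = rng.normal(size=300)          # shared, stimulus-locked component
same_speech = stimulus_drive + rng.normal(scale=1.5, size=(10, 300))
diff_speech = rng.normal(scale=1.5, size=(10, 300))  # no shared comprehensible input

print(f"ISC, same comprehensible character: {mean_pairwise_isc(same_speech):.2f}")
print(f"ISC, different input:               {mean_pairwise_isc(diff_speech):.2f}")
```
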
4. Sheffield SW, Larson E, Butera IM, DeFreese A, Rogers BP, Wallace MT, Stecker GC, Lee AKC, Gifford RH. Sound Level Changes the Auditory Cortical Activation Detected with Functional Near-Infrared Spectroscopy. Brain Topogr 2023;36:686-697. [PMID: 37393418; DOI: 10.1007/s10548-023-00981-w]
Abstract
Background: Functional near-infrared spectroscopy (fNIRS) is a viable non-invasive technique for functional neuroimaging in the cochlear implant (CI) population; however, the effects of acoustic stimulus features on the fNIRS signal have not been thoroughly examined. This study examined the effect of stimulus level on fNIRS responses in adults with normal hearing or bilateral CIs. We hypothesized that fNIRS responses would correlate with both stimulus level and subjective loudness ratings, but that the correlation would be weaker with CIs due to the compression of acoustic input to electric output.
Methods: Thirteen adults with bilateral CIs and 16 with normal hearing (NH) completed the study. Signal-correlated noise, a speech-shaped noise modulated by the temporal envelope of speech stimuli, was used to determine the effect of stimulus level in an unintelligible, speech-like stimulus across the range from soft to loud speech. Cortical activity in the left hemisphere was recorded.
Results: Cortical activation in the left superior temporal gyrus correlated positively with stimulus level in both NH and CI listeners, with an additional correlation between cortical activity and perceived loudness in the CI group. These results are consistent with the literature and with our hypothesis.
Conclusions: These results support the potential of fNIRS to examine auditory stimulus-level effects at the group level and underscore the importance of controlling for stimulus level and loudness in speech recognition studies. Further research is needed to better understand cortical activation patterns for speech recognition as a function of both stimulus presentation level and perceived loudness.
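Signal-correlated noise of the kind described in the Methods can be approximated by giving broadband noise the long-term spectrum and the low-pass-filtered temporal envelope of a speech recording. Below is a sketch of one common recipe; the sampling rate, 30 Hz envelope cutoff, and demo signal are illustrative assumptions, not the study's actual stimulus-generation settings.

```python
# Sketch: signal-correlated noise (SCN) = speech-shaped noise modulated by
# the speech temporal envelope.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def signal_correlated_noise(speech, fs, env_cutoff_hz=30.0):
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(speech))
    # Impose the speech long-term magnitude spectrum on the noise phases.
    shaped = np.fft.irfft(np.abs(np.fft.rfft(speech)) *
                          np.exp(1j * np.angle(np.fft.rfft(noise))),
                          n=len(speech))
    # Extract and smooth the speech envelope, then modulate the noise with it.
    b, a = butter(4, env_cutoff_hz / (fs / 2), btype="low")
    envelope = filtfilt(b, a, np.abs(hilbert(speech)))
    scn = shaped * np.clip(envelope, 0.0, None)
    return scn / np.max(np.abs(scn))            # normalize peak amplitude

fs = 16_000
t = np.arange(fs) / fs                          # 1 s demo "speech" signal
demo_speech = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
scn = signal_correlated_noise(demo_speech, fs)
```
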
Affiliations
- Sterling W Sheffield: Department of Speech, Language, and Hearing Science, University of Florida, 1225 Center Drive, Room 2130, Gainesville, FL 32160, USA
- Eric Larson: Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA
- Iliza M Butera: Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Andrea DeFreese: Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN, USA
- Baxter P Rogers: Department of Radiology & Radiological Sciences, Vanderbilt University, Nashville, TN, USA
- Mark T Wallace: Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Adrian K C Lee: Institute for Learning & Brain Sciences and Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Rene H Gifford: Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN, USA

5. Agyeman K, McCarty T, Multani H, Mattingly K, Koziar K, Chu J, Liu C, Kokkoni E, Christopoulos V. Task-based functional neuroimaging in infants: a systematic review. Front Neurosci 2023;17:1233990. [PMID: 37655006; PMCID: PMC10466897; DOI: 10.3389/fnins.2023.1233990]
Abstract
Background: Infancy is characterized by rapid neurological transformations leading to the consolidation of lifelong functional capabilities. Studying the infant brain is crucial for understanding how these mechanisms develop during this sensitive period. We review the neuroimaging modalities used with infants, specifically in stimulus-induced activity paradigms, for the unique opportunity the latter provide for assessing brain function.
Methods: We conducted a systematic review of literature published between 1977 and 2021 via a comprehensive search of four major databases. Standardized appraisal tools and inclusion/exclusion criteria were set according to the PRISMA guidelines.
Results: Two hundred and thirteen papers met the criteria of the review process. The results show clear evidence of overall cumulative growth in the number of infant functional neuroimaging studies, with electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) being the most utilized and fastest growing modalities with behaving infants. However, high exclusion rates stemming from technical limitations have left motor control studies scarce (about 6%) in this population.
Conclusion: Although the use of functional neuroimaging modalities with infants is increasing, there are impediments to the effective adoption of existing technologies with this population. Developing new imaging modalities and experimental designs to monitor brain activity in awake and behaving infants is vital.
Affiliations
- Kofi Agyeman: Department of Bioengineering, University of California, Riverside, Riverside, CA, United States
- Tristan McCarty: Department of Bioengineering, University of California, Riverside, Riverside, CA, United States
- Harpreet Multani: Department of Bioengineering, University of California, Riverside, Riverside, CA, United States
- Kamryn Mattingly: Neuroscience Graduate Program, University of California, Riverside, Riverside, CA, United States
- Katherine Koziar: Orbach Science Library, University of California, Riverside, Riverside, CA, United States
- Jason Chu: Division of Neurosurgery, Children's Hospital Los Angeles, Los Angeles, CA, United States; Department of Neurological Surgery, University of Southern California, Los Angeles, CA, United States
- Charles Liu: USC Neurorestoration Center, University of Southern California, Los Angeles, CA, United States; Department of Neurological Surgery, University of Southern California, Los Angeles, CA, United States
- Elena Kokkoni: Department of Bioengineering, University of California, Riverside, Riverside, CA, United States
- Vassilios Christopoulos: Department of Bioengineering, University of California, Riverside, Riverside, CA, United States; Neuroscience Graduate Program, University of California, Riverside, Riverside, CA, United States; Department of Neurological Surgery, University of Southern California, Los Angeles, CA, United States

6. Shader MJ, Luke R, McKay CM. Contralateral dominance to speech in the adult auditory cortex immediately after cochlear implantation. iScience 2022;25:104737. [PMID: 35938045; PMCID: PMC9352526; DOI: 10.1016/j.isci.2022.104737]
Abstract
Sensory deprivation causes structural and functional changes in the human brain. Cochlear implantation delivers an immediate reintroduction of auditory sensory information. Previous reports have indicated that over a year is required for the brain to reestablish canonical cortical processing patterns after the reintroduction of auditory stimulation. We used functional near-infrared spectroscopy (fNIRS) to investigate brain activity evoked by natural speech stimuli directly after cochlear implantation. We presented 12 cochlear implant recipients, each of whom had a minimum of 12 months of auditory deprivation, with unilateral auditory- and visual-speech stimuli. Regardless of the side of implantation, canonical responses were elicited primarily contralateral to the side of stimulation as early as one hour after device activation. These data indicate that auditory pathway connections are sustained during periods of sensory deprivation in adults, and that typical cortical lateralization is observed immediately following the reintroduction of auditory sensory input.
Affiliations
- Maureen J Shader: Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907, USA; Department of Medical Bionics, The University of Melbourne, Parkville, VIC 3010, Australia
- Robert Luke: Bionics Institute, 384-388 Albert St, East Melbourne, VIC 3002, Australia; Macquarie Hearing, Department of Linguistics, Faculty of Medicine, Health and Human Sciences, Macquarie University, NSW 2109, Australia
- Colette M McKay: Bionics Institute, 384-388 Albert St, East Melbourne, VIC 3002, Australia; Department of Medical Bionics, The University of Melbourne, Parkville, VIC 3010, Australia

7. Elin K, Malyutina S, Bronov O, Stupina E, Marinets A, Zhuravleva A, Dragoy O. A New Functional Magnetic Resonance Imaging Localizer for Preoperative Language Mapping Using a Sentence Completion Task: Validity, Choice of Baseline Condition, and Test–Retest Reliability. Front Hum Neurosci 2022;16:791577. [PMID: 35431846; PMCID: PMC9006995; DOI: 10.3389/fnhum.2022.791577]
Abstract
To avoid post-neurosurgical language deficits, intraoperative mapping of the language function in the brain can be complemented with preoperative mapping using functional magnetic resonance imaging (fMRI). The validity of an fMRI "language localizer" paradigm crucially depends on the choice of an optimal language task and baseline condition. This study presents a new fMRI "language localizer" in Russian using overt sentence completion, a task that comprehensively engages the language function by involving both production and comprehension at the word and sentence levels. The paradigm was validated in 18 neurologically healthy volunteers who participated in two scanning sessions, allowing estimation of test–retest reliability. For the first time, two baseline conditions for the sentence completion task were compared. At the group level, the paradigm significantly activated both anterior and posterior language-related regions. Individual-level analysis showed that activation was elicited most consistently in the inferior frontal regions, followed by posterior temporal regions and the angular gyrus. Test–retest reliability of activation location, as measured by Dice coefficients, was moderate and thus comparable to previous studies. Test–retest reliability was higher in the frontal than in the temporo-parietal region and with the most liberal statistical thresholding compared to two more conservative thresholding methods. As expected, lateralization indices were left-hemispheric, with greater lateralization in the frontal than in the temporo-parietal region, and showed moderate test–retest reliability. Finally, the pseudoword baseline elicited more extensive and more reliable activation, although the syllable baseline appears more feasible for future clinical use. Overall, the study demonstrated the validity and reliability of the sentence completion task for mapping the language function in the brain. The paradigm needs further validation in a clinical sample of neurosurgical patients. Additionally, the study contributes to general evidence on the test–retest reliability of fMRI.
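One of the summary measures reported here is easy to make concrete: a laterality index (LI) computed from suprathreshold voxel counts in homologous left- and right-hemisphere regions. The sketch below uses that common definition on simulated maps; the threshold, region choice, and data are assumptions, and the authors' exact statistics may differ.

```python
# Sketch: laterality index from suprathreshold voxel counts.
import numpy as np

def laterality_index(left_map, right_map, threshold):
    """LI = (L - R) / (L + R) over suprathreshold voxel counts."""
    l = int(np.sum(left_map > threshold))
    r = int(np.sum(right_map > threshold))
    return (l - r) / (l + r) if (l + r) else float("nan")

rng = np.random.default_rng(2)
left = rng.normal(1.0, 1.0, 5_000)    # simulated frontal ROI, left hemisphere
right = rng.normal(0.2, 1.0, 5_000)   # simulated homologous right-hemisphere ROI

li = laterality_index(left, right, threshold=2.0)
print(f"LI = {li:+.2f}  (> 0 indicates left-hemisphere dominance)")
# Session-to-session overlap of suprathreshold masks can then be scored with a
# Dice coefficient, as in the sketch under reference 2 above.
```
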
Affiliations
- Kirill Elin: Center for Language and Brain, HSE University, Moscow, Russia
- Svetlana Malyutina (corresponding author): Center for Language and Brain, HSE University, Moscow, Russia
- Oleg Bronov: Department of Radiology, National Medical and Surgical Center Named After N.I. Pirogov, Moscow, Russia
- Aleksei Marinets: Department of Radiology, National Medical and Surgical Center Named After N.I. Pirogov, Moscow, Russia
- Anna Zhuravleva: Center for Language and Brain, HSE University, Moscow, Russia
- Olga Dragoy: Center for Language and Brain, HSE University, Moscow, Russia; Institute of Linguistics, Russian Academy of Sciences, Moscow, Russia

8. Agmon G, Yahav PHS, Ben-Shachar M, Golumbic EZ. Attention to Speech: Mapping Distributed and Selective Attention Systems. Cereb Cortex 2021;32:3763-3776. [PMID: 34875678; DOI: 10.1093/cercor/bhab446]
Abstract
When faced with situations where many people talk at once, individuals can employ different listening strategies to deal with the cacophony of speech sounds and to achieve different goals. In this fMRI study, we investigated how the pattern of neural activity is affected by the type of attention applied to speech in a simulated "cocktail party." Specifically, we compared brain activation patterns when listeners "attended selectively" to only one speaker and ignored all others, versus when they "distributed their attention" and followed several concurrent speakers. Conjunction analysis revealed a highly overlapping network of regions activated for both types of attention, including auditory association cortex (bilateral STG/STS) and frontoparietal regions related to speech processing and attention (bilateral IFG/insula, right MFG, left IPS). Activity within nodes of this network, though, was modulated by the type of attention required as well as the number of competing speakers. Auditory and speech-processing regions exhibited higher activity during distributed attention, whereas frontoparietal regions were activated more strongly during selective attention. These results suggest a common "attention to speech" network, which provides the computational infrastructure to deal effectively with multi-speaker input, but with sufficient flexibility to implement different prioritization strategies and to adapt to different listener goals.
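The conjunction analysis mentioned in this abstract is typically implemented as a minimum-statistic test: a voxel counts as shared only if it exceeds threshold in both contrasts. A minimal sketch with simulated t-maps follows; the threshold and data are illustrative assumptions, not the study's values.

```python
# Sketch: minimum-statistic conjunction over two attention contrasts.
import numpy as np

def conjunction_mask(t_selective, t_distributed, t_crit):
    """Voxels significant in both maps (min-statistic conjunction)."""
    return np.minimum(t_selective, t_distributed) > t_crit

rng = np.random.default_rng(3)
t_sel = rng.normal(1.0, 1.0, 8_000)    # simulated selective-attention t-map
t_dist = rng.normal(1.0, 1.0, 8_000)   # simulated distributed-attention t-map

shared = conjunction_mask(t_sel, t_dist, t_crit=3.1)
print(f"{shared.sum()} voxels active under both attention modes")
```
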
Affiliations
- Galit Agmon: Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan 5290002, Israel
- Paz Har-Shai Yahav: Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan 5290002, Israel
- Michal Ben-Shachar: Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan 5290002, Israel; Department of English Literature and Linguistics, Bar-Ilan University, Ramat-Gan 5290002, Israel
- Elana Zion Golumbic: Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan 5290002, Israel

9. Shader MJ, Luke R, Gouailhardou N, McKay CM. The use of broad vs restricted regions of interest in functional near-infrared spectroscopy for measuring cortical activation to auditory-only and visual-only speech. Hear Res 2021;406:108256. [PMID: 34051607; DOI: 10.1016/j.heares.2021.108256]
Abstract
As an alternative to fMRI, functional near-infrared spectroscopy (fNIRS) is a relatively new tool for observing cortical activation. However, its spatial resolution is reduced compared to fMRI, and the exact locations of fNIRS optodes and the underlying anatomy are often not known. The aim of this study was to explore the location and extent of specific regions of interest that are sensitive to detecting cortical activation using fNIRS in response to auditory-only and visual-only connected speech. Two approaches to a priori region-of-interest selection were explored. First, broad regions corresponding to the auditory cortex and occipital lobe were analysed. Next, the fNIRS Optode Location Decider (fOLD) tool was used to divide the auditory and visual regions into two subregions corresponding to distinct anatomical structures. The Auditory-A and -B regions corresponded to Heschl's gyrus and planum temporale, respectively. The Visual-A region corresponded to the superior occipital gyrus and the cuneus, and the Visual-B region corresponded to the middle occipital gyrus. The experimental stimulus consisted of a connected speech signal segmented into 12.5-s blocks and was presented in either an auditory-only or visual-only condition. Group-level results for eight normal-hearing adult participants, averaged over the broad regions of interest, revealed significant auditory-evoked activation for both the left and right broad auditory regions of interest. No significant activity was observed for any other broad region of interest in response to any stimulus condition. When divided into subregions, there was a significant positive auditory-evoked response in the left and right Auditory-A regions, suggesting activation near the primary auditory cortex in response to auditory-only speech. There was a significant positive visual-evoked response in the Visual-B region, suggesting middle occipital gyrus activation in response to visual-only speech. In the Visual-A region, however, there was a significant negative visual-evoked response, indicating a significant decrease in oxygenated hemoglobin in the superior occipital gyrus and the cuneus in response to visual-only speech. Distinct response characteristics, either positive or negative, in adjacent subregions within the temporal and occipital lobes were fairly consistent at the individual level. Results suggest that temporal regions near Heschl's gyrus may be the most advantageous location in adults for identifying hemodynamic responses to complex auditory speech signals using fNIRS. In the occipital lobe, regions corresponding to the facial processing pathway may prove advantageous for measuring positive responses to visual speech using fNIRS.
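Computationally, the broad-vs-restricted ROI comparison at the heart of this study comes down to how fNIRS channels are grouped before averaging. The sketch below illustrates that step with invented channel names and response amplitudes; in practice the groupings would come from anatomical targeting tools such as fOLD, and these numbers are not the study's data.

```python
# Sketch: averaging per-channel fNIRS response amplitudes into broad vs
# restricted regions of interest before testing for activation.
import numpy as np

# Hypothetical per-channel HbO response amplitudes (source-detector pairs).
beta = {"S1-D1": 0.8, "S1-D2": 0.6, "S2-D2": 0.7, "S3-D3": -0.1, "S3-D4": 0.0}

rois = {
    "auditory_broad": ["S1-D1", "S1-D2", "S2-D2"],  # whole auditory region
    "auditory_A":     ["S1-D1"],                    # e.g., near Heschl's gyrus
    "auditory_B":     ["S1-D2", "S2-D2"],           # e.g., planum temporale
    "visual_broad":   ["S3-D3", "S3-D4"],
}

for name, channels in rois.items():
    roi_mean = np.mean([beta[ch] for ch in channels])
    print(f"{name:15s} mean HbO beta = {roi_mean:+.2f}")
```
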
Affiliations
- Maureen J Shader: Bionics Institute, 384-388 Albert Street, East Melbourne, Victoria 3002, Australia; Department of Medical Bionics, The University of Melbourne, Grattan Street, Parkville, Victoria 3010, Australia
- Robert Luke: Bionics Institute, 384-388 Albert Street, East Melbourne, Victoria 3002, Australia; Macquarie Hearing, Department of Linguistics, Faculty of Medicine, Health and Human Sciences, Macquarie University, 16 University Avenue, New South Wales 2109, Australia
- Colette M McKay: Bionics Institute, 384-388 Albert Street, East Melbourne, Victoria 3002, Australia; Department of Medical Bionics, The University of Melbourne, Grattan Street, Parkville, Victoria 3010, Australia

10. Luke R, Larson E, Shader MJ, Innes-Brown H, Van Yper L, Lee AKC, Sowman PF, McAlpine D. Analysis methods for measuring passive auditory fNIRS responses generated by a block-design paradigm. Neurophotonics 2021;8:025008. [PMID: 34036117; PMCID: PMC8140612; DOI: 10.1117/1.nph.8.2.025008]
Abstract
Significance: Functional near-infrared spectroscopy (fNIRS) is an increasingly popular tool in auditory research, but the range of analysis procedures employed across studies may complicate the interpretation of data.
Aim: We aim to assess the impact of different analysis procedures on the morphology, detection, and lateralization of auditory responses in fNIRS. Specifically, we determine whether averaging or generalized linear model (GLM)-based analysis generates different experimental conclusions when applied to a block-design protocol. The impact of GLM parameter selection on detecting auditory-evoked responses was also quantified.
Approach: Seventeen listeners were exposed to three commonly employed auditory stimuli: noise, speech, and silence. A block design, comprising sounds of 5 s duration and 10 to 20 s silent intervals, was employed.
Results: Both analysis procedures generated similar response morphologies and amplitude estimates, and both indicated that responses to speech were significantly greater than to noise or silence. Neither approach indicated a significant effect of brain hemisphere on responses to speech. Methods to correct for systemic hemodynamic responses using short channels improved detection at the individual level.
Conclusions: Consistent with theoretical considerations, simulations, and other experimental domains, GLM and averaging analyses generate the same group-level experimental conclusions. We release this dataset publicly for use in future development and optimization of algorithms.
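The two analysis routes compared here can be sketched on simulated data: epoch averaging across blocks versus a GLM whose regressor is the block design convolved with a canonical-style hemodynamic response function. All parameters below (sampling rate, block timing, HRF shape, noise level) are illustrative assumptions, not the study's settings.

```python
# Sketch: block averaging vs GLM estimation on a simulated fNIRS channel.
import numpy as np

fs, n = 1.0, 600                                  # 1 Hz sampling, 10 min run
onsets, dur = np.arange(30, 570, 60), 5           # 5 s blocks, one per minute
boxcar = np.zeros(n)
for o in onsets:
    boxcar[o:o + dur] = 1.0

t = np.arange(0, 30, 1 / fs)                      # canonical-style double-gamma HRF
hrf = t**5 * np.exp(-t) / 120 - t**15 * np.exp(-t) / (6 * 1.3077e12)
regressor = np.convolve(boxcar, hrf)[:n]

rng = np.random.default_rng(4)
signal = 2.0 * regressor + rng.normal(0.0, 1.0, n)  # simulated HbO time series

# (i) Averaging: mean epoch across blocks (5 s pre- to 25 s post-onset).
epochs = np.stack([signal[o - 5:o + 25] for o in onsets])
avg_response = epochs.mean(axis=0)

# (ii) GLM: least-squares amplitude for the HRF-convolved regressor.
X = np.column_stack([regressor, np.ones(n)])      # design matrix with intercept
beta = np.linalg.lstsq(X, signal, rcond=None)[0]
print(f"GLM amplitude estimate: {beta[0]:.2f} (true value 2.0)")
```
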
Affiliations
- Robert Luke: Macquarie University Hearing and Department of Linguistics, Australian Hearing Hub, Macquarie University, Sydney, New South Wales, Australia; The Bionics Institute, Melbourne, Victoria, Australia
- Eric Larson: Institute for Learning & Brain Sciences, University of Washington, Seattle, Washington, United States
- Maureen J Shader: The Bionics Institute, Melbourne, Victoria, Australia; Department of Medical Bionics, The University of Melbourne, Melbourne, Victoria, Australia
- Hamish Innes-Brown: Department of Medical Bionics, The University of Melbourne, Melbourne, Victoria, Australia; Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Lindsey Van Yper: Macquarie University Hearing and Department of Linguistics, Australian Hearing Hub, Macquarie University, Sydney, New South Wales, Australia
- Adrian K C Lee: Institute for Learning & Brain Sciences and Department of Speech & Hearing Sciences, University of Washington, Seattle, Washington, United States
- Paul F Sowman: Department of Cognitive Science, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, New South Wales, Australia
- David McAlpine: Macquarie University Hearing and Department of Linguistics, Australian Hearing Hub, Macquarie University, Sydney, New South Wales, Australia

11. Mushtaq F, Wiggins IM, Kitterick PT, Anderson CA, Hartley DEH. The Benefit of Cross-Modal Reorganization on Speech Perception in Pediatric Cochlear Implant Recipients Revealed Using Functional Near-Infrared Spectroscopy. Front Hum Neurosci 2020;14:308. [PMID: 32922273; PMCID: PMC7457128; DOI: 10.3389/fnhum.2020.00308]
Abstract
Cochlear implants (CIs) are the most successful treatment for severe-to-profound deafness in children. However, speech outcomes with a CI often lag behind those of normally-hearing children. Some authors have attributed these deficits to the takeover of the auditory temporal cortex by vision following deafness, which has prompted some clinicians to discourage the rehabilitation of pediatric CI recipients using visual speech. We studied this cross-modal activity in the temporal cortex, along with responses to auditory speech and non-speech stimuli, in experienced CI users and normally-hearing controls of school-age, using functional near-infrared spectroscopy. Strikingly, CI users displayed significantly greater cortical responses to visual speech, compared with controls. Importantly, in the same regions, the processing of auditory speech, compared with non-speech stimuli, did not significantly differ between the groups. This suggests that visual and auditory speech are processed synergistically in the temporal cortex of children with CIs, and they should be encouraged, rather than discouraged, to use visual speech.
Affiliations
- Faizah Mushtaq: National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, United Kingdom; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Ian M Wiggins: National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, United Kingdom; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Pádraig T Kitterick: National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, United Kingdom; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Carly A Anderson: Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Douglas E H Hartley: National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, United Kingdom; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom; Nottingham University Hospitals NHS Trust, Nottingham, United Kingdom

12. Honbolygó F, Kóbor A, Hermann P, Kettinger ÁO, Vidnyánszky Z, Kovács G, Csépe V. Expectations about word stress modulate neural activity in speech-sensitive cortical areas. Neuropsychologia 2020;143:107467. [PMID: 32305299; DOI: 10.1016/j.neuropsychologia.2020.107467]
Abstract
A recent dual-stream model of language processing proposed that the postero-dorsal stream performs predictive sequential processing of linguistic information via hierarchically organized internal models. However, it remains unexplored whether the prosodic segmentation of linguistic information involves predictive processes. Here, we addressed this question by investigating the processing of word stress, a major component of speech segmentation, using probabilistic repetition suppression (RS) modulation as a marker of predictive processing. In an event-related acoustic fMRI RS paradigm, we presented pairs of pseudowords having the same (Rep) or different (Alt) stress patterns, in blocks with varying Rep and Alt trial probabilities. We found that the BOLD signal was significantly lower for Rep than for Alt trials, indicating RS in the posterior and middle superior temporal gyrus (STG) bilaterally and in the anterior STG in the left hemisphere. Importantly, the magnitude of RS was modulated by repetition probability in the posterior and middle STG. These results reveal predictive processing of word stress in STG areas and raise the possibility that word stress processing is related to the dorsal "where" auditory stream.
Affiliations
- Ferenc Honbolygó: Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary; Institute of Psychology, Eötvös Loránd University, Budapest, Hungary
- Andrea Kóbor: Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary
- Petra Hermann: Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary
- Ádám Ottó Kettinger: Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary; Department of Nuclear Techniques, Budapest University of Technology and Economics, Budapest, Hungary
- Zoltán Vidnyánszky: Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary
- Gyula Kovács: Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary; Department of Biological Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Jena, Germany
- Valéria Csépe: Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary; Faculty of Modern Philology and Social Sciences, University of Pannonia, Veszprém, Hungary

13. Charbonnier L, Raemaekers MAH, Cornelisse PA, Verwoert M, Braun KPJ, Ramsey NF, Vansteensel MJ. A Functional Magnetic Resonance Imaging Approach for Language Laterality Assessment in Young Children. Front Pediatr 2020;8:587593. [PMID: 33313027; PMCID: PMC7707083; DOI: 10.3389/fped.2020.587593]
Abstract
Functional magnetic resonance imaging (fMRI) is a viable technique for determining hemispheric dominance of language function, but high-quality fMRI images are difficult to acquire in young children. Here we aimed to develop and validate an fMRI approach to reliably determine hemispheric language dominance in young children. We designed two new tasks (story, SR; letter-picture matching, LPM) intended to match the interests and the level of cognitive development of young children. We studied 32 healthy children (6-10 years old, median age 8.7 years) and seven children with epilepsy (7-11 years old, median age 8.6 years) and compared the lateralization index of the new tasks with that of a well-validated task (verb generation, VG) and with clinical measures of hemispheric language dominance. A conclusive assessment of hemispheric dominance (lateralization index ≤-0.2 or ≥0.2) was obtained for 94% of the healthy participants who performed both new tasks. At least one new task provided a conclusive language laterality assessment in six of the seven participants with epilepsy. The new tasks may contribute to assessing language laterality in young and preliterate children and may benefit children who are scheduled for surgical treatment of disorders such as epilepsy.
Affiliations
- Lisette Charbonnier: Department of Neurology and Neurosurgery, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Mathijs A H Raemaekers: Department of Neurology and Neurosurgery, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Philippe A Cornelisse: Department of Neurology and Neurosurgery, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Maxime Verwoert: Department of Neurology and Neurosurgery, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Kees P J Braun: Department of Child Neurology, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Nick F Ramsey: Department of Neurology and Neurosurgery, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Mariska J Vansteensel: Department of Neurology and Neurosurgery, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands

14. Mushtaq F, Wiggins IM, Kitterick PT, Anderson CA, Hartley DEH. Evaluating time-reversed speech and signal-correlated noise as auditory baselines for isolating speech-specific processing using fNIRS. PLoS One 2019;14:e0219927. [PMID: 31314802; PMCID: PMC6636749; DOI: 10.1371/journal.pone.0219927]
Abstract
Evidence from well-established imaging techniques, such as functional magnetic resonance imaging and electrocorticography, suggests that speech-specific cortical responses can be functionally localised by contrasting speech responses with an auditory baseline stimulus, such as time-reversed (TR) speech or signal-correlated noise (SCN). Furthermore, these studies suggest that SCN is a more effective baseline than TR speech. Functional near-infrared spectroscopy (fNIRS) is a relatively novel, optically based imaging technique with features that make it ideal for investigating speech and language function in paediatric populations. However, it is not known which baseline is best at isolating speech activation when imaging using fNIRS. We presented normal speech, TR speech and SCN in an event-related format to 25 normally-hearing children aged 6-12 years. Brain activity was measured across frontal and temporal brain areas in both cerebral hemispheres whilst children passively listened to the auditory stimuli. In all three conditions, significant activation was observed bilaterally in channels targeting superior temporal regions when stimuli were contrasted against silence. Unlike previous findings in infants, we found no significant activation in the region of interest over superior temporal cortex in school-age children when normal speech was contrasted against either TR speech or SCN. Although no statistically significant lateralisation effects were observed in the region of interest, a left-sided channel targeting posterior temporal regions showed significant activity in response to normal speech only and was investigated further. Significantly greater activation was observed in this left posterior channel compared to the corresponding channel on the right side under the normal speech vs SCN contrast only. Our findings suggest that neither TR speech nor SCN is a suitable auditory baseline for functionally isolating speech-specific processing in an experimental setup involving fNIRS with 6-12-year-old children.
Affiliations
- Faizah Mushtaq: National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, United Kingdom; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Ian M Wiggins: National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, United Kingdom; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Pádraig T Kitterick: National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, United Kingdom; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Carly A Anderson: National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, United Kingdom; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Douglas E H Hartley: National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, United Kingdom; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom; Nottingham University Hospitals NHS Trust, Queens Medical Centre, Nottingham, United Kingdom

15. Blank IA, Kiran S, Fedorenko E. Can neuroimaging help aphasia researchers? Addressing generalizability, variability, and interpretability. Cogn Neuropsychol 2017;34:377-393. [PMID: 29188746; PMCID: PMC6157596; DOI: 10.1080/02643294.2017.1402756]
Abstract
Neuroimaging studies of individuals with brain damage seek to link brain structure and activity to cognitive impairments, spontaneous recovery, or treatment outcomes. To date, such studies have relied on the critical assumption that a given anatomical landmark corresponds to the same functional unit(s) across individuals. However, this assumption is fallacious even across neurologically healthy individuals. Here, we discuss the severe implications of this issue and argue for an approach that circumvents it, whereby: (i) functional brain regions are defined separately for each subject using fMRI, allowing for inter-individual variability in their precise location; (ii) the response profiles of these subject-specific regions are characterized using various other tasks; and (iii) the results are averaged across individuals, guaranteeing generalizability. This method harnesses the complementary strengths of single-case studies and group studies, and it eliminates the need for post hoc "reverse inference" from anatomical landmarks back to cognitive operations, thus improving data interpretability.
Affiliations
- Idan A Blank: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Swathi Kiran: Aphasia Research Laboratory, Department of Speech, Language and Hearing Sciences, Sargent College, Boston University, Boston, MA, USA
- Evelina Fedorenko: Department of Psychiatry, Massachusetts General Hospital, Charlestown, MA, USA; Department of Psychiatry, Harvard Medical School, Boston, MA, USA

16. Venezia JH, Vaden KI, Rong F, Maddox D, Saberi K, Hickok G. Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus. Front Hum Neurosci 2017;11:174. [PMID: 28439236; PMCID: PMC5383672; DOI: 10.3389/fnhum.2017.00174]
Abstract
The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual and audiovisual stimuli produced the largest BOLD effects in anterior, posterior and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual, to multisensory, to auditory moving posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from visual and auditory modalities.
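The post hoc finding that a posterior STS subregion "distinguished between visual speech and nonspeech based on multi-voxel activation patterns" implies a cross-validated pattern classifier. A minimal sketch of that style of analysis on simulated trial patterns follows; the data, labels, and classifier choice are assumptions, not the authors' pipeline.

```python
# Sketch: cross-validated multi-voxel pattern classification (MVPA).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_voxels = 80, 150
labels = np.repeat([0, 1], n_trials // 2)      # 0 = nonspeech, 1 = speech
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :20] += 0.4              # weak speech-selective signal

acc = cross_val_score(LogisticRegression(max_iter=1000), patterns, labels, cv=5)
print(f"Cross-validated decoding accuracy: {acc.mean():.2f} (chance = 0.50)")
```
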
Affiliations
- Kenneth I Vaden: Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, SC, USA
- Feng Rong: Department of Cognitive Sciences, Center for Cognitive Neuroscience and Engineering, University of California, Irvine, CA, USA
- Dale Maddox: Department of Cognitive Sciences, Center for Cognitive Neuroscience and Engineering, University of California, Irvine, CA, USA
- Kourosh Saberi: Department of Cognitive Sciences, Center for Cognitive Neuroscience and Engineering, University of California, Irvine, CA, USA
- Gregory Hickok: Department of Cognitive Sciences, Center for Cognitive Neuroscience and Engineering, University of California, Irvine, CA, USA

17. Scott TL, Gallée J, Fedorenko E. A new fun and robust version of an fMRI localizer for the frontotemporal language system. Cogn Neurosci 2016;8:167-176. [PMID: 27386919; DOI: 10.1080/17588928.2016.1201466]
Abstract
A set of brain regions in the frontal, temporal, and parietal lobes supports high-level linguistic processing. These regions can be reliably identified in individual subjects using fMRI, by contrasting neural responses to meaningful and structured language stimuli vs. stimuli matched for low-level properties but lacking meaning and/or structure. We here present a novel version of a language 'localizer,' which should be suitable for diverse populations including children and/or clinical populations who may have difficulty with reading or cognitively demanding tasks. In particular, we contrast responses to auditorily presented excerpts from engaging interviews or stories, and acoustically degraded versions of these materials. This language localizer is appealing because it uses (a) naturalistic and engaging linguistic materials, (b) auditory presentation,
Affiliations
- Terri L Scott: Graduate Program for Neuroscience, Boston University, Boston, MA, USA
- Jeanne Gallée: Department of Cognitive and Linguistic Sciences, Wellesley College, Wellesley, MA, USA
- Evelina Fedorenko: Department of Psychiatry, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA

18. Obrig H, Mentzel J, Rossi S. Universal and language-specific sublexical cues in speech perception: a novel electroencephalography-lesion approach. Brain 2016;139:1800-1816. [DOI: 10.1093/brain/aww077]

19. Halag-Milo T, Stoppelman N, Kronfeld-Duenias V, Civier O, Amir O, Ezrati-Vinacour R, Ben-Shachar M. Beyond production: Brain responses during speech perception in adults who stutter. Neuroimage Clin 2016;11:328-338. [PMID: 27298762; PMCID: PMC4893016; DOI: 10.1016/j.nicl.2016.02.017]
Abstract
Developmental stuttering is a speech disorder that disrupts the ability to produce speech fluently. While stuttering is typically diagnosed based on one's behavior during speech production, some models suggest that it involves more central representations of language, and thus may affect language perception as well. Here we tested the hypothesis that developmental stuttering implicates neural systems involved in language perception, in a task that manipulates comprehensibility without an overt speech production component. We used functional magnetic resonance imaging to measure blood oxygenation level dependent (BOLD) signals in adults who do and do not stutter, while they were engaged in an incidental speech perception task. We found that speech perception evokes stronger activation in adults who stutter (AWS) compared to controls, specifically in the right inferior frontal gyrus (RIFG) and in left Heschl's gyrus (LHG). Significant differences were additionally found in the lateralization of response in the inferior frontal cortex: AWS showed bilateral inferior frontal activity, while controls showed a left lateralized pattern of activation. These findings suggest that developmental stuttering is associated with an imbalanced neural network for speech processing, which is not limited to speech production, but also affects cortical responses during speech perception.
Affiliations
- Tali Halag-Milo: The Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, Israel; The Cognitive Science Program, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Nadav Stoppelman: The Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, Israel
- Vered Kronfeld-Duenias: The Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, Israel
- Oren Civier: The Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, Israel
- Ofer Amir: The Department of Communication Disorders, Sackler Faculty of Medicine, Tel-Aviv University, Tel-Aviv, Israel
- Ruth Ezrati-Vinacour: The Department of Communication Disorders, Sackler Faculty of Medicine, Tel-Aviv University, Tel-Aviv, Israel
- Michal Ben-Shachar: The Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, Israel; Department of English Literature and Linguistics, Bar-Ilan University, Ramat-Gan, Israel

20. Bernal B, Ardila A. From Hearing Sounds to Recognizing Phonemes: Primary Auditory Cortex is a Truly Perceptual Language Area. AIMS Neurosci 2016. [DOI: 10.3934/neuroscience.2016.4.454]

21. Van der Haegen L, Acke F, Vingerhoets G, Dhooge I, De Leenheer E, Cai Q, Brysbaert M. Laterality and unilateral deafness: Patients with congenital right ear deafness do not develop atypical language dominance. Neuropsychologia 2015;93:482-492. [PMID: 26522620; DOI: 10.1016/j.neuropsychologia.2015.10.032]
Abstract
Auditory speech perception, speech production and reading lateralize to the left hemisphere in the majority of healthy right-handers. In this study, we investigated to what extent sensory input underlies the side of language dominance. We measured the lateralization of the three core subprocesses of language in patients who had profound hearing loss in the right ear from birth and in matched control subjects. They took part in a semantic decision listening task involving speech and sound stimuli (auditory perception), a word generation task (speech production) and a passive reading task (reading). The results show that a lack of sensory auditory input on the right side, which is strongly connected to the contralateral left hemisphere, does not lead to atypical lateralization of speech perception. Speech production and reading were also typically left-lateralized in all but one patient, contradicting previous small-scale studies. Other factors, such as genetic constraints, presumably overrule the role of sensory input in the development of (a)typical language lateralization.
Affiliations
- Frederic Acke: Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
- Guy Vingerhoets: Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Ingeborg Dhooge: Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
- Els De Leenheer: Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
- Qing Cai: Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, East China Normal University, Shanghai 200062, China; NYU-ECNU Institute of Brain and Cognitive Science, NYU Shanghai, Shanghai 200062, China
- Marc Brysbaert: Department of Experimental Psychology, Ghent University, Ghent, Belgium

22
|
Suarez RO, Taimouri V, Boyer K, Vega C, Rotenberg A, Madsen JR, Loddenkemper T, Duffy FH, Prabhu SP, Warfield SK. Passive fMRI mapping of language function for pediatric epilepsy surgical planning: validation using Wada, ECS, and FMAER. Epilepsy Res 2014; 108:1874-88. [PMID: 25445239 DOI: 10.1016/j.eplepsyres.2014.09.016] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2014] [Revised: 09/07/2014] [Accepted: 09/13/2014] [Indexed: 11/25/2022]
Abstract
In this study, we validated passive language fMRI protocols designed for clinical use in pediatric epilepsy surgical planning; because the protocols are passive, they require no overt participation from patients. We introduced a set of quality checks that assess the reliability of noninvasive fMRI mappings used for clinical purposes. We initially compared two fMRI language mapping paradigms, one active (requiring participation from the patient) and one passive (requiring no participation from the patient). Group-level analysis in a healthy control cohort demonstrated similar activation of the putative language centers of the brain in the inferior frontal (IFG) and temporoparietal (TPG) regions. Additionally, we showed that passive language fMRI produced more left-lateralized activation in the TPG (LI = +0.45) than the active task, with similarly robust left-lateralized IFG activation (LI = +0.24) for the passive task. We validated our recommended fMRI mapping protocols in a cohort of 15 pediatric epilepsy patients by direct comparison against the invasive clinical gold standards. We found that language-specific TPG activation by fMRI agreed to within 9.2 mm with subdural localizations by invasive functional mapping in the same patients, and that language dominance by fMRI agreed with Wada test results at 80% congruency in the TPG and 73% congruency in the IFG. Lastly, we tested the recommended passive language fMRI protocols in a cohort of very young patients and confirmed reliable language-specific activation patterns in that challenging group. We conclude that language activation maps can be reliably obtained using the proposed passive language fMRI protocols, even in very young (average age 7.5 years) or sedated pediatric epilepsy patients.
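The two validation measures used here, spatial agreement with invasive mapping and dominance congruency with Wada, reduce to simple computations once peak coordinates and dominance labels are available. A minimal sketch with hypothetical coordinates and labels, none taken from the study:

    import numpy as np

    def nearest_electrode_distance(fmri_peak_mm, electrodes_mm):
        """Distance (mm) from an fMRI activation peak to the nearest
        language-positive subdural electrode, both in the same space."""
        d = np.linalg.norm(np.asarray(electrodes_mm) - np.asarray(fmri_peak_mm), axis=1)
        return d.min()

    # One patient's TPG peak and two electrode contacts (invented coordinates)
    peak = [-52.0, -44.0, 18.0]
    electrodes = [[-48.0, -40.0, 14.0], [-60.0, -30.0, 8.0]]
    print(f"nearest contact: {nearest_electrode_distance(peak, electrodes):.1f} mm")

    # Congruency with Wada: fraction of patients with matching dominance calls
    fmri_calls = ["left", "left", "right", "left", "left"]   # hypothetical labels
    wada_calls = ["left", "left", "right", "right", "left"]
    congruency = np.mean([f == w for f, w in zip(fmri_calls, wada_calls)])
    print(f"congruency = {congruency:.0%}")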
Collapse
Affiliation(s)
- Ralph O Suarez
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA.
| | - Vahid Taimouri
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
| | - Katrina Boyer
- Department of Psychology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA; Department of Neurology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
| | - Clemente Vega
- Department of Psychology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA; Department of Neurology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
| | - Alexander Rotenberg
- Department of Neurology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
| | - Joseph R Madsen
- Department of Neurosurgery, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
| | - Tobias Loddenkemper
- Department of Neurology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
| | - Frank H Duffy
- Department of Neurology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
| | - Sanjay P Prabhu
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
| | - Simon K Warfield
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
23
|
van Leeuwen TM, Lamers MJA, Petersson KM, Gussenhoven C, Rietveld T, Poser B, Hagoort P. Phonological markers of information structure: an fMRI study. Neuropsychologia 2014; 58:64-74. [PMID: 24726334 DOI: 10.1016/j.neuropsychologia.2014.03.017] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2013] [Revised: 02/11/2014] [Accepted: 03/31/2014] [Indexed: 10/25/2022]
Abstract
In this fMRI study, we investigated the neural correlates of information structure integration during sentence comprehension in Dutch. We examined how prosodic cues (pitch accents) that signal the information status of constituents to the listener (new information) are combined with other types of information during the unification process. The difficulty of unifying the prosodic cues into the overall sentence meaning was manipulated by constructing sentences in which the pitch accent either matched (focus-accent agreement) or did not match (focus-accent disagreement) the expected focus constituents of the sentence. In the case of a mismatch, the load on unification processes increases. Our results show two anatomically distinct effects of focus-accent disagreement: one located in the posterior left inferior frontal gyrus (LIFG, BA 6/44) and one in the more anterior-ventral LIFG (BA 47/45). Our results confirm that information structure is taken into account during unification and imply an important role for the LIFG in unification processes, in line with previous fMRI studies.
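The disagreement > agreement effect reported here is a standard mass-univariate GLM contrast. A toy numpy sketch on simulated data, deliberately omitting HRF convolution, motion regressors, and the other steps a real analysis would include:

    import numpy as np

    rng = np.random.default_rng(0)
    n_scans, n_voxels = 200, 500

    # Toy block design: alternating 20-scan agree / disagree blocks
    disagree = ((np.arange(n_scans) // 20) % 2).astype(float)  # 1 = disagree block
    X = np.column_stack([disagree, np.ones(n_scans)])          # [condition, intercept]

    # Simulated BOLD data with a small disagree > agree effect in every voxel
    Y = rng.normal(size=(n_scans, n_voxels)) + 0.5 * X[:, [0]]

    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)   # OLS betas per voxel
    c = np.array([1.0, 0.0])                       # contrast: disagree - agree
    resid = Y - X @ beta
    dof = n_scans - np.linalg.matrix_rank(X)
    sigma2 = (resid ** 2).sum(axis=0) / dof        # residual variance per voxel
    se = np.sqrt(sigma2 * (c @ np.linalg.inv(X.T @ X) @ c))
    t = (c @ beta) / se                            # voxelwise t-statistics
    print(f"mean t = {t.mean():.2f}")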
Collapse
Affiliation(s)
- Tessa M van Leeuwen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands.
| | - Monique J A Lamers
- Department of Language and Communication, VU University, Amsterdam, The Netherlands; The Eargroup, Herentalsebaan 75, B-2100 Antwerp-Deurne, Belgium
| | - Karl Magnus Petersson
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
| | - Carlos Gussenhoven
- Department of Linguistics, Radboud University Nijmegen, Nijmegen, The Netherlands
| | - Toni Rietveld
- Department of Linguistics, Radboud University Nijmegen, Nijmegen, The Netherlands
| | - Benedikt Poser
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands; Erwin L. Hahn Institute for Magnetic Resonance Imaging, University Duisburg-Essen, Essen, Germany
| | - Peter Hagoort
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.
| |
Collapse
|
24
|
Kollndorfer K, Furtner J, Krajnik J, Prayer D, Schöpf V. Attention shifts the language network reflecting paradigm presentation. Front Hum Neurosci 2013; 7:809. [PMID: 24324429 PMCID: PMC3838991 DOI: 10.3389/fnhum.2013.00809] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2013] [Accepted: 11/07/2013] [Indexed: 11/13/2022] Open
Abstract
OBJECTIVES: Functional magnetic resonance imaging (fMRI) is a reliable and non-invasive method with which to localize language function in pre-surgical planning. In clinical practice, visual stimulus presentation is often difficult or impossible because of the patient's restricted language or attention abilities. Our aim was therefore to investigate modality-specific differences between visual and auditory stimulus presentation.
METHODS: Ten healthy subjects participated in an fMRI study comprising two experiments, one with visual and one with auditory stimulus presentation. In both experiments, two language paradigms used in clinical practice (one for language comprehension and one for language production) were investigated. In addition to standard data analysis by means of the general linear model (GLM), independent component analysis (ICA) was performed to obtain more detailed information on language-processing networks.
RESULTS: GLM analysis revealed modality-specific brain activation for both language paradigms for the contrast visual > auditory in the area of the intraparietal sulcus and the hippocampus, two areas related to attention and working memory. Using group ICA, a language network was detected for both paradigms independent of the stimulus presentation modality. The investigation of language lateralization revealed no significant variation. Visually presented stimuli additionally activated an attention-shift network, which was not identified for auditorily presented language.
CONCLUSION: The results of this study indicate that visually presented language stimuli additionally activate an attention-shift network. These findings provide important information for pre-surgical planning aimed at preserving reading abilities after brain surgery, thereby improving surgical outcomes. Our findings suggest that the presentation modality of language paradigms should be adapted to the individual indication.
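The group ICA step used here decomposes the fMRI data into spatially independent components whose maps can then be matched to networks such as the language or attention-shift network. A minimal sketch of spatial ICA on synthetic data with scikit-learn's FastICA; the component count and "network" layout are invented for illustration:

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n_scans, n_voxels = 120, 2000

    # Synthetic data: two spatially disjoint "networks" mixed over time
    maps = np.zeros((2, n_voxels))
    maps[0, :200] = 1.0        # stand-in for a language network
    maps[1, 1000:1200] = 1.0   # stand-in for an attention-shift network
    data = rng.normal(size=(n_scans, 2)) @ maps + 0.1 * rng.normal(size=(n_scans, n_voxels))

    # Spatial ICA: voxels are treated as samples, so components are spatial maps
    ica = FastICA(n_components=2, random_state=0)
    spatial_maps = ica.fit_transform(data.T).T    # shape: (components, voxels)
    print("recovered spatial maps:", spatial_maps.shape)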
Collapse
Affiliation(s)
- Kathrin Kollndorfer
- Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna Vienna, Austria
| | | | | | | | | |
Collapse
|