1. Coolen T, Mihai Dumitrescu A, Wens V, Bourguignon M, Rovai A, Sadeghi N, Urbain C, Goldman S, De Tiège X. Spectrotemporal cortical dynamics and semantic control during sentence completion. Clin Neurophysiol 2024;163:90-101. PMID: 38714152. DOI: 10.1016/j.clinph.2024.04.012.
Abstract
OBJECTIVE To investigate cortical oscillations during a sentence completion task (SC) using magnetoencephalography (MEG), focusing on the semantic control network (SCN), its leftward asymmetry, and the effects of semantic control load. METHODS Twenty right-handed adults underwent MEG while performing SC, consisting of low cloze (LC: multiple possible responses) and high cloze (HC: single response) stimuli. Spectrotemporal power modulations, as event-related synchronizations (ERS) and desynchronizations (ERD), were analyzed first at the whole-brain level and second in key SCN regions, the posterior middle/inferior temporal gyri (pMTG/ITG) and inferior frontal gyri (IFG), under different semantic control loads. RESULTS Three cortical response patterns emerged: early (0-200 ms) theta-band occipital ERS; intermediate (200-700 ms) semantic network alpha/beta-band ERD; late (700-3000 ms) dorsal language stream alpha/beta/gamma-band ERD. Under high semantic control load (LC), pMTG/ITG showed prolonged left-sided engagement (ERD) and right-sided inhibition (ERS). The left IFG exhibited heightened late (2500-2550 ms) beta-band ERD with increased semantic control load (LC vs. HC). CONCLUSIONS SC involves distinct cortical responses and depends on the left IFG and asymmetric engagement of the pMTG/ITG for semantic control. SIGNIFICANCE These findings support the future use of SC in neuromagnetic preoperative language mapping and in understanding the pathophysiology of language disorders in neurological conditions.
Affiliation(s)
- Tim Coolen
- Université Libre de Bruxelles (ULB), ULB Neuroscience Institute (UNI), Laboratoire de Neuroanatomie et Neuroimagerie Translationnelles (LN(2)T), Brussels, Belgium; Université Libre de Bruxelles (ULB), Hôpital Universitaire de Bruxelles (HUB), CUB Hôpital Erasme, Department of Radiology, Brussels, Belgium
- Alexandru Mihai Dumitrescu
- Université Libre de Bruxelles (ULB), ULB Neuroscience Institute (UNI), Laboratoire de Neuroanatomie et Neuroimagerie Translationnelles (LN(2)T), Brussels, Belgium
- Vincent Wens
- Université Libre de Bruxelles (ULB), ULB Neuroscience Institute (UNI), Laboratoire de Neuroanatomie et Neuroimagerie Translationnelles (LN(2)T), Brussels, Belgium
- Mathieu Bourguignon
- Université Libre de Bruxelles (ULB), ULB Neuroscience Institute (UNI), Laboratoire de Neuroanatomie et Neuroimagerie Translationnelles (LN(2)T), Brussels, Belgium; Université Libre de Bruxelles (ULB), ULB Neuroscience Institute (UNI), Laboratory of Neurophysiology and Movement Biomechanics, Brussels, Belgium
- Antonin Rovai
- Université Libre de Bruxelles (ULB), ULB Neuroscience Institute (UNI), Laboratoire de Neuroanatomie et Neuroimagerie Translationnelles (LN(2)T), Brussels, Belgium
- Niloufar Sadeghi
- Université Libre de Bruxelles (ULB), Hôpital Universitaire de Bruxelles (HUB), CUB Hôpital Erasme, Department of Radiology, Brussels, Belgium
- Charline Urbain
- Université Libre de Bruxelles (ULB), ULB Neuroscience Institute (UNI), Laboratoire de Neuroanatomie et Neuroimagerie Translationnelles (LN(2)T), Brussels, Belgium; Université Libre de Bruxelles (ULB), ULB Neuroscience Institute (UNI), Centre for Research in Cognition and Neurosciences (CRCN), Neuropsychology and Functional Neuroimaging Research Unit (UR2NF), Brussels, Belgium
- Serge Goldman
- Université Libre de Bruxelles (ULB), ULB Neuroscience Institute (UNI), Laboratoire de Neuroanatomie et Neuroimagerie Translationnelles (LN(2)T), Brussels, Belgium
- Xavier De Tiège
- Université Libre de Bruxelles (ULB), ULB Neuroscience Institute (UNI), Laboratoire de Neuroanatomie et Neuroimagerie Translationnelles (LN(2)T), Brussels, Belgium
2. Ailion A, Duong P, Maiman M, Tsuboyama M, Smith ML. Clinical recommendations for conducting pediatric functional language and memory mapping during the phase I epilepsy presurgical workup. Clin Neuropsychol 2024;38:1060-1084. PMID: 37985747. DOI: 10.1080/13854046.2023.2281708.
Abstract
Objective: Pediatric epilepsy surgery effectively controls seizures but may risk cognitive, language, or memory decline. Historically, the intracarotid anesthetic procedure (IAP, or Wada test) was pivotal for lateralizing language and memory function. However, advancements in noninvasive mapping, notably functional magnetic resonance imaging (fMRI), have transformed clinical practice, reducing the IAP's role in presurgical evaluations. Method: We conducted a critical narrative review of mapping technologies, including factors to consider in cases of discordance. Results: Neuropsychological findings suggest that if pre-surgery function remains intact and the surgery targets eloquent cortex, there is a high risk of decline. Memory and language decline are particularly pronounced after left anterior temporal lobe resection (ATL), making presurgical cognitive assessment crucial for predicting postoperative outcomes. However, the risk of functional decline is not always clear, particularly given higher rates of atypical organization in pediatric epilepsy patients and discordant findings from cognitive mapping. We found little research to date on the use of the IAP and other newer technologies for lateralization/localization in pediatric epilepsy. Based on this review, we introduce an IAP decision tree to systematically navigate discordance in IAP decisions during the epilepsy presurgical workup. Conclusions: Future research should be aimed at pediatric populations to improve the precision of functional mapping, determine which methods predict post-surgical deficits, and then create evidence-based practice guidelines to standardize mapping procedures. Explicit directives are needed for resolving conflicts between developing mapping procedures and established clinical measures. The proposed decision tree is a first step toward standardizing when to consider the IAP or invasive mapping, in coordination with the multidisciplinary epilepsy surgery team.
Affiliation(s)
- Alyssa Ailion
- Department of Psychiatry, Boston Children's Hospital, Harvard Medical School
- Department of Neurology, Boston Children's Hospital, Harvard Medical School
- Priscilla Duong
- Department of Psychiatry, Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University School of Medicine
- Moshe Maiman
- Department of Psychiatry, Boston Children's Hospital, Harvard Medical School
- Melissa Tsuboyama
- Department of Neurology, Boston Children's Hospital, Harvard Medical School
- Mary Lou Smith
- Department of Psychology, The Hospital for Sick Children, University of Toronto Mississauga
3. Mamashli F, Khan S, Hatamimajoumerd E, Jas M, Uluç I, Lankinen K, Obleser J, Friederici AD, Maess B, Ahveninen J. Characterizing directional dynamics of semantic prediction based on inter-regional temporal generalization. bioRxiv [Preprint] 2024:2024.02.13.580183. PMID: 38405823. PMCID: PMC10888763. DOI: 10.1101/2024.02.13.580183.
Abstract
The event-related potential/field component N400(m) has been widely used as a neural index of semantic prediction. It has long been hypothesized that feedback information from inferior frontal areas plays a critical role in generating the N400. However, owing to limitations in causal connectivity estimation, direct testing of this hypothesis has remained difficult. Here, magnetoencephalography (MEG) data were obtained during a classic N400 paradigm in which the semantic predictability of a fixed target noun was manipulated in simple German sentences. To estimate causality, we implemented a novel approach based on machine learning and temporal generalization to estimate the effect of the inferior frontal gyrus (IFG) on temporal areas. In this method, a support vector machine (SVM) classifier is trained on each time point of the neural activity in the IFG to classify less predicted (LP) and highly predicted (HP) nouns and then tested on all time points of superior/middle temporal sub-region activity (and vice versa, to establish spatio-temporal evidence for or against causality). The decoding accuracy was significantly above chance level when the classifier was trained on IFG activity and tested on future activity in the superior and middle temporal gyri (STG/MTG). The results present new evidence for a model of predictive speech comprehension in which predictive IFG activity is fed back to shape subsequent activity in the STG/MTG, implying a feedback mechanism in N400 generation. In combination with the strong feedforward effect we also observed from left STG/MTG to IFG, our findings provide evidence of dynamic feedback and feedforward influences between the IFG and temporal areas during N400 generation.
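The train-on-one-region, test-on-another scheme described in this abstract can be sketched as follows. This is a minimal illustration on simulated data, not the authors' pipeline: the region time courses, trial counts, and linear-SVM settings are all assumptions.

```python
# Inter-regional temporal generalization (simplified sketch):
# train an SVM at each IFG time point to separate the two conditions,
# then test it on every time point of STG/MTG activity.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_times = 60, 40                          # trials per condition, time samples
ifg = rng.standard_normal((2 * n_trials, n_times))  # simulated IFG source activity
stg = rng.standard_normal((2 * n_trials, n_times))  # simulated STG/MTG source activity
y = np.repeat([0, 1], n_trials)                     # 0 = highly predicted, 1 = less predicted

# accuracy[t_train, t_test]: trained on IFG at t_train, tested on STG/MTG at t_test
accuracy = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = SVC(kernel="linear").fit(ifg[:, [t_train]], y)
    for t_test in range(n_times):
        accuracy[t_train, t_test] = clf.score(stg[:, [t_test]], y)

# Above-chance cells with t_test > t_train would suggest IFG information
# generalizing to *future* temporal-cortex activity (the feedback direction).
print(accuracy.shape)  # (40, 40)
```

With random data the matrix hovers around chance (0.5); on real source estimates, the asymmetry of the above-chance region relative to the diagonal is what carries the directional claim.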
Affiliation(s)
- Fahimeh Mamashli
- Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129
- Sheraz Khan
- Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129
- Elaheh Hatamimajoumerd
- Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115
- Mainak Jas
- Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129
- Işıl Uluç
- Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129
- Kaisu Lankinen
- Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck 23562, Germany
- Angela D. Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Burkhard Maess
- MEG and Cortical Networks Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Jyrki Ahveninen
- Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129
4. Ji Z, Song RR, Swan AR, Angeles Quinto A, Lee RR, Huang M. Magnetoencephalography language mapping using auditory memory retrieval and silent repeating task. J Clin Neurophysiol 2024;41:148-154. PMID: 35512180. PMCID: PMC9633581. DOI: 10.1097/wnp.0000000000000947.
Abstract
PURPOSE The study aims to (1) examine the spatiotemporal map of magnetoencephalography-evoked responses during an Auditory Memory Retrieval and Silent Repeating (AMRSR) task and determine the hemispheric dominance for language, and (2) evaluate the accuracy of the AMRSR task in Wernicke and Broca area localization. METHODS In 30 patients with brain tumors and/or epilepsies, the AMRSR task was used to evoke magnetoencephalography responses. We applied the Fast VEctor-based Spatial-Temporal Analysis minimum-L1-norm source imaging method to the magnetoencephalography responses to localize the brain areas evoked by the AMRSR task. RESULTS The analysis found consistent activation in the posterior superior temporal gyrus around 300 to 500 ms and another activation in the frontal cortex (pars opercularis and/or pars triangularis) around 600 to 900 ms, which were localized to the Wernicke area (BA 22) and Broca area (BA 44 and BA 45), respectively. The language-dominant hemispheric lateralization elicited by the AMRSR task was comparable with the result from an Auditory Dichotic task given to the same patient, except that the AMRSR task was more sensitive in cases of bilateral language lateralization for identifying the Wernicke and Broca areas. CONCLUSIONS For all patients who successfully finished the AMRSR task, the analysis established accurate and robust localizations of the Broca and Wernicke areas and determined hemispheric dominance. For subjects with normal auditory function, the AMRSR paradigm showed significant promise in providing reliable assessments of cerebral language dominance and language network localization.
Affiliation(s)
- Zhengwei Ji
- Radiology Department, University of California, San Diego, California, U.S.A.
- Ryan R. Song
- Department of Molecular and Cell Biology, University of California, Berkeley, California, U.S.A.
- Ashley Robb Swan
- Radiology Department, University of California, San Diego, California, U.S.A.
- Roland R. Lee
- Radiology Department, University of California, San Diego, California, U.S.A.
- Radiology Service, San Diego VA Healthcare System, San Diego, California, U.S.A.
- Mingxiong Huang
- Radiology Department, University of California, San Diego, California, U.S.A.
- Radiology Service, San Diego VA Healthcare System, San Diego, California, U.S.A.
5. Kochi R, Osawa SI, Jin K, Ishida M, Kanno A, Iwasaki M, Suzuki K, Kawashima R, Tominaga T, Nakasato N. Language MEG predicts postoperative verbal memory change in left mesial temporal lobe epilepsy. Clin Neurophysiol 2023;156:69-75. PMID: 37890232. DOI: 10.1016/j.clinph.2023.09.010.
Abstract
OBJECTIVE To clarify whether preoperative language magnetoencephalography (MEG) predicts postoperative verbal memory (VM) changes in left mesial temporal lobe epilepsy (LMTLE). METHODS We reviewed 18 right-handed patients with LMTLE who underwent anterior temporal lobectomy or selective amygdalohippocampectomy, 12 with hippocampal sclerosis (HS+) and 6 without (HS-). Patients underwent neuropsychological assessment before and after surgery. MEG was measured with an auditory verbal learning task in patients preoperatively and in 15 right-handed controls. Dynamic statistical parametric mapping (dSPM) was used for source imaging of task-related activity. A language laterality index (LI) was calculated from the z-scores of dSPM in language-related regions. LI in the HS+ and HS- groups was compared with controls. The correlation between LI and postoperative VM change was assessed in HS+ and HS-. RESULTS Preoperative LI in the supramarginal gyrus showed greater right-shifted lateralization in both HS+ and HS- than in controls. Right-shifted LI in the supramarginal gyrus was correlated with postoperative VM increase in HS+ (p = 0.019), but not in HS-. CONCLUSIONS Right-shifted language lateralization in dSPM of MEG signals may predict favorable VM outcome in HS+ patients with LMTLE. SIGNIFICANCE These findings warrant further investigation of the relation between regional language laterality indices and postoperative verbal memory changes.
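A laterality index of the kind used above is commonly computed by contrasting left- and right-hemisphere activity. A minimal sketch follows; the (L - R)/(L + R) form is an assumption based on common practice, and the study's exact z-scoring and region definitions are not reproduced here.

```python
import numpy as np

def laterality_index(left_z, right_z):
    """Common (L - R) / (L + R) laterality index on mean z-scored activity.

    Returns a value in [-1, 1]; positive values indicate left lateralization,
    so a 'right shift' corresponds to the index decreasing toward -1.
    """
    L = float(np.mean(left_z))
    R = float(np.mean(right_z))
    return (L - R) / (L + R)

# Hypothetical dSPM z-scores in left and right supramarginal gyrus
li = laterality_index([4.0, 5.0], [1.0, 2.0])
print(li)  # 0.5 -> left-lateralized
```

Comparing such indices between patient groups and controls, and correlating them with pre- to post-operative memory change, is the logic of the analysis summarized in the abstract.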
Affiliation(s)
- Ryuzaburo Kochi
- Department of Neurosurgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Shin-Ichiro Osawa
- Department of Neurosurgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Kazutaka Jin
- Department of Epileptology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Makoto Ishida
- Department of Advanced Spintronics Medical Engineering, Tohoku University Graduate School of Engineering, Sendai, Miyagi, Japan
- Akitake Kanno
- Department of Advanced Spintronics Medical Engineering, Tohoku University Graduate School of Engineering, Sendai, Miyagi, Japan
- Masaki Iwasaki
- Department of Neurosurgery, National Center Hospital, National Center of Neurology and Psychiatry, Kodaira, Tokyo, Japan
- Kyoko Suzuki
- Department of Behavioral Neurology and Cognitive Neuroscience, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Ryuta Kawashima
- Department of Functional Brain Imaging, Institute of Development, Aging and Cancer, Tohoku University, Sendai, Miyagi, Japan
- Teiji Tominaga
- Department of Neurosurgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Nobukazu Nakasato
- Department of Epileptology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan; Department of Advanced Spintronics Medical Engineering, Tohoku University Graduate School of Engineering, Sendai, Miyagi, Japan
6. Guilleminot P, Graef C, Butters E, Reichenbach T. Audiotactile stimulation can improve syllable discrimination through multisensory integration in the theta frequency band. J Cogn Neurosci 2023;35:1760-1772. PMID: 37677062. DOI: 10.1162/jocn_a_02045.
Abstract
Syllables are an essential building block of speech. We recently showed that tactile stimuli linked to the perceptual centers of syllables in continuous speech can improve speech comprehension. The rate of syllables lies in the theta frequency range, between 4 and 8 Hz, and the behavioral effect appears linked to multisensory integration in this frequency band. Because this neural activity may be oscillatory, we hypothesized that a behavioral effect may occur not only during but also after this activity has been evoked or entrained through vibrotactile pulses. Here, we show that audiotactile integration in the perception of single syllables, at both the neural and the behavioral level, is consistent with this hypothesis. We first stimulated participants with a series of vibrotactile pulses and then presented them with a syllable in background noise. We show that, at a delay of 200 msec after the last vibrotactile pulse, audiotactile integration still occurred in the theta band and syllable discrimination was enhanced. Moreover, the dependence of both the neural multisensory integration and the behavioral discrimination on the delay of the audio signal with respect to the last tactile pulse was consistent with a damped oscillation. In addition, the multisensory gain was correlated with the syllable discrimination score. Our results therefore demonstrate the role of the theta band in audiotactile integration and provide evidence that these effects may involve oscillatory activity that persists after the tactile stimulation.
7. Abarrategui B, Mariani V, Rizzi M, Berta L, Scarpa P, Zauli FM, Squarza S, Banfi P, d'Orio P, Cardinale F, Del Vecchio M, Caruana F, Avanzini P, Sartori I. Language lateralization mapping (reversibly) masked by non-dominant focal epilepsy: a case report. Front Hum Neurosci 2023;17:1254779. PMID: 37900727. PMCID: PMC10600519. DOI: 10.3389/fnhum.2023.1254779.
Abstract
Language lateralization in patients with focal epilepsy frequently diverges from the left-lateralized pattern that prevails in healthy right-handed people, but the mechanistic explanations are still a matter of debate. Here, we discuss the complex interaction between focal epilepsy, language lateralization, and functional neuroimaging techniques by introducing the case of a right-handed patient with unaware focal seizures preceded by aphasia, in whom video-EEG and PET examination suggested the presence of a focal cortical dysplasia in the right superior temporal gyrus despite a normal structural MRI. Functional MRI for language was inconclusive, and the neuropsychological evaluation showed mild deficits in language functions. A bilateral stereo-EEG was proposed, confirming the right superior temporal gyrus origin of the seizures, revealing that ictal aphasia emerged only once seizures propagated to the left superior temporal gyrus, and confirming, by cortical mapping, the left lateralization of the posterior language region. Stereo-EEG-guided radiofrequency thermocoagulation of the (right) focal cortical dysplasia not only reduced seizure frequency but also led to normalization of the neuropsychological assessment and the "restoring" of a classical left-lateralized functional MRI pattern of language. This representative case demonstrates that epileptiform activity in the superior temporal gyrus can interfere with the functioning of the contralateral homologous cortex and its associated network. In the presurgical evaluation of patients with epilepsy, this interference effect must be carefully taken into consideration. The multimodal language lateralization assessment reported for this patient further suggests that different explorations vary in their sensitivity to this interference effect. Finally, the neuropsychological and functional MRI changes after thermocoagulation provide unique cues on the network pathophysiology of focal cortical dysplasia and the role of diverse techniques in indexing language lateralization in complex scenarios.
Affiliation(s)
- Belén Abarrategui
- "Claudio Munari" Epilepsy Surgery Center, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Department of Neurology, Hospital Universitario Puerta de Hierro, Majadahonda, Spain
- Valeria Mariani
- "Claudio Munari" Epilepsy Surgery Center, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Neurology and Stroke Unit, ASST Santi Paolo e Carlo, Presidio San Carlo Borromeo, Milan, Italy
- Michele Rizzi
- "Claudio Munari" Epilepsy Surgery Center, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Department of Neurosurgery, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milan, Italy
- Luca Berta
- Department of Medical Physics, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Pina Scarpa
- Cognitive Neuropsychology Centre, Department of Neuroscience, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Flavia Maria Zauli
- "Claudio Munari" Epilepsy Surgery Center, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Department of Biomedical and Clinical Sciences, Università degli Studi di Milano, Milan, Italy
- Department of Philosophy "P. Martinetti", Università degli Studi di Milano, Milan, Italy
- Silvia Squarza
- Department of Neuroradiology, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Paola Banfi
- Neurology and Stroke Unit, ASST Sette Laghi Ospedale di Circolo, Varese, Italy
- Piergiorgio d'Orio
- "Claudio Munari" Epilepsy Surgery Center, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Unit of Neuroscience, Department of Medicine and Surgery, Università degli Studi di Parma, Parma, Italy
- Institute of Neuroscience, Consiglio Nazionale delle Ricerche, Parma, Italy
- Francesco Cardinale
- "Claudio Munari" Epilepsy Surgery Center, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Unit of Neuroscience, Department of Medicine and Surgery, Università degli Studi di Parma, Parma, Italy
- Institute of Neuroscience, Consiglio Nazionale delle Ricerche, Parma, Italy
- Maria Del Vecchio
- Institute of Neuroscience, Consiglio Nazionale delle Ricerche, Parma, Italy
- Fausto Caruana
- Institute of Neuroscience, Consiglio Nazionale delle Ricerche, Parma, Italy
- Pietro Avanzini
- Institute of Neuroscience, Consiglio Nazionale delle Ricerche, Parma, Italy
- Ivana Sartori
- "Claudio Munari" Epilepsy Surgery Center, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
8. Yasmin S, Irsik VC, Johnsrude IS, Herrmann B. The effects of speech masking on neural tracking of acoustic and semantic features of natural speech. Neuropsychologia 2023;186:108584. PMID: 37169066. DOI: 10.1016/j.neuropsychologia.2023.108584.
Abstract
Listening environments contain background sounds that mask speech and lead to communication challenges. Sensitivity to slow acoustic fluctuations in speech can help segregate speech from background noise. Semantic context can also facilitate speech perception in noise, for example, by enabling prediction of upcoming words. However, not much is known about how different degrees of background masking affect the neural processing of acoustic and semantic features during naturalistic speech listening. In the current electroencephalography (EEG) study, participants listened to engaging, spoken stories masked at different levels of multi-talker babble to investigate how neural activity in response to acoustic and semantic features changes with acoustic challenges, and how such effects relate to speech intelligibility. The pattern of neural response amplitudes associated with both acoustic and semantic speech features across masking levels was U-shaped, such that amplitudes were largest for moderate masking levels. This U-shape may be due to increased attentional focus when speech comprehension is challenging, but manageable. The latency of the neural responses increased linearly with increasing background masking, and neural latency change associated with acoustic processing most closely mirrored the changes in speech intelligibility. Finally, tracking responses related to semantic dissimilarity remained robust until severe speech masking (-3 dB SNR). The current study reveals that neural responses to acoustic features are highly sensitive to background masking and decreasing speech intelligibility, whereas neural responses to semantic features are relatively robust, suggesting that individuals track the meaning of the story well even in moderate background sound.
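Neural tracking analyses of the kind described above are often implemented as a temporal response function (TRF): a regularized lagged regression from a stimulus feature (for example, the acoustic envelope) to the EEG signal. The numpy-only sketch below recovers a known kernel from simulated data; the feature, lag range, and ridge parameter are illustrative assumptions, not the study's actual analysis.

```python
# Ridge-regression TRF estimate: eeg(t) ≈ sum_k w[k] * env(t - k)
import numpy as np

rng = np.random.default_rng(1)
fs, n = 100, 2000                                 # 100 Hz, 20 s of simulated data
env = rng.standard_normal(n)                      # simulated acoustic-envelope feature
true_trf = np.array([0.0, 0.5, 1.0, 0.5, 0.0])    # ground-truth kernel over lags 0..4
eeg = np.convolve(env, true_trf)[:n] + 0.1 * rng.standard_normal(n)

lags = len(true_trf)
X = np.column_stack([np.roll(env, k) for k in range(lags)])  # lagged design matrix
X[:lags] = 0                                                 # zero out wrapped-around samples
lam = 1e-2                                                   # ridge regularization strength
w = np.linalg.solve(X.T @ X + lam * np.eye(lags), X.T @ eeg) # ridge estimate of the TRF

print(np.round(w, 2))  # close to true_trf
```

On real data, the amplitude and latency of such estimated kernels are the quantities whose U-shaped and linear dependence on masking level the abstract reports.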
Affiliation(s)
- Sonia Yasmin
- Department of Psychology & the Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada
- Vanessa C Irsik
- Department of Psychology & the Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada
- Ingrid S Johnsrude
- Department of Psychology & the Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada; School of Communication and Speech Disorders, The University of Western Ontario, London, ON, N6A 5B7, Canada
- Björn Herrmann
- Rotman Research Institute, Baycrest, Toronto, ON, M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON, M5S 1A1, Canada
9. Cocquyt EM, Van Laeken H, van Mierlo P, De Letter M. Test-retest reliability of electroencephalographic and magnetoencephalographic measures elicited during language tasks: a literature review. Eur J Neurosci 2023;57:1353-1367. PMID: 36864752. DOI: 10.1111/ejn.15948.
Abstract
Electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings during language processing can provide relevant insights on neuroplasticity in clinical populations (including patients with aphasia). To use EEG and MEG longitudinally, the outcome measures should be consistent across time in healthy individuals. Therefore, the current study provides a review of the test-retest reliability of EEG and MEG measures elicited during language paradigms in healthy adults. PubMed, Web of Science and Embase were searched for relevant articles based on specific eligibility criteria. In total, 11 articles were included in this literature review. The test-retest reliability of the P1, N1 and P2 is consistently considered satisfactory, whereas findings are more variable for event-related potentials/fields occurring later in time. The within-subject consistency of EEG and MEG measures during language processing can be influenced by multiple variables, such as the stimulus presentation mode, the offline reference choice and the required amount of cognitive resources during the task. To conclude, most of the available results are favourable regarding the longitudinal use of EEG and MEG measures elicited during language paradigms in healthy young individuals. In view of the use of these techniques in patients with aphasia, future research should focus on whether the same findings apply to different age groups.
Affiliation(s)
- Heleen Van Laeken
- Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Pieter van Mierlo
- Department of Electronics and Information Systems, Medical Image and Signal Processing Group, Ghent University, Ghent, Belgium
- Miet De Letter
- Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
10. Demichelis G, Duran D, Ciullo G, Lorusso L, Zago S, Palermo S, Nigri A, Leonardi M, Bruzzone MG, Fedeli D. A multimodal imaging approach to foreign accent syndrome: a case report. Neurocase 2022;28:467-476. PMID: 36682057. DOI: 10.1080/13554794.2023.2168558.
Abstract
This article describes a case of foreign accent syndrome (FAS) in an Italian woman who developed a Canadian-like foreign accent without brain damage (functional FAS). The patient underwent an in-depth neuroimaging and (neuro)psychological evaluation. Task-based fMRI and MEG assessments showed typical bilateral activation of language networks in frontotemporoparietal areas. Resting-state fMRI showed preserved connectivity between language areas. An obsessive-compulsive personality profile and mild anxiety were found, suggesting that psychological and psychiatric factors may be relevant. In line with our findings, multimodal imaging is beneficial for understanding the neurological and functional etiologies of FAS.
Collapse
Affiliation(s)
- Greta Demichelis
- Department of Neuroradiology, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milan, Italy
- Dunja Duran
- Clinical Epileptology and Experimental Neurophysiology Unit, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milan, Italy
- Giuseppe Ciullo
- Department of Neuroradiology, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milan, Italy
- Lorenzo Lorusso
- Neuroscience Department, Neurology and Stroke Unit, A.S.S.T Lecco, Merate, Italy
- Stefano Zago
- U.O.C. di Neurologia, IRCCS Fondazione Ospedale Maggiore Policlinico, University of Milan, Milan, Italy
- Sara Palermo
- Department of Neuroradiology, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milan, Italy; Dipartimento di Psicologia, Università degli Studi di Torino, Torino, Italy
- Anna Nigri
- Department of Neuroradiology, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milan, Italy
- Matilde Leonardi
- Department of Neurology, Public Health, Disability Unit, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milan, Italy
- Maria Grazia Bruzzone
- Department of Neuroradiology, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milan, Italy
- Davide Fedeli
- Department of Neuroradiology, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milan, Italy
11
Common Neuroanatomical Substrate of Cholinergic Pathways and Language-Related Brain Regions as an Explanatory Framework for Evaluating the Efficacy of Cholinergic Pharmacotherapy in Post-Stroke Aphasia: A Review. Brain Sci 2022; 12:1273. [PMID: 36291207] [PMCID: PMC9599395] [DOI: 10.3390/brainsci12101273]
Abstract
Despite the relative scarcity of studies focusing on pharmacotherapy in aphasia, there is evidence in the literature indicating that remediation of language disorders via pharmaceutical agents could be a promising aphasia treatment option. Among the various agents used to treat chronic aphasic deficits, cholinergic drugs have provided meaningful results. In the current review, we focused on published reports investigating the impact of acetylcholine on language and other cognitive disturbances. It has been suggested that acetylcholine plays an important role in neuroplasticity and is related to several aspects of cognition, such as memory and attention. Moreover, cholinergic input is diffused to a wide network of cortical areas, which have been associated with language sub-processes. This may explain the reported positive outcomes of cholinergic drugs in aphasia recovery, and specifically in distinct language processes, such as naming and comprehension, as well as overall communication competence. However, evidence with regard to functional alterations in specific brain areas after pharmacotherapy is rather limited. Finally, despite the positive results derived from the relevant studies, cholinergic pharmacotherapy has not been widely implemented in post-stroke aphasia. The present review aims to provide an overview of the existing literature on the common neuroanatomical substrate of cholinergic pathways and language-related brain areas as a framework for interpreting the efficacy of cholinergic pharmacotherapy interventions in post-stroke aphasia, following an integrated approach that converges evidence from neuroanatomy, neurophysiology, and neuropsychology.
12
Wu S, Ramdas A, Wehbe L. Brainprints: identifying individuals from magnetoencephalograms. Commun Biol 2022; 5:852. [PMID: 35995976] [PMCID: PMC9395342] [DOI: 10.1038/s42003-022-03727-9]
Abstract
Magnetoencephalography (MEG) is used to study a wide variety of cognitive processes. Increasingly, researchers are adopting principles of open science and releasing their MEG data. While essential for reproducibility, sharing MEG data has unforeseen privacy risks: individual differences may make a participant identifiable from their anonymized recordings. However, our ability to identify individuals based on these individual differences has not yet been assessed. Here, we propose interpretable MEG features to characterize individual differences. We term these features brainprints (brain fingerprints). We show through several datasets that brainprints accurately identify individuals across days, tasks, and even between MEG and electroencephalography (EEG). Furthermore, we identify consistent brainprint components that are important for identification. We study the dependence of identifiability on the amount of data available, and relate identifiability to the level of preprocessing and the experimental task. Our findings reveal specific aspects of individual variability in MEG. They also raise concerns about unregulated sharing of brain data, even if anonymized.
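The feature-based identification scheme this abstract describes can be illustrated with a toy sketch (not the authors' pipeline; the subject count, channel count, and noise level here are invented): each subject is summarized by a stable per-channel feature profile, and a later recording is matched to the enrolled profile with the highest correlation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: 5 "subjects", each with a stable per-channel feature profile
# (the putative brainprint), recorded in two sessions with session noise.
n_subjects, n_channels = 5, 32
true_profiles = rng.standard_normal((n_subjects, n_channels))

def record_session(profile, noise=0.3):
    """Simulated session feature vector: stable profile + session-specific noise."""
    return profile + noise * rng.standard_normal(profile.shape)

enrolled = np.stack([record_session(p) for p in true_profiles])  # day 1
probes = np.stack([record_session(p) for p in true_profiles])    # day 2

def identify(probe, gallery):
    """Correlation nearest-neighbor: index of the best-matching enrolled profile."""
    r = [np.corrcoef(probe, g)[0, 1] for g in gallery]
    return int(np.argmax(r))

predictions = [identify(p, enrolled) for p in probes]
accuracy = np.mean([pred == i for i, pred in enumerate(predictions)])
print(f"identification accuracy across sessions: {accuracy:.2f}")
```

With a feature profile that is stable relative to the session noise, cross-session identification succeeds; the paper's point is that real MEG features behave this way, which is what creates the privacy risk.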
Affiliation(s)
- Shenghao Wu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Aaditya Ramdas
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Statistics and Data Science, Carnegie Mellon University, Pittsburgh, PA, USA
- Leila Wehbe
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
13
Palana J, Schwartz S, Tager-Flusberg H. Evaluating the Use of Cortical Entrainment to Measure Atypical Speech Processing: A Systematic Review. Neurosci Biobehav Rev 2021; 133:104506. [PMID: 34942267] [DOI: 10.1016/j.neubiorev.2021.12.029]
Abstract
BACKGROUND Cortical entrainment has emerged as a promising means of measuring continuous speech processing in young, neurotypical adults. However, its utility for capturing atypical speech processing has not been systematically reviewed. OBJECTIVES Synthesize evidence regarding the merit of measuring cortical entrainment to capture atypical speech processing and recommend avenues for future research. METHOD We systematically reviewed publications investigating entrainment to continuous speech in populations with auditory processing differences. RESULTS In the 25 publications reviewed, most studies were conducted on older and/or hearing-impaired adults, for whom slow-wave entrainment to speech was often heightened compared to controls. Research on populations with neurodevelopmental disorders, in whom slow-wave entrainment was often reduced, was less common. Across publications, findings highlighted associations between cortical entrainment and differences in speech processing performance. CONCLUSIONS Measures of cortical entrainment offer a useful means of capturing speech processing differences, and future research should leverage them more extensively when studying populations with neurodevelopmental disorders.
Affiliation(s)
- Joseph Palana
- Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA, 02215, USA; Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Harvard Medical School, Boston Children's Hospital, 1 Autumn Street, Boston, MA, 02215, USA
- Sophie Schwartz
- Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA, 02215, USA
- Helen Tager-Flusberg
- Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA, 02215, USA
14
Wang Q, Siok WT. Intracranial recording in patients with aphasia using nanomaterial-based flexible electronics: promises and challenges. Beilstein J Nanotechnol 2021; 12:330-342. [PMID: 33889479] [PMCID: PMC8042484] [DOI: 10.3762/bjnano.12.27]
Abstract
In recent years, researchers have studied how nanotechnology could enhance neuroimaging techniques. The application of nanomaterial-based flexible electronics has the potential to advance conventional intracranial electroencephalography (iEEG) by utilising brain-compatible soft nanomaterials. The resulting technique offers high spatial and temporal resolution, both of which enhance the localisation of brain functions and the mapping of dynamic language processing. This review presents findings on aphasia, an impairment in language and communication, and discusses how different brain imaging techniques, including positron emission tomography, magnetic resonance imaging, and iEEG, have advanced our understanding of the neural networks underlying language and reading processing. We then outline the strengths and weaknesses of iEEG in studying human cognition and the development of intracranial recordings that use brain-compatible flexible electrodes. We close by discussing the potential advantages and challenges of future investigations adopting nanomaterial-based flexible electronics for intracranial recording in patients with aphasia.
Affiliation(s)
- Qingchun Wang
- Department of Linguistics, The University of Hong Kong, Hong Kong, China
- Wai Ting Siok
- Department of Linguistics, The University of Hong Kong, Hong Kong, China
15
Nora A, Renvall H, Ronimus M, Kere J, Lyytinen H, Salmelin R. Children at risk for dyslexia show deficient left-hemispheric memory representations for new spoken word forms. Neuroimage 2021; 229:117739. [PMID: 33454404] [DOI: 10.1016/j.neuroimage.2021.117739]
Abstract
Developmental dyslexia is a specific learning disorder with impairments in reading and spelling acquisition. Apart from literacy problems, dyslexics show inefficient speech encoding and deficient novel word learning, with underlying problems in phonological processing and learning. These problems have been suggested to be related to deficient specialization of the left hemisphere for language processing. To examine this possibility, we tracked with magnetoencephalography (MEG) the activation of the bilateral temporal cortices during formation of neural memory traces for new spoken word forms in 7-8-year-old children with high familial dyslexia risk and in controls. The at-risk children improved as much as their peers in overt repetition of recurring new word forms, but were poorer in explicit recognition of the recurring word forms. Both groups showed reduced activation for the recurring word forms 400-1200 ms after word onset in the right auditory cortex, replicating the results of our previous study on typically developing children (Nora et al., 2017, Children show right-lateralized effects of spoken word-form learning. PLoS ONE 12(2): e0171034). However, only the control group consistently showed a similar reduction of activation for recurring word forms in the left temporal areas. The results highlight the importance of left-hemispheric phonological processing for efficient phonological representations and its disruption in dyslexia.
Affiliation(s)
- A Nora
- Department of Neuroscience and Biomedical Engineering, and Aalto NeuroImaging, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
- H Renvall
- Department of Neuroscience and Biomedical Engineering, and Aalto NeuroImaging, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
- M Ronimus
- Niilo Mäki Instituutti, FI-40100 Jyväskylä, Finland
- J Kere
- Department of Biosciences, Karolinska Institutet, SE-171 77 Stockholm, Sweden
- H Lyytinen
- Department of Psychology, University of Jyväskylä, FI-40014 Jyväskylä, Finland
- R Salmelin
- Department of Neuroscience and Biomedical Engineering, and Aalto NeuroImaging, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
16
Dynamic Time-Locking Mechanism in the Cortical Representation of Spoken Words. eNeuro 2020; 7:ENEURO.0475-19.2020. [PMID: 32513662] [PMCID: PMC7470935] [DOI: 10.1523/eneuro.0475-19.2020]
Abstract
Human speech has a unique capacity to carry and communicate rich meanings. However, it is not known how the highly dynamic and variable perceptual signal is mapped to existing linguistic and semantic representations. In a novel approach, we used the natural acoustic variability of sounds and mapped them to magnetoencephalography (MEG) data using physiologically inspired machine-learning models. We aimed to determine how well the models, differing in their representation of temporal information, serve to decode and reconstruct spoken words from MEG recordings in 16 healthy volunteers. We discovered that dynamic time-locking of the cortical activation to the unfolding speech input is crucial for the encoding of the acoustic-phonetic features of speech. In contrast, time-locking was not highlighted in cortical processing of non-speech environmental sounds that conveyed the same meanings as the spoken words, including human-made sounds with temporal modulation content similar to speech. The amplitude envelope of the spoken words was particularly well reconstructed based on cortical evoked responses. Our results indicate that speech is encoded cortically with especially high temporal fidelity. This speech tracking by evoked responses may partly reflect the same underlying neural mechanism as the frequently reported entrainment of the cortical oscillations to the amplitude envelope of speech. Furthermore, the phoneme content was reflected in cortical evoked responses simultaneously with the spectrotemporal features, pointing to an instantaneous transformation of the unfolding acoustic features into linguistic representations during speech processing.
17
Borghesani V, Hinkley LBN, Ranasinghe KG, Thompson MMC, Shwe W, Mizuiri D, Lauricella M, Europa E, Honma S, Miller Z, Miller B, Vossel K, Henry MML, Houde JF, Gorno-Tempini ML, Nagarajan SS. Taking the sublexical route: brain dynamics of reading in the semantic variant of primary progressive aphasia. Brain 2020; 143:2545-2560. [PMID: 32789455] [PMCID: PMC7447517] [DOI: 10.1093/brain/awaa212]
Abstract
Reading aloud requires mapping an orthographic form to a phonological one. The mapping process relies on sublexical statistical regularities (e.g. 'oo' to |uː|) or on learned lexical associations between a specific visual form and a series of sounds (e.g. yacht to/jɑt/). Computational, neuroimaging, and neuropsychological evidence suggest that sublexical, phonological and lexico-semantic processes rely on partially distinct neural substrates: a dorsal (occipito-parietal) and a ventral (occipito-temporal) route, respectively. Here, we investigated the spatiotemporal features of orthography-to-phonology mapping, capitalizing on the time resolution of magnetoencephalography and the unique clinical model offered by patients with semantic variant of primary progressive aphasia (svPPA). Behaviourally, patients with svPPA manifest marked lexico-semantic impairments including difficulties in reading words with exceptional orthographic to phonological correspondence (irregular words). Moreover, they present with focal neurodegeneration in the anterior temporal lobe, affecting primarily the ventral, occipito-temporal, lexical route. Therefore, this clinical population allows for testing of specific hypotheses on the neural implementation of the dual-route model for reading, such as whether damage to one route can be compensated by over-reliance on the other. To this end, we reconstructed and analysed time-resolved whole-brain activity in 12 svPPA patients and 12 healthy age-matched control subjects while reading irregular words (e.g. yacht) and pseudowords (e.g. pook). Consistent with previous findings that the dorsal route is involved in sublexical, phonological processes, in control participants we observed enhanced neural activity over dorsal occipito-parietal cortices for pseudowords, when compared to irregular words. 
This activation was manifested in the beta band (12-30 Hz), ramping up slowly over 500 ms after stimulus onset and peaking at ∼800 ms, around response selection and production. Consistent with our prediction, svPPA patients did not exhibit the temporal pattern of neural activity observed in controls for this contrast. Furthermore, a direct comparison of neural activity between patients and controls revealed a dorsal spatiotemporal cluster during irregular word reading. These findings suggest that the sublexical/phonological route is involved in processing both irregular words and pseudowords in svPPA. Together, these results provide further evidence supporting a dual-route model for reading aloud mediated by the interplay between lexico-semantic and sublexical/phonological neurocognitive systems. When the ventral route is damaged, as in the case of neurodegeneration affecting the anterior temporal lobe, partial compensation appears to be possible through over-recruitment of the slower, serial, attention-dependent dorsal one.
Affiliation(s)
- Valentina Borghesani
- Memory and Aging Center, Department of Neurology, University of California San Francisco, USA
- Leighton B N Hinkley
- Department of Radiology and Biomedical Imaging, University of California San Francisco, USA
- Kamalini G Ranasinghe
- Memory and Aging Center, Department of Neurology, University of California San Francisco, USA
- Megan M C Thompson
- Department of Radiology and Biomedical Imaging, University of California San Francisco, USA; UC Berkeley-UC San Francisco Graduate Program in Bioengineering, University of California, San Francisco, USA
- Wendy Shwe
- Memory and Aging Center, Department of Neurology, University of California San Francisco, USA
- Danielle Mizuiri
- Department of Radiology and Biomedical Imaging, University of California San Francisco, USA
- Michael Lauricella
- Memory and Aging Center, Department of Neurology, University of California San Francisco, USA
- Eduardo Europa
- Memory and Aging Center, Department of Neurology, University of California San Francisco, USA
- Susanna Honma
- Department of Radiology and Biomedical Imaging, University of California San Francisco, USA
- Zachary Miller
- Memory and Aging Center, Department of Neurology, University of California San Francisco, USA
- Bruce Miller
- Memory and Aging Center, Department of Neurology, University of California San Francisco, USA
- Keith Vossel
- Department of Neurology, University of Minnesota, Minneapolis, USA
- Maya M L Henry
- Department of Communication Sciences and Disorders, University of Texas at Austin, USA
- John F Houde
- Department of Otolaryngology, University of California San Francisco, USA
- Maria L Gorno-Tempini
- Memory and Aging Center, Department of Neurology, University of California San Francisco, USA; Department of Neurology, Dyslexia Center, University of California, San Francisco, CA, USA
- Srikantan S Nagarajan
- Department of Radiology and Biomedical Imaging, University of California San Francisco, USA; Department of Otolaryngology, University of California San Francisco, USA
18
de Tommaso M, Betti V, Bocci T, Bolognini N, Di Russo F, Fattapposta F, Ferri R, Invitto S, Koch G, Miniussi C, Piccione F, Ragazzoni A, Sartucci F, Rossi S, Valeriani M. Pearls and pitfalls in brain functional analysis by event-related potentials: a narrative review by the Italian Psychophysiology and Cognitive Neuroscience Society on methodological limits and clinical reliability-part II. Neurol Sci 2020; 41:3503-3515. [PMID: 32683566] [DOI: 10.1007/s10072-020-04527-x]
Abstract
This review focuses on new and/or less standardized event-related potential methods, with the aim of improving familiarity with them ahead of future clinical applications. Olfactory event-related potentials (OERPs) assess olfactory function in the time domain, with potential utility in anosmia and degenerative diseases. Transcranial magnetic stimulation-electroencephalography (TMS-EEG) could support the investigation of intracerebral connections with very high temporal discrimination; its application in the diagnosis of disorders of consciousness has recently been confirmed. Magnetoencephalography (MEG) and event-related fields (ERFs) could improve the spatial accuracy of scalp signals, with potentially broad application in the presurgical study of epileptic patients. Although these techniques have methodological limits, such as high inter- and intraindividual variability and high costs, wider adoption among researchers and clinicians is hoped for, pending their standardization.
Affiliation(s)
- Marina de Tommaso
- Applied Neurophysiology and Pain Unit (AnpLab), University of Bari Aldo Moro, Bari, Italy
- Viviana Betti
- Department of Psychology, Sapienza University of Rome, Rome, Italy; Fondazione Santa Lucia, Istituto di Ricovero e Cura a Carattere Scientifico, Rome, Italy
- Tommaso Bocci
- Dipartimento di Scienze della Salute, University of Milano, Milan, Italy
- Nadia Bolognini
- Department of Psychology & NeuroMi, University of Milano Bicocca, Milan, Italy; Laboratory of Neuropsychology, IRCCS Istituto Auxologico, Milan, Italy
- Francesco Di Russo
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
- Sara Invitto
- INSPIRE - Laboratory of Cognitive and Psychophysiological Olfactory Processes, University of Salento, Lecce, Italy
- Giacomo Koch
- Fondazione Santa Lucia, Istituto di Ricovero e Cura a Carattere Scientifico, Rome, Italy; Neuroscience Department, Policlinico Tor Vergata, Rome, Italy
- Carlo Miniussi
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy; Cognitive Neuroscience Section, IRCCS Istituto Centro San Giovanni di Dio Fatebenefratelli, Brescia, Italy
- Francesco Piccione
- Brain Imaging and Neural Dynamics Research Group, IRCCS San Camillo Hospital, Venice, Italy
- Aldo Ragazzoni
- Unit of Neurology and Clinical Neurophysiology, Fondazione PAS, Scandicci, Florence, Italy
- Ferdinando Sartucci
- Section of Neurophysiopathology, Department of Clinical and Experimental Medicine, University of Pisa, Pisa, Italy; CNR Institute of Neuroscience, Pisa, Italy
- Simone Rossi
- Department of Medicine, Surgery and Neuroscience, Siena Brain Investigation and Neuromodulation Lab (SI-BIN Lab), University of Siena, Siena, Italy
- Massimiliano Valeriani
- Neurology Unit, Bambino Gesù Hospital, Rome, Italy; Center for Sensory-Motor Interaction, Aalborg University, Aalborg, Denmark
19
Pfeiffer C, Hollenstein N, Zhang C, Langer N. Neural dynamics of sentiment processing during naturalistic sentence reading. Neuroimage 2020; 218:116934. [PMID: 32416227] [DOI: 10.1016/j.neuroimage.2020.116934]
Abstract
When we read, our eyes move through the text in a series of fixations and high-velocity saccades to extract visual information. This process allows the brain to obtain meaning, e.g., about sentiment, or the emotional valence, expressed in the written text. How exactly the brain extracts the sentiment of single words during naturalistic reading is largely unknown. This is due to the challenges of naturalistic imaging, which has previously led researchers to employ highly controlled, timed word-by-word presentations of custom reading materials that lack ecological validity. Here, we aimed to assess the electrical neural correlates of word sentiment processing during naturalistic reading of English sentences. We used a publicly available dataset of simultaneous electroencephalography (EEG), eye-tracking recordings, and word-level semantic annotations from 7129 words in 400 sentences (Zurich Cognitive Language Processing Corpus; Hollenstein et al., 2018). We computed fixation-related potentials (FRPs), which are evoked electrical responses time-locked to the onset of fixations. A general linear mixed model analysis of FRPs cleaned from visual- and motor-evoked activity showed a topographical difference between the positive and negative sentiment condition in the 224-304 ms interval after fixation onset in left-central and right-posterior electrode clusters. An additional analysis that included word-, phrase-, and sentence-level sentiment predictors showed the same FRP differences for the word-level sentiment, but no additional FRP differences for phrase- and sentence-level sentiment. Furthermore, decoding analysis that classified word sentiment (positive or negative) from sentiment-matched 40-trial average FRPs showed a 0.60 average accuracy (95% confidence interval: [0.58, 0.61]). Control analyses ruled out that these results were based on differences in eye movements or linguistic features other than word sentiment. 
Our results extend previous research by showing that the emotional valence of lexico-semantic stimuli evokes a fast electrical neural response upon word fixation during naturalistic reading. These results provide an important step toward identifying the neural processes of lexico-semantic processing in ecologically valid conditions and can serve to improve computer algorithms for natural language processing.
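The fixation-related potential computation described above (epoch the EEG around each fixation onset, baseline-correct, and average) can be sketched on synthetic data. This is a minimal illustration, not the study's pipeline; the sampling rate, deflection latency, and amplitudes are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy continuous "EEG" (one channel at 500 Hz) with a small deflection
# inserted 250 ms after each simulated fixation onset.
sfreq = 500
eeg = 2.0 * rng.standard_normal(60 * sfreq)
fixation_onsets = np.arange(2 * sfreq, 55 * sfreq, sfreq)  # one fixation per second
bump = 3.0 * np.hanning(int(0.1 * sfreq))  # 100 ms deflection
for onset in fixation_onsets:
    start = onset + int(0.25 * sfreq)  # deflection begins 250 ms post-fixation
    eeg[start:start + bump.size] += bump

# Fixation-related potential: epoch around each onset, baseline-correct, average.
tmin, tmax = -0.1, 0.6  # epoch window in seconds relative to fixation onset
pre, post = round(-tmin * sfreq), round(tmax * sfreq)
epochs = np.stack([eeg[o - pre:o + post] for o in fixation_onsets])
epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)  # subtract pre-fixation baseline
frp = epochs.mean(axis=0)

peak_ms = (np.argmax(frp) - pre) / sfreq * 1000
print(f"FRP peak at ~{peak_ms:.0f} ms after fixation onset")
```

Averaging across fixations suppresses ongoing activity so the small fixation-locked deflection emerges; the study applies the same logic (plus deconvolution of visual- and motor-evoked activity) to multichannel EEG aligned to eye-tracking fixations.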
Affiliation(s)
- Christian Pfeiffer
- Methods of Plasticity Research Laboratory, Department of Psychology, University of Zurich, Switzerland; University Research Priority Program (URPP) Dynamics of Healthy Aging, Zurich, Switzerland
- Ce Zhang
- Department of Computer Science, ETH, Zurich, Switzerland
- Nicolas Langer
- Methods of Plasticity Research Laboratory, Department of Psychology, University of Zurich, Switzerland; University Research Priority Program (URPP) Dynamics of Healthy Aging, Zurich, Switzerland; Neuroscience Center Zurich (ZNZ), Zurich, Switzerland
20
Karas PJ, Magnotti JF, Metzger BA, Zhu LL, Smith KB, Yoshor D, Beauchamp MS. The visual speech head start improves perception and reduces superior temporal cortex responses to auditory speech. eLife 2019; 8:e48116. [PMID: 31393261] [PMCID: PMC6687434] [DOI: 10.7554/elife.48116]
Abstract
Visual information about speech content from the talker's mouth is often available before auditory information from the talker's voice. Here we examined perceptual and neural responses to words with and without this visual head start. For both types of words, perception was enhanced by viewing the talker's face, but the enhancement was significantly greater for words with a head start. Neural responses were measured from electrodes implanted over auditory association cortex in the posterior superior temporal gyrus (pSTG) of epileptic patients. The presence of visual speech suppressed responses to auditory speech, more so for words with a visual head start. We suggest that the head start inhibits representations of incompatible auditory phonemes, increasing perceptual accuracy and decreasing total neural responses. Together with previous work showing visual cortex modulation (Ozker et al., 2018b), these results from pSTG demonstrate that multisensory interactions are a powerful modulator of activity throughout the speech perception network.
Affiliation(s)
- Patrick J Karas
- Department of Neurosurgery, Baylor College of Medicine, Houston, United States
- John F Magnotti
- Department of Neurosurgery, Baylor College of Medicine, Houston, United States
- Brian A Metzger
- Department of Neurosurgery, Baylor College of Medicine, Houston, United States
- Lin L Zhu
- Department of Neurosurgery, Baylor College of Medicine, Houston, United States
- Kristen B Smith
- Department of Neurosurgery, Baylor College of Medicine, Houston, United States
- Daniel Yoshor
- Department of Neurosurgery, Baylor College of Medicine, Houston, United States
21
Kolozsvári OB, Xu W, Leppänen PHT, Hämäläinen JA. Top-Down Predictions of Familiarity and Congruency in Audio-Visual Speech Perception at Neural Level. Front Hum Neurosci 2019; 13:243. [PMID: 31354459] [PMCID: PMC6639789] [DOI: 10.3389/fnhum.2019.00243]
Abstract
During speech perception, listeners rely on multimodal input and make use of both auditory and visual information. When presented with speech, for example syllables, the differences in brain responses to distinct stimuli are not, however, caused merely by the acoustic or visual features of the stimuli. The congruency of the auditory and visual information and the familiarity of a syllable, that is, whether it appears in the listener's native language or not, also modulates brain responses. We investigated how the congruency and familiarity of the presented stimuli affect brain responses to audio-visual (AV) speech in 12 adult Finnish native speakers and 12 adult Chinese native speakers. They watched videos of a Chinese speaker pronouncing syllables (/pa/, /pha/, /ta/, /tha/, /fa/) during a magnetoencephalography (MEG) measurement where only /pa/ and /ta/ were part of Finnish phonology while all the stimuli were part of Chinese phonology. The stimuli were presented in audio-visual (congruent or incongruent), audio only, or visual only conditions. The brain responses were examined in five time-windows: 75-125, 150-200, 200-300, 300-400, and 400-600 ms. We found significant differences for the congruency comparison in the fourth time-window (300-400 ms) in both sensor and source level analysis. Larger responses were observed for the incongruent stimuli than for the congruent stimuli. For the familiarity comparisons no significant differences were found. The results are in line with earlier studies reporting on the modulation of brain responses for audio-visual congruency around 250-500 ms. This suggests a much stronger process for the general detection of a mismatch between predictions based on lip movements and the auditory signal than for the top-down modulation of brain responses based on phonological information.
Affiliation(s)
- Orsolya B Kolozsvári
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland; Jyväskylä Centre for Interdisciplinary Brain Research (CIBR), University of Jyväskylä, Jyväskylä, Finland
- Weiyong Xu
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland; Jyväskylä Centre for Interdisciplinary Brain Research (CIBR), University of Jyväskylä, Jyväskylä, Finland
- Paavo H T Leppänen
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland; Jyväskylä Centre for Interdisciplinary Brain Research (CIBR), University of Jyväskylä, Jyväskylä, Finland
- Jarmo A Hämäläinen
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland; Jyväskylä Centre for Interdisciplinary Brain Research (CIBR), University of Jyväskylä, Jyväskylä, Finland

22
Nogueira W, Cosatti G, Schierholz I, Egger M, Mirkovic B, Buchner A. Toward Decoding Selective Attention From Single-Trial EEG Data in Cochlear Implant Users. IEEE Trans Biomed Eng 2019; 67:38-49. [PMID: 30932825 DOI: 10.1109/tbme.2019.2907638] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Previous results showed that it is possible to decode an attended speech source from EEG data via the reconstruction of the speech envelope in normal-hearing (NH) listeners. However, it is so far unknown how the performance of such a decoder is affected by the reduced spectral resolution and the electrical artifacts introduced by a cochlear implant (CI) in users of these prostheses. NH listeners and bilateral CI users participated in the present study. Speech from two audio books, one narrated by a male voice and one by a female voice, was presented to NH listeners and CI users. Participants were instructed to attend to one of the two speech streams presented dichotically while 96-channel EEG was recorded. Speech envelope reconstruction from the EEG data was obtained by training decoders using a regularized least-squares estimation method. Decoding accuracy was defined as the percentage of accurately reconstructed trials for each subject. For NH listeners, the experiment was repeated using a vocoder to reduce spectral resolution and simulate speech perception with a CI. The results showed a decoding accuracy of 80.9% using the original sound files in NH listeners. The performance dropped to 73.2% in the vocoder condition and to 71.5% in the group of CI users. In sum, although accuracy drops as spectral resolution worsens, the results show the feasibility of decoding the attended sound source in NH listeners with a vocoder simulation, and even in CI users, although more training data are needed.
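The envelope-reconstruction approach summarized in this abstract can be sketched as follows: lag-expanded EEG is regressed onto the attended speech envelope with regularized least squares (ridge), and each trial is labeled by whichever candidate envelope best correlates with the reconstruction. The lag count, regularization strength, and function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lagged_design(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel: (samples, channels * n_lags)."""
    n_samples, n_chan = eeg.shape
    X = np.zeros((n_samples, n_chan * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_chan:(lag + 1) * n_chan] = eeg[:n_samples - lag]
    return X

def train_decoder(eeg, envelope, n_lags=16, lam=1e2):
    """Regularized least-squares (ridge) weights mapping lagged EEG to the envelope."""
    X = lagged_design(eeg, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

def attended_stream(eeg, env_a, env_b, weights, n_lags=16):
    """Label a trial by the candidate envelope that best matches the reconstruction."""
    rec = lagged_design(eeg, n_lags) @ weights
    r_a = np.corrcoef(rec, env_a)[0, 1]
    r_b = np.corrcoef(rec, env_b)[0, 1]
    return "A" if r_a > r_b else "B"
```

Decoding accuracy is then the fraction of held-out trials labeled correctly; in practice such decoders are trained per subject and cross-validated across trials.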
23
Hultén A, Schoffelen JM, Uddén J, Lam NH, Hagoort P. How the brain makes sense beyond the processing of single words – An MEG study. Neuroimage 2019; 186:586-594. [DOI: 10.1016/j.neuroimage.2018.11.035] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2018] [Revised: 11/20/2018] [Accepted: 11/21/2018] [Indexed: 11/30/2022] Open
24
Richlan F. The Functional Neuroanatomy of Letter-Speech Sound Integration and Its Relation to Brain Abnormalities in Developmental Dyslexia. Front Hum Neurosci 2019; 13:21. [PMID: 30774591 PMCID: PMC6367238 DOI: 10.3389/fnhum.2019.00021] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2018] [Accepted: 01/18/2019] [Indexed: 01/20/2023] Open
Abstract
This mini-review provides a comparison of the brain systems associated with developmental dyslexia and the brain systems associated with letter-speech sound (LSS) integration. First, the findings on the functional neuroanatomy of LSS integration are summarized in order to obtain a comprehensive overview of the brain regions involved in this process. To this end, neurocognitive studies investigating LSS integration in both normal and abnormal reading development are taken into account. The neurobiological basis underlying LSS integration is consequently compared with existing neurocognitive models of functional and structural brain abnormalities in developmental dyslexia, focusing on superior temporal and occipito-temporal (OT) key regions. Ultimately, the commonalities and differences between the brain systems engaged by LSS integration and the brain systems identified with abnormalities in developmental dyslexia are investigated. This comparison will add to our understanding of the relation between LSS integration and normal and abnormal reading development.
Affiliation(s)
- Fabio Richlan
- Centre for Cognitive Neuroscience and Department of Psychology, University of Salzburg, Salzburg, Austria

25
Weiss Lucas C, Kallioniemi E, Neuschmelting V, Nettekoven C, Pieczewski J, Jonas K, Goldbrunner R, Karhu J, Grefkes C, Julkunen P. Cortical Inhibition of Face and Jaw Muscle Activity and Discomfort Induced by Repetitive and Paired-Pulse TMS During an Overt Object Naming Task. Brain Topogr 2019; 32:418-434. [DOI: 10.1007/s10548-019-00698-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2018] [Accepted: 01/16/2019] [Indexed: 01/27/2023]
26
McCloy DR, Lee AKC. Investigating the fit between phonological feature systems and brain responses to speech using EEG. Lang Cogn Neurosci 2019; 34:662-676. [PMID: 32984429 PMCID: PMC7518517 DOI: 10.1080/23273798.2019.1569246] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/14/2018] [Accepted: 01/03/2019] [Indexed: 06/11/2023]
Abstract
This paper describes a technique to assess the correspondence between patterns of similarity in the brain's response to speech sounds and the patterns of similarity encoded in phonological feature systems, by quantifying the recoverability of phonological features from the neural data using supervised learning. The technique is applied to EEG recordings collected during passive listening to consonant-vowel syllables. Three published phonological feature systems are compared, and are shown to differ in their ability to recover certain speech sound contrasts from the neural data. For the phonological feature system that best reflects patterns of similarity in the neural data, a leave-one-out analysis indicates some consistency across subjects in which features have greatest impact on the fit, but considerable across-subject heterogeneity remains in the rank ordering of features in this regard.
Affiliation(s)
- Daniel R McCloy
- University of Washington, Institute for Learning and Brain Sciences, Seattle, WA, United States
- Adrian K C Lee
- University of Washington, Institute for Learning and Brain Sciences, Seattle, WA, United States

27
Do ‘early’ brain responses reveal word form prediction during language comprehension? A critical review. Neurosci Biobehav Rev 2019; 96:367-400. [DOI: 10.1016/j.neubiorev.2018.11.019] [Citation(s) in RCA: 63] [Impact Index Per Article: 12.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2018] [Revised: 10/01/2018] [Accepted: 11/27/2018] [Indexed: 11/23/2022]
28
Traut T, Sardesh N, Bulubas L, Findlay A, Honma SM, Mizuiri D, Berger MS, Hinkley LB, Nagarajan SS, Tarapore PE. MEG imaging of recurrent gliomas reveals functional plasticity of hemispheric language specialization. Hum Brain Mapp 2018; 40:1082-1092. [PMID: 30549134 DOI: 10.1002/hbm.24430] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2018] [Revised: 10/04/2018] [Accepted: 10/05/2018] [Indexed: 11/09/2022] Open
Abstract
In patients with gliomas, changes in hemispheric specialization for language determined by magnetoencephalography (MEG) were analyzed to elucidate the impact of treatment and tumor recurrence on language networks. Demonstration of reorganization of language networks in these patients has significant implications for the prevention of postoperative functional loss and recovery. Whole-brain activity during an auditory verb generation task was estimated from MEG recordings in a group of 73 patients with recurrent gliomas. Hemisphere of language dominance was estimated using the language laterality index (LI), a measure derived from the task. The initial scan was performed prior to resection; patients subsequently underwent surgery and adjuvant treatment. A second scan was performed upon recurrence prior to repeat resection. The relationship between the shift in LI between scans and demographics, anatomic location, pathology, and adjuvant treatment was analyzed. Laterality shifts were observed between scans; the median percent change was 29.1% across all patients. Laterality shift magnitude and relative direction were associated with the initial position of language dominance; patients with increased lateralization experienced greater shifts than those presenting more bilateral representation. A change in LI from left or right to bilateral (or vice versa) occurred in 23.3% of patients; a complete switch occurred in 5.5% of patients. Patients with tumors within the language-dominant hemisphere experienced significantly greater shifts than those with contralateral tumors. The majority of patients with glioma experience shifts in language network organization over time which correlate with the relative position of language lateralization and tumor location.
Affiliation(s)
- Tavish Traut
- Biomagnetic Imaging Lab, Department of Radiology and Biomedical Imaging, University of California, San Francisco (UCSF), San Francisco, California
- Nina Sardesh
- Biomagnetic Imaging Lab, Department of Radiology and Biomedical Imaging, University of California, San Francisco (UCSF), San Francisco, California
- Lucia Bulubas
- Biomagnetic Imaging Lab, Department of Radiology and Biomedical Imaging, University of California, San Francisco (UCSF), San Francisco, California; Department of Neurosurgery, Klinikum Rechts der Isar, TU München, Munich, Germany; TUM-Neuroimaging Center, Klinikum Rechts der Isar, TU München, Munich, Germany
- Anne Findlay
- Biomagnetic Imaging Lab, Department of Radiology and Biomedical Imaging, University of California, San Francisco (UCSF), San Francisco, California
- Susanne M Honma
- Biomagnetic Imaging Lab, Department of Radiology and Biomedical Imaging, University of California, San Francisco (UCSF), San Francisco, California
- Danielle Mizuiri
- Biomagnetic Imaging Lab, Department of Radiology and Biomedical Imaging, University of California, San Francisco (UCSF), San Francisco, California
- Mitchel S Berger
- Department of Neurological Surgery, University of California, San Francisco (UCSF), San Francisco, California
- Leighton B Hinkley
- Biomagnetic Imaging Lab, Department of Radiology and Biomedical Imaging, University of California, San Francisco (UCSF), San Francisco, California
- Srikantan S Nagarajan
- Biomagnetic Imaging Lab, Department of Radiology and Biomedical Imaging, University of California, San Francisco (UCSF), San Francisco, California
- Phiroz E Tarapore
- Biomagnetic Imaging Lab, Department of Radiology and Biomedical Imaging, University of California, San Francisco (UCSF), San Francisco, California; Department of Neurological Surgery, University of California, San Francisco (UCSF), San Francisco, California

29
Liebenthal E, Möttönen R. An interactive model of auditory-motor speech perception. Brain Lang 2018; 187:33-40. [PMID: 29268943 PMCID: PMC6005717 DOI: 10.1016/j.bandl.2017.12.004] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2017] [Revised: 10/03/2017] [Accepted: 12/02/2017] [Indexed: 05/30/2023]
Abstract
Mounting evidence indicates a role in perceptual decoding of speech for the dorsal auditory stream connecting between temporal auditory and frontal-parietal articulatory areas. The activation time course in auditory, somatosensory and motor regions during speech processing is seldom taken into account in models of speech perception. We critically review the literature with a focus on temporal information, and contrast between three alternative models of auditory-motor speech processing: parallel, hierarchical, and interactive. We argue that electrophysiological and transcranial magnetic stimulation studies support the interactive model. The findings reveal that auditory and somatomotor areas are engaged almost simultaneously, before 100 ms. There is also evidence of early interactions between auditory and motor areas. We propose a new interactive model of auditory-motor speech perception in which auditory and articulatory somatomotor areas are connected from early stages of speech processing. We also discuss how attention and other factors can affect the timing and strength of auditory-motor interactions and propose directions for future research.
Affiliation(s)
- Einat Liebenthal
- Department of Psychiatry, Brigham & Women's Hospital, Harvard Medical School, Boston, USA
- Riikka Möttönen
- Department of Experimental Psychology, University of Oxford, Oxford, UK; School of Psychology, University of Nottingham, Nottingham, UK

30
Hari R, Baillet S, Barnes G, Burgess R, Forss N, Gross J, Hämäläinen M, Jensen O, Kakigi R, Mauguière F, Nakasato N, Puce A, Romani GL, Schnitzler A, Taulu S. IFCN-endorsed practical guidelines for clinical magnetoencephalography (MEG). Clin Neurophysiol 2018; 129:1720-1747. [PMID: 29724661 PMCID: PMC6045462 DOI: 10.1016/j.clinph.2018.03.042] [Citation(s) in RCA: 90] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2017] [Revised: 03/18/2018] [Accepted: 03/24/2018] [Indexed: 12/22/2022]
Abstract
Magnetoencephalography (MEG) records weak magnetic fields outside the human head and thereby provides millisecond-accurate information about neuronal currents supporting human brain function. MEG and electroencephalography (EEG) are closely related complementary methods and should be interpreted together whenever possible. This manuscript covers the basic physical and physiological principles of MEG and discusses the main aspects of state-of-the-art MEG data analysis. We provide guidelines for best practices of patient preparation, stimulus presentation, MEG data collection and analysis, as well as for MEG interpretation in routine clinical examinations. In 2017, about 200 whole-scalp MEG devices were in operation worldwide, many of them located in clinical environments. Yet, the established clinical indications for MEG examinations remain few, mainly restricted to the diagnostics of epilepsy and to preoperative functional evaluation of neurosurgical patients. We are confident that the extensive ongoing basic MEG research indicates potential for the evaluation of neurological and psychiatric syndromes, developmental disorders, and the integrity of cortical brain networks after stroke. Basic and clinical research is, thus, paving the way for new clinical applications to be identified by an increasing number of practitioners of MEG.
Affiliation(s)
- Riitta Hari
- Department of Art, Aalto University, Helsinki, Finland
- Sylvain Baillet
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Gareth Barnes
- Wellcome Centre for Human Neuroimaging, University College of London, London, UK
- Richard Burgess
- Epilepsy Center, Neurological Institute, Cleveland Clinic, Cleveland, OH, USA
- Nina Forss
- Clinical Neuroscience, Neurology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Joachim Gross
- Centre for Cognitive Neuroimaging, University of Glasgow, Glasgow, UK; Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Germany
- Matti Hämäläinen
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA; NatMEG, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Ole Jensen
- Centre for Human Brain Health, University of Birmingham, Birmingham, UK
- Ryusuke Kakigi
- Department of Integrative Physiology, National Institute of Physiological Sciences, Okazaki, Japan
- François Mauguière
- Department of Functional Neurology and Epileptology, Neurological Hospital & University of Lyon, Lyon, France
- Aina Puce
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Gian-Luca Romani
- Department of Neuroscience, Imaging and Clinical Sciences, Università degli Studi G. D'Annunzio, Chieti, Italy
- Alfons Schnitzler
- Institute of Clinical Neuroscience and Medical Psychology, and Department of Neurology, Heinrich-Heine-University, Düsseldorf, Germany
- Samu Taulu
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA; Department of Physics, University of Washington, Seattle, WA, USA

31
Wilenius J, Lehtinen H, Paetau R, Salmelin R, Kirveskari E. A simple magnetoencephalographic auditory paradigm may aid in confirming left-hemispheric language dominance in epilepsy patients. PLoS One 2018; 13:e0200073. [PMID: 29966017 PMCID: PMC6028140 DOI: 10.1371/journal.pone.0200073] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2017] [Accepted: 06/19/2018] [Indexed: 11/19/2022] Open
Abstract
OBJECTIVE The intracarotid amobarbital procedure (IAP) is the current "gold standard" in the preoperative assessment of language lateralization in epilepsy surgery candidates. It is, however, invasive and has several limitations. Here we tested a simple noninvasive language lateralization test performed with magnetoencephalography (MEG). METHODS We recorded auditory MEG responses to pairs of vowels and pure tones in 16 epilepsy surgery candidates who had undergone IAP. For each individual, we selected the pair of planar gradiometer sensors with the strongest N100m response to vowels in each hemisphere and, from the vector sum of signals of this gradiometer pair, calculated the vowel/tone amplitude ratio in the left (L) and right (R) hemisphere and, subsequently, the laterality index: LI = (L-R)/(L+R). In addition to the analysis using a single sensor pair, an alternative analysis was performed using averaged responses over 18 temporal sensor pairs in both hemispheres. RESULTS The laterality index did not correlate significantly with the lateralization data obtained from the IAP. However, an MEG pattern of stronger responses to vowels than tones in the left hemisphere and stronger responses to tones than vowels in the right hemisphere was associated with left-hemispheric language dominance in the IAP in all the six patients who showed this pattern. This results in a specificity of 100% and a sensitivity of 67% of this MEG pattern in predicting left-hemispheric language dominance (p = 0.01, Fisher's exact test). In the analysis using averaged responses over temporal channels, one additional patient who was left-dominant in IAP showed this particular MEG pattern, increasing the sensitivity to 78% (p = 0.003). SIGNIFICANCE This simple MEG paradigm shows promise in feasibly and noninvasively confirming left-hemispheric language dominance in epilepsy surgery candidates. It may aid in reducing the need for the IAP, if the results are confirmed in larger patient samples.
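The two measures in this abstract are simple to compute. As a minimal sketch (variable names are illustrative): the amplitude of a planar gradiometer pair is the vector sum of its two orthogonal signals, and the laterality index follows the stated formula LI = (L-R)/(L+R).

```python
import numpy as np

def vector_sum(grad_lat, grad_long):
    """Amplitude of a planar gradiometer pair: vector sum of its two orthogonal signals."""
    return np.sqrt(np.square(grad_lat) + np.square(grad_long))

def laterality_index(L, R):
    """LI = (L - R) / (L + R), where L and R are the left- and right-hemisphere
    vowel/tone N100m amplitude ratios; LI > 0 indicates leftward lateralization."""
    return (L - R) / (L + R)
```

For example, a left-hemisphere ratio of 1.5 against a right-hemisphere ratio of 0.5 gives LI = 0.5, i.e. clearly left-lateralized.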
Affiliation(s)
- Juha Wilenius
- Clinical Neurosciences, Department of Clinical Neurophysiology, HUS Medical Imaging Center, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Henri Lehtinen
- Epilepsy Unit, Department of Pediatric Neurology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Ritva Paetau
- Clinical Neurosciences, Department of Clinical Neurophysiology, HUS Medical Imaging Center, University of Helsinki and Helsinki University Hospital, Helsinki, Finland; Epilepsy Unit, Department of Pediatric Neurology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Riitta Salmelin
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Erika Kirveskari
- Clinical Neurosciences, Department of Clinical Neurophysiology, HUS Medical Imaging Center, University of Helsinki and Helsinki University Hospital, Helsinki, Finland

32
Large-scale functional networks connect differently for processing words and symbol strings. PLoS One 2018; 13:e0196773. [PMID: 29718993 PMCID: PMC5931649 DOI: 10.1371/journal.pone.0196773] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2017] [Accepted: 04/19/2018] [Indexed: 11/19/2022] Open
Abstract
Reconfigurations of synchronized large-scale networks are thought to be central neural mechanisms that support cognition and behavior in the human brain. Magnetoencephalography (MEG) recordings together with recent advances in network analysis now allow for sub-second snapshots of such networks. In the present study, we compared frequency-resolved functional connectivity patterns underlying reading of single words and visual recognition of symbol strings. Word reading emphasized coherence in a left-lateralized network with nodes in classical perisylvian language regions, whereas symbol processing recruited a bilateral network, including connections between frontal and parietal regions previously associated with spatial attention and visual working memory. Our results illustrate the flexible nature of functional networks, whereby processing of different form categories, written words vs. symbol strings, leads to the formation of large-scale functional networks that operate at distinct oscillatory frequencies and incorporate task-relevant regions. These results suggest that category-specific processing should be viewed not so much as a local process but as a distributed neural process implemented in signature networks. For words, increased coherence was detected particularly in the alpha (8-13 Hz) and high gamma (60-90 Hz) frequency bands, whereas increased coherence for symbol strings was observed in the high beta (21-29 Hz) and low gamma (30-45 Hz) frequency range. These findings attest to the role of coherence in specific frequency bands as a general mechanism for integrating stimulus-dependent information across brain regions.
33
Presurgical electromagnetic functional brain mapping in refractory focal epilepsy. Z Epileptol 2018. [DOI: 10.1007/s10309-018-0189-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
34
Ozker M, Yoshor D, Beauchamp MS. Converging Evidence From Electrocorticography and BOLD fMRI for a Sharp Functional Boundary in Superior Temporal Gyrus Related to Multisensory Speech Processing. Front Hum Neurosci 2018; 12:141. [PMID: 29740294 PMCID: PMC5928751 DOI: 10.3389/fnhum.2018.00141] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2018] [Accepted: 03/28/2018] [Indexed: 01/15/2023] Open
Abstract
Although humans can understand speech using the auditory modality alone, in noisy environments visual speech information from the talker’s mouth can rescue otherwise unintelligible auditory speech. To investigate the neural substrates of multisensory speech perception, we compared neural activity from the human superior temporal gyrus (STG) in two datasets. One dataset consisted of direct neural recordings (electrocorticography, ECoG) from surface electrodes implanted in epilepsy patients (this dataset has been previously published). The second dataset consisted of indirect measures of neural activity using blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI). Both ECoG and fMRI participants viewed the same clear and noisy audiovisual speech stimuli and performed the same speech recognition task. Both techniques demonstrated a sharp functional boundary in the STG, spatially coincident with an anatomical boundary defined by the posterior edge of Heschl’s gyrus. Cortex on the anterior side of the boundary responded more strongly to clear audiovisual speech than to noisy audiovisual speech while cortex on the posterior side of the boundary did not. For both ECoG and fMRI measurements, the transition between the functionally distinct regions happened within 10 mm of anterior-to-posterior distance along the STG. We relate this boundary to the multisensory neural code underlying speech perception and propose that it represents an important functional division within the human speech perception network.
Affiliation(s)
- Muge Ozker
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, United States
- Daniel Yoshor
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, United States; Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, United States
- Michael S Beauchamp
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, United States

35
Hyder R, Kamel N, Boon TT, Reza F. Mapping of language brain areas in patients with brain tumors. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2015:626-9. [PMID: 26736340 DOI: 10.1109/embc.2015.7318440] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Language cortex in the human brain shows high variability among normal individuals and may exhibit a considerable shift from its original position due to tumor growth. Mapping the precise location of language areas is important before surgery to avoid postoperative language deficits. In this paper, magnetoencephalography (MEG) recordings and MRI scans of six subjects with brain tumors are used to localize the language-specific areas. MEG recordings were performed during two silent reading tasks: silent word reading and silent picture naming. MEG source imaging was performed using a distributed source modeling technique called CLARA ("Classical LORETA Analysis Recursively Applied"). Estimated MEG sources were overlaid on the individual MRI of each patient to improve interpretation of MEG source imaging results. The results show successful identification of the essential language areas and clear definition of the time course of neural activation connecting them.
36
Hakala T, Hultén A, Lehtonen M, Lagus K, Salmelin R. Information properties of morphologically complex words modulate brain activity during word reading. Hum Brain Mapp 2018. [PMID: 29524274 PMCID: PMC5969226 DOI: 10.1002/hbm.24025] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
Neuroimaging studies of the reading process point to functionally distinct stages in word recognition. Yet, current understanding of the operations linked to those various stages is mainly descriptive in nature. Approaches developed in the field of computational linguistics may offer a more quantitative approach for understanding brain dynamics. Our aim was to evaluate whether a statistical model of morphology, with well‐defined computational principles, can capture the neural dynamics of reading, using the concept of surprisal from information theory as the common measure. The Morfessor model, created for unsupervised discovery of morphemes, is based on the minimum description length principle and attempts to find optimal units of representation for complex words. In a word recognition task, we correlated brain responses to word surprisal values derived from Morfessor and from other psycholinguistic variables that have been linked with various levels of linguistic abstraction. The magnetoencephalography data analysis focused on spatially, temporally and functionally distinct components of cortical activation observed in reading tasks. The early occipital and occipito‐temporal responses were correlated with parameters relating to visual complexity and orthographic properties, whereas the later bilateral superior temporal activation was correlated with whole‐word based and morphological models. The results show that the word processing costs estimated by the statistical Morfessor model are relevant for brain dynamics of reading during late processing stages.
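The common currency linking the Morfessor model and the other psycholinguistic predictors in this abstract is surprisal. As a minimal illustration (not the study's code), the surprisal of an event is the negative log of its probability, and under a morpheme-independence assumption a complex word's cost is the sum of its morphemes' surprisals:

```python
import math

def surprisal(p):
    """Information-theoretic surprisal, in bits, of an event with probability p."""
    return -math.log2(p)

def word_surprisal(morpheme_probs):
    """Summed surprisal of a word's morphemes, assuming independent morphemes."""
    return sum(surprisal(p) for p in morpheme_probs)
```

A morpheme occurring with probability 1/8 contributes 3 bits; rarer units contribute more, which is what lets surprisal serve as an estimate of word-processing cost.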
Affiliation(s)
- Tero Hakala
- Department of Neuroscience and Biomedical Engineering, Aalto University, Helsinki, Finland; Aalto NeuroImaging, Aalto University, Helsinki, Finland
- Annika Hultén
- Department of Neuroscience and Biomedical Engineering, Aalto University, Helsinki, Finland; Aalto NeuroImaging, Aalto University, Helsinki, Finland
- Minna Lehtonen
- Department of Psychology, Åbo Akademi University, Turku, Finland; MultiLing Center for Multilingualism in Society across the Lifespan, Department of Linguistics and Scandinavian studies, University of Oslo, Oslo, Norway; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Krista Lagus
- Department of Political and Economic Studies, University of Helsinki, Helsinki, Finland
- Riitta Salmelin
- Department of Neuroscience and Biomedical Engineering, Aalto University, Helsinki, Finland; Aalto NeuroImaging, Aalto University, Helsinki, Finland

37
Babajani-Feremi A, Holder CM, Narayana S, Fulton SP, Choudhri AF, Boop FA, Wheless JW. Predicting postoperative language outcome using presurgical fMRI, MEG, TMS, and high gamma ECoG. Clin Neurophysiol 2018; 129:560-571. [PMID: 29414401 DOI: 10.1016/j.clinph.2017.12.031] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2017] [Revised: 11/17/2017] [Accepted: 12/05/2017] [Indexed: 11/16/2022]
Abstract
OBJECTIVE To predict the postoperative language outcome using the support vector regression (SVR) and results of multimodal presurgical language mapping. METHODS Eleven patients with epilepsy received presurgical language mapping using functional MRI (fMRI), magnetoencephalography (MEG), transcranial magnetic stimulation (TMS), and high-gamma electrocorticography (hgECoG), as well as pre- and postoperative neuropsychological evaluation of language. We constructed 15 (2^4 - 1) SVR models by considering the extent of resected language areas identified by all subsets of four modalities as input feature vector and the postoperative language outcome as output. We trained and cross-validated SVR models, and compared the cross-validation (CV) errors of all models for prediction of language outcome. RESULTS Seven patients had some level of postoperative language decline and two of them had significant postoperative decline in naming. Some parts of language areas identified by four modalities were resected in these patients. We found that an SVR model consisting of fMRI, MEG, and hgECoG provided minimum CV error, although an SVR model consisting of fMRI and MEG was the optimal model that facilitated the best trade-off between model complexity and prediction accuracy. CONCLUSIONS A multimodal SVR can be used to predict the language outcome. SIGNIFICANCE The developed multimodal SVR models in this study can be utilized to calculate the language outcomes of different resection plans prior to surgery and select the optimal surgical plan.
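The model-selection scheme described above (one SVR per non-empty subset of the four modalities, compared by cross-validation error) can be sketched as follows. This uses scikit-learn rather than the authors' code, and the leave-one-out scorer and helper names are illustrative assumptions:

```python
from itertools import combinations

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVR

MODALITIES = ("fMRI", "MEG", "TMS", "hgECoG")

def candidate_subsets():
    """All 2^4 - 1 = 15 non-empty modality combinations, as column-index tuples."""
    return [cols for r in range(1, len(MODALITIES) + 1)
            for cols in combinations(range(len(MODALITIES)), r)]

def best_subset(X, y):
    """X: (patients, 4) resected extents per modality; y: language-outcome change.
    Return the subset whose SVR has the lowest leave-one-out CV error."""
    cv_error = {}
    for cols in candidate_subsets():
        scores = cross_val_score(SVR(), X[:, cols], y,
                                 cv=LeaveOneOut(),
                                 scoring="neg_mean_squared_error")
        cv_error[cols] = -scores.mean()
    return min(cv_error, key=cv_error.get)
```

Note that, as the abstract reports, the lowest-CV-error subset need not be the one ultimately chosen: the authors preferred the smaller fMRI+MEG model as the best trade-off between model complexity and prediction accuracy.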
Affiliation(s)
- Abbas Babajani-Feremi: University of Tennessee Health Science Center, Department of Pediatrics and Department of Anatomy and Neurobiology, Le Bonheur Children's Hospital, Neuroscience Institute, Memphis, TN, USA
- Christen M Holder: University of Tennessee Health Science Center, Department of Pediatrics, Le Bonheur Children's Hospital, Neuroscience Institute, Memphis, TN, USA
- Shalini Narayana: University of Tennessee Health Science Center, Department of Pediatrics and Department of Anatomy and Neurobiology, Le Bonheur Children's Hospital, Neuroscience Institute, Memphis, TN, USA
- Stephen P Fulton: University of Tennessee Health Science Center, Department of Pediatrics, Le Bonheur Children's Hospital, Neuroscience Institute, Memphis, TN, USA
- Asim F Choudhri: University of Tennessee Health Science Center, Department of Pediatrics, Le Bonheur Children's Hospital, Neuroscience Institute, Memphis, TN, USA
- Frederick A Boop: University of Tennessee Health Science Center, Department of Pediatrics, Le Bonheur Children's Hospital, Neuroscience Institute, Memphis, TN, USA
- James W Wheless: University of Tennessee Health Science Center, Department of Pediatrics, Le Bonheur Children's Hospital, Neuroscience Institute, Memphis, TN, USA
38
Wan N, Hancock AS, Moon TK, Gillam RB. A functional near-infrared spectroscopic investigation of speech production during reading. Hum Brain Mapp 2017; 39:1428-1437. PMID: 29266623. DOI: 10.1002/hbm.23932.
Abstract
This study was designed to test the extent to which speaking processes related to articulation and voicing influence Functional Near Infrared Spectroscopy (fNIRS) measures of cortical hemodynamics and functional connectivity. Participants read passages in three conditions (oral reading, silent mouthing, and silent reading) while undergoing fNIRS imaging. Area under the curve (AUC) analyses of the oxygenated and deoxygenated hemodynamic response function concentration values were compared for each task across five regions of interest. There were significant region main effects for both oxy and deoxy AUC analyses, and a significant region × task interaction for deoxy AUC favoring the oral reading condition over the silent reading condition for two nonmotor regions. Assessment of functional connectivity using Granger Causality revealed stronger networks between motor areas during oral reading and stronger networks between language areas during silent reading. There was no evidence that the hemodynamic flow from motor areas during oral reading compromised measures of language-related neural activity in nonmotor areas. However, speech movements had small, but measurable effects on fNIRS measures of neural connections between motor and nonmotor brain areas across the perisylvian region, even after wavelet filtering. Therefore, researchers studying speech processes with fNIRS should use wavelet filtering during preprocessing to reduce speech motion artifacts, incorporate a nonspeech communication or language control task into the research design, and conduct a connectivity analysis to adequately assess the impact of functional speech on the hemodynamic response across the perisylvian region.
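The area-under-the-curve comparison used in this study can be illustrated with a minimal sketch. The abstract does not specify the integration method, so the trapezoidal rule used here is an assumption, not the paper's documented procedure:

```python
def auc_trapezoid(samples, dt=1.0):
    # Area under a sampled hemodynamic response curve via the
    # trapezoidal rule. `samples` are concentration values at evenly
    # spaced time points; `dt` is the sampling interval.
    # NOTE: the study's exact AUC computation is not stated; this is a
    # standard stand-in for illustration.
    return sum((samples[i] + samples[i + 1]) * dt / 2.0
               for i in range(len(samples) - 1))

# A symmetric triangular response with unit peak over two intervals
# integrates to base * height / 2 = 1.0:
area = auc_trapezoid([0.0, 1.0, 0.0], dt=1.0)
```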
Affiliation(s)
- Nick Wan: Department of Psychology, Utah State University, Logan, Utah, 84321
- Allison S Hancock: Department of Communicative Disorders and Deaf Education, Emma Eccles Jones Early Childhood Education and Research Center, Utah State University, Logan, Utah, 84321
- Todd K Moon: Department of Electrical and Computer Engineering, Utah State University, Logan, Utah, 84321
- Ronald B Gillam: Department of Communicative Disorders and Deaf Education, Emma Eccles Jones Early Childhood Education and Research Center, Utah State University, Logan, Utah, 84321
39
Golob EJ, Lewald J, Getzmann S, Mock JR. Numerical value biases sound localization. Sci Rep 2017; 7:17252. PMID: 29222526. PMCID: PMC5722947. DOI: 10.1038/s41598-017-17429-4.
Abstract
Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perceptual judgments of sound location as a function of digit magnitude (1–9). The main finding was that for stimuli presented near the median plane there was a linear left-to-right bias for localizing smaller-to-larger numbers. At lateral locations there was a central-eccentric location bias in the pointing task, and either a bias restricted to the smaller numbers (left side) or no significant number bias (right side). Prior number location also biased subsequent number judgments towards the opposite side. Findings support a lexical influence on auditory spatial perception, with a linear mapping near midline and more complex relations at lateral locations. Results may reflect coding of dedicated spatial channels, with two representing lateral positions in each hemispace, and the midline area represented by either their overlap or a separate third channel.
Affiliation(s)
- Edward J Golob: Department of Psychology, Tulane University, New Orleans, LA, USA; Program in Neuroscience, Tulane University, New Orleans, LA, USA; Department of Psychology, University of Texas, San Antonio, USA
- Jörg Lewald: Faculty of Psychology, Ruhr University Bochum, D-44780 Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Ardeystrasse 67, D-44139 Dortmund, Germany
- Stephan Getzmann: Faculty of Psychology, Ruhr University Bochum, D-44780 Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Ardeystrasse 67, D-44139 Dortmund, Germany
- Jeffrey R Mock: Department of Psychology, Tulane University, New Orleans, LA, USA; Department of Psychology, University of Texas, San Antonio, USA
40
Eye Can Hear Clearly Now: Inverse Effectiveness in Natural Audiovisual Speech Processing Relies on Long-Term Crossmodal Temporal Integration. J Neurosci 2017; 36:9888-9895. PMID: 27656026. DOI: 10.1523/jneurosci.1396-16.2016.
Abstract
Speech comprehension is improved by viewing a speaker's face, especially in adverse hearing conditions, a principle known as inverse effectiveness. However, the neural mechanisms that help to optimize how we integrate auditory and visual speech in such suboptimal conversational environments are not yet fully understood. Using human EEG recordings, we examined how visual speech enhances the cortical representation of auditory speech at a signal-to-noise ratio that maximized the perceptual benefit conferred by multisensory processing relative to unisensory processing. We found that the influence of visual input on the neural tracking of the audio speech signal was significantly greater in noisy than in quiet listening conditions, consistent with the principle of inverse effectiveness. Although envelope tracking during audio-only speech was greatly reduced by background noise at an early processing stage, it was markedly restored by the addition of visual speech input. In background noise, multisensory integration occurred at much lower frequencies and was shown to predict the multisensory gain in behavioral performance at a time lag of ∼250 ms. Critically, we demonstrated that inverse effectiveness, in the context of natural audiovisual (AV) speech processing, relies on crossmodal integration over long temporal windows. Our findings suggest that disparate integration mechanisms contribute to the efficient processing of AV speech in background noise. SIGNIFICANCE STATEMENT The behavioral benefit of seeing a speaker's face during conversation is especially pronounced in challenging listening environments. However, the neural mechanisms underlying this phenomenon, known as inverse effectiveness, have not yet been established. Here, we examine this in the human brain using natural speech-in-noise stimuli that were designed specifically to maximize the behavioral benefit of audiovisual (AV) speech.
We find that this benefit arises from our ability to integrate multimodal information over longer periods of time. Our data also suggest that the addition of visual speech restores early tracking of the acoustic speech signal during excessive background noise. These findings support and extend current mechanistic perspectives on AV speech perception.
41
Raghavan M, Li Z, Carlson C, Anderson CT, Stout J, Sabsevitz DS, Swanson SJ, Binder JR. MEG language lateralization in partial epilepsy using dSPM of auditory event-related fields. Epilepsy Behav 2017; 73:247-255. PMID: 28662463. DOI: 10.1016/j.yebeh.2017.06.002.
Abstract
OBJECTIVE Methods employed to determine hemispheric language dominance using magnetoencephalography (MEG) have differed significantly across studies in the choice of language-task, the nature of the physiological response studied, recording hardware, and source modeling methods. Our goal was to determine whether an analysis based on distributed source modeling can replicate the results of prior studies that have used dipole-modeling of event-related fields (ERFs) generated by an auditory word-recognition task to determine language dominance in patients with epilepsy. METHODS We analyzed data from 45 adult patients with drug-resistant partial epilepsy who performed an auditory word-recognition task during MEG recording and also completed a language fMRI study as part of their evaluation for epilepsy surgery. Source imaging of auditory ERFs was performed using dynamic statistical parametric mapping (dSPM). Language laterality indices (LIs) were calculated for four regions of interest (ROIs) by counting above-threshold activations within a 300-600ms time window after stimulus onset. Language laterality (LL) classifications based on these LIs were compared to the results from fMRI. RESULTS The most lateralized MEG responses to language stimuli were observed in a parietal region that included the angular and supramarginal gyri (AngSmg). In this region, using a half-maximal threshold, source activations were left dominant in 32 (71%) patients, right dominant in 8 (18%), and symmetric in 5 patients (11%). The best agreement between MEG and fMRI on the ternary classification of regional language dominance into left, right, or symmetric groups was also found at the AngSmg ROI (69%). This was followed by the whole-hemisphere and temporal ROIs (both 62%). The frontal ROI showed the least agreement with fMRI (51%). 
Gross discordances between MEG and fMRI findings were disproportionately of the type where MEG favored atypical right-hemispheric language in a patient with right-hemispheric seizure origin (p<0.05 at three of the four ROIs). SIGNIFICANCE In a parietal region that includes the angular and supramarginal gyri, language laterality estimates based on dSPM of ERFs during auditory word-recognition show a degree of MEG-fMRI concordance that is comparable to previously published estimates for MEG-Wada concordance using dipole counting methods and the same task. Our data also suggest that MEG language laterality estimates based on this task may be influenced by the laterality of epileptic networks in some patients. This has not been reported previously and deserves further study.
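The regional laterality computation this abstract describes, counting above-threshold activations in left vs. right ROIs and classifying dominance ternarily, follows the standard laterality-index form LI = (L − R)/(L + R). The sketch below uses that standard form; the exact formula and the symmetry threshold are illustrative assumptions, not values taken from the paper:

```python
def laterality_index(left_count, right_count):
    # Standard laterality index: LI = (L - R) / (L + R).
    # +1 means fully left-lateralized, -1 fully right-lateralized.
    total = left_count + right_count
    if total == 0:
        raise ValueError("no above-threshold activations in either ROI")
    return (left_count - right_count) / total

def classify(li, threshold=0.1):
    # Ternary classification into left / right / symmetric.
    # NOTE: the threshold value here is a hypothetical choice for
    # illustration; the paper's criterion is not stated in the abstract.
    if li > threshold:
        return "left"
    if li < -threshold:
        return "right"
    return "symmetric"
```

For example, 80 above-threshold left-ROI activations against 20 right-ROI activations give LI = 0.6 and a "left" classification.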
Affiliation(s)
- Manoj Raghavan: Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- Zhimin Li: Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- Chad Carlson: Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- Jeffrey Stout: Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- David S Sabsevitz: Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- Sara J Swanson: Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- Jeffrey R Binder: Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
42
Abstract
We live our lives surrounded by symbols (e.g., road signs, logos, but especially words and numbers), and throughout our life we use them to evoke, communicate and reflect upon ideas and things that are not currently present to our senses. Symbols are represented in our brains at different levels of complexity: at the first and most simple level, as physical entities, in the corresponding primary and secondary sensory cortices. The crucial property of symbols, however, is that, despite the simplicity of their surface forms, they have the power of evoking higher order multifaceted representations that are implemented in distributed neural networks spanning a large portion of the cortex. The rich internal states that reflect our knowledge of the meaning of symbols are what we call semantic representations. In this review paper, we summarize our current knowledge of both the cognitive and neural substrates of semantic representations, focusing on concrete words (i.e., nouns or verbs referring to concrete objects and actions), which, together with numbers, are the most-studied and well defined classes of symbols. Following a systematic descriptive approach, we will organize this literature review around two key questions: what is the content of semantic representations? And, how are semantic representations implemented in the brain, in terms of localization and dynamics? While highlighting the main current opposing perspectives on these topics, we propose that a fruitful way to make substantial progress in this domain would be to adopt a geometrical view of semantic representations as points in high dimensional space, and to operationally partition the space of concrete word meaning into motor-perceptual and conceptual dimensions. By giving concrete examples of the kinds of research that can be done within this perspective, we illustrate how we believe this framework will foster theoretical speculations as well as empirical research.
Affiliation(s)
- Valentina Borghesani: École Doctorale Cerveau-Cognition-Comportement, Université Pierre et Marie Curie - Paris 6, 75005 Paris, France; Cognitive Neuroimaging Unit, Institut National de la Santé et de la Recherche Médicale, U992, F-91191 Gif/Yvette, France; Center for Mind/Brain Sciences, University of Trento, 38068 Rovereto, Italy
- Manuela Piazza: Cognitive Neuroimaging Unit, Institut National de la Santé et de la Recherche Médicale, U992, F-91191 Gif/Yvette, France; Center for Mind/Brain Sciences, University of Trento, 38068 Rovereto, Italy
43
Butorina AV, Pavlova AA, Nikolaeva AY, Prokofyev AO, Bondarev DP, Stroganova TA. Simultaneous Processing of Noun Cue and to-be-Produced Verb in Verb Generation Task: Electromagnetic Evidence. Front Hum Neurosci 2017; 11:279. PMID: 28611613. PMCID: PMC5447679. DOI: 10.3389/fnhum.2017.00279.
Abstract
A long-standing but implicit assumption is that words strongly associated with a presented cue are automatically activated in the memory through rapid spread of activation within brain semantic networks. The current study was aimed to provide direct evidence of such rapid access to words’ semantic representations and to investigate its neural sources using magnetoencephalography (MEG) and distributed source localization technique. Thirty-three neurotypical subjects underwent the MEG recording during verb generation task, which was to produce verbs related to the presented noun cues. Brain responses evoked by the noun cues were examined while manipulating the strength of association between the noun and the potential verb responses. The strong vs. weak noun-verb association led to a greater noun-related neural response at 250–400 ms after cue onset, and faster verb production. The cortical sources of the differential response were localized in left temporal pole, previously implicated in semantic access, and left ventrolateral prefrontal cortex (VLPFC), thought to subserve controlled semantic retrieval. The strength of the left VLPFC’s response to the nouns with strong verb associates was positively correlated with the speed of verb production. Our findings empirically validate the theoretical expectation that in the case of a strongly connected noun-verb pair, successful access to the target verb representation may occur already at the stage of lexico-semantic analysis of the presented noun. Moreover, the MEG results suggest that, contrary to the previous conclusion derived from fMRI studies, the left VLPFC supports selection of the target verb representations, even if they were retrieved from semantic memory rapidly and effortlessly. The discordance between MEG and fMRI findings in the verb generation task may stem from different modes of neural activation captured by phase-locked activity in MEG and slow changes of blood-oxygen-level-dependent (BOLD) signal in fMRI.
Affiliation(s)
- Anna V Butorina: MEG Center, Moscow State University of Psychology and Education, Moscow, Russia
- Anna A Pavlova: MEG Center, Moscow State University of Psychology and Education, Moscow, Russia
- Andrey O Prokofyev: MEG Center, Moscow State University of Psychology and Education, Moscow, Russia
- Denis P Bondarev: MEG Center, Moscow State University of Psychology and Education, Moscow, Russia; National Research Center "Kurchatov Institute", Moscow, Russia
44
Papanicolaou AC, Kilintari M, Rezaie R, Narayana S, Babajani-Feremi A. The Role of the Primary Sensory Cortices in Early Language Processing. J Cogn Neurosci 2017; 29:1755-1765. PMID: 28557692. DOI: 10.1162/jocn_a_01147.
Abstract
The results of this magnetoencephalography study challenge two long-standing assumptions regarding the brain mechanisms of language processing: First, that linguistic processing proper follows sensory feature processing effected by bilateral activation of the primary sensory cortices that lasts about 100 msec from stimulus onset. Second, that subsequent linguistic processing is effected by left hemisphere networks outside the primary sensory areas, including Broca's and Wernicke's association cortices. Here we present evidence that linguistic analysis begins almost synchronously with sensory, prelinguistic verbal input analysis and that the primary cortices are also engaged in these linguistic analyses and become, consequently, part of the left hemisphere language network during language tasks. These findings call for extensive revision of our conception of linguistic processing in the brain.
Affiliation(s)
- Andrew C Papanicolaou: University of Tennessee Health Science Center; Le Bonheur Children's Hospital, Memphis, TN
- Marina Kilintari: University of Tennessee Health Science Center; Le Bonheur Children's Hospital, Memphis, TN; University College London
- Roozbeh Rezaie: University of Tennessee Health Science Center; Le Bonheur Children's Hospital, Memphis, TN
- Shalini Narayana: University of Tennessee Health Science Center; Le Bonheur Children's Hospital, Memphis, TN
- Abbas Babajani-Feremi: University of Tennessee Health Science Center; Le Bonheur Children's Hospital, Memphis, TN
45

46
Maezawa H. Cortical Mechanisms of Tongue Sensorimotor Functions in Humans: A Review of the Magnetoencephalography Approach. Front Hum Neurosci 2017; 11:134. PMID: 28400725. PMCID: PMC5368248. DOI: 10.3389/fnhum.2017.00134.
Abstract
The tongue plays important roles in a variety of critical human oral functions, including speech production, swallowing, mastication and respiration. These sophisticated tongue movements are in part finely regulated by cortical entrainment. Many studies have examined sensorimotor processing in the limbs using magnetoencephalography (MEG), which has high spatiotemporal resolution. Such studies have employed multiple methods of analysis, including somatosensory evoked fields (SEFs), movement-related cortical fields (MRCFs), event-related desynchronization/synchronization (ERD/ERS) associated with somatosensory stimulation or movement and cortico-muscular coherence (CMC) during sustained movement. However, the cortical mechanisms underlying the sensorimotor functions of the tongue remain unclear, as contamination artifacts induced by stimulation and/or muscle activity within the orofacial region complicates MEG analysis in the oral region. Recently, several studies have obtained MEG recordings from the tongue region using improved stimulation methods and movement tasks. In the present review, we provide a detailed overview of tongue sensorimotor processing in humans, based on the findings of recent MEG studies. In addition, we review the clinical applications of MEG for sensory disturbances of the tongue caused by damage to the lingual nerve. Increased knowledge of the physiological and pathophysiological mechanisms underlying tongue sensorimotor processing may improve our understanding of the cortical entrainment of human oral functions.
Affiliation(s)
- Hitoshi Maezawa: Department of Oral Physiology, Graduate School of Dental Medicine, Hokkaido University, Sapporo, Japan
47
Indexing cortical entrainment to natural speech at the phonemic level: Methodological considerations for applied research. Hear Res 2017; 348:70-77. PMID: 28246030. DOI: 10.1016/j.heares.2017.02.015.
Abstract
Speech is central to human life. As such, any delay or impairment in receptive speech processing can have a profoundly negative impact on the social and professional life of a person. Thus, being able to assess the integrity of speech processing in different populations is an important goal. Current standardized assessment is mostly based on psychometric measures that do not capture the full extent of a person's speech processing abilities and that are difficult to administer in some subject groups. A potential alternative to these tests would be to derive "direct", objective measures of speech processing from cortical activity. One such approach was recently introduced and showed that it is possible to use electroencephalography (EEG) to index cortical processing at the level of phonemes from responses to continuous natural speech. However, a large amount of data was required for such analyses. This limits the usefulness of this approach for assessing speech processing in particular cohorts for whom data collection is difficult. Here, we used EEG data from 10 subjects to assess whether measures reflecting phoneme-level processing could be reliably obtained using only 10 min of recording time from each subject. This was done successfully using a generic modeling approach wherein the data from a training group composed of 9 subjects were combined to derive robust predictions of the EEG signal for new subjects. This allowed the derivation of indices of cortical activity at the level of phonemes and the disambiguation of responses to specific phonetic features (e.g., stop, plosive, and nasal consonants) with limited data. This objective approach has the potential to complement psychometric measures of speech processing in a wide variety of subjects.
48
Kawase T, Yahata I, Kanno A, Sakamoto S, Takanashi Y, Takata S, Nakasato N, Kawashima R, Katori Y. Impact of Audio-Visual Asynchrony on Lip-Reading Effects: Neuromagnetic and Psychophysical Study. PLoS One 2016; 11:e0168740. PMID: 28030631. PMCID: PMC5193434. DOI: 10.1371/journal.pone.0168740.
Abstract
The effects of asynchrony between audio and visual (A/V) stimuli on the N100m responses of magnetoencephalography in the left hemisphere were compared with those on the psychophysical responses in 11 participants. The latency and amplitude of N100m were significantly shortened and reduced in the left hemisphere by the presentation of visual speech as long as the temporal asynchrony between A/V stimuli was within 100 ms, but were not significantly affected with audio lags of -500 and +500 ms. However, some small effects were still preserved on average with audio lags of 500 ms, suggesting similar asymmetry of the temporal window to that observed in psychophysical measurements, which tended to be more robust (wider) for audio lags; i.e., the pattern of visual-speech effects as a function of A/V lag observed in the N100m in the left hemisphere grossly resembled that in psychophysical measurements on average, although the individual responses were somewhat varied. The present results suggest that the basic configuration of the temporal window of visual effects on auditory-speech perception could be observed from the early auditory processing stage.
Affiliation(s)
- Tetsuaki Kawase: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan; Laboratory of Rehabilitative Auditory Science, Tohoku University Graduate School of Biomedical Engineering, Sendai, Miyagi, Japan; Department of Audiology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Izumi Yahata: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Akitake Kanno: Department of Functional Brain Imaging, Institute of Development, Aging and Cancer, Tohoku University, Sendai, Miyagi, Japan
- Shuichi Sakamoto: Research Institute of Electrical Communication, Tohoku University, Sendai, Miyagi, Japan
- Yoshitaka Takanashi: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Shiho Takata: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Nobukazu Nakasato: Department of Epileptology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Ryuta Kawashima: Department of Functional Brain Imaging, Institute of Development, Aging and Cancer, Tohoku University, Sendai, Miyagi, Japan
- Yukio Katori: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
49
Pastori C, Francione S, Pelle F, de Curtis M, Gnatkovsky V. Fluency tasks generate beta-gamma activity in language-related cortical areas of patients during stereo-EEG monitoring. Brain Lang 2016; 163:50-56. PMID: 27684988. DOI: 10.1016/j.bandl.2016.09.006.
Abstract
A quantitative method was developed to map cortical areas responsive to cognitive tasks during intracerebral stereo-EEG recording sessions in drug-resistant patients who were candidates for epilepsy surgery. Frequency power changes were evaluated with a computer-assisted analysis in 7 patients during phonemic fluency tasks. All patients were right-handed and were explored with depth electrodes in the dominant frontal lobe. We demonstrate that fluency tasks enhance beta-gamma frequencies and reduce background activities in language network regions of the dominant hemisphere. Non-reproducible changes were observed in other explored brain areas during cognitive test execution.
Affiliation(s)
- Chiara Pastori: Unit of Epileptology and Experimental Neurophysiology, Fondazione Istituto Neurologico Carlo Besta, Milano, Italy
- Stefano Francione: Claudio Munari Epilepsy Surgery Center, Ospedale Niguarda, Milano, Italy
- Federica Pelle: Claudio Munari Epilepsy Surgery Center, Ospedale Niguarda, Milano, Italy
- Marco de Curtis: Unit of Epileptology and Experimental Neurophysiology, Fondazione Istituto Neurologico Carlo Besta, Milano, Italy
- Vadym Gnatkovsky: Unit of Epileptology and Experimental Neurophysiology, Fondazione Istituto Neurologico Carlo Besta, Milano, Italy
50
Crosse MJ, Di Liberto GM, Bednar A, Lalor EC. The Multivariate Temporal Response Function (mTRF) Toolbox: A MATLAB Toolbox for Relating Neural Signals to Continuous Stimuli. Front Hum Neurosci 2016; 10:604. PMID: 27965557. PMCID: PMC5127806. DOI: 10.3389/fnhum.2016.00604.
Abstract
Understanding how brains process sensory signals in natural environments is one of the key goals of twenty-first century neuroscience. While brain imaging and invasive electrophysiology will play key roles in this endeavor, there is also an important role to be played by noninvasive, macroscopic techniques with high temporal resolution such as electro- and magnetoencephalography. But challenges exist in determining how best to analyze such complex, time-varying neural responses to complex, time-varying and multivariate natural sensory stimuli. There has been a long history of applying system identification techniques to relate the firing activity of neurons to complex sensory stimuli and such techniques are now seeing increased application to EEG and MEG data. One particular example involves fitting a filter—often referred to as a temporal response function—that describes a mapping between some feature(s) of a sensory stimulus and the neural response. Here, we first briefly review the history of these system identification approaches and describe a specific technique for deriving temporal response functions known as regularized linear regression. We then introduce a new open-source toolbox for performing this analysis. We describe how it can be used to derive (multivariate) temporal response functions describing a mapping between stimulus and response in both directions. We also explain the importance of regularizing the analysis and how this regularization can be optimized for a particular dataset. We then outline specifically how the toolbox implements these analyses and provide several examples of the types of results that the toolbox can produce. Finally, we consider some of the limitations of the toolbox and opportunities for future development and application.
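The regularized linear regression at the core of this approach, fitting a temporal response function over a time-lagged stimulus matrix via the closed-form ridge solution w = (XᵀX + λI)⁻¹Xᵀy, can be sketched in a few lines. This is an illustrative NumPy re-implementation of the general technique, not the mTRF Toolbox's actual MATLAB code, and the function names are hypothetical:

```python
import numpy as np

def lagged_design(stim, lags):
    # Build a design matrix whose columns are time-lagged copies of a
    # 1-D stimulus feature (one column per lag, in samples).
    X = np.zeros((len(stim), len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stim[:len(stim) - lag]
        else:
            X[:lag, j] = stim[-lag:]
    return X

def ridge_trf(stim, resp, lags, lam=1.0):
    # Ridge (regularized least-squares) TRF estimate:
    # w = (X'X + lam*I)^-1 X'y. `lam` trades off fit vs. smoothness
    # and would normally be tuned per dataset, e.g. by cross-validation.
    X = lagged_design(stim, lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ resp)

# Synthetic check: a response built from the stimulus at lags 1 and 2
# should be recovered by the estimator when regularization is light.
rng = np.random.default_rng(0)
stim = rng.standard_normal(500)
true_w = np.array([0.0, 1.0, 0.5])
resp = lagged_design(stim, [0, 1, 2]) @ true_w
w = ridge_trf(stim, resp, [0, 1, 2], lam=1e-6)
```

The same machinery runs "backwards" for stimulus reconstruction by swapping the roles of stimulus and response, which is the forward/backward mapping the toolbox exposes.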
Affiliation(s)
- Michael J Crosse: School of Engineering, Trinity Centre for Bioengineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland; Department of Pediatrics and Department of Neuroscience, Albert Einstein College of Medicine, The Bronx, NY, USA
- Giovanni M Di Liberto: School of Engineering, Trinity Centre for Bioengineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Adam Bednar: School of Engineering, Trinity Centre for Bioengineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland; Department of Biomedical Engineering and Department of Neuroscience, University of Rochester, Rochester, NY, USA
- Edmund C Lalor: School of Engineering, Trinity Centre for Bioengineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland; Department of Biomedical Engineering and Department of Neuroscience, University of Rochester, Rochester, NY, USA