1

Corsini A, Tomassini A, Pastore A, Delis I, Fadiga L, D'Ausilio A. Speech perception difficulty modulates theta-band encoding of articulatory synergies. J Neurophysiol 2024;131:480-491. PMID: 38323331. DOI: 10.1152/jn.00388.2023.
Abstract
The human brain tracks available speech acoustics and extrapolates missing information such as the speaker's articulatory patterns. However, the extent to which articulatory reconstruction supports speech perception remains unclear. This study explores the relationship between articulatory reconstruction and task difficulty. Participants listened to sentences and performed a speech-rhyming task. Real kinematic data of the speaker's vocal tract were recorded via electromagnetic articulography (EMA) and aligned to corresponding acoustic outputs. We extracted articulatory synergies from the EMA data with principal component analysis (PCA) and employed partial information decomposition (PID) to separate the electroencephalographic (EEG) encoding of acoustic and articulatory features into unique, redundant, and synergistic atoms of information. We median-split sentences into easy (ES) and hard (HS) based on participants' performance and found that greater task difficulty involved greater encoding of unique articulatory information in the theta band. We conclude that fine-grained articulatory reconstruction plays a complementary role in the encoding of speech acoustics, lending further support to the claim that motor processes support speech perception.

NEW & NOTEWORTHY: Top-down processes originating from the motor system contribute to speech perception through the reconstruction of the speaker's articulatory movement. This study investigates the role of such articulatory simulation under variable task difficulty. We show that more challenging listening tasks lead to increased encoding of articulatory kinematics in the theta band and suggest that, in such situations, fine-grained articulatory reconstruction complements acoustic encoding.
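The synergy-extraction step described in the abstract — PCA over EMA sensor trajectories — can be sketched in a few lines. This is a minimal illustration on synthetic data, not the authors' pipeline; the function name, array shapes, and number of synergies are assumptions:

```python
import numpy as np

def articulatory_synergies(ema, n_synergies=3):
    """PCA over EMA trajectories (hypothetical sketch).

    ema: (n_samples, n_coords) matrix of vocal-tract sensor coordinates
         over time (e.g., x/y positions of tongue, lip, and jaw coils).
    Returns the synergy loading patterns, their time courses, and the
    fraction of variance each synergy explains.
    """
    X = ema - ema.mean(axis=0)                    # center each coordinate
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    evr = S**2 / np.sum(S**2)                     # explained-variance ratio
    components = Vt[:n_synergies]                 # (n_synergies, n_coords)
    scores = X @ components.T                     # (n_samples, n_synergies)
    return components, scores, evr[:n_synergies]
```

A small number of such components typically captures most of the kinematic variance, which is what makes them usable as compact articulatory features in an encoding analysis.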
Affiliation(s)
- Alessandro Corsini
  - Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
  - Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- Alice Tomassini
  - Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
  - Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- Aldo Pastore
  - Laboratorio NEST, Scuola Normale Superiore, Pisa, Italy
- Ioannis Delis
  - School of Biomedical Sciences, University of Leeds, Leeds, United Kingdom
- Luciano Fadiga
  - Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
  - Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- Alessandro D'Ausilio
  - Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
  - Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
2

Lin JFL, Imada T, Meltzoff AN, Hiraishi H, Ikeda T, Takahashi T, Hasegawa C, Yoshimura Y, Kikuchi M, Hirata M, Minabe Y, Asada M, Kuhl PK. Dual-MEG interbrain synchronization during turn-taking verbal interactions between mothers and children. Cereb Cortex 2022;33:4116-4134. PMID: 36130088. PMCID: PMC10068303. DOI: 10.1093/cercor/bhac330.
Abstract
Verbal interaction and imitation are essential for language learning and development in young children. However, it is unclear how mother-child dyads synchronize oscillatory neural activity at the cortical level in turn-based speech interactions. Our study investigated interbrain synchrony in mother-child pairs during a turn-taking paradigm of verbal imitation. A dual-MEG (magnetoencephalography) setup was used to measure brain activity from interactive mother-child pairs simultaneously. Interpersonal neural synchronization was compared between socially interactive and noninteractive tasks (passive listening to pure tones). Interbrain networks showed increased synchronization during the socially interactive compared to noninteractive conditions in the theta and alpha bands. Enhanced interpersonal brain synchrony was observed in the right angular gyrus, right triangular, and left opercular parts of the inferior frontal gyrus. Moreover, these parietal and frontal regions appear to be the cortical hubs exhibiting a high number of interbrain connections. These cortical areas could serve as a neural marker for the interactive component in verbal social communication. The present study is the first to investigate mother-child interbrain neural synchronization during verbal social interactions using a dual-MEG setup. Our results advance our understanding of turn-taking during verbal interaction between mother-child dyads and suggest a role for social "gating" in language learning.
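A common building block of such interbrain-synchrony analyses is a band-limited phase-locking value (PLV) between two signals. The sketch below illustrates the idea only; the study's actual dual-MEG pipeline (source localization, network statistics) is far more involved, and the function name, filter order, and parameters here are assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_plv(x, y, fs, band=(4.0, 8.0)):
    """Phase-locking value between two signals within a band (theta by default).

    PLV = |mean(exp(i * (phi_x - phi_y)))|: 1 means a perfectly constant
    phase difference across time, 0 means no consistent phase relation.
    """
    nyq = fs / 2.0
    sos = butter(4, [band[0] / nyq, band[1] / nyq], btype="band", output="sos")
    phi_x = np.angle(hilbert(sosfiltfilt(sos, x)))   # instantaneous phases
    phi_y = np.angle(hilbert(sosfiltfilt(sos, y)))
    return float(np.abs(np.mean(np.exp(1j * (phi_x - phi_y)))))
```

Contrasting such band-wise synchrony between interactive and noninteractive conditions is the kind of comparison the study reports for the theta and alpha bands.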
Affiliation(s)
- Jo-Fu Lotus Lin
  - Institute for Learning & Brain Sciences (I-LABS), University of Washington, Portage Bay Building, Seattle, WA 98105, USA
  - Research Center for Child Mental Development, Graduate School of Medical Science, Kanazawa University, 13-1 Takaramachi, Kanazawa-City, Ishikawa-Ken 920-8640, Japan
  - Institute of Linguistics, National Tsing Hua University, 101, Section 2, Kuang-Fu Road, Hsinchu 300044, Taiwan
- Toshiaki Imada
  - Institute for Learning & Brain Sciences (I-LABS), University of Washington, Portage Bay Building, Seattle, WA 98105, USA
  - Research Center for Child Mental Development, Graduate School of Medical Science, Kanazawa University, 13-1 Takaramachi, Kanazawa-City, Ishikawa-Ken 920-8640, Japan
- Andrew N Meltzoff
  - Institute for Learning & Brain Sciences (I-LABS), University of Washington, Portage Bay Building, Seattle, WA 98105, USA
- Hirotoshi Hiraishi
  - Hamamatsu University School of Medicine, 1 Chome-20-1 Handayama, Higashi Ward, Hamamatsu, Shizuoka 431-3192, Japan
- Takashi Ikeda
  - Research Center for Child Mental Development, Graduate School of Medical Science, Kanazawa University, 13-1 Takaramachi, Kanazawa-City, Ishikawa-Ken 920-8640, Japan
- Chiaki Hasegawa
  - Research Center for Child Mental Development, Graduate School of Medical Science, Kanazawa University, 13-1 Takaramachi, Kanazawa-City, Ishikawa-Ken 920-8640, Japan
- Yuko Yoshimura
  - Research Center for Child Mental Development, Graduate School of Medical Science, Kanazawa University, 13-1 Takaramachi, Kanazawa-City, Ishikawa-Ken 920-8640, Japan
- Mitsuru Kikuchi
  - Research Center for Child Mental Development, Graduate School of Medical Science, Kanazawa University, 13-1 Takaramachi, Kanazawa-City, Ishikawa-Ken 920-8640, Japan
- Masayuki Hirata
  - Department of Neurosurgery, Osaka University Medical School, 2 Chome-2 Yamadaoka, Suita, Osaka 565-0871, Japan
- Yoshio Minabe
  - Research Center for Child Mental Development, Graduate School of Medical Science, Kanazawa University, 13-1 Takaramachi, Kanazawa-City, Ishikawa-Ken 920-8640, Japan
- Minoru Asada
  - Department of Adaptive Machine Systems, Graduate School of Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan
- Patricia K Kuhl
  - Institute for Learning & Brain Sciences (I-LABS), University of Washington, Portage Bay Building, Seattle, WA 98105, USA
3

Zhou T, Yu T, Li Z, Zhou X, Wen J, Li X. Functional mapping of language-related areas from natural, narrative speech during awake craniotomy surgery. Neuroimage 2021;245:118720. PMID: 34774771. DOI: 10.1016/j.neuroimage.2021.118720.
Abstract
Accurate localization of brain regions responsible for language and cognitive functions in epilepsy patients is important. Electrocorticography (ECoG)-based real-time functional mapping (RTFM) has been shown to be a safer alternative to electrical cortical stimulation mapping (ESM), which is currently the clinical/gold standard. Conventional methods for analyzing RTFM data mostly account for the ECoG signal in certain frequency bands, especially high gamma. Compared to ESM, they have limited accuracy when assessing channel responses. In the present study, we developed a novel RTFM method based on tensor component analysis (TCA) to address the limitations of current estimation methods. Our approach analyzes the whole frequency spectrum of the ECoG signal during natural continuous speech. We construct third-order tensors that contain multichannel time-frequency information and use TCA to extract low-dimensional temporal, spectral and spatial modes. Temporal modulation scores (correlation values) are then calculated between the time series of voice envelope features and TCA-estimated temporal courses, and significant temporal modulation determines which components' channel weightings are displayed to the neurosurgeon as a guide for follow-up ESM. In our experiments, data from thirteen patients with refractory epilepsy were recorded during preoperative evaluation for their epileptogenic zones (EZs), which were located adjacent to the eloquent cortex. Our results showed higher detection accuracy of our proposed method in a narrative speech task, suggesting that our method complements ESM and is an improvement over the prior RTFM method. To our knowledge, this is the first TCA-based method to pinpoint language-specific brain regions during continuous speech that uses whole-band ECoG.
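The core of the pipeline just described — decomposing a channels × frequencies × times tensor into a few spatial, spectral, and temporal modes — is a CP (PARAFAC) decomposition. Below is a bare-bones alternating-least-squares sketch on synthetic data, not the authors' implementation; shapes, rank, and names are assumptions:

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker product: (I, r) and (J, r) -> (I*J, r)."""
    return np.stack([np.kron(X[:, r], Y[:, r]) for r in range(X.shape[1])], axis=1)

def cp_als(T, rank, n_iter=200, seed=0):
    """Minimal CP/PARAFAC decomposition via alternating least squares.

    T: 3-way tensor, e.g., (channels, freqs, times).
    Returns A, B, C with T[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r]:
    the spatial, spectral, and temporal modes of each component.
    """
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T0 = T.reshape(I, J * K)                        # mode-0 unfolding
    T1 = np.moveaxis(T, 1, 0).reshape(J, I * K)     # mode-1 unfolding
    T2 = np.moveaxis(T, 2, 0).reshape(K, I * J)     # mode-2 unfolding
    for _ in range(n_iter):
        A = T0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

In the scheme the abstract describes, each column of the temporal factor C would then be correlated with the voice-envelope time series (e.g., via np.corrcoef), and components with significant temporal modulation would expose their channel weightings (columns of A) as the map shown to the neurosurgeon.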
Affiliation(s)
- Tianyi Zhou
  - Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University at Zhuhai 519087, China
- Tao Yu
  - Beijing Institute of Functional Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing 100053, China
- Zheng Li
  - Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University at Zhuhai 519087, China
- Xiaoxia Zhou
  - Beijing Institute of Functional Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing 100053, China
- Jianbin Wen
  - State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
- Xiaoli Li
  - Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University at Zhuhai 519087, China
  - State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
4

Pérez A, Monahan PJ, Lambon Ralph MA. Joint recording of EEG and audio signals in hyperscanning and pseudo-hyperscanning experiments. MethodsX 2021;8:101347. PMID: 34430250. PMCID: PMC8374354. DOI: 10.1016/j.mex.2021.101347.
Abstract
Hyperscanning is an emerging technique that allows for the study of similarities in brain activity between interacting individuals. This methodology has powerful implications for understanding the neural basis of joint actions, such as conversation; however, it also demands precise time-locking between the different brain recordings and the sensory stimulation. Such precise timing is often difficult to achieve. Recording auditory stimuli jointly with the ongoing high-temporal-resolution neurophysiological signal presents an effective way to control, offline, timing asynchronies between the digital trigger sent by the stimulation program and the actual onset of the auditory stimulus delivered to participants via speakers/headphones. This configuration is particularly challenging in hyperscanning setups because of the increased overall complexity of the methodology. In designs using the related technique of pseudo-hyperscanning, combined brain-audio recordings are also highly desirable, since reliable offline synchronization can be performed using the shared audio signal. Here, we describe two hardware configurations in which the auditory stimulus delivered in real time is recorded jointly with the ongoing electroencephalographic (EEG) recordings. Specifically, we describe and provide customized implementations for joint EEG-audio recording in hyperscanning and pseudo-hyperscanning paradigms using hardware and software from Brain Products GmbH.

Highlights:
- Joint EEG-audio recording configuration for hyperscanning and pseudo-hyperscanning paradigms.
- Near zero-latency playback of the auditory signal captured by a microphone.
- Precise alignment between EEG and auditory stimulation.
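The offline-synchronization idea — using the audio channel acquired alongside the EEG to recover the true stimulus onset relative to the digital trigger — reduces to locating the known stimulus waveform inside the recorded channel, for example by cross-correlation. A minimal sketch; the sampling rate, function name, and data are illustrative and not taken from the Brain Products configurations the paper describes:

```python
import numpy as np

def stimulus_lag(recorded, stimulus, fs):
    """Estimate where `stimulus` begins inside `recorded`, in seconds.

    recorded: audio channel acquired jointly with the EEG.
    stimulus: the known waveform that was played to the participant.
    The peak of the sliding cross-correlation marks the true acoustic
    onset, which can then be compared against the trigger sample.
    """
    xc = np.correlate(recorded, stimulus, mode="valid")
    return int(np.argmax(xc)) / fs
```

The difference between this estimate and the trigger timestamp gives the per-trial latency that can be corrected offline.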
Affiliation(s)
- Alejandro Pérez
  - MRC Cognition and Brain Sciences Unit, University of Cambridge, United Kingdom
  - Department of Language Studies, University of Toronto Scarborough, Canada
  - Department of Psychology, University of Toronto Scarborough, Canada
- Philip J. Monahan
  - Department of Language Studies, University of Toronto Scarborough, Canada
  - Department of Psychology, University of Toronto Scarborough, Canada
  - Department of Linguistics, University of Toronto, Canada
5

Pezzulo G, Donnarumma F, Dindo H, D'Ausilio A, Konvalinka I, Castelfranchi C. The future of sensorimotor communication research: Reply to comments on "The body talks: Sensorimotor communication and its brain and kinematic signatures". Phys Life Rev 2019;28:46-51. PMID: 31147277. DOI: 10.1016/j.plrev.2019.03.012.
Affiliation(s)
- Giovanni Pezzulo
  - Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Francesco Donnarumma
  - Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Haris Dindo
  - Computer Science Engineering, University of Palermo, Palermo, Italy
- Alessandro D'Ausilio
  - IIT Istituto Italiano di Tecnologia, CTNSC@UniFe - Center of Translational Neurophysiology for Speech and Communication, Ferrara, Italy
- Ivana Konvalinka
  - Section for Cognitive Systems, DTU Compute, Technical University of Denmark, Kongens Lyngby, Denmark
6

Anderlini D, Wallis G, Marinovic W. Language as a Predictor of Motor Recovery: The Case for a More Global Approach to Stroke Rehabilitation. Neurorehabil Neural Repair 2019;33:167-178. PMID: 30757952. DOI: 10.1177/1545968319829454.
Abstract
Stroke is the third leading cause of death in the developed world and the primary cause of adult disability. The most common site of stroke is the middle cerebral artery (MCA), an artery that supplies a range of areas involved in both language and motor function. As a consequence, many stroke patients experience a combination of language and motor deficits. Indeed, those suffering from Broca's aphasia have an 80% chance of also suffering hemiplegia. Despite the prevalence of multifaceted disability in patients, the current trend in both clinical trials and clinical practice is toward compartmentalization of dysfunction. In this article, we review evidence that aphasia and hemiplegia do not just coexist, but that they interact. We review a number of clinical reports describing how therapies for one type of deficit can improve recovery in the other and vice versa. We go on to describe how language deficits should be seen as a warning to clinicians that the patient is likely to experience motor impairment and slower motor recovery, aiding clinicians to optimize their choice of therapy. We explore these findings and offer a tentative link between language and arm function through their shared need for sequential action, which we term fluency. We propose that area BA44 (part of Broca's area) acts as a hub for fluency in both movement and language, both in terms of production and comprehension.
Affiliation(s)
- Deanna Anderlini
  - The University of Queensland, St Lucia, Queensland, Australia
  - Royal Brisbane and Women's Hospital, Brisbane, Queensland, Australia
- Guy Wallis
  - The University of Queensland, St Lucia, Queensland, Australia
7

Mukherjee S, Badino L, Hilt PM, Tomassini A, Inuggi A, Fadiga L, Nguyen N, D'Ausilio A. The neural oscillatory markers of phonetic convergence during verbal interaction. Hum Brain Mapp 2018;40:187-201. PMID: 30240542. DOI: 10.1002/hbm.24364.
Abstract
During a conversation, the neural processes supporting speech production and perception overlap in time and, based on context, expectations, and the dynamics of the interaction, they are also continuously modulated in real time. Recently, the growing interest in the neural dynamics underlying interactive tasks, particularly in the language domain, has mainly tackled the temporal aspects of turn-taking in dialogs. Besides temporal coordination, an under-investigated phenomenon is the implicit convergence of speakers toward a shared phonetic space. Here, we used dual electroencephalography (dual-EEG) to record brain signals from subjects involved in a relatively constrained interactive task in which they took turns chaining words according to a phonetic rhyming rule. We quantified participants' initial phonetic fingerprints and tracked their phonetic convergence during the interaction via a robust and automatic speaker verification technique. Results show that phonetic convergence is associated with left frontal alpha/low-beta desynchronization during speech preparation and with high-beta suppression before and during listening to speech in right centro-parietal and left frontal sectors, respectively. With this work, we provide evidence that mutual adaptation of speech phonetic targets correlates with specific alpha and beta oscillatory dynamics, which may index the coordination of "when" as well as "how" speech interaction takes place, reinforcing the suggestion that perception and production processes are highly interdependent and co-constructed during a conversation.
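Tracking convergence in a turn-taking task like the one above amounts to asking whether the distance between the two speakers' phonetic fingerprints shrinks over turns. A generic, hedged sketch: the paper derives its fingerprints from a speaker-verification model, whereas here they are abstract feature vectors, and all names are illustrative:

```python
import numpy as np

def convergence_slope(feats_a, feats_b):
    """Inter-speaker distance per turn and its linear trend.

    feats_a, feats_b: (n_turns, n_features) per-turn phonetic fingerprints
    of the two speakers. A negative slope indicates convergence.
    """
    dist = np.linalg.norm(feats_a - feats_b, axis=1)   # one distance per turn
    slope = np.polyfit(np.arange(len(dist)), dist, 1)[0]
    return dist, slope
```

A per-dyad convergence score of this kind is what can then be related to the oscillatory measures (alpha/beta power changes) recorded during speech preparation and listening.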
Affiliation(s)
- Sankar Mukherjee
  - Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Leonardo Badino
  - Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Pauline M Hilt
  - Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Alice Tomassini
  - Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Alberto Inuggi
  - Center for Human Technologies, Istituto Italiano di Tecnologia, Genoa, Italy
- Luciano Fadiga
  - Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
  - Section of Human Physiology, University of Ferrara, Ferrara, Italy
- Noël Nguyen
  - CNRS, LPL, Aix Marseille University, Aix-en-Provence, France
- Alessandro D'Ausilio
  - Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
  - Section of Human Physiology, University of Ferrara, Ferrara, Italy