1. Manes JL, Kurani AS, Herschel E, Roberts AC, Tjaden K, Parrish T, Corcos DM. Premotor cortex is hypoactive during sustained vowel production in individuals with Parkinson's disease and hypophonia. Front Hum Neurosci 2023; 17:1250114. PMID: 37941570; PMCID: PMC10629592; DOI: 10.3389/fnhum.2023.1250114.
Abstract
Introduction: Hypophonia is a common feature of Parkinson's disease (PD); however, the contribution of motor cortical activity to reduced phonatory scaling in PD remains unclear. Methods: We employed a sustained vowel production task during functional magnetic resonance imaging to compare brain activity between individuals with PD and hypophonia and an older healthy control (OHC) group. Results: When comparing vowel production versus rest, the PD group showed fewer regions with significant BOLD activity than OHCs. Within the motor cortices, both groups showed bilateral activation of the laryngeal/phonatory area (LPA) of the primary motor cortex as well as activation of the supplementary motor area. The OHC group additionally recruited the bilateral trunk motor area and right dorsal premotor cortex (PMd). A voxel-wise comparison showed that activity in right PMd was significantly lower in the PD group than in the OHC group (p < 0.001, uncorrected). Right PMd activity was positively correlated with maximum phonation time in the PD group and negatively correlated with perceptual severity ratings of loudness and pitch. Discussion: Our findings suggest that hypoactivation of PMd may be associated with abnormal phonatory control in PD.
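The study's key brain-behavior result is an across-subject correlation between regional activity and a clinical measure. A minimal sketch of that final step, assuming hypothetical per-subject right-PMd beta estimates and maximum phonation times (the fMRI preprocessing and GLM are not reproduced):

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject values (illustrative only, not the study's data):
# mean beta estimate from a right-PMd region of interest, and maximum
# phonation time (MPT) in seconds for the same PD participants.
pmd_beta = np.array([0.12, 0.30, 0.05, 0.41, 0.22, 0.18, 0.35, 0.09])
mpt_sec = np.array([8.5, 14.2, 7.1, 18.9, 12.0, 10.3, 16.5, 9.0])

# Pearson correlation, as commonly used for ROI-behavior associations.
r, p = stats.pearsonr(pmd_beta, mpt_sec)
print(f"r = {r:.2f}, p = {p:.3f}")  # a positive r would mirror the reported effect
```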
Affiliation(s)
- Jordan L. Manes: Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA, United States
- Ajay S. Kurani: Ken and Ruth Davee Department of Neurology, Northwestern University, Chicago, IL, United States; Department of Radiology, Northwestern University, Chicago, IL, United States
- Ellen Herschel: Brain and Creativity Institute, University of Southern California, Los Angeles, CA, United States
- Angela C. Roberts: School of Communication Sciences and Disorders, Western University, London, ON, Canada; Canadian Centre for Activity and Aging, Western University, London, ON, Canada; Department of Computer Science, Western University, London, ON, Canada; Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Kris Tjaden: Department of Communicative Disorders and Sciences, University at Buffalo, Buffalo, NY, United States
- Todd Parrish: Department of Radiology, Northwestern University, Chicago, IL, United States
- Daniel M. Corcos: Department of Physical Therapy and Human Movement Sciences, Northwestern University, Chicago, IL, United States
2. Franken MK, Liu BC, Ostry DJ. Towards a somatosensory theory of speech perception. J Neurophysiol 2022; 128:1683-1695. PMID: 36416451; PMCID: PMC9762980; DOI: 10.1152/jn.00381.2022.
Abstract
Speech perception is known to be a multimodal process, relying not only on auditory input but also on the visual system and possibly on the motor system as well. To date there has been little work on the potential involvement of the somatosensory system in speech perception. In the present review, we identify the somatosensory system as another contributor to speech perception. First, we argue that evidence in favor of a motor contribution to speech perception can just as easily be interpreted as showing somatosensory involvement. Second, physiological and neuroanatomical evidence for auditory-somatosensory interactions across the auditory hierarchy indicates the availability of a neural infrastructure that supports somatosensory involvement in auditory processing in general. Third, there is accumulating evidence for somatosensory involvement in the context of speech specifically. In particular, tactile stimulation modifies speech perception, and auditory speech input elicits activity in somatosensory cortical areas. Moreover, speech sounds can be decoded from activity in somatosensory cortex; lesions to this region affect perception, and vowels can be identified based on somatic input alone. We suggest that somatosensory involvement in speech perception derives from the somatosensory-auditory pairing that occurs during speech production and learning. By bringing together findings from a set of studies that have not previously been linked, the present article identifies the somatosensory system as a presently unrecognized contributor to speech perception.
Affiliation(s)
- David J Ostry: McGill University, Montreal, Quebec, Canada; Haskins Laboratories, New Haven, Connecticut
3. Togo M, Matsumoto R, Usami K, Kobayashi K, Takeyama H, Nakae T, Shimotake A, Kikuchi T, Yoshida K, Matsuhashi M, Kunieda T, Miyamoto S, Takahashi R, Ikeda A. Distinct connectivity patterns in human medial parietal cortices: Evidence from standardized connectivity map using cortico-cortical evoked potential. Neuroimage 2022; 263:119639. PMID: 36155245; DOI: 10.1016/j.neuroimage.2022.119639.
Abstract
The medial parietal cortices are components of the default mode network (DMN), which is active in the resting state. The medial parietal cortices include the precuneus and the dorsal posterior cingulate cortex (dPCC). Differences in connectivity within the medial parietal cortices have rarely been examined and have not yet been precisely elucidated, even though electrophysiological connectivity is essential for understanding cortical function and functional differences. Since little is known about electrophysiological connections from the medial parietal cortices in humans, we evaluated their distinct connectivity patterns by constructing a standardized connectivity map using cortico-cortical evoked potentials (CCEP). This study included nine patients with partial epilepsy or a brain tumor who underwent chronic intracranial electrode placement covering the medial parietal cortices. Single-pulse electrical stimuli were delivered to the medial parietal cortices (38 pairs of electrodes). Responses were standardized using the z-score of the baseline activity, and a response density map was constructed in Montreal Neurological Institute (MNI) space. The precuneus tended to connect with the inferior parietal lobule (IPL), the occipital cortex, the superior parietal lobule (SPL), and the dorsal premotor area (PMd) (the four most active regions, in descending order), while the dPCC tended to connect to the middle cingulate cortex, SPL, precuneus, and IPL. The connectivity pattern differed significantly between precuneus and dPCC stimulation (p < 0.05). For each part of the medial parietal cortices, the distribution of CCEP responses resembled that of a functional connectivity database. Given that the dPCC connected to the medial frontal area, SPL, and IPL, its connectivity pattern cannot be explained by the DMN alone, but suggests a mixture of the DMN and the frontoparietal cognitive network. These findings improve our understanding of the connectivity profile within the medial parietal cortices. Electrophysiological connectivity underlies the propagation of electrical activity in patients with epilepsy, and it therefore also helps us to better understand epileptic networks arising from the medial parietal cortices.
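The standardization step described above, expressing responses as z-scores of baseline activity, can be sketched as follows. Array shapes, window boundaries, and the peak-|z| summary are illustrative assumptions, not the authors' exact parameters:

```python
import numpy as np

def standardize_ccep(epochs, baseline, response):
    """z-score trial-averaged CCEPs against the pre-stimulus baseline.

    epochs: (n_trials, n_channels, n_samples) stimulus-locked data.
    baseline, response: sample slices for the two windows.
    """
    evoked = epochs.mean(axis=0)                   # (n_channels, n_samples)
    mu = evoked[:, baseline].mean(axis=1, keepdims=True)
    sd = evoked[:, baseline].std(axis=1, keepdims=True)
    z = (evoked - mu) / sd                         # z-score of baseline activity
    return np.abs(z[:, response]).max(axis=1)      # peak |z| per channel

# Illustrative use: 1 kHz sampling, 100 ms baseline, response window 10-50 ms.
rng = np.random.default_rng(0)
epochs = rng.normal(size=(30, 64, 300))            # 30 trials, 64 channels
scores = standardize_ccep(epochs, slice(0, 100), slice(110, 150))
print(scores.shape)                                # (64,) one score per channel
```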
Affiliation(s)
- Masaya Togo: Department of Neurology, Kyoto University Graduate School of Medicine, Japan; Division of Neurology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe, 650-0017, Japan
- Riki Matsumoto: Department of Neurology, Kyoto University Graduate School of Medicine, Japan; Division of Neurology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe, 650-0017, Japan
- Kiyohide Usami: Department of Neurology, Kyoto University Graduate School of Medicine, Japan
- Katsuya Kobayashi: Department of Neurology, Kyoto University Graduate School of Medicine, Japan
- Hirofumi Takeyama: Department of Respiratory Care and Sleep Control Medicine, Kyoto University Graduate School of Medicine, Japan; Department of Neurology, Japanese Red Cross Otsu Hospital, Japan
- Takuro Nakae: Department of Neurosurgery, Shiga General Hospital, Japan
- Akihiro Shimotake: Department of Neurology, Kyoto University Graduate School of Medicine, Japan
- Takayuki Kikuchi: Department of Neurosurgery, Kyoto University Graduate School of Medicine, Japan
- Kazumichi Yoshida: Department of Neurosurgery, Kyoto University Graduate School of Medicine, Japan
- Masao Matsuhashi: Departments of Epilepsy, Movement Disorders and Physiology, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Sakyo-ku, Kyoto, 606-8507, Japan
- Takeharu Kunieda: Department of Neurosurgery, Ehime University Graduate School of Medicine, Japan
- Susumu Miyamoto: Department of Neurosurgery, Kyoto University Graduate School of Medicine, Japan
- Ryosuke Takahashi: Department of Neurology, Kyoto University Graduate School of Medicine, Japan
- Akio Ikeda: Departments of Epilepsy, Movement Disorders and Physiology, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Sakyo-ku, Kyoto, 606-8507, Japan
4. Mercier MR, Dubarry AS, Tadel F, Avanzini P, Axmacher N, Cellier D, Vecchio MD, Hamilton LS, Hermes D, Kahana MJ, Knight RT, Llorens A, Megevand P, Melloni L, Miller KJ, Piai V, Puce A, Ramsey NF, Schwiedrzik CM, Smith SE, Stolk A, Swann NC, Vansteensel MJ, Voytek B, Wang L, Lachaux JP, Oostenveld R. Advances in human intracranial electroencephalography research, guidelines and good practices. Neuroimage 2022; 260:119438. PMID: 35792291; DOI: 10.1016/j.neuroimage.2022.119438.
Abstract
Since the second half of the twentieth century, intracranial electroencephalography (iEEG), including both electrocorticography (ECoG) and stereo-electroencephalography (sEEG), has provided an intimate view into the human brain. At the interface between fundamental research and the clinic, iEEG provides both high temporal resolution and high spatial specificity, but comes with constraints such as sparse electrode sampling tailored to each individual. Over the years, researchers in neuroscience have developed their practices to make the most of the iEEG approach. Here we offer a critical review of iEEG research practices in a didactic framework for newcomers, while also addressing issues encountered by proficient researchers. The scope is threefold: (i) review common practices in iEEG research, (ii) suggest potential guidelines for working with iEEG data and answer frequently asked questions based on the most widespread practices, and (iii) based on current neurophysiological knowledge and methodologies, pave the way to good practice standards in iEEG research. The organization of this paper follows the steps of iEEG data processing. The first section contextualizes iEEG data collection. The second section focuses on localization of intracranial electrodes. The third section highlights the main pre-processing steps. The fourth section presents iEEG signal analysis methods. The fifth section discusses statistical approaches. The sixth section draws some unique perspectives on iEEG research. Finally, to ensure a consistent nomenclature throughout the manuscript and to align with other guidelines, e.g., the Brain Imaging Data Structure (BIDS) and the OHBM Committee on Best Practices in Data Analysis and Sharing (COBIDAS), we provide a glossary to disambiguate terms related to iEEG research.
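As a concrete illustration of the pre-processing stage the review covers, one widely used iEEG re-referencing scheme, the common average reference, can be sketched as below. This is a generic example under assumed array shapes, not a specific recommendation from the paper:

```python
import numpy as np

def common_average_reference(data, good_channels):
    """Re-reference iEEG data to the mean of artifact-free channels.

    data: (n_channels, n_samples) array.
    good_channels: indices judged free of artifacts or epileptic activity,
    so that bad channels do not contaminate the shared reference.
    """
    reference = data[good_channels].mean(axis=0)
    return data - reference  # subtraction broadcasts over all channels

rng = np.random.default_rng(1)
ieeg = rng.normal(size=(64, 10_000))                 # illustrative recording
car = common_average_reference(ieeg, np.arange(60))  # assume channels 60-63 are bad
```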
5. Ienca M, Fins JJ, Jox RJ, Jotterand F, Voeneky S, Andorno R, Ball T, Castelluccia C, Chavarriaga R, Chneiweiss H, Ferretti A, Friedrich O, Hurst S, Merkel G, Molnár-Gábor F, Rickli JM, Scheibner J, Vayena E, Yuste R, Kellmeyer P. Towards a Governance Framework for Brain Data. Neuroethics 2022. DOI: 10.1007/s12152-022-09498-8.
Abstract
The increasing availability of brain data within and outside the biomedical field, combined with the application of artificial intelligence (AI) to brain data analysis, poses a challenge for ethics and governance. We identify distinctive ethical implications of brain data acquisition and processing, and outline a multi-level governance framework. This framework is aimed at maximizing the benefits of facilitated brain data collection and further processing for science and medicine whilst minimizing risks and preventing harmful use. The framework consists of four primary areas of regulatory intervention: binding regulation, ethics and soft law, responsible innovation, and human rights.
6. Castelhano J, Duarte I, Bernardino I, Pelle F, Francione S, Sales F, Castelo-Branco M. Intracranial recordings in humans reveal specific hippocampal spectral and dorsal vs. ventral connectivity signatures during visual, attention and memory tasks. Sci Rep 2022; 12:3488. PMID: 35241722; PMCID: PMC8894428; DOI: 10.1038/s41598-022-07225-0.
Abstract
Invasive brain recordings using many electrodes across a wide range of tasks provide a unique opportunity to study the role of oscillatory patterning and functional connectivity. We used large-scale recordings (stereo-EEG) within and beyond the human hippocampus to investigate the role of distinct frequency oscillations during real-time execution of visual, attention and memory tasks in eight epileptic patients. We found that activity patterns in the hippocampus showed task- and frequency-dependent properties. Importantly, we found distinct connectivity signatures, in particular concerning parietal-hippocampal connectivity, revealing large-scale synchronization of networks involved in memory tasks. Comparing power per frequency band across tasks and hippocampal regions (anterior/posterior), we confirmed a main effect of frequency band (p = 0.002). Gamma band activity was higher for visuo-spatial memory tasks in the anterior hippocampus. Further, alpha and beta band activity in the posterior hippocampus was more strongly modulated by high-memory-load visual tasks (p = 0.004). Three task-related functional connectivity networks were identified: (dorsal) parietal-hippocampal (visual attention and memory), ventral stream-hippocampal, and hippocampal-frontal connections (mainly tasks involving face recognition or object-based search). These findings support the critical role of oscillatory patterning in the hippocampus during visual and memory tasks and suggest the presence of task-related spectral and functional connectivity signatures. These results show that large-scale human intracranial recordings can validate the role of oscillatory and functional connectivity patterns across a broad range of cognitive domains.
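The band-wise power comparison reported here rests on a standard spectral computation; a minimal sketch with assumed band edges and sampling rate (not the authors' exact settings):

```python
import numpy as np
from scipy.signal import welch

BANDS = {"alpha": (8, 12), "beta": (13, 30), "gamma": (30, 80)}  # assumed edges

def band_power(signal, fs, band):
    """Mean Welch PSD within a frequency band for one sEEG channel."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    lo, hi = band
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

rng = np.random.default_rng(2)
fs = 512
channel = rng.normal(size=fs * 10)  # 10 s of illustrative hippocampal data
print({name: band_power(channel, fs, b) for name, b in BANDS.items()})
```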
Affiliation(s)
- João Castelhano: ICNAS, University of Coimbra, Polo 3, Azinhaga de Santa Comba, Celas, 3000-548, Coimbra, Portugal; CIBIT, Faculty of Medicine, University of Coimbra, Coimbra, Portugal
- Isabel Duarte: ICNAS, University of Coimbra, Polo 3, Azinhaga de Santa Comba, Celas, 3000-548, Coimbra, Portugal
- Inês Bernardino: ICNAS, University of Coimbra, Polo 3, Azinhaga de Santa Comba, Celas, 3000-548, Coimbra, Portugal; CIBIT, Faculty of Medicine, University of Coimbra, Coimbra, Portugal
- Federica Pelle: Claudio Munari Epilepsy Surgery Center, Niguarda Hospital, Milan, Italy
- Stefano Francione: Claudio Munari Epilepsy Surgery Center, Niguarda Hospital, Milan, Italy
- Miguel Castelo-Branco: ICNAS, University of Coimbra, Polo 3, Azinhaga de Santa Comba, Celas, 3000-548, Coimbra, Portugal; CIBIT, Faculty of Medicine, University of Coimbra, Coimbra, Portugal
7. Glanz O, Hader M, Schulze-Bonhage A, Auer P, Ball T. A Study of Word Complexity Under Conditions of Non-experimental, Natural Overt Speech Production Using ECoG. Front Hum Neurosci 2022; 15:711886. PMID: 35185491; PMCID: PMC8854223; DOI: 10.3389/fnhum.2021.711886.
Abstract
The linguistic complexity of words has largely been studied on the behavioral level and in experimental settings. Little is known about the neural processes underlying it in uninstructed, spontaneous conversations. We built a multimodal neurolinguistic corpus composed of synchronized audio, video, and electrocorticographic (ECoG) recordings from the fronto-temporo-parietal cortex to address this phenomenon based on uninstructed, spontaneous speech production. We performed extensive linguistic annotations of the language material and calculated word complexity using several numeric parameters. We orthogonalized the parameters with the help of a linear regression model. Then, we correlated the spectral components of neural activity with the individual linguistic parameters and with the residuals of the linear regression model, and compared the results. The proportional relation between the number of consonants and vowels, which was the most informative parameter with regard to the neural representation of word complexity, showed effects in two areas. The frontal area lay at the junction of the premotor cortex, the prefrontal cortex, and Brodmann area 44; the postcentral area lay directly above the lateral sulcus and comprised the ventral central sulcus, the parietal operculum, and the adjacent inferior parietal cortex. Beyond the physiological findings summarized here, our methods may be useful for those interested in studying neural effects related to natural language production and in surmounting the intrinsic problem of collinearity between multiple features of spontaneously spoken material.
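The orthogonalization strategy, residualizing a word-complexity parameter against the other parameters before correlating it with neural power, can be sketched as follows; all variables are hypothetical stand-ins for the corpus annotations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_words = 500

# Hypothetical, partially collinear word-level parameters.
n_phonemes = rng.poisson(6, n_words).astype(float)
cv_ratio = rng.normal(1.2, 0.3, n_words) + 0.05 * n_phonemes  # consonant/vowel relation
high_gamma = rng.normal(size=n_words)                         # per-word band power

# Regress the parameter of interest on the other parameter(s);
# the residuals form the orthogonalized predictor.
X = np.column_stack([np.ones(n_words), n_phonemes])
beta, *_ = np.linalg.lstsq(X, cv_ratio, rcond=None)
residual_cv = cv_ratio - X @ beta

# Correlate the orthogonalized parameter with neural activity.
r, p = stats.pearsonr(residual_cv, high_gamma)
print(f"r = {r:.2f}, p = {p:.3f}")
```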
Affiliation(s)
- Olga Glanz: GRK 1624 "Frequency Effects in Language," University of Freiburg, Freiburg, Germany; Department of German Linguistics, University of Freiburg, Freiburg, Germany; The Hermann Paul School of Linguistics, University of Freiburg, Freiburg, Germany; BrainLinks-BrainTools, University of Freiburg, Freiburg, Germany; Neurobiology and Biophysics, Faculty of Biology, University of Freiburg, Freiburg, Germany; Translational Neurotechnology Lab, Department of Neurosurgery, Faculty of Medicine, Medical Center—University of Freiburg, University of Freiburg, Freiburg, Germany
- Marina Hader: BrainLinks-BrainTools, University of Freiburg, Freiburg, Germany; Translational Neurotechnology Lab, Department of Neurosurgery, Faculty of Medicine, Medical Center—University of Freiburg, University of Freiburg, Freiburg, Germany
- Andreas Schulze-Bonhage: Department of Neurosurgery, Faculty of Medicine, Epilepsy Center, Medical Center—University of Freiburg, University of Freiburg, Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
- Peter Auer: GRK 1624 "Frequency Effects in Language," University of Freiburg, Freiburg, Germany; Department of German Linguistics, University of Freiburg, Freiburg, Germany; The Hermann Paul School of Linguistics, University of Freiburg, Freiburg, Germany
- Tonio Ball: BrainLinks-BrainTools, University of Freiburg, Freiburg, Germany; Translational Neurotechnology Lab, Department of Neurosurgery, Faculty of Medicine, Medical Center—University of Freiburg, University of Freiburg, Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
8. Castellucci GA, Kovach CK, Howard MA, Greenlee JDW, Long MA. A speech planning network for interactive language use. Nature 2022; 602:117-122. PMID: 34987226; PMCID: PMC9990513; DOI: 10.1038/s41586-021-04270-z.
Abstract
During conversation, people take turns speaking by rapidly responding to their partners while simultaneously avoiding interruption [1,2]. Such interactions display a remarkable degree of coordination, as gaps between turns are typically about 200 milliseconds [3], approximately the duration of an eyeblink [4]. These latencies are considerably shorter than those observed in simple word-production tasks, which indicates that speakers often plan their responses while listening to their partners [2]. Although a distributed network of brain regions has been implicated in speech planning [5-9], the neural dynamics underlying the specific preparatory processes that enable rapid turn-taking are poorly understood. Here we use intracranial electrocorticography to precisely measure neural activity as participants perform interactive tasks, and we observe a functionally and anatomically distinct class of planning-related cortical dynamics. We localize these responses to a frontotemporal circuit centred on the language-critical caudal inferior frontal cortex [10] (Broca's region) and the caudal middle frontal gyrus, a region not normally implicated in speech planning [11-13]. Using a series of motor tasks, we then show that this planning network is more active when preparing speech as opposed to non-linguistic actions. Finally, we delineate planning-related circuitry during natural conversation that is nearly identical to the network mapped with our interactive tasks, and we find this circuit to be most active before participant speech during unconstrained turn-taking. Therefore, we have identified a speech planning network that is central to natural language generation during social interaction.
Affiliation(s)
- Gregg A Castellucci: NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
- Matthew A Howard: Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Michael A Long: NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
9. Neural oscillations track natural but not artificial fast speech: Novel insights from speech-brain coupling using MEG. Neuroimage 2021; 244:118577. PMID: 34525395; DOI: 10.1016/j.neuroimage.2021.118577.
Abstract
Neural oscillations contribute to speech parsing via cortical tracking of hierarchical linguistic structures, including syllable rate. While the properties of neural entrainment have largely been probed with speech stimuli at either normal or artificially accelerated rates, the important case of natural fast speech has so far been overlooked. Using magnetoencephalography, we found that listening to naturally produced speech was associated with cortico-acoustic coupling at both normal (∼6 syllables/s) and fast (∼9 syllables/s) rates, with a corresponding shift in peak entrainment frequency. Interestingly, time-compressed sentences did not yield such coupling, despite being generated at the same rate as the natural fast sentences. Additionally, neural activity in right motor cortex exhibited stronger tuning to natural fast speech than to artificially accelerated speech, and showed evidence of stronger phase-coupling with left temporo-parietal and motor areas. These findings are highly relevant for our understanding of the role played by auditory and motor cortex oscillations in the perception of naturally produced speech.
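Cortico-acoustic coupling of this kind is commonly quantified as coherence between the speech amplitude envelope and the neural signal. A minimal sketch with simulated signals, assuming a shared syllable-rate rhythm (the study's MEG pipeline is more involved):

```python
import numpy as np
from scipy.signal import hilbert, coherence

fs = 200                                   # assumed common sampling rate
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(4)

# Simulated 6 Hz syllable-rate modulation shared by audio and cortex.
rhythm = np.sin(2 * np.pi * 6 * t)
audio = rhythm * rng.normal(1.0, 0.1, t.size)
meg = 0.5 * rhythm + rng.normal(size=t.size)

envelope = np.abs(hilbert(audio))          # speech amplitude envelope

# Coherence spectrum; a peak near the syllable rate indicates entrainment.
freqs, coh = coherence(envelope, meg, fs=fs, nperseg=fs * 4)
mask = (freqs > 1) & (freqs < 15)
print(f"peak coupling near {freqs[mask][np.argmax(coh[mask])]:.1f} Hz")
```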
10. Fiveash A, Bedoin N, Gordon RL, Tillmann B. Processing rhythm in speech and music: Shared mechanisms and implications for developmental speech and language disorders. Neuropsychology 2021; 35:771-791. PMID: 34435803; PMCID: PMC8595576; DOI: 10.1037/neu0000766.
Abstract
Objective: Music and speech are complex signals containing regularities in how they unfold in time. Similarities between music and speech/language in terms of their auditory features, rhythmic structure, and hierarchical structure have led to a large body of literature suggesting connections between the two domains. However, the precise mechanisms underlying this connection remain to be elucidated. Method: In this theoretical review article, we synthesize previous research and present a framework of potentially shared neural mechanisms for music and speech rhythm processing. We outline structural similarities of rhythmic signals in music and speech, synthesize prominent music and speech rhythm theories, discuss impaired timing in developmental speech and language disorders, and discuss music rhythm training as an additional, potentially effective therapeutic tool to enhance speech/language processing in these disorders. Results: We propose the processing rhythm in speech and music (PRISM) framework, which outlines three underlying mechanisms that appear to be shared across music and speech/language processing: precise auditory processing, synchronization/entrainment of neural oscillations to external stimuli, and sensorimotor coupling. The goal of this framework is to inform directions for future research that integrate cognitive and biological evidence for relationships between rhythm processing in music and speech. Conclusion: The current framework can be used as a basis to investigate potential links between observed timing deficits in developmental disorders, impairments in the proposed mechanisms, and pathology-specific deficits that can be targeted in treatment and training supporting speech therapy outcomes. On these grounds, we propose future research directions and discuss implications of our framework.
Affiliation(s)
- Anna Fiveash: Lyon Neuroscience Research Center, CRNL, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France; University Lyon 1, Lyon, France
- Nathalie Bedoin: Lyon Neuroscience Research Center, CRNL, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France; University Lyon 1, Lyon, France; University of Lyon 2, CNRS, UMR5596, Lyon, F-69000, France
- Reyna L. Gordon: Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee; Vanderbilt Genetics Institute, Vanderbilt University, Nashville, Tennessee; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, Tennessee
- Barbara Tillmann: Lyon Neuroscience Research Center, CRNL, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France; University Lyon 1, Lyon, France
11. Fuentes-Claramonte P, Soler-Vidal J, Salgado-Pineda P, García-León MÁ, Ramiro N, Santo-Angles A, Llanos Torres M, Tristany J, Guerrero-Pedraza A, Munuera J, Sarró S, Salvador R, Hinzen W, McKenna PJ, Pomarol-Clotet E. Auditory hallucinations activate language and verbal short-term memory, but not auditory, brain regions. Sci Rep 2021; 11:18890. PMID: 34556714; PMCID: PMC8460641; DOI: 10.1038/s41598-021-98269-1.
Abstract
Auditory verbal hallucinations (AVH, ‘hearing voices’) are an important symptom of schizophrenia but their biological basis is not well understood. One longstanding approach proposes that they are perceptual in nature, specifically that they reflect spontaneous abnormal neuronal activity in the auditory cortex, perhaps with additional ‘top down’ cognitive influences. Functional imaging studies employing the symptom capture technique—where activity when patients experience AVH is compared to times when they do not—have had mixed findings as to whether the auditory cortex is activated. Here, using a novel variant of the symptom capture technique, we show that the experience of AVH does not induce auditory cortex activation, even while real speech does, something that effectively rules out all theories that propose a perceptual component to AVH. Instead, we find that the experience of AVH activates language regions and/or regions that are engaged during verbal short-term memory.
Affiliation(s)
- Paola Fuentes-Claramonte: FIDMAG Hermanas Hospitalarias Research Foundation, C/. Dr. Antoni Pujadas 38, 08830, Sant Boi de Llobregat, Barcelona, Spain; CIBERSAM, Madrid, Spain
- Joan Soler-Vidal: FIDMAG Hermanas Hospitalarias Research Foundation, C/. Dr. Antoni Pujadas 38, 08830, Sant Boi de Llobregat, Barcelona, Spain; CIBERSAM, Madrid, Spain; Universitat de Barcelona, Barcelona, Spain; Benito Menni Complex Asistencial en Salut Mental, Sant Boi de Llobregat, Spain
- Pilar Salgado-Pineda: FIDMAG Hermanas Hospitalarias Research Foundation, C/. Dr. Antoni Pujadas 38, 08830, Sant Boi de Llobregat, Barcelona, Spain; CIBERSAM, Madrid, Spain
- María Ángeles García-León: FIDMAG Hermanas Hospitalarias Research Foundation, C/. Dr. Antoni Pujadas 38, 08830, Sant Boi de Llobregat, Barcelona, Spain; CIBERSAM, Madrid, Spain
- Aniol Santo-Angles: FIDMAG Hermanas Hospitalarias Research Foundation, C/. Dr. Antoni Pujadas 38, 08830, Sant Boi de Llobregat, Barcelona, Spain
- Salvador Sarró: FIDMAG Hermanas Hospitalarias Research Foundation, C/. Dr. Antoni Pujadas 38, 08830, Sant Boi de Llobregat, Barcelona, Spain; CIBERSAM, Madrid, Spain
- Raymond Salvador: FIDMAG Hermanas Hospitalarias Research Foundation, C/. Dr. Antoni Pujadas 38, 08830, Sant Boi de Llobregat, Barcelona, Spain; CIBERSAM, Madrid, Spain
- Wolfram Hinzen: ICREA (Institució Catalana de Recerca i Estudis Avançats), Barcelona, Spain; Universitat Pompeu Fabra, Barcelona, Spain
- Peter J McKenna: FIDMAG Hermanas Hospitalarias Research Foundation, C/. Dr. Antoni Pujadas 38, 08830, Sant Boi de Llobregat, Barcelona, Spain; CIBERSAM, Madrid, Spain
- Edith Pomarol-Clotet: FIDMAG Hermanas Hospitalarias Research Foundation, C/. Dr. Antoni Pujadas 38, 08830, Sant Boi de Llobregat, Barcelona, Spain; CIBERSAM, Madrid, Spain
12. Li Z, Li J, Hong B, Nolte G, Engel AK, Zhang D. Speaker-Listener Neural Coupling Reveals an Adaptive Mechanism for Speech Comprehension in a Noisy Environment. Cereb Cortex 2021; 31:4719-4729. PMID: 33969389; DOI: 10.1093/cercor/bhab118.
Abstract
Comprehending speech in noise is an essential cognitive skill for verbal communication. However, it remains unclear how our brain adapts to noisy environments to achieve comprehension. The present study investigated the neural mechanisms of speech comprehension in noise using a functional near-infrared spectroscopy-based inter-brain approach. A group of speakers was invited to tell real-life stories. The recorded speech audio was mixed with meaningless white noise at four signal-to-noise levels and then played to listeners. Results showed that speaker-listener neural coupling at the listener's left inferior frontal gyrus (IFG), part of the sensorimotor system, and at the right middle temporal gyrus (MTG) and angular gyrus (AG), parts of the auditory system, was significantly higher in listening conditions than at baseline. More importantly, the correlation between neural coupling at the listener's left IFG and comprehension performance gradually became more positive with increasing noise level, indicating an adaptive role of the sensorimotor system in noisy speech comprehension; behavioral correlations for the coupling of the listener's right MTG and AG, in contrast, were obtained only in mild noise conditions, indicating a different and less robust mechanism. In sum, speaker-listener coupling analysis provides added value and new insight into the neural mechanisms of speech-in-noise comprehension.
Affiliation(s)
- Zhuoran Li: Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
- Jiawei Li: Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
- Bo Hong: Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Guido Nolte: Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg Eppendorf, Hamburg 20246, Germany
- Andreas K Engel: Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg Eppendorf, Hamburg 20246, Germany
- Dan Zhang: Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
13. Berezutskaya J, Baratin C, Freudenburg ZV, Ramsey NF. High-density intracranial recordings reveal a distinct site in anterior dorsal precentral cortex that tracks perceived speech. Hum Brain Mapp 2020; 41:4587-4609. PMID: 32744403; PMCID: PMC7555065; DOI: 10.1002/hbm.25144.
Abstract
Various brain regions are implicated in speech processing, and the specific function of some of them is better understood than others. In particular, the involvement of the dorsal precentral cortex (dPCC) in speech perception remains debated, and the function attributed to this region is more or less restricted to motor processing. In this study, we investigated high-density intracranial responses to speech fragments of a feature film, aiming to determine whether dPCC is engaged in the perception of continuous speech. Our findings show that dPCC exhibited a preference for speech over the other tested sounds. Moreover, the identified area was involved in tracking auditory properties of speech, including its spectral envelope, its rhythmic phrasal pattern and its pitch contour. The dPCC also showed the ability to filter out noise from the perceived speech. Comparing these results to data from motor experiments showed that the identified region had a distinct location in dPCC, anterior to the hand motor area and superior to the mouth articulator region. The present findings, uncovered with high-density intracranial recordings, help elucidate the functional specialization of the precentral cortex and demonstrate the unique role of its anterior dorsal region in continuous speech perception.
Affiliation(s)
- Julia Berezutskaya: Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Clarissa Baratin: Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands; Université Grenoble Alpes, Grenoble Institut des Neurosciences, Grenoble, France
- Zachary V. Freudenburg: Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Nicolas F. Ramsey: Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
14. Measuring children's auditory statistical learning via serial recall. J Exp Child Psychol 2020; 200:104964. PMID: 32858420; DOI: 10.1016/j.jecp.2020.104964.
Abstract
Statistical learning (SL) has been a prominent focus of research in developmental and adult populations, guided by the assumption that it is a fundamental component of learning underlying higher-order cognition. In developmental populations, however, there have been recent concerns regarding the degree to which many current tasks reliably measure SL, particularly in younger children. In the current article, we present the results of two studies that measured auditory statistical learning (ASL) of linguistic stimuli in children aged 5-8 years. Children listened to 6 min of continuous syllables comprising four trisyllabic pseudowords. Following the familiarization phase, children completed (a) a two-alternative forced-choice task and (b) a serial recall task in which they repeated either target sequences embedded during familiarization or foils, manipulated for sequence length. Results showed that, although both measures consistently revealed learning at the group level, the recall task better captured learning across the full range of abilities and was more reliable at the individual level. We conclude that, as has also been demonstrated in adults, the method holds promise for future studies of individual differences in ASL of linguistic stimuli.
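Familiarization streams of this kind are built so that transitional probabilities are high within pseudowords and low across word boundaries, which is what the learner must pick up. A sketch of how such a stream and its statistics might be generated; the pseudoword inventory is invented, not the study's stimuli:

```python
import random
from collections import Counter, defaultdict

# Hypothetical trisyllabic pseudowords (illustrative only).
WORDS = [("tu", "pi", "ro"), ("go", "la", "bu"), ("bi", "da", "ku"), ("pa", "do", "ti")]

def make_stream(n_words=300, seed=0):
    """Concatenate randomly ordered pseudowords, avoiding immediate repeats."""
    rng = random.Random(seed)
    stream, prev = [], None
    for _ in range(n_words):
        word = rng.choice([w for w in WORDS if w != prev])
        stream.extend(word)
        prev = word
    return stream

def transitional_probabilities(stream):
    """Estimate P(next syllable | current syllable) from the stream."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    tp = defaultdict(dict)
    for (a, b), n in pairs.items():
        tp[a][b] = n / firsts[a]
    return tp

tp = transitional_probabilities(make_stream())
print(tp["tu"]["pi"])  # within-word transition: 1.0
print(tp["ro"])        # word-final: probability spread across other words' onsets
```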
15. Ladányi E, Persici V, Fiveash A, Tillmann B, Gordon RL. Is atypical rhythm a risk factor for developmental speech and language disorders? Wiley Interdiscip Rev Cogn Sci 2020; 11:e1528. PMID: 32244259; PMCID: PMC7415602; DOI: 10.1002/wcs.1528.
Abstract
Although a growing literature points to substantial variation in speech/language abilities related to individual differences in musical abilities, mainstream models of communication sciences and disorders have not yet incorporated these individual differences into childhood speech/language development. This article reviews three sources of evidence in a comprehensive body of research aligning with three main themes: (a) associations between musical rhythm and speech/language processing, (b) musical rhythm in children with developmental speech/language disorders and common comorbid attentional and motor disorders, and (c) individual differences in mechanisms underlying rhythm processing in infants and their relationship with later speech/language development. In light of converging evidence on associations between musical rhythm and speech/language processing, we propose the Atypical Rhythm Risk Hypothesis, which posits that individuals with atypical rhythm are at higher risk for developmental speech/language disorders. The hypothesis is framed within the larger epidemiological literature, in which recent methodological advances allow for large-scale testing of shared underlying biology across clinically distinct disorders. A series of predictions for future work testing the Atypical Rhythm Risk Hypothesis is outlined. We suggest that if a significant body of evidence is found to support this hypothesis, we can envision new risk factor models that incorporate atypical rhythm to predict the risk of developing speech/language disorders. Given the high prevalence of speech/language disorders in the population and the negative long-term social and economic consequences of gaps in identifying at-risk children, these new lines of research could positively impact access to early identification and treatment.
Affiliation(s)
- Enikő Ladányi: Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Valentina Persici: Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, Tennessee, USA; Department of Psychology, Università degli Studi di Milano - Bicocca, Milan, Italy; Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee, USA
- Anna Fiveash: Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, CRNL, INSERM, University of Lyon 1, U1028, CNRS, UMR5292, Lyon, France
- Barbara Tillmann: Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, CRNL, INSERM, University of Lyon 1, U1028, CNRS, UMR5292, Lyon, France
- Reyna L Gordon: Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, Tennessee, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee, USA; Vanderbilt Genetics Institute, Vanderbilt University, Nashville, Tennessee, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, Tennessee, USA
16. Müsch K, Himberger K, Tan KM, Valiante TA, Honey CJ. Transformation of speech sequences in human sensorimotor circuits. Proc Natl Acad Sci U S A 2020; 117:3203-3213. PMID: 31996476; PMCID: PMC7022155; DOI: 10.1073/pnas.1910939117.
Abstract
After we listen to a series of words, we can silently replay them in our mind. Does this mental replay involve a reactivation of our original perceptual dynamics? We recorded electrocorticographic (ECoG) activity across the lateral cerebral cortex as people heard and then mentally rehearsed spoken sentences. For each region, we tested whether silent rehearsal of sentences involved reactivation of sentence-specific representations established during perception or transformation to a distinct representation. In sensorimotor and premotor cortex, we observed reliable and temporally precise responses to speech; these patterns transformed to distinct sentence-specific representations during mental rehearsal. In contrast, we observed less reliable and less temporally precise responses in prefrontal and temporoparietal cortex; these higher-order representations, which were sensitive to sentence semantics, were shared across perception and rehearsal of the same sentence. The mental rehearsal of natural speech involves the transformation of stimulus-locked speech representations in sensorimotor and premotor cortex, combined with diffuse reactivation of higher-order semantic representations.
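The perception-versus-rehearsal question can be framed as a pattern-similarity contrast: same-sentence correlations across the two phases versus different-sentence correlations. A minimal sketch with simulated response patterns (not the authors' ECoG pipeline):

```python
import numpy as np

rng = np.random.default_rng(5)
n_sentences, n_features = 20, 100   # e.g., electrode-by-time features per sentence

# Simulated patterns in which rehearsal partially reactivates perception.
perception = rng.normal(size=(n_sentences, n_features))
rehearsal = 0.6 * perception + rng.normal(size=(n_sentences, n_features))

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

same = np.mean([corr(perception[i], rehearsal[i]) for i in range(n_sentences)])
diff = np.mean([corr(perception[i], rehearsal[j])
                for i in range(n_sentences)
                for j in range(n_sentences) if i != j])
# same > diff suggests sentence-specific reactivation; same ≈ diff despite
# reliable responses would instead suggest transformation to a new code.
print(f"same-sentence r = {same:.2f}, different-sentence r = {diff:.2f}")
```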
Affiliation(s)
- Kathrin Müsch: Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD 21218
- Kevin Himberger: Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD 21218
- Kean Ming Tan: Department of Statistics, University of Michigan, Ann Arbor, MI 48109
- Taufik A Valiante: Krembil Research Institute, Toronto Western Hospital, Toronto, ON M5T 2S8, Canada
- Christopher J Honey: Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD 21218
17. Huggins JE, Guger C, Aarnoutse E, Allison B, Anderson CW, Bedrick S, Besio W, Chavarriaga R, Collinger JL, Do AH, Herff C, Hohmann M, Kinsella M, Lee K, Lotte F, Müller-Putz G, Nijholt A, Pels E, Peters B, Putze F, Rupp R, Schalk G, Scott S, Tangermann M, Tubig P, Zander T. Workshops of the Seventh International Brain-Computer Interface Meeting: Not Getting Lost in Translation. Brain Comput Interfaces 2019; 6:71-101. PMID: 33033729; PMCID: PMC7539697; DOI: 10.1080/2326263x.2019.1697163.
Abstract
The Seventh International Brain-Computer Interface (BCI) Meeting was held May 21-25, 2018 at the Asilomar Conference Grounds, Pacific Grove, California, United States. The interactive nature of this conference was embodied by 25 workshops covering topics in BCI (also called brain-machine interface) research. Workshops covered foundational topics such as hardware development and signal analysis algorithms, new and imaginative topics such as BCI for virtual reality and multi-brain BCIs, and translational topics such as clinical applications and ethical assumptions of BCI development. BCI research is expanding in the diversity of applications and of the populations for whom those applications are being developed. BCI applications are moving toward clinical readiness as researchers grapple with the practical considerations needed to ensure that BCI translational efforts will be successful. This paper summarizes each workshop, providing an overview of the topic of discussion, giving references for additional information, and identifying future issues for research and development that emerged from the interactions and discussion at the workshop.
Affiliation(s)
- Jane E Huggins: Department of Physical Medicine and Rehabilitation, Department of Biomedical Engineering, Neuroscience Graduate Program, University of Michigan, 325 East Eisenhower, Room 3017, Ann Arbor, Michigan 48108-5744, United States
- Christoph Guger: g.tec medical engineering GmbH/Guger Technologies OG, Sierningstrasse 14, 4521 Schiedlberg, Austria
- Erik Aarnoutse: UMC Utrecht Brain Center, Department of Neurology & Neurosurgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
- Brendan Allison: Dept. of Cognitive Science, Mail Code 0515, University of California at San Diego, La Jolla, United States
- Charles W Anderson: Department of Computer Science, Molecular, Cellular and Integrative Neuroscience Program, Colorado State University, Fort Collins, CO 80523
- Steven Bedrick: Center for Spoken Language Understanding, Oregon Health & Science University, Portland, OR 97239
- Walter Besio: Department of Electrical, Computer, & Biomedical Engineering and Interdisciplinary Neuroscience Program, University of Rhode Island, Kingston, Rhode Island, USA; CREmedical Corp., Kingston, Rhode Island, USA
- Ricardo Chavarriaga: Defitech Chair in Brain-Machine Interface (CNBI), Center for Neuroprosthetics, Ecole Polytechnique Fédérale de Lausanne - EPFL, Switzerland
- Jennifer L Collinger: University of Pittsburgh, Department of Physical Medicine and Rehabilitation; VA Pittsburgh Healthcare System, Department of Veterans Affairs, 3520 5th Ave, Pittsburgh, PA 15213
- An H Do: UC Irvine Brain Computer Interface Lab, Department of Neurology, University of California, Irvine
- Christian Herff: School of Mental Health and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Matthias Hohmann: Max Planck Institute for Intelligent Systems, Department for Empirical Inference, Max-Planck-Ring 4, 72074 Tübingen, Germany
- Michelle Kinsella: Oregon Health & Science University, Institute on Development & Disability, 707 SW Gaines St, #1290, Portland, OR 97239
- Kyuhwa Lee: Swiss Federal Institute of Technology in Lausanne (EPFL)
- Fabien Lotte: Inria Bordeaux Sud-Ouest, LaBRI (Univ. Bordeaux/CNRS/Bordeaux INP), 200 avenue de la vieille tour, 33405 Talence Cedex, France
- Anton Nijholt: Faculty EEMCS, University of Twente, Enschede, The Netherlands
- Elmar Pels: UMC Utrecht Brain Center, Department of Neurology & Neurosurgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
- Betts Peters: Oregon Health & Science University, Institute on Development & Disability, 707 SW Gaines St, #1290, Portland, OR 97239
- Felix Putze: Cognitive Systems Lab, University of Bremen, Enrique-Schmidt-Straße 5 (Cartesium), 28359 Bremen, Germany
- Rüdiger Rupp: Spinal Cord Injury Center, Heidelberg University Hospital
- Gerwin Schalk: National Center for Adaptive Neurotechnologies, Wadsworth Center, NYS Dept. of Health; Dept. of Neurology, Albany Medical College; Dept. of Biomed. Sci., State Univ. of New York at Albany, Center for Medical Sciences 2003, 150 New Scotland Avenue, Albany, New York 12208
- Stephanie Scott: Department of Media Communications, Colorado State University, Fort Collins, CO 80523
- Michael Tangermann: Brain State Decoding Lab, Cluster of Excellence BrainLinks-BrainTools, Computer Science Dept., University of Freiburg, Germany; Autonomous Intelligent Systems Lab, Computer Science Dept., University of Freiburg, Germany
- Paul Tubig: Department of Philosophy, Center for Neurotechnology, University of Washington, Savery Hall, Room 361, Seattle, WA 98195
- Thorsten Zander: Team PhyPA, Biological Psychology and Neuroergonomics, Technische Universität Berlin, Berlin, Germany; Zander Laboratories B.V., Amsterdam, The Netherlands
18. Herff C, Diener L, Angrick M, Mugler E, Tate MC, Goldrick MA, Krusienski DJ, Slutzky MW, Schultz T. Generating Natural, Intelligible Speech From Brain Activity in Motor, Premotor, and Inferior Frontal Cortices. Front Neurosci 2019; 13:1267. PMID: 31824257; PMCID: PMC6882773; DOI: 10.3389/fnins.2019.01267.
Abstract
Neural interfaces that directly produce intelligible speech from brain activity would allow people with severe impairment from neurological disorders to communicate more naturally. Here, we record neural population activity in motor, premotor and inferior frontal cortices during speech production using electrocorticography (ECoG) and show that ECoG signals alone can be used to generate intelligible speech output that can preserve conversational cues. To produce speech directly from neural data, we adapted a method from the field of speech synthesis called unit selection, in which units of speech are concatenated to form audible output. In our approach, which we call Brain-To-Speech, we chose subsequent units of speech based on the measured ECoG activity to generate audio waveforms directly from the neural recordings. Brain-To-Speech employed the user's own voice to generate speech that sounded very natural and included features such as prosody and accentuation. By investigating the brain areas involved in speech production separately, we found that speech motor cortex provided more information for the reconstruction process than the other cortical areas.
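The unit-selection idea, picking the stored speech unit whose paired neural features best match the current ECoG features and concatenating the corresponding audio, reduces to a nearest-neighbor search in its simplest form. A sketch under assumed feature dimensions and unit lengths:

```python
import numpy as np

def brain_to_speech(test_feats, train_feats, train_audio):
    """Concatenative synthesis by nearest-neighbor unit selection.

    test_feats: (n_units, n_features) ECoG features of the utterance to decode.
    train_feats: (n_stored, n_features) features paired with stored audio units.
    train_audio: (n_stored, unit_samples) audio snippets from the user's voice.
    """
    out = []
    for f in test_feats:
        d = np.linalg.norm(train_feats - f, axis=1)  # distance to every stored unit
        out.append(train_audio[np.argmin(d)])        # best-matching unit's audio
    return np.concatenate(out)                       # concatenated waveform

# Illustrative shapes: 50 ms units at 16 kHz, one feature per electrode.
rng = np.random.default_rng(6)
wav = brain_to_speech(rng.normal(size=(40, 64)),
                      rng.normal(size=(5000, 64)),
                      rng.normal(size=(5000, 800)))
print(wav.shape)  # (32000,) ~2 s of audio
```

A full unit-selection synthesizer would also weigh a concatenation cost between adjacent units and smooth unit boundaries; the sketch keeps only the target-cost step.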
Affiliation(s)
- Christian Herff: School of Mental Health & Neuroscience, Maastricht University, Maastricht, Netherlands; Cognitive Systems Lab, University of Bremen, Bremen, Germany
- Lorenz Diener: Cognitive Systems Lab, University of Bremen, Bremen, Germany
- Miguel Angrick: Cognitive Systems Lab, University of Bremen, Bremen, Germany
- Emily Mugler: Department of Neurology, Northwestern University, Chicago, IL, United States
- Matthew C. Tate: Department of Neurosurgery, Northwestern University, Chicago, IL, United States
- Matthew A. Goldrick: Department of Linguistics, Northwestern University, Chicago, IL, United States
- Dean J. Krusienski: Biomedical Engineering Department, Virginia Commonwealth University, Richmond, VA, United States
- Marc W. Slutzky: Department of Neurology, Northwestern University, Chicago, IL, United States; Department of Physiology, Northwestern University, Chicago, IL, United States; Department of Physical Medicine & Rehabilitation, Northwestern University, Chicago, IL, United States
- Tanja Schultz: Cognitive Systems Lab, University of Bremen, Bremen, Germany
19. Kern M, Bert S, Glanz O, Schulze-Bonhage A, Ball T. Human motor cortex relies on sparse and action-specific activation during laughing, smiling and speech production. Commun Biol 2019; 2:118. PMID: 30937400; PMCID: PMC6435746; DOI: 10.1038/s42003-019-0360-3.
Abstract
Smiling, laughing, and overt speech production are fundamental to human everyday communication. However, little is known about how the human brain achieves the highly accurate and differentiated control of such orofacial movements under natural conditions. Here, we utilized the high spatiotemporal resolution of subdural recordings to elucidate how human motor cortex is functionally engaged during control of real-life orofacial motor behaviour. For each investigated movement class (lip licking, speech production, laughing and smiling), our findings reveal a characteristic brain activity pattern within the mouth motor cortex, with both spatial segregation and overlap between classes. Our findings thus show that motor cortex relies on sparse and action-specific activation during real-life orofacial behaviour, apparently organized in distinct but overlapping subareas that control different types of natural orofacial movements.
Affiliation(s)
- Markus Kern: Medical AI Lab, Department of Neurosurgery, Medical Center – University of Freiburg, Freiburg, 79106 Germany; Neurobiology and Biophysics, Faculty of Biology, University of Freiburg, Freiburg, 79104 Germany; Epilepsy Center, Department of Neurosurgery, Medical Center – University of Freiburg, Freiburg, 79106 Germany; BrainLinks-BrainTools Cluster of Excellence, University of Freiburg, Freiburg, 79110 Germany
- Sina Bert: Neurobiology and Biophysics, Faculty of Biology, University of Freiburg, Freiburg, 79104 Germany; Epilepsy Center, Department of Neurosurgery, Medical Center – University of Freiburg, Freiburg, 79106 Germany; BrainLinks-BrainTools Cluster of Excellence, University of Freiburg, Freiburg, 79110 Germany
- Olga Glanz: Medical AI Lab, Department of Neurosurgery, Medical Center – University of Freiburg, Freiburg, 79106 Germany; Epilepsy Center, Department of Neurosurgery, Medical Center – University of Freiburg, Freiburg, 79106 Germany; BrainLinks-BrainTools Cluster of Excellence, University of Freiburg, Freiburg, 79110 Germany; Hermann Paul School of Linguistics, University of Freiburg, Freiburg, 79085 Germany; GRK 1624, University of Freiburg, Freiburg, 79098 Germany
- Andreas Schulze-Bonhage: Epilepsy Center, Department of Neurosurgery, Medical Center – University of Freiburg, Freiburg, 79106 Germany
- Tonio Ball: Medical AI Lab, Department of Neurosurgery, Medical Center – University of Freiburg, Freiburg, 79106 Germany; BrainLinks-BrainTools Cluster of Excellence, University of Freiburg, Freiburg, 79110 Germany
20. Rabbani Q, Milsap G, Crone NE. The Potential for a Speech Brain-Computer Interface Using Chronic Electrocorticography. Neurotherapeutics 2019; 16:144-165. PMID: 30617653; PMCID: PMC6361062; DOI: 10.1007/s13311-018-00692-2.
Abstract
A brain-computer interface (BCI) is a technology that uses neural features to restore or augment the capabilities of its user. A BCI for speech would enable communication in real time via neural correlates of attempted or imagined speech. Such a technology would potentially restore communication and improve quality of life for locked-in patients and other patients with severe communication disorders. There have been many recent developments in neural decoders, neural feature extraction, and brain recording modalities, facilitating BCI for the control of prosthetics and progress in automatic speech recognition (ASR). Indeed, ASR and related fields have developed significantly over the past years and lend many insights into the requirements, goals, and strategies for speech BCI. Neural speech decoding is a comparatively new field but has shown much promise, with recent studies demonstrating semantic, auditory, and articulatory decoding using electrocorticography (ECoG) and other neural recording modalities. Because the neural representations for speech and language are widely distributed over cortical regions spanning the frontal, parietal, and temporal lobes, the mesoscopic scale of population activity captured by ECoG surface electrode arrays may have distinct advantages for speech BCI, in contrast to the advantages of microelectrode arrays for upper-limb BCI. Nevertheless, many challenges remain for the translation of speech BCIs to clinical populations. This review discusses and outlines the current state of the art for speech BCI and explores what a speech BCI using chronic ECoG might entail.
Affiliation(s)
- Qinwan Rabbani: Department of Electrical Engineering, The Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Griffin Milsap: Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Nathan E Crone: Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
21. Pérez A, Dumas G, Karadag M, Duñabeitia JA. Differential brain-to-brain entrainment while speaking and listening in native and foreign languages. Cortex 2018; 111:303-315. PMID: 30598230; DOI: 10.1016/j.cortex.2018.11.026.
Abstract
The study explores inter-brain neural coupling when interlocutors engage in a conversation, whether in their native or a nonnative language. To this end, electroencephalographic hyperscanning was used to study brain-to-brain phase synchronization during a two-person turn-taking verbal exchange with no visual contact, in either a native or a foreign language context. Results show that the coupling strength between brain signals is increased in both the native and the foreign language contexts, specifically in the alpha frequency band. A difference in brain-to-speech entrainment between native and foreign languages is also shown. These results indicate that between-brain similarities in the timing of neural activations and their spatial distributions change depending on the language code used. We argue that factors like linguistic alignment, joint attention and brain entrainment to speech operate with a language-idiosyncratic neural configuration, modulating the alignment of neural activity between speakers and listeners. Other possible factors leading to the differential inter-brain synchronization patterns, as well as the potential features of brain-to-brain entrainment as a mechanism, are briefly discussed. We conclude that linguistic context should be considered when addressing interpersonal communication. These findings open doors to quantifying linguistic interactions.
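Brain-to-brain phase synchronization in a band such as alpha is often quantified with the phase-locking value (PLV) between band-filtered signals from the two participants; a minimal sketch under those assumptions (the study's exact estimator may differ):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band=(8, 12)):
    """Phase-locking value between two signals within a frequency band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    px = np.angle(hilbert(filtfilt(b, a, x)))
    py = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (px - py))))  # 0 = no locking, 1 = perfect

# Simulated speaker/listener EEG sharing a 10 Hz (alpha) component.
rng = np.random.default_rng(7)
fs = 250
t = np.arange(0, 30, 1 / fs)
alpha = np.sin(2 * np.pi * 10 * t)
speaker = alpha + rng.normal(size=t.size)
listener = 0.7 * alpha + rng.normal(size=t.size)
print(f"alpha-band PLV = {plv(speaker, listener, fs):.2f}")
```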
Affiliation(s)
- Alejandro Pérez: Centre for French & Linguistics, University of Toronto Scarborough, Toronto, Canada; Psychology Department, University of Toronto Scarborough, Toronto, Canada; BCBL, Basque Center on Cognition Brain and Language, Donostia-San Sebastián, Spain
- Guillaume Dumas: Human Genetics and Cognitive Functions Unit, Institut Pasteur, Paris, France; CNRS UMR 3571 Genes, Synapses and Cognition, Institut Pasteur, Paris, France; Human Genetics and Cognitive Functions, University Paris Diderot, Sorbonne Paris Cité, Paris, France
- Melek Karadag: Centre for Speech, Language and the Brain, Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Jon Andoni Duñabeitia: BCBL, Basque Center on Cognition Brain and Language, Donostia-San Sebastián, Spain; Facultad de Lenguas y Educación, Universidad Nebrija, Madrid, Spain