1. Kapitány-Fövény M, Vetró M, Révy G, Fabó D, Szirmai D, Hullám G. EEG based depression detection by machine learning: Does inner or overt speech condition provide better biomarkers when using emotion words as experimental cues? J Psychiatr Res 2024; 178:66-76. [PMID: 39121709] [DOI: 10.1016/j.jpsychires.2024.08.002]
Abstract
BACKGROUND Objective diagnostic approaches need to be tested to enhance the efficacy of depression detection. Non-invasive EEG-based identification represents a promising area. AIMS The present EEG study addresses two central questions: 1) whether the inner or the overt speech condition results in higher diagnostic accuracy of depression detection; and 2) whether the affective nature of the presented emotion words matters in such a diagnostic approach. METHODS A matched case-control sample consisting of 10 depressed subjects and 10 healthy controls was assessed. An EEG headcap containing 64 electrodes measured neural responses to experimental cues presented in the form of 15 different words belonging to three emotional categories: neutral, positive, and negative. 120 experimental cues were presented to every participant, each containing an "inner speech" and an "overt speech" segment. An EEGNet neural network was utilized. RESULTS The highest diagnostic accuracy of the EEGNet model was observed in the overt speech condition (69.5%), while an overall subject-wise accuracy of 80% was achieved by the model. Only a negligible difference in diagnostic accuracy was found between aggregated emotion word categories, with the highest accuracy (70.2%) associated with the presentation of positive emotion words. Model decisions were primarily influenced by electrodes over the left parietal, left temporal, and middle frontal areas. CONCLUSIONS While the generalizability of our results is limited by the small sample size and potentially uncontrolled confounders, depression was associated with sensitive and presumably network-like aspects of these brain areas, potentially implying a higher level of emotion regulation that increases primarily in open communication.
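The gap between the trial-level accuracy (69.5%) and the subject-wise accuracy (80%) reported above typically arises from pooling many noisy trial predictions into one decision per subject. A minimal, generic sketch of such majority-vote aggregation (illustrative only, not the authors' EEGNet pipeline; all names are hypothetical):

```python
from collections import Counter

def subject_wise_accuracy(trial_preds, true_labels):
    """Aggregate per-trial 0/1 predictions into one label per subject
    by majority vote, then score against the true diagnoses.

    trial_preds: dict mapping subject id -> list of per-trial predictions
    true_labels: dict mapping subject id -> true diagnosis (0/1)
    """
    correct = 0
    for subj, preds in trial_preds.items():
        # majority vote over this subject's trials
        voted = Counter(preds).most_common(1)[0][0]
        correct += (voted == true_labels[subj])
    return correct / len(trial_preds)

# Toy example: noisy trial-level predictions can still yield a
# clean subject-level decision once votes are pooled.
preds = {"s1": [1, 1, 0, 1], "s2": [0, 0, 1, 0], "s3": [1, 0, 1, 1]}
labels = {"s1": 1, "s2": 0, "s3": 1}
print(subject_wise_accuracy(preds, labels))  # 1.0
```

Each subject is classified correctly despite individual trial errors, which is how subject-wise accuracy can exceed trial-wise accuracy.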
Affiliation(s)
- Máté Kapitány-Fövény
- Nyírő Gyula National Institute of Psychiatry and Addictology, Budapest, Lehel utca 59., H-1135, Hungary; Faculty of Health Sciences, Semmelweis University, Budapest, Vas utca 17., H-1088, Hungary
- Mihály Vetró
- Department of Measurement and Information Systems, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Budapest, Magyar Tudósok körútja 2., H-1117, Hungary
- Gábor Révy
- Department of Measurement and Information Systems, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Budapest, Magyar Tudósok körútja 2., H-1117, Hungary
- Dániel Fabó
- Department of Neurosurgery, Faculty of Medicine, Semmelweis University, Budapest, Amerikai út 57., H-1145, Hungary
- Danuta Szirmai
- Department of Neurosurgery, Faculty of Medicine, Semmelweis University, Budapest, Amerikai út 57., H-1145, Hungary
- Gábor Hullám
- Department of Measurement and Information Systems, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Budapest, Magyar Tudósok körútja 2., H-1117, Hungary
2. Alexander JM, Stark BC. Interdisciplinary approaches to understanding the inner speech, with emphasis on the role of incorporating clinical data. Eur J Neurosci 2024. [PMID: 39015943] [DOI: 10.1111/ejn.16470]
Abstract
Neuroscience has largely conceptualized inner speech, sometimes called covert speech, as being a part of the language system, namely, a precursor to overt speech and/or speech without the motor component (impoverished motor speech). Yet interdisciplinary work has strongly suggested that inner speech is multidimensional and situated within the language system as well as in more domain general systems. By leveraging evidence from philosophy, linguistics, neuroscience and cognitive science, we argue that neuroscience can gain a more comprehensive understanding of inner speech processes. We will summarize the existing knowledge on the traditional approach to understanding the neuroscience of inner speech, which is squarely through the language system, before discussing interdisciplinary approaches to understanding the cognitive, linguistic and neural substrates/mechanisms that may be involved in inner speech. Given our own interests in inner speech after brain injury, we finish by discussing the theoretical and clinical benefits of researching inner speech in aphasia through an interdisciplinary lens.
Affiliation(s)
- Julianne M Alexander
- Department of Speech, Language and Hearing Sciences, Program in Neuroscience, Indiana University Bloomington, Bloomington, Indiana, USA
- Brielle C Stark
- Department of Speech, Language and Hearing Sciences, Program in Neuroscience, Indiana University Bloomington, Bloomington, Indiana, USA
3. Zhang W, Jiang M, Teo KAC, Bhuvanakantham R, Fong L, Sim WKJ, Guo Z, Foo CHV, Chua RHJ, Padmanabhan P, Leong V, Lu J, Gulyás B, Guan C. Revealing the spatiotemporal brain dynamics of covert speech compared with overt speech: A simultaneous EEG-fMRI study. Neuroimage 2024; 293:120629. [PMID: 38697588] [DOI: 10.1016/j.neuroimage.2024.120629]
Abstract
Covert speech (CS) refers to speaking internally to oneself without producing any sound or movement. CS is involved in multiple cognitive functions and disorders. Reconstructing CS content by brain-computer interface (BCI) is also an emerging technique. However, it is still controversial whether CS is a truncated neural process of overt speech (OS) or involves independent patterns. Here, we performed a word-speaking experiment with simultaneous EEG-fMRI. It involved 32 participants, who generated words both overtly and covertly. By integrating spatial constraints from fMRI into EEG source localization, we precisely estimated the spatiotemporal dynamics of neural activity. During CS, EEG source activity was localized in three regions: the left precentral gyrus, the left supplementary motor area, and the left putamen. Although OS involved more brain regions with stronger activations, CS was characterized by an earlier event-locked activation in the left putamen (peak at 262 ms versus 1170 ms). The left putamen was also identified as the only hub node within the functional connectivity (FC) networks of both OS and CS, while showing weaker FC strength towards speech-related regions in the dominant hemisphere during CS. Path analysis revealed significant multivariate associations, indicating an indirect association between the earlier activation in the left putamen and CS, which was mediated by reduced FC towards speech-related regions. These findings revealed the specific spatiotemporal dynamics of CS, offering insights into CS mechanisms that are potentially relevant for future treatment of self-regulation deficits, speech disorders, and development of BCI speech applications.
Affiliation(s)
- Wei Zhang
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Muyun Jiang
- School of Computer Science and Engineering, Nanyang Technological University, Singapore
- Kok Ann Colin Teo
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; IGP-Neuroscience, Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore; Division of Neurosurgery, National University Health System, Singapore
- Raghavan Bhuvanakantham
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- LaiGuan Fong
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore
- Wei Khang Jeremy Sim
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; IGP-Neuroscience, Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore
- Zhiwei Guo
- School of Computer Science and Engineering, Nanyang Technological University, Singapore
- Parasuraman Padmanabhan
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Victoria Leong
- Division of Psychology, Nanyang Technological University, Singapore; Department of Pediatrics, University of Cambridge, United Kingdom
- Jia Lu
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; DSO National Laboratories, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Balázs Gulyás
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Cuntai Guan
- School of Computer Science and Engineering, Nanyang Technological University, Singapore
4. Perkušić Čović M, Vujović I, Šoda J, Palmović M, Rogić Vidaković M. Overt Word Reading and Visual Object Naming in Adults with Dyslexia: Electroencephalography Study in Transparent Orthography. Bioengineering (Basel) 2024; 11:459. [PMID: 38790326] [PMCID: PMC11117949] [DOI: 10.3390/bioengineering11050459]
Abstract
The study aimed to investigate overt reading and naming processes in adult people with dyslexia (PDs) in a shallow (transparent) language orthography. The results of adult PDs are compared with those of adult healthy controls (HCs). Comparisons are made in three processing windows: pre-lexical (150-260 ms), lexical (280-700 ms), and post-lexical (750-1000 ms). Twelve PDs and HCs performed overt reading and naming tasks under EEG recording. The word reading and naming tasks consisted of items from sparse neighborhoods with closed phonemic onset (words/objects sharing the same onset). For the analysis of the mean ERP amplitude in the pre-lexical, lexical, and post-lexical time windows, a mixed-design ANOVA was performed with right (F4, FC2, FC6, C4, T8, CP2, CP6, P4) and left (F3, FC5, FC1, T7, C3, CP5, CP1, P7, P3) electrode sites as within-subject factors and group (PD vs. HC) as between-subject factor. Behavioral results revealed significantly prolonged reading latency in PDs relative to HCs, while no difference was detected in naming response latency. ERP differences between PDs and HCs were found in the right hemisphere's pre-lexical time window (160-200 ms) for word reading aloud. For visual object naming aloud, ERP differences between PDs and HCs were found in the right hemisphere's post-lexical time window (900-1000 ms). The present study demonstrated different distributions of the electric field at the scalp in specific time windows between the two groups in the right hemisphere in both word reading and visual object naming aloud, suggesting alternative processing strategies in adult PDs. These results indirectly support the view that adult PDs in shallow language orthography probably rely on the grapho-phonological route during overt word reading and have difficulties with phoneme and word retrieval during overt visual object naming in adulthood.
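The mean ERP amplitude analyzed above is simply the average of the waveform samples falling inside a latency window such as 160-200 ms. A minimal sketch of that computation for one channel (illustrative only, not the authors' pipeline; the sampling rate and data are hypothetical):

```python
def mean_amplitude(erp, t_start_ms, t_end_ms, fs_hz, epoch_start_ms=0):
    """Average an ERP waveform (one channel) over a latency window,
    e.g. the 160-200 ms pre-lexical window.

    erp: list of samples for one averaged epoch
    fs_hz: sampling rate of the recording
    epoch_start_ms: latency of the first sample relative to stimulus onset
    """
    i0 = int((t_start_ms - epoch_start_ms) * fs_hz / 1000)
    i1 = int((t_end_ms - epoch_start_ms) * fs_hz / 1000)
    window = erp[i0:i1]
    return sum(window) / len(window)

# At 1000 Hz sampling, samples 160..199 cover the 160-200 ms window.
erp = [0.0] * 160 + [2.0] * 40 + [0.0] * 800
print(mean_amplitude(erp, 160, 200, 1000))  # 2.0
```

These per-window, per-electrode means are then the dependent variable entered into the mixed-design ANOVA.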
Affiliation(s)
- Maja Perkušić Čović
- Polyclinic for Rehabilitation of People with Developmental Disorders, 21000 Split, Croatia
- Igor Vujović
- Signal Processing, Analysis, and Advanced Diagnostics Research and Education Laboratory (SPAADREL), Faculty of Maritime Studies, University of Split, 21000 Split, Croatia
- Joško Šoda
- Signal Processing, Analysis, and Advanced Diagnostics Research and Education Laboratory (SPAADREL), Faculty of Maritime Studies, University of Split, 21000 Split, Croatia
- Marijan Palmović
- Laboratory for Psycholinguistic Research, Department of Speech and Language Pathology, University of Zagreb, 10000 Zagreb, Croatia
- Maja Rogić Vidaković
- Laboratory for Human and Experimental Neurophysiology, Department of Neuroscience, School of Medicine, University of Split, 21000 Split, Croatia
5. Brinthaupt TM, Morin A. Self-talk: research challenges and opportunities. Front Psychol 2023; 14:1210960. [PMID: 37465491] [PMCID: PMC10350497] [DOI: 10.3389/fpsyg.2023.1210960]
Abstract
In this review, we discuss major measurement and methodological challenges to studying self-talk. We review the assessment of self-talk frequency, studying self-talk in its natural context, personal pronoun usage within self-talk, experiential sampling methods, and the experimental manipulation of self-talk. We highlight new possible research opportunities and discuss recent advances such as brain imaging studies of self-talk, the use of self-talk by robots, and measurement of self-talk in aphasic patients.
Affiliation(s)
- Thomas M. Brinthaupt
- Department of Psychology, Middle Tennessee State University, Murfreesboro, TN, United States
- Alain Morin
- Department of Psychology, Mount Royal University, Calgary, AB, Canada
6. Abdulghani MM, Walters WL, Abed KH. Imagined Speech Classification Using EEG and Deep Learning. Bioengineering (Basel) 2023; 10:649. [PMID: 37370580] [DOI: 10.3390/bioengineering10060649]
Abstract
In this paper, we propose an imagined speech-based brain wave pattern recognition approach using deep learning. Multiple features were extracted concurrently from eight-channel electroencephalography (EEG) signals. To obtain classifiable EEG data with fewer sensors, we placed the EEG sensors on carefully selected spots on the scalp. To reduce the dimensions and complexity of the EEG dataset and to avoid overfitting during deep learning, we utilized the wavelet scattering transformation. A low-cost 8-channel EEG headset was used with MATLAB 2023a to acquire the EEG data. A long short-term memory recurrent neural network (LSTM-RNN) was used to decode the identified EEG signals into four audio commands: up, down, left, and right. The wavelet scattering transformation was applied to extract the most stable features by passing the EEG dataset through a series of filtration processes; filtration was implemented for each individual command in the EEG datasets. The proposed approach achieved a 92.50% overall classification accuracy, which is promising for designing trustworthy imagined speech-based brain-computer interface (BCI) real-time systems. For a fuller evaluation of classification performance, other metrics were considered: we obtained 92.74%, 92.50%, and 92.62% for precision, recall, and F1-score, respectively.
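With four command classes, the reported precision, recall, and F1 are presumably averaged over classes. A generic sketch of macro-averaged metrics (standard formulas, not the authors' evaluation code; the example labels are hypothetical):

```python
def macro_metrics(y_true, y_pred, classes):
    """Macro-averaged precision, recall, and F1 over the given classes."""
    precisions, recalls, f1s = [], [], []
    for c in classes:
        # per-class true positives, false positives, false negatives
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(classes)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

commands = ["up", "down", "left", "right"]
y_true = ["up", "down", "left", "right", "up", "down"]
y_pred = ["up", "down", "left", "left", "up", "down"]
print(macro_metrics(y_true, y_pred, commands))
```

F1 is the harmonic mean of precision and recall, which is why the reported 92.62% sits between the reported precision and recall values.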
Affiliation(s)
- Mokhles M Abdulghani
- Department of Electrical & Computer Engineering and Computer Science, College of Sciences, Engineering & Technology, Jackson State University, Jackson, MS 39217, USA
- Wilbur L Walters
- Department of Electrical & Computer Engineering and Computer Science, College of Sciences, Engineering & Technology, Jackson State University, Jackson, MS 39217, USA
- Khalid H Abed
- Department of Electrical & Computer Engineering and Computer Science, College of Sciences, Engineering & Technology, Jackson State University, Jackson, MS 39217, USA
7. Viacheslav I, Vartanov A, Bueva A, Bronov O. The emotional component of inner speech: A pilot exploratory fMRI study. Brain Cogn 2023; 165:105939. [PMID: 36549191] [DOI: 10.1016/j.bandc.2022.105939]
Abstract
Inner speech is one of the most important human cognitive processes. Nevertheless, many aspects of inner speech, particularly its emotional characteristics, remain poorly understood. The main objectives of our study are to identify the neural substrate of the emotional (prosodic) dimension of inner speech and the brain structures that control the suppression of expression in inner speech. To achieve these goals, a pilot exploratory fMRI study was carried out with 33 participants. The subjects listened to pre-recorded phrases or individual words pronounced with different emotional connotations, and then repeated them in inner speech with the same emotion or with suppressed expression (neutral). The results show that there is an emotional component in inner speech, which is encoded by similar structures as in spoken speech. A unique role of the caudate nuclei in the suppression of expression in inner speech was also shown.
Affiliation(s)
- Oleg Bronov
- Federal State Budgetary Institution "National Medical and Surgical Center named after N.I. Pirogov", Russia
8. Nooripour R, Mazloomzadeh M, Shirkhani M, Ghanbari N, Var TSP, Hosseini SR. Can We Predict Dissociative Experiences Based on Inner Speech in Nonclinical Population by Mediating Role of Sleep Disturbance? J Nerv Ment Dis 2022; 210:607-612. [PMID: 35193997] [DOI: 10.1097/nmd.0000000000001499]
Abstract
Dissociative experiences include various experiences and behaviors that can cause people to feel disturbed and disconnected from reality. Individuals with dissociative experiences may exhibit various symptoms, particularly in their inner speech. The present study examined how dissociative experiences can be predicted from inner speech in nonclinical populations, with sleep disturbance as a mediator. In this cross-sectional study, data were collected from university students aged 18 to 40 years (N = 400). They were asked to complete online self-report questionnaires: the Varieties of Inner Speech Questionnaire, the Dissociative Experiences Scale, and the Pittsburgh Sleep Quality Index. Results showed that dissociative experiences were related to sleep disturbance (r = 0.29, p < 0.001), dialogic inner speech (r = 0.39, p < 0.001), condensed inner speech (r = 0.31, p < 0.001), other people's inner speech (r = 0.46, p < 0.001), evaluative/motivational inner speech (r = 0.28, p < 0.001), and total inner speech score (r = 0.48, p < 0.001). Thus, the current study showed a significant relationship among inner speech, dissociative experiences, and sleep disturbances. Inner speech was found to predict dissociative experiences with sleep disturbance as a mediator in the nonclinical population. Individuals with strong dissociative experiences had high scores for inner speech and sleep disturbance. The present study highlights a new area of research on the relationship between inner speech and dissociation. Future studies could further explore this area to validate the findings reported here and support the authors' theoretical interpretation.
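The r values above are ordinary Pearson correlations between questionnaire totals. A minimal sketch of that computation (illustrative only, not the authors' analysis; the score lists are hypothetical):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical inner-speech vs. dissociation totals for five respondents.
inner_speech = [40, 55, 62, 70, 85]
dissociation = [10, 14, 15, 22, 30]
print(round(pearson_r(inner_speech, dissociation), 2))  # 0.97
```

The mediation claim goes beyond pairwise correlations (it requires a path model with an indirect effect through sleep disturbance), but each reported r is computed this way.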
Affiliation(s)
- Roghieh Nooripour
- Department of Counseling, Faculty of Education and Psychology, Alzahra University, Tehran
- Mohammadreza Mazloomzadeh
- Department of Psychology, Faculty of Education Sciences and Psychology, Ferdowsi University of Mashhad, Mashhad
- Milad Shirkhani
- Department of Psychology, Faculty of Education Sciences and Psychology, Ferdowsi University of Mashhad, Mashhad
- Nikzad Ghanbari
- Department of Clinical Psychology, Faculty of Psychology and Educational Sciences, Shahid Beheshti University, Tehran
- Tabassom Saeid Par Var
- Addiction Department, School of Behavioral Sciences and Mental Health, Iran University of Medical Sciences, Tehran, Iran
- Seyed Ruhollah Hosseini
- Department of Psychology, Faculty of Education Sciences and Psychology, Ferdowsi University of Mashhad, Mashhad
9. Function of language skills in preschooler's problem-solving performance: The role of self-directed speech. Journal of Applied Developmental Psychology 2022. [DOI: 10.1016/j.appdev.2022.101431]
10. Hakim U, Pinti P, Noah AJ, Zhang X, Burgess P, Hamilton A, Hirsch J, Tachtsidis I. Investigation of functional near-infrared spectroscopy signal quality and development of the hemodynamic phase correlation signal. Neurophotonics 2022; 9:025001. [PMID: 35599691] [PMCID: PMC9116886] [DOI: 10.1117/1.nph.9.2.025001]
Abstract
Significance: There is a longstanding recommendation within the field of fNIRS to use both oxygenated (HbO2) and deoxygenated (HHb) hemoglobin when analyzing and interpreting results. Despite this, many fNIRS studies focus on HbO2 only. Previous work has shown that HbO2 on its own is susceptible to systemic interference, so results may largely reflect that rather than functional activation. Studies using both HbO2 and HHb to draw their conclusions do so with varying methods, which can lead to discrepancies between studies. Combining HbO2 and HHb has been recommended as a way to utilize both signals in analysis. Aim: We present the development of the hemodynamic phase correlation (HPC) signal, which combines HbO2 and HHb as recommended. We use synthetic and experimental data to evaluate how the HPC compares with signals currently used for fNIRS analysis. Approach: 18 synthetic datasets were formed using resting-state fNIRS data acquired from 16 channels over the frontal lobe. To simulate fNIRS data for a block-design task, we superimposed a synthetic task-related hemodynamic response onto the resting-state data. These data were used to develop an HPC-general linear model (GLM) framework. Experiments were conducted to investigate the performance of each signal at different SNRs and to investigate the effect of false positives on the data. Performance was based on each signal's mean T-value across channels. Experimental data recorded from 128 participants across 134 channels during a finger-tapping task were used to investigate the performance of multiple signals (HbO2, HHb, HbT, HbD, correlation-based signal improvement (CBSI), and HPC) on real data. Signal performance was evaluated on its ability to localize activation to a specific region of interest. Results: Varying the SNR showed that the HPC signal has the highest performance at high SNRs, while the CBSI performed best at medium-to-low SNRs. Analyses of the effect of false positives showed that the HPC and CBSI signals reflect the effect of false positives on HbO2 and HHb. The analysis of real experimental data revealed that the HPC and HHb signals localize activation to the primary motor cortex with the highest accuracy. Conclusions: We developed a new hemodynamic signal (HPC) with the potential to overcome the current limitations of using HbO2 and HHb separately. Our results suggest that the HPC signal provides accuracy comparable to HHb in localizing functional activation while being more robust against false positives.
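For context, the CBSI baseline compared against the HPC signal combines HbO2 and HHb under the assumption that they are maximally negatively correlated. A rough sketch of the commonly cited CBSI formula, x' = (x - beta*y)/2 with beta = std(HbO2)/std(HHb) (this is the standard CBSI formulation, not the authors' HPC method; the sample data are hypothetical):

```python
import statistics

def cbsi(hbo2, hhb):
    """Correlation-based signal improvement: combine HbO2 (x) and HHb (y)
    into one denoised signal, x' = (x - beta*y) / 2, where
    beta = std(HbO2) / std(HHb) scales HHb to HbO2's amplitude."""
    beta = statistics.pstdev(hbo2) / statistics.pstdev(hhb)
    return [(x - beta * y) / 2 for x, y in zip(hbo2, hhb)]

# When HHb perfectly mirrors HbO2 (beta = 1), CBSI returns HbO2 unchanged.
hbo2 = [0.0, 1.0, 2.0, 1.0, 0.0]
hhb = [-v for v in hbo2]
print(cbsi(hbo2, hhb))  # [0.0, 1.0, 2.0, 1.0, 0.0]
```

Intuitively, components that move the same way in both chromophores (systemic noise) cancel, while anticorrelated functional activation is preserved.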
Affiliation(s)
- Uzair Hakim
- University College London, Department of Medical Physics and Biomedical Engineering, London, United Kingdom
- Paola Pinti
- University College London, Department of Medical Physics and Biomedical Engineering, London, United Kingdom
- University of London, Birkbeck College, Centre for Brain and Cognitive Development, London, United Kingdom
- Adam J. Noah
- Yale University, Department of Neuroscience and Comparative Medicine, Yale School of Medicine, United States
- Xian Zhang
- Yale University, Department of Neuroscience and Comparative Medicine, Yale School of Medicine, United States
- Paul Burgess
- University College London, Institute of Cognitive Neuroscience, London, United Kingdom
- Antonia Hamilton
- University College London, Institute of Cognitive Neuroscience, London, United Kingdom
- Joy Hirsch
- University College London, Department of Medical Physics and Biomedical Engineering, London, United Kingdom
- Yale University, Department of Neuroscience and Comparative Medicine, Yale School of Medicine, United States
- Ilias Tachtsidis
- University College London, Department of Medical Physics and Biomedical Engineering, London, United Kingdom
11. Yeung MK, Chu VW. Viewing neurovascular coupling through the lens of combined EEG-fNIRS: A systematic review of current methods. Psychophysiology 2022; 59:e14054. [PMID: 35357703] [DOI: 10.1111/psyp.14054]
Abstract
Neurovascular coupling is a key physiological mechanism that occurs in the healthy human brain, and understanding this process has implications for understanding the aging and neuropsychiatric populations. Combined electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) has emerged as a promising, noninvasive tool for probing neurovascular interactions in humans. However, the utility of this approach critically depends on the methodological quality used for multimodal integration. Despite a growing number of combined EEG-fNIRS applications reported in recent years, the methodological rigor of past studies remains unclear, limiting the accurate interpretation of reported findings and hindering the translational application of this multimodal approach. To fill this knowledge gap, we critically evaluated various methodological aspects of previous combined EEG-fNIRS studies performed in healthy individuals. A literature search was conducted using PubMed and PsycINFO on June 28, 2021. Studies involving concurrent EEG and fNIRS measurements in awake and healthy individuals were selected. After screening and eligibility assessment, 96 studies were included in the methodological evaluation. Specifically, we critically reviewed various aspects of participant sampling, experimental design, signal acquisition, data preprocessing, outcome selection, data analysis, and results presentation reported in these studies. Altogether, we identified several notable strengths and limitations of the existing EEG-fNIRS literature. In light of these limitations and the features of combined EEG-fNIRS, recommendations are made to improve and standardize research practices to facilitate the use of combined EEG-fNIRS when studying healthy neurovascular coupling processes and alterations in neurovascular coupling among various populations.
Affiliation(s)
- Michael K Yeung
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, China
- Vivian W Chu
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, China
12. Korzeczek A, Neef NE, Steinmann I, Paulus W, Sommer M. Stuttering severity relates to frontotemporal low-beta synchronization during pre-speech preparation. Clin Neurophysiol 2022; 138:84-96. [DOI: 10.1016/j.clinph.2022.03.010]
13. Panachakel JT, Ramakrishnan AG. Decoding Covert Speech From EEG-A Comprehensive Review. Front Neurosci 2021; 15:642251. [PMID: 33994922] [PMCID: PMC8116487] [DOI: 10.3389/fnins.2021.642251]
Abstract
Over the past decade, many researchers have come up with different implementations of systems for decoding covert or imagined speech from EEG (electroencephalogram). They differ from each other in several aspects, from data acquisition to machine learning algorithms, which often makes comparisons between implementations difficult. This review article puts together all the relevant works published in the last decade on decoding imagined speech from EEG into a single framework. Every important aspect of designing such a system is reviewed: selection of words to be imagined, number of electrodes to be recorded, temporal and spatial filtering, feature extraction, and classifier. This helps a researcher compare the relative merits and demerits of the different approaches and choose the most suitable one. Speech being the most natural form of communication, which human beings acquire even without formal education, imagined speech is an ideal choice of prompt for evoking brain activity patterns for a BCI (brain-computer interface) system, although research on developing real-time (online) speech imagery based BCI systems is still in its infancy. Covert speech based BCIs can help people with disabilities improve their quality of life, and can also be used for covert communication in environments that do not support vocal communication. This paper also discusses some future directions that will aid the deployment of speech imagery based BCIs for practical applications rather than only laboratory experiments.
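A step shared by essentially all of the reviewed pipelines is slicing the continuous EEG recording into fixed-length epochs around each imagery prompt before filtering and feature extraction. A generic single-channel sketch (illustrative only; names and parameters are hypothetical, not from the review):

```python
def extract_epochs(signal, event_samples, pre, post):
    """Cut fixed-length epochs [event - pre, event + post) out of one
    continuous EEG channel; events too close to an edge are skipped."""
    epochs = []
    for ev in event_samples:
        start, stop = ev - pre, ev + post
        if start >= 0 and stop <= len(signal):
            epochs.append(signal[start:stop])
    return epochs

signal = list(range(100))  # stand-in for one continuous EEG channel
epochs = extract_epochs(signal, [10, 50, 99], pre=5, post=10)
print(len(epochs), len(epochs[0]))  # 2 15
```

Each epoch then feeds the temporal/spatial filtering and feature-extraction stages that the review compares across implementations.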
Affiliation(s)
- Jerrin Thomas Panachakel
- Medical Intelligence and Language Engineering Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bangalore, India
14. Stephan F, Saalbach H, Rossi S. Inner versus Overt Speech Production: Does This Make a Difference in the Developing Brain? Brain Sci 2020; 10:E939. [PMID: 33291489] [PMCID: PMC7762104] [DOI: 10.3390/brainsci10120939]
Abstract
Studies in adults showed differential neural processing between overt and inner speech. So far, it is unclear whether inner and overt speech are processed differentially in children. The present study examines the pre-activation of the speech network in order to disentangle domain-general executive control from linguistic control of inner and overt speech production in 6- to 7-year-olds by simultaneously applying electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Children underwent a picture-naming task in which the pure preparation of a subsequent speech production and the actual execution of speech can be differentiated. The preparation phase does not represent speech per se but it resembles the setting up of the language production network. Only the fNIRS revealed a larger activation for overt, compared to inner, speech over bilateral prefrontal to parietal regions during the preparation phase. Findings suggest that the children's brain can prepare the subsequent speech production. The preparation for overt and inner speech requires different domain-general executive control. In contrast to adults, the children's brain did not show differences between inner and overt speech when a concrete linguistic content occurs and a concrete execution is required. This might indicate that domain-specific executive control processes are still under development.
Affiliation(s)
- Franziska Stephan
- Department of Educational Psychology, Faculty of Education, University Leipzig, 04109 Leipzig, Germany
- Leipzig Research Center for Early Child Development, 04109 Leipzig, Germany
- ICONE, Innsbruck Cognitive Neuroscience, Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020 Innsbruck, Austria
- Henrik Saalbach
- Department of Educational Psychology, Faculty of Education, University Leipzig, 04109 Leipzig, Germany
- Leipzig Research Center for Early Child Development, 04109 Leipzig, Germany
- Sonja Rossi
- ICONE, Innsbruck Cognitive Neuroscience, Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020 Innsbruck, Austria