1
Deepika D, Rekha G. A hybrid capsule attention-based convolutional bi-GRU method for multi-class mental task classification based brain-computer interface. Comput Methods Biomech Biomed Engin 2024:1-17. PMID: 39397592. DOI: 10.1080/10255842.2024.2410221.
Abstract
Electroencephalography analysis is critical for brain-computer interface research. The primary goal of a brain-computer interface is to establish communication between impaired people and others via brain signals. The classification of multi-level mental activities with a brain-computer interface has recently become more difficult, which affects classification accuracy, although several deep learning-based techniques have attempted to identify mental tasks from multidimensional data. This study introduces a hybrid capsule attention-based convolutional bidirectional gated recurrent unit model as a hybrid deep learning technique for multi-class mental task categorization. Initially, the acquired electroencephalography data are pre-processed with a digital low-pass Butterworth filter and a discrete wavelet transform to remove disturbances. The spectrally adaptive common spatial pattern is then used to extract characteristics from the pre-processed electroencephalography data. The extracted features are fed into the proposed classification model, which learns deeper feature representations and classifies the mental tasks. To improve classification results, the model's parameters are fine-tuned using a dung beetle optimization approach. Finally, the proposed classifier is assessed on several types of mental task classification using the provided dataset, and the simulation results are compared with existing state-of-the-art techniques in terms of accuracy, precision, recall, and related metrics. The accuracy obtained with the proposed approach is 97.87%, which is higher than that of the other existing methods.
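For readers who want to prototype the kind of pre-processing this abstract describes, the sketch below combines a digital low-pass Butterworth filter with discrete-wavelet-transform denoising in Python; the sampling rate, cutoff, filter order, wavelet family and thresholding rule are illustrative assumptions, not the authors' settings.

```python
# Sketch of the preprocessing step described above: low-pass Butterworth
# filtering followed by DWT-based denoising. All parameters are assumptions.
import numpy as np
import pywt
from scipy.signal import butter, filtfilt

def butter_lowpass(eeg, fs=250.0, cutoff=40.0, order=4):
    """Zero-phase low-pass filtering of a 1-D EEG channel."""
    b, a = butter(order, cutoff / (0.5 * fs), btype="low")
    return filtfilt(b, a, eeg)

def dwt_denoise(eeg, wavelet="db4", level=4):
    """Soft-threshold detail coefficients and reconstruct the signal."""
    coeffs = pywt.wavedec(eeg, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate from finest level
    thr = sigma * np.sqrt(2 * np.log(len(eeg)))        # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(eeg)]

if __name__ == "__main__":
    raw = np.random.randn(1000)                        # stand-in for one EEG channel
    clean = dwt_denoise(butter_lowpass(raw))
    print(clean.shape)
```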
Affiliation(s)
- D Deepika
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Hyderabad, Telangana, 500075, India
- Department of Computer Science and Engineering, Mahatma Gandhi Institute of Technology, Hyderabad, Telangana, 500075, India
- G Rekha
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Hyderabad, Telangana, 500075, India
2
Yao Q, Gu H, Wang S, Li X. Spatial-Frequency Characteristics of EEG Associated With the Mental Stress in Human-Machine Systems. IEEE J Biomed Health Inform 2024; 28:5904-5916. PMID: 38959145. DOI: 10.1109/jbhi.2024.3422384.
Abstract
Accurate assessment of user mental stress in human-machine systems plays a crucial role in ensuring task performance and system safety. However, the underlying neural mechanisms of stress in human-machine tasks and assessment methods based on physiological indicators remain fundamental challenges. In this paper, we employ a virtual unmanned aerial vehicle (UAV) control experiment to explore the reorganization of functional brain network patterns under stress conditions. The results indicate enhanced functional connectivity in the frontal theta band and central beta band, as well as reduced functional connectivity in the left parieto-occipital alpha band, which is associated with increased mental stress. Evaluation of network metrics reveals that decreased global efficiency in the theta and beta bands is linked to elevated stress levels. Subsequently, inspired by the frequency-specific patterns in the stress brain network, a cross-band graph convolutional network (CBGCN) model is constructed for mental stress brain state recognition. The proposed method captures the spatial-frequency topological relationships of cross-band brain networks through multiple branches, with the aim of integrating complex dynamic patterns hidden in the brain network and learning discriminative cognitive features. Experimental results demonstrate that the neuro-inspired CBGCN model improves classification performance and enhances model interpretability. The study suggests that the proposed approach provides a potentially viable solution for recognizing stress states in human-machine systems using EEG signals.
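The band-specific connectivity and global-efficiency analysis summarized here can be prototyped along the following lines; absolute Pearson correlation is used only as a stand-in for whichever connectivity estimator the study used, and the band edges and edge-density threshold are assumptions.

```python
# Minimal sketch: band-pass the EEG, build a functional-connectivity matrix,
# threshold it into a graph and compute global efficiency.
import numpy as np
import networkx as nx
from scipy.signal import butter, filtfilt

def band_global_efficiency(eeg, fs=250.0, band=(4.0, 8.0), density=0.2):
    """eeg: (n_channels, n_samples) array. Returns graph global efficiency."""
    b, a = butter(4, [band[0] / (0.5 * fs), band[1] / (0.5 * fs)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=1)
    conn = np.abs(np.corrcoef(filtered))               # channel-by-channel connectivity
    np.fill_diagonal(conn, 0.0)
    thr = np.quantile(conn[conn > 0], 1.0 - density)   # keep only the strongest edges
    graph = nx.from_numpy_array((conn >= thr).astype(int))
    return nx.global_efficiency(graph)

if __name__ == "__main__":
    print(band_global_efficiency(np.random.randn(32, 2500)))
```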
3
Kumari A, Edla DR, Reddy RR, Jannu S, Vidyarthi A, Alkhayyat A, de Marin MSG. EEG-based motor imagery channel selection and classification using hybrid optimization and two-tier deep learning. J Neurosci Methods 2024; 409:110215. PMID: 38968976. DOI: 10.1016/j.jneumeth.2024.110215.
Abstract
Brain-computer interface (BCI) technology holds promise for individuals with profound motor impairments, offering the potential for communication and control. Motor imagery (MI)-based BCI systems are particularly relevant in this context. Despite their potential, achieving accurate and robust classification of MI tasks using electroencephalography (EEG) data remains a significant challenge. In this paper, we employed the Minimum Redundancy Maximum Relevance (MRMR) algorithm to optimize channel selection. Furthermore, we introduced a hybrid optimization approach that combines the War Strategy Optimization (WSO) and Chimp Optimization Algorithm (ChOA). This hybridization significantly enhances the classification model's overall performance and adaptability. A two-tier deep learning architecture is proposed for classification, consisting of a Convolutional Neural Network (CNN) and a modified Deep Neural Network (M-DNN). The CNN focuses on capturing temporal correlations within EEG data, while the M-DNN is designed to extract high-level spatial characteristics from selected EEG channels. Integrating optimal channel selection, hybrid optimization, and the two-tier deep learning methodology in our BCI framework presents an enhanced approach for precise and effective BCI control. Our model got 95.06% accuracy with high precision. This advancement has the potential to significantly impact neurorehabilitation and assistive technology applications, facilitating improved communication and control for individuals with motor impairments.
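A minimal, simplified sketch of MRMR-style channel selection of the kind described above is shown below; relevance is scored with mutual information against the class labels and redundancy with mean absolute correlation to already-selected channels, which is a common approximation rather than the exact formulation used in the paper.

```python
# Greedy MRMR-style channel selection over per-channel scalar features.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mrmr_channels(features, labels, n_select=8):
    """features: (n_trials, n_channels), e.g. per-channel band power."""
    relevance = mutual_info_classif(features, labels, random_state=0)
    corr = np.abs(np.corrcoef(features, rowvar=False))
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        remaining = [c for c in range(features.shape[1]) if c not in selected]
        scores = [relevance[c] - corr[c, selected].mean() for c in remaining]
        selected.append(remaining[int(np.argmax(scores))])
    return selected

if __name__ == "__main__":
    X = np.random.randn(200, 22)                 # stand-in EEG channel features
    y = np.random.randint(0, 2, size=200)
    print(mrmr_channels(X, y, n_select=5))
```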
Affiliation(s)
- Annu Kumari
- Department of Computer Science and Engineering, National Institute of Technology Goa, Cuncolim, South Goa, 403 703, Goa, India.
- Damodar Reddy Edla
- Department of Computer Science and Engineering, National Institute of Technology Goa, Cuncolim, South Goa, 403 703, Goa, India.
- R Ravinder Reddy
- Department of Computer Science and Engineering, Chaitanya Bharathi Institute of Technology, Hyderabad, 500 075, India.
- Srikanth Jannu
- Department of Computer Science and Engineering, Vaagdevi Engineering College, Warangal, Telangana, 506 005, India.
- Ankit Vidyarthi
- Department of CSE&IT, Jaypee Institute of Information Technology, Noida, Uttar Pradesh, 201309, India.
- Mirtha Silvana Garat de Marin
- Engineering Research & Innovation Group, Universidad Europea del Atlántico, C/Isabel Torres 21, 39011 Santander, Spain; Department of Project Management, Universidad Internacional Iberoamericana, Arecibo, PR 00613, USA; Department of Project Management, Universidade Internacional do Cuanza, Estrada Nacional 250, Bairro Kaluapanda, Cuito-Bié, Angola.
4
Jin X, Yang X, Kong W, Zhu L, Tang J, Peng Y, Ding Y, Zhao Q. TSFAN: tensorized spatial-frequency attention network with domain adaptation for cross-session EEG-based biometric recognition. J Neural Eng 2024; 21:046005. PMID: 38866001. DOI: 10.1088/1741-2552/ad5761.
Abstract
Objective. Electroencephalogram (EEG) signals are promising biometrics owing to their invisibility, making them suitable for application scenarios with high-security requirements. However, it is challenging to explore EEG identity features without the interference of device and subject-state differences across sessions. Existing methods treat training sessions as a single domain and are therefore affected by the different data distributions among sessions. Although most multi-source unsupervised domain adaptation (MUDA) methods bridge the domain gap between multiple source and target domains individually, relationships among the domain-invariant features of each distribution alignment are neglected. Approach. In this paper, we propose a MUDA method, the Tensorized Spatial-Frequency Attention Network (TSFAN), to improve performance on the target domain for EEG-based biometric recognition. Specifically, significant relationships among domain-invariant features are modeled via a tensorized attention mechanism. It jointly incorporates appropriate common spatial-frequency representations not only of pairwise source and target domains but also of cross-source domains, without being affected by the distribution discrepancy among source domains. Additionally, considering the curse of dimensionality, the TSFAN is approximately represented in Tucker format. Benefiting from the low-rank Tucker network, the TSFAN scales linearly in the number of domains, providing great flexibility to extend TSFAN to an arbitrary number of sessions. Main results. Extensive experiments on representative benchmarks demonstrate the effectiveness of TSFAN in EEG-based biometric recognition, outperforming state-of-the-art approaches, as verified by cross-session validation. Significance. The proposed TSFAN aims to investigate the presence of consistent EEG identity features across sessions. This is achieved by a novel tensorized attention mechanism that combines intra-source transferable information with inter-source interactions, while remaining unaffected by domain shifts in multiple source domains. Furthermore, the electrode-selection analysis shows that EEG-based identity features are distributed across brain regions and that 20 electrodes based on the 10-20 standard system are able to extract stable identity information.
Affiliation(s)
- Xuanyu Jin
- School of Computer Science, Hangzhou Dianzi University, Hangzhou, People's Republic of China
- Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou, People's Republic of China
- Xinyu Yang
- School of Computer Science, Hangzhou Dianzi University, Hangzhou, People's Republic of China
- Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou, People's Republic of China
- Wanzeng Kong
- School of Computer Science, Hangzhou Dianzi University, Hangzhou, People's Republic of China
- Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou, People's Republic of China
- Li Zhu
- School of Computer Science, Hangzhou Dianzi University, Hangzhou, People's Republic of China
- Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou, People's Republic of China
- Jiajia Tang
- School of Computer Science, Hangzhou Dianzi University, Hangzhou, People's Republic of China
- Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou, People's Republic of China
- Yong Peng
- School of Computer Science, Hangzhou Dianzi University, Hangzhou, People's Republic of China
- Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou, People's Republic of China
- Yu Ding
- Netease Fuxi AI Lab, NetEase, Hangzhou, People's Republic of China
- Qibin Zhao
- Tensor Learning Unit, Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan
5
Mena-Camilo E, Salazar-Colores S, Aceves-Fernández MA, Lozada-Hernández EE, Ramos-Arreguín JM. Non-Invasive Prediction of Choledocholithiasis Using 1D Convolutional Neural Networks and Clinical Data. Diagnostics (Basel) 2024; 14:1278. PMID: 38928692. PMCID: PMC11202441. DOI: 10.3390/diagnostics14121278.
Abstract
This paper introduces a novel one-dimensional convolutional neural network that utilizes clinical data to accurately detect choledocholithiasis, where gallstones obstruct the common bile duct. Swift and precise detection of this condition is critical to preventing severe complications, such as biliary colic, jaundice, and pancreatitis. This cutting-edge model was rigorously compared with other machine learning methods commonly used in similar problems, such as logistic regression, linear discriminant analysis, and a state-of-the-art random forest, using a dataset derived from endoscopic retrograde cholangiopancreatography scans performed at Olive View-University of California, Los Angeles Medical Center. The one-dimensional convolutional neural network model demonstrated exceptional performance, achieving 90.77% accuracy and 92.86% specificity, with an area under the curve of 0.9270. While the paper acknowledges potential areas for improvement, it emphasizes the effectiveness of the one-dimensional convolutional neural network architecture. The results suggest that this one-dimensional convolutional neural network approach could serve as a plausible alternative to endoscopic retrograde cholangiopancreatography, considering its disadvantages, such as the need for specialized equipment and skilled personnel and the risk of postoperative complications. The potential of the one-dimensional convolutional neural network model to significantly advance the clinical diagnosis of this gallstone-related condition is notable, offering a less invasive, potentially safer, and more accessible alternative.
Affiliation(s)
- Enrique Mena-Camilo
- Facultad de Ingeniería, Universidad Autónoma de Querétaro, Querétaro 76010, Mexico; (E.M.-C.); (M.A.A.-F.); (J.M.R.-A.)
- Juan Manuel Ramos-Arreguín
- Facultad de Ingeniería, Universidad Autónoma de Querétaro, Querétaro 76010, Mexico; (E.M.-C.); (M.A.A.-F.); (J.M.R.-A.)
6
Zeng X, Cai S, Xie L. Attention-guided graph structure learning network for EEG-enabled auditory attention detection. J Neural Eng 2024; 21:036025. PMID: 38776893. DOI: 10.1088/1741-2552/ad4f1a.
Abstract
Objective: Decoding auditory attention from brain signals is essential for the development of neuro-steered hearing aids. This study aims to overcome the challenges of extracting discriminative feature representations from electroencephalography (EEG) signals for auditory attention detection (AAD) tasks, particularly focusing on the intrinsic relationships between different EEG channels. Approach: We propose a novel attention-guided graph structure learning network, AGSLnet, which leverages potential relationships between EEG channels to improve AAD performance. Specifically, AGSLnet is designed to dynamically capture latent relationships between channels and construct a graph structure of EEG signals. Main result: We evaluated AGSLnet on two publicly available AAD datasets and demonstrated its superiority and robustness over state-of-the-art models. Visualization of the graph structure trained by AGSLnet supports previous neuroscience findings, enhancing our understanding of the underlying neural mechanisms. Significance: This study presents a novel approach for examining brain functional connections, improving AAD performance in low-latency settings, and supporting the development of neuro-steered hearing aids.
Affiliation(s)
- Xianzhang Zeng
- School of Intelligent Engineering, South China University of Technology, Guangzhou, People's Republic of China
- Siqi Cai
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
- Longhan Xie
- School of Intelligent Engineering, South China University of Technology, Guangzhou, People's Republic of China
7
Ahmadzadeh Nobari Azar N, Cavus N, Esmaili P, Sekeroglu B, Aşır S. Detecting emotions through EEG signals based on modified convolutional fuzzy neural network. Sci Rep 2024; 14:10371. PMID: 38710806. DOI: 10.1038/s41598-024-60977-9.
Abstract
Emotion is a human sense that can influence an individual's quality of life in both positive and negative ways. The ability to distinguish different types of emotion can help researchers estimate the current state of patients or the probability of future disease. Recognizing emotions from facial images is problematic because people can conceal their feelings by modifying their facial expressions. This has led researchers to consider electroencephalography (EEG) signals for more accurate emotion detection. However, the complexity of EEG recordings and of data analysis with conventional machine learning algorithms has caused inconsistent emotion recognition. Therefore, hybrid deep learning models and other techniques have become common because of their ability to analyze complicated data and achieve higher performance by integrating diverse features of the models, while researchers still prioritize models with fewer parameters that achieve the highest average accuracy. This study improves the Convolutional Fuzzy Neural Network (CFNN) for emotion recognition using EEG signals to achieve a reliable detection system. Initially, the pre-processing and feature extraction phases are implemented to obtain noiseless and informative data. Then, the CFNN with a modified architecture is trained to classify emotions. Several parametric and comparative experiments were performed. The proposed model achieved reliable performance for emotion recognition, with average accuracies of 98.21% and 98.08% for valence (pleasantness) and arousal (intensity), respectively, and outperformed state-of-the-art methods.
Affiliation(s)
- Nasim Ahmadzadeh Nobari Azar
- Department of Biomedical Engineering, Near East University, 99138, Nicosia, Cyprus.
- Computer Information Systems Research and Technology Center, Near East University, Nicosia, 99138, Turkey.
- Nadire Cavus
- Department of Computer Information Systems, Near East University, 99138, Nicosia, Cyprus
- Computer Information Systems Research and Technology Center, Near East University, Nicosia, 99138, Turkey
- Parvaneh Esmaili
- Department of Computer Engineering, Cyprus International University, 99258, Nicosia, Cyprus
- Boran Sekeroglu
- Software Engineering Department, World Peace University, Nicosia, Turkey
- Süleyman Aşır
- Department of Biomedical Engineering, Near East University, 99138, Nicosia, Cyprus
- Center for Science and Technology and Engineering, Near East University, Nicosia, 99138, Turkey
8
Aslan M, Baykara M, Alakus TB. LieWaves: dataset for lie detection based on EEG signals and wavelets. Med Biol Eng Comput 2024; 62:1571-1588. PMID: 38311647. DOI: 10.1007/s11517-024-03021-2.
Abstract
This study introduces an electroencephalography (EEG)-based dataset to analyze lie detection. Various analyses or detections can be performed using EEG signals. Lie detection using EEG data has recently become a significant topic. In every aspect of life, people find the need to tell lies to each other. While lies told daily may not have significant societal impacts, lie detection becomes crucial in legal, security, job interviews, or situations that could affect the community. This study aims to obtain EEG signals for lie detection, create a dataset, and analyze this dataset using signal processing techniques and deep learning methods. EEG signals were acquired from 27 individuals using a wearable EEG device called Emotiv Insight with 5 channels (AF3, T7, Pz, T8, AF4). Each person took part in two trials: one where they were honest and another where they were deceitful. During each experiment, participants evaluated beads they saw before the experiment and stole from them in front of a video clip. This study consisted of four stages. In the first stage, the LieWaves dataset was created with the EEG data obtained during these experiments. In the second stage, preprocessing was carried out. In this stage, the automatic and tunable artifact removal (ATAR) algorithm was applied to remove the artifacts from the EEG signals. Later, the overlapping sliding window (OSW) method was used for data augmentation. In the third stage, feature extraction was performed. To achieve this, EEG signals were analyzed by combining discrete wavelet transform (DWT) and fast Fourier transform (FFT) including statistical methods (SM). In the last stage, each obtained feature vector was classified separately using Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and CNNLSTM hybrid algorithms. At the study's conclusion, the most accurate result, achieving a 99.88% accuracy score, was produced using the LSTM and DWT techniques. With this study, a new data set was introduced to the literature, and it was aimed to eliminate the deficiencies in this field with this data set. Evaluation results obtained from the data set have shown that this data set can be effective in this field.
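Two of the stages described above, overlapping sliding-window augmentation and combined DWT/FFT/statistical feature extraction, can be sketched as follows; the window length, step size, wavelet and particular statistics are assumptions, not the settings used for the LieWaves dataset.

```python
# Sketch: overlapping sliding windows plus DWT/FFT-based features per window.
import numpy as np
import pywt

def sliding_windows(signal, win=512, step=128):
    """Split a 1-D signal into overlapping windows (data augmentation)."""
    return np.stack([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, step)])

def window_features(window, wavelet="db4", level=4):
    """Concatenate statistics of DWT sub-bands with FFT magnitude statistics."""
    feats = []
    for band in pywt.wavedec(window, wavelet, level=level):
        feats += [band.mean(), band.std(), np.abs(band).max()]
    spectrum = np.abs(np.fft.rfft(window))
    feats += [spectrum.mean(), spectrum.std(), spectrum.argmax()]
    return np.array(feats, dtype=float)

if __name__ == "__main__":
    eeg_channel = np.random.randn(4096)            # stand-in for one EEG channel
    windows = sliding_windows(eeg_channel)
    X = np.stack([window_features(w) for w in windows])
    print(X.shape)                                  # (n_windows, n_features)
```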
Affiliation(s)
- Musa Aslan
- Department of Software Engineering, Karadeniz Technical University, Trabzon, Turkey
- Muhammet Baykara
- Department of Software Engineering, Firat University, Elazig, Turkey
- Talha Burak Alakus
- Department of Software Engineering, Kirklareli University, Kirklareli, Turkey.
9
Li Y, Cao D, Qu J, Wang W, Xu X, Kong L, Liao J, Hu W, Zhang K, Wang J, Li C, Yang X, Zhang X. Automatic Detection of Scalp High-Frequency Oscillations Based on Deep Learning. IEEE Trans Neural Syst Rehabil Eng 2024; 32:1627-1636. PMID: 38625771. DOI: 10.1109/tnsre.2024.3389010.
Abstract
Scalp high-frequency oscillations (sHFOs) are a promising non-invasive biomarker of epilepsy. However, the visual marking of sHFOs is a time-consuming and subjective process, and existing automatic detectors based on single-dimensional analysis have difficulty accurately eliminating artifacts, so they do not provide sufficient reliability to meet clinical needs. Therefore, we propose a high-performance sHFO detector based on a deep learning algorithm. An initial detection module was designed to extract candidate high-frequency oscillations. Then, one-dimensional (1D) and two-dimensional (2D) deep learning models were designed, and a weighted voting method was used to combine the outputs of the two models. In experiments, the average precision, recall, specificity and F1-score were 83.44%, 83.60%, 96.61% and 83.42%, respectively, and the kappa coefficient was 80.02%. In addition, the proposed detector showed stable performance on multi-centre datasets. Our sHFO detector demonstrated high robustness and generalisation ability, which indicates its potential applicability as a clinical assistance tool.
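The weighted-voting fusion of the 1D and 2D branches can be illustrated with a few lines of NumPy; the weights and probability inputs here are placeholders, not the values used in the paper.

```python
# Sketch: combine class probabilities from a 1-D and a 2-D model by weighted voting.
import numpy as np

def weighted_vote(p_1d, p_2d, w_1d=0.5, w_2d=0.5):
    """p_1d, p_2d: (n_events, n_classes) softmax outputs of the two models."""
    fused = w_1d * p_1d + w_2d * p_2d
    return fused.argmax(axis=1)

if __name__ == "__main__":
    p1 = np.array([[0.8, 0.2], [0.4, 0.6]])   # 1-D branch: HFO vs. artifact
    p2 = np.array([[0.6, 0.4], [0.3, 0.7]])   # 2-D (time-frequency) branch
    print(weighted_vote(p1, p2))              # -> [0 1]
```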
10
Chen Y, Fazli S, Wallraven C. An EEG Dataset of Neural Signatures in a Competitive Two-Player Game Encouraging Deceptive Behavior. Sci Data 2024; 11:389. PMID: 38627400. PMCID: PMC11021485. DOI: 10.1038/s41597-024-03234-y.
Abstract
Studying deception is vital for understanding decision-making and social dynamics. Recent EEG research has deepened insights into the brain mechanisms behind deception. Standard methods in this field often rely on memory, are vulnerable to countermeasures, yield false positives, and lack real-world relevance. Here, we present a comprehensive dataset from an EEG-monitored competitive, two-player card game designed to elicit authentic deception behavior. Our extensive dataset contains EEG data from 12 pairs (N = 24 participants with role switching), controlled for age, gender, and risk-taking, with detailed labels and annotations. The dataset combines standard event-related potential and microstate analyses with state-of-the-art decoding approaches of four scenarios: spontaneous/instructed truth-telling and lying. This demonstrates game-based methods' efficacy in studying deception and sets a benchmark for future research. Overall, our dataset represents a unique resource with applications in cognitive neuroscience and related fields for studying deception, competitive behavior, decision-making, inter-brain synchrony, and benchmarking of decoding frameworks in a difficult, high-level cognitive task.
Affiliation(s)
- Yiyu Chen
- Department of Artificial Intelligence, Korea University, Seoul, 02841, South Korea
- Siamac Fazli
- Department of Computer Science, Nazarbayev University, Astana, 010000, Kazakhstan
- Christian Wallraven
- Department of Artificial Intelligence, Korea University, Seoul, 02841, South Korea.
- Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, South Korea.
11
Emami N, Ferdousi R. HormoNet: a deep learning approach for hormone-drug interaction prediction. BMC Bioinformatics 2024; 25:87. PMID: 38418979. PMCID: PMC10903040. DOI: 10.1186/s12859-024-05708-7.
Abstract
Several lines of experimental evidence have shown that human endogenous hormones can interact with drugs in many ways and affect drug efficacy. Hormone-drug interactions (HDIs) are important for drug treatment and precision medicine; therefore, it is essential to understand hormone-drug associations. Here, we present HormoNet to predict HDI pairs and their risk level by integrating features derived from hormone and drug target proteins. To the best of our knowledge, this is one of the first attempts to employ a deep learning approach for HDI prediction. Amino acid composition and pseudo amino acid composition were applied to represent target information using 30 physicochemical and conformational properties of the proteins. To handle the imbalance problem in the data, we applied the synthetic minority over-sampling technique (SMOTE). Additionally, we constructed novel datasets for HDI prediction and for the risk level of the interactions. HormoNet achieved high performance on our constructed hormone-drug benchmark datasets. The results provide insights into the relationship between hormones and drugs, and indicate the potential benefit of reducing the risk levels of interactions in designing more effective therapies for patients in drug treatment. Our benchmark datasets and the source code for HormoNet are available at: https://github.com/EmamiNeda/HormoNet .
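The imbalance-handling step mentioned above (SMOTE) can be reproduced with imbalanced-learn as sketched below; the feature matrix is synthetic and only stands in for the hormone/drug protein descriptors.

```python
# Sketch: oversample the minority class with SMOTE before training a classifier.
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 60))                                # stand-in descriptors
y = np.r_[np.zeros(450, dtype=int), np.ones(50, dtype=int)]   # imbalanced labels

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))                       # minority class oversampled
```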
Affiliation(s)
- Neda Emami
- Department of Health Information Technology, School of Management and Medical Informatics, Tabriz University of Medical Sciences, Tabriz, Iran.
- Reza Ferdousi
- Department of Health Information Technology, School of Management and Medical Informatics, Tabriz University of Medical Sciences, Tabriz, Iran
12
Moaveninejad S, D'Onofrio V, Tecchio F, Ferracuti F, Iarlori S, Monteriù A, Porcaro C. Fractal Dimension as a discriminative feature for high accuracy classification in motor imagery EEG-based brain-computer interface. Comput Methods Programs Biomed 2024; 244:107944. PMID: 38064955. DOI: 10.1016/j.cmpb.2023.107944.
Abstract
BACKGROUND AND OBJECTIVE The brain-computer interface (BCI) technology acquires human brain electrical signals, which can be effectively and successfully used to control external devices, potentially supporting subjects suffering from motor impairments in the interaction with the environment. To this aim, BCI systems must correctly decode and interpret neurophysiological signals reflecting the intention of the subjects to move. Therefore, the accurate classification of single events in motor tasks represents a fundamental challenge in ensuring efficient communication and control between users and BCIs. Movement-associated changes in electroencephalographic (EEG) sensorimotor rhythms, such as event-related desynchronization (ERD), are well-known features of discriminating motor tasks. Fractal dimension (FD) can be used to evaluate the complexity and self-similarity in brain signals, potentially providing complementary information to frequency-based signal features. METHODS In the present work, we introduce FD as a novel feature for subject-independent event classification, and we test several machine learning (ML) models in behavioural tasks of motor imagery (MI) and motor execution (ME). RESULTS Our results show that FD improves the classification accuracy of ML compared to ERD. Furthermore, unilateral hand movements have higher classification accuracy than bilateral movements in both MI and ME tasks. CONCLUSIONS These results provide further insights into subject-independent event classification in BCI systems and demonstrate the potential of FD as a discriminative feature for EEG signals.
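As a concrete example of a fractal-dimension feature of the kind discussed above, the sketch below implements Higuchi's estimator in plain NumPy; whether this particular estimator matches the one used in the study is an assumption, as several FD definitions are in common use for EEG.

```python
# Sketch: Higuchi fractal dimension of a 1-D signal (one common FD estimator).
import numpy as np

def higuchi_fd(x, k_max=10):
    n = len(x)
    avg_lengths = []
    for k in range(1, k_max + 1):
        lengths_m = []
        for m in range(k):                        # one curve per starting offset m
            idx = np.arange(m, n, k)
            n_intervals = len(idx) - 1
            if n_intervals < 1:
                continue
            curve_len = np.abs(np.diff(x[idx])).sum() * (n - 1) / (n_intervals * k)
            lengths_m.append(curve_len / k)
        avg_lengths.append(np.mean(lengths_m))
    k_vals = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(avg_lengths), 1)
    return slope                                   # FD is the log-log slope

if __name__ == "__main__":
    t = np.linspace(0, 1, 1000)
    print(higuchi_fd(np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(1000)))
```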
Affiliation(s)
- Franca Tecchio
- Institute of Cognitive Sciences and Technologies (ISCT) - National Research Council (CNR), 00185 Rome, Italy
- Francesco Ferracuti
- Department of Information Engineering, Università Politecnica delle Marche, 60131 Ancona, Italy
- Sabrina Iarlori
- Department of Information Engineering, Università Politecnica delle Marche, 60131 Ancona, Italy
- Andrea Monteriù
- Department of Information Engineering, Università Politecnica delle Marche, 60131 Ancona, Italy
- Camillo Porcaro
- Department of Neuroscience, University of Padova, 35128 Padua, Italy; Padova Neuroscience Center (PNC), University of Padova, 35131 Padua, Italy; Institute of Cognitive Sciences and Technologies (ISCT) - National Research Council (CNR), 00185 Rome, Italy; Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK.
13
Kocejko T, Matuszkiewicz N, Durawa P, Madajczak A, Kwiatkowski J. How Integration of a Brain-Machine Interface and Obstacle Detection System Can Improve Wheelchair Control via Movement Imagery. Sensors (Basel) 2024; 24:918. PMID: 38339635. PMCID: PMC10857086. DOI: 10.3390/s24030918.
Abstract
This study presents a human-computer interaction combined with a brain-machine interface (BMI) and obstacle detection system for remote control of a wheeled robot through movement imagery, providing a potential solution for individuals facing challenges with conventional vehicle operation. The primary focus of this work is the classification of surface EEG signals related to mental activity when envisioning movement and deep relaxation states. Additionally, this work presents a system for obstacle detection based on image processing. The implemented system constitutes a complementary part of the interface. The main contributions of this work include the proposal of a modified 10-20-electrode setup suitable for motor imagery classification, the design of two convolutional neural network (CNNs) models employed to classify signals acquired from sixteen EEG channels, and the implementation of an obstacle detection system based on computer vision integrated with a brain-machine interface. The models developed in this study achieved an accuracy of 83% in classifying EEG signals. The resulting classification outcomes were subsequently utilized to control the movement of a mobile robot. Experimental trials conducted on a designated test track demonstrated real-time control of the robot. The findings indicate the feasibility of integration of the obstacle detection system for collision avoidance with the classification of motor imagery for the purpose of brain-machine interface control of vehicles. The elaborated solution could help paralyzed patients to safely control a wheelchair through EEG and effectively prevent unintended vehicle movements.
Affiliation(s)
- Tomasz Kocejko
- Department of Biomedical Engineering, Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, Narutowicza 11/12, 80-233 Gdansk, Poland; (N.M.); (P.D.); (A.M.); (J.K.)
14
Sireesha V, Tallapragada VVS, Naresh M, Pradeep Kumar GV. EEG-BCI-based motor imagery classification using double attention convolutional network. Comput Methods Biomech Biomed Engin 2024:1-20. PMID: 38164118. DOI: 10.1080/10255842.2023.2298369.
Abstract
This article aims to improve and diversify signal processing techniques for executing a brain-computer interface (BCI) based on neurological phenomena observed when performing motor tasks using motor imagery (MI). The noise present in the original data, such as intermodulation noise, crosstalk, and other unwanted noise, is removed in the pre-processing stage by a Modified Least Mean Square (M-LMS) filter; traditional LMS filters were unable to remove all of this noise from the signals. After pre-processing, the required features, such as statistical and entropy features, are extracted using the Common Spatial Pattern (CSP) and Pearson's Correlation Coefficient (PCC) instead of a traditional single feature extraction model. Because a standard arithmetic optimization algorithm cannot select features accurately and fails to reduce the feature dimensionality of the data, an Extended Arithmetic operation optimization (ExAo) algorithm is used to select the most significant attributes from the extracted features. The proposed model uses Double Attention Convolutional Neural Networks (DAttnConvNet) to classify the types of EEG signals based on optimal feature selection. Here, the attention mechanism is used to select and optimize the features to improve the classification accuracy and efficiency of the model. On the EEG motor imagery dataset, the proposed model was analyzed per class, obtaining an accuracy of 99.98% for the Baseline (B) class, 99.82% for imagined movement of the right fist (R), and 99.61% for imagined movement of both fists (RL). On the EEG dataset, the proposed model attains a high accuracy of 97.94% compared with other models.
Affiliation(s)
- V Sireesha
- Department of Computer Science and Engineering, School of Technology, GITAM University, Hyderabad, India
- M Naresh
- Department of ECE, Matrusri Engineering College, Saidabad, Hyderabad, India
- G V Pradeep Kumar
- Department of ECE, Chaitanya Bharathi Institute of Technology, Hyderabad, India
15
Ali O, Saif-Ur-Rehman M, Glasmachers T, Iossifidis I, Klaes C. ConTraNet: A hybrid network for improving the classification of EEG and EMG signals with limited training data. Comput Biol Med 2024; 168:107649. PMID: 37980798. DOI: 10.1016/j.compbiomed.2023.107649.
Abstract
OBJECTIVE Bio-Signals such as electroencephalography (EEG) and electromyography (EMG) are widely used for the rehabilitation of physically disabled people and for the characterization of cognitive impairments. Successful decoding of these bio-signals is however non-trivial because of the time-varying and non-stationary characteristics. Furthermore, existence of short- and long-range dependencies in these time-series signal makes the decoding even more challenging. State-of-the-art studies proposed Convolutional Neural Networks (CNNs) based architectures for the classification of these bio-signals, which are proven useful to learn spatial representations. However, CNNs because of the fixed size convolutional kernels and shared weights pay only uniform attention and are also suboptimal in learning short-long term dependencies, simultaneously, which could be pivotal in decoding EEG and EMG signals. Therefore, it is important to address these limitations of CNNs. To learn short- and long-range dependencies simultaneously and to pay more attention to more relevant part of the input signal, Transformer neural network-based architectures can play a significant role. Nonetheless, it requires a large corpus of training data. However, EEG and EMG decoding studies produce limited amount of the data. Therefore, using standalone transformers neural networks produce ordinary results. In this study, we ask a question whether we can fix the limitations of CNN and transformer neural networks and provide a robust and generalized model that can simultaneously learn spatial patterns, long-short term dependencies, pay variable amount of attention to time-varying non-stationary input signal with limited training data. APPROACH In this work, we introduce a novel single hybrid model called ConTraNet, which is based on CNN and Transformer architectures that contains the strengths of both CNN and Transformer neural networks. ConTraNet uses a CNN block to introduce inductive bias in the model and learn local dependencies, whereas the Transformer block uses the self-attention mechanism to learn the short- and long-range or global dependencies in the signal and learn to pay different attention to different parts of the signals. MAIN RESULTS We evaluated and compared the ConTraNet with state-of-the-art methods on four publicly available datasets (BCI Competition IV dataset 2b, Physionet MI-EEG dataset, Mendeley sEMG dataset, Mendeley sEMG V1 dataset) which belong to EEG-HMI and EMG-HMI paradigms. ConTraNet outperformed its counterparts in all the different category tasks (2-class, 3-class, 4-class, 7-class, and 10-class decoding tasks). SIGNIFICANCE With limited training data ConTraNet significantly improves classification performance on four publicly available datasets for 2, 3, 4, 7, and 10-classes compared to its counterparts.
Affiliation(s)
- Omair Ali
- Faculty of Medicine, Department of Neurosurgery, University Hospital Knappschaftskrankenhaus Bochum GmbH, Germany; Department of Electrical Engineering and Information Technology, Ruhr-University Bochum, Germany.
- Muhammad Saif-Ur-Rehman
- Department of Computer Science, Ruhr-West University of Applied Science, Mülheim an der Ruhr, Germany
- Ioannis Iossifidis
- Department of Computer Science, Ruhr-West University of Applied Science, Mülheim an der Ruhr, Germany
- Christian Klaes
- Faculty of Medicine, Department of Neurosurgery, University Hospital Knappschaftskrankenhaus Bochum GmbH, Germany
16
Wang W, Li B, Wang H, Wang X, Qin Y, Shi X, Liu S. EEG-FMCNN: A fusion multi-branch 1D convolutional neural network for EEG-based motor imagery classification. Med Biol Eng Comput 2024; 62:107-120. PMID: 37728715. DOI: 10.1007/s11517-023-02931-x.
Abstract
Motor imagery (MI) electroencephalogram (EEG) signal is recognized as a promising paradigm for brain-computer interface (BCI) systems and has been extensively employed in various BCI applications, including assisting disabled individuals, controlling devices and environments, and enhancing human capabilities. The high-performance decoding capability of MI-EEG signals is a key issue that impacts the development of the industry. However, decoding MI-EEG signals is challenging due to the low signal-to-noise ratio and inter-subject variability. In response to the aforementioned core problems, this paper proposes a novel end-to-end network, a fusion multi-branch 1D convolutional neural network (EEG-FMCNN), to decode MI-EEG signals without pre-processing. The utilization of multi-branch 1D convolution not only exhibits a certain level of noise tolerance but also addresses the issue of inter-subject variability to some extent. This is attributed to the ability of multi-branch architectures to capture information from different frequency bands, enabling the establishment of optimal convolutional scales and depths. Furthermore, we incorporate 1D squeeze-and-excitation (SE) blocks and shortcut connections at appropriate locations to further enhance the generalization and robustness of the network. In the BCI Competition IV-2a dataset, our proposed model has obtained good experimental results, achieving accuracies of 78.82% and 68.41% for subject-dependent and subject-independent modes, respectively. In addition, extensive ablative experiments and fine-tuning experiments were conducted, resulting in a notable 7% improvement in the average performance of the network, which holds significant implications for the generalization and application of the network.
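The 1D squeeze-and-excitation (SE) blocks mentioned above can be sketched in PyTorch as follows; the reduction ratio and placement within the network are assumptions, and this is not the authors' implementation.

```python
# Sketch: a 1-D squeeze-and-excitation block for channel-wise recalibration.
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, channels, time)
        squeeze = x.mean(dim=-1)               # global average pooling over time
        scale = self.fc(squeeze).unsqueeze(-1) # per-channel recalibration weights
        return x * scale

if __name__ == "__main__":
    feats = torch.randn(4, 64, 250)            # feature maps from a 1-D conv branch
    print(SEBlock1d(64)(feats).shape)          # torch.Size([4, 64, 250])
```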
Affiliation(s)
- Wenlong Wang
- The School of Electrical Engineering, Shanghai Dianji University, Shanghai, 201306, China
- Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai, 201306, China
- Baojiang Li
- The School of Electrical Engineering, Shanghai Dianji University, Shanghai, 201306, China.
- Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai, 201306, China.
- Haiyan Wang
- The School of Electrical Engineering, Shanghai Dianji University, Shanghai, 201306, China
- Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai, 201306, China
- Xichao Wang
- The School of Electrical Engineering, Shanghai Dianji University, Shanghai, 201306, China
- Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai, 201306, China
- Yuxin Qin
- The School of Electrical Engineering, Shanghai Dianji University, Shanghai, 201306, China
- Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai, 201306, China
- Xingbin Shi
- The School of Electrical Engineering, Shanghai Dianji University, Shanghai, 201306, China
- Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai, 201306, China
- Shuxin Liu
- The School of Electrical Engineering, Shanghai Dianji University, Shanghai, 201306, China
- The Key Laboratory of Cognitive Computing and Intelligent Information Processing of Fujian Education Institutions (Wuyi University), Fujian, 354300, China
17
Rodríguez-Azar PI, Mejía-Muñoz JM, Cruz-Mejía O, Torres-Escobar R, López LVR. Fog Computing for Control of Cyber-Physical Systems in Industry Using BCI. Sensors (Basel) 2023; 24:149. PMID: 38203012. PMCID: PMC10781321. DOI: 10.3390/s24010149.
Abstract
Brain-computer interfaces use signals from the brain, such as EEG, to determine brain states, which in turn can be used to issue commands, for example, to control industrial machinery. While cloud computing can aid in the creation and operation of industrial multi-user BCI systems, the vast amount of data generated from EEG signals can lead to slow response times and bandwidth problems. Fog computing reduces latency in high-demand computation networks. Hence, this paper introduces a fog computing solution for BCI processing. The solution consists of fog nodes that incorporate machine learning algorithms to convert EEG signals into commands to control a cyber-physical system. The machine learning module uses a deep learning encoder to generate feature images from EEG signals, which are subsequently classified into commands by a random forest. The classification scheme was compared across various classifiers, with the random forest obtaining the best performance. Additionally, the fog computing approach was compared with a cloud-only approach using a fog computing simulator. The results indicate that the fog computing method resulted in lower latency than the cloud-only approach.
Affiliation(s)
- Paula Ivone Rodríguez-Azar
- Departamento de Ingeniería Industrial y Manufactura, Instituto de Ingeniería y Tecnología, Universidad Autónoma de Ciudad Juárez, Ciudad Juárez 32310, Mexico
- Jose Manuel Mejía-Muñoz
- Departamento de Ingeniería Eléctrica, Instituto de Ingenieria y Tecnologia, Universidad Autónoma de Ciudad Juárez, Ciudad Juárez 32310, Mexico;
- Oliverio Cruz-Mejía
- Departamento de Ingeniería Industrial, FES Aragón, Universidad Nacional Autónoma de México, Mexico 57171, Mexico;
- Lucero Verónica Ruelas López
- Departamento de Ingeniería Eléctrica, Instituto de Ingenieria y Tecnologia, Universidad Autónoma de Ciudad Juárez, Ciudad Juárez 32310, Mexico;
18
Miao M, Yang Z, Zeng H, Zhang W, Xu B, Hu W. Explainable cross-task adaptive transfer learning for motor imagery EEG classification. J Neural Eng 2023; 20:066021. PMID: 37963394. DOI: 10.1088/1741-2552/ad0c61.
Abstract
Objective. In the field of motor imagery (MI) electroencephalography (EEG)-based brain-computer interfaces, deep transfer learning (TL) has proven to be an effective tool for solving the problem of limited availability of subject-specific data for the training of robust deep learning (DL) models. Although considerable progress has been made in the cross-subject/session and cross-device scenarios, the more challenging problem of cross-task deep TL remains largely unexplored. Approach. We propose a novel explainable cross-task adaptive TL method for MI EEG decoding. Firstly, similarity analysis and data alignment are performed for EEG data of motor execution (ME) and MI tasks. Afterwards, the MI EEG decoding model is obtained via pre-training with extensive ME EEG data and fine-tuning with partial MI EEG data. Finally, expected gradient-based post-hoc explainability analysis is conducted for the visualization of important temporal-spatial features. Main results. Extensive experiments are conducted on one large ME EEG High-Gamma dataset and two large MI EEG datasets (openBMI and GIST). The best average classification accuracy of our method reaches 80.00% and 72.73% for OpenBMI and GIST respectively, which outperforms several state-of-the-art algorithms. In addition, the results of the explainability analysis further validate the correlation between ME and MI EEG data and the effectiveness of ME/MI cross-task adaptation. Significance. This paper confirms that the decoding of MI EEG can be well facilitated by pre-existing ME EEG data, which largely relaxes the constraint of training samples for MI EEG decoding and is important in a practical sense.
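The pre-train-then-fine-tune recipe described above (pre-training on ME EEG, fine-tuning on limited MI EEG) can be outlined in PyTorch as below; the toy network, the split between a frozen feature extractor and a trainable head, and the hyper-parameters are assumptions rather than the authors' architecture.

```python
# Sketch: freeze a pre-trained feature extractor and fine-tune only the head on MI EEG.
import torch
import torch.nn as nn

def build_model(n_channels=22, n_classes=2):
    return nn.Sequential(
        nn.Conv1d(n_channels, 32, kernel_size=25, padding=12), nn.BatchNorm1d(32),
        nn.ELU(), nn.AdaptiveAvgPool1d(16), nn.Flatten(),
        nn.Linear(32 * 16, n_classes),
    )

model = build_model()
# ... pre-train `model` on abundant motor-execution EEG here ...

for param in model.parameters():               # freeze everything,
    param.requires_grad = False
head = model[-1]                               # then unfreeze the classifier head
for param in head.parameters():
    param.requires_grad = True
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

mi_batch = torch.randn(8, 22, 500)             # stand-in MI trials (batch, channels, time)
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(mi_batch), labels)
loss.backward()
optimizer.step()
```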
Affiliation(s)
- Minmin Miao
- School of Information Engineering, Huzhou University, Huzhou, People's Republic of China
- Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, People's Republic of China
- Zhong Yang
- School of Information Engineering, Huzhou University, Huzhou, People's Republic of China
- Hong Zeng
- School of Instrument Science and Engineering, Southeast University, Nanjing, People's Republic of China
- Wenbin Zhang
- College of Computer and Information, Hohai University, Nanjing, People's Republic of China
- Baoguo Xu
- School of Instrument Science and Engineering, Southeast University, Nanjing, People's Republic of China
- Wenjun Hu
- School of Information Engineering, Huzhou University, Huzhou, People's Republic of China
- Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, People's Republic of China
19
Alreshidi I, Bisandu D, Moulitsas I. Illuminating the Neural Landscape of Pilot Mental States: A Convolutional Neural Network Approach with Shapley Additive Explanations Interpretability. Sensors (Basel) 2023; 23:9052. PMID: 38005440. PMCID: PMC10674947. DOI: 10.3390/s23229052.
Abstract
Predicting pilots' mental states is a critical challenge in aviation safety and performance, with electroencephalogram data offering a promising avenue for detection. However, the interpretability of machine learning and deep learning models, which are often used for such tasks, remains a significant issue. This study aims to address these challenges by developing an interpretable model to detect four mental states-channelised attention, diverted attention, startle/surprise, and normal state-in pilots using EEG data. The methodology involves training a convolutional neural network on power spectral density features of EEG data from 17 pilots. The model's interpretability is enhanced via the use of SHapley Additive exPlanations values, which identify the top 10 most influential features for each mental state. The results demonstrate high performance in all metrics, with an average accuracy of 96%, a precision of 96%, a recall of 94%, and an F1 score of 95%. An examination of the effects of mental states on EEG frequency bands further elucidates the neural mechanisms underlying these states. The innovative nature of this study lies in its combination of high-performance model development, improved interpretability, and in-depth analysis of the neural correlates of mental states. This approach not only addresses the critical need for effective and interpretable mental state detection in aviation but also contributes to our understanding of the neural underpinnings of these states. This study thus represents a significant advancement in the field of EEG-based mental state detection.
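The power-spectral-density feature extraction used as model input above can be sketched with Welch's method as follows; the band edges, sampling rate, channel count and window length are assumptions.

```python
# Sketch: per-channel Welch PSD averaged within canonical EEG bands.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def psd_band_features(eeg, fs=256.0):
    """eeg: (n_channels, n_samples). Returns a (n_channels * n_bands,) feature vector."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs), axis=-1)
    feats = []
    for low, high in BANDS.values():
        mask = (freqs >= low) & (freqs < high)
        feats.append(psd[:, mask].mean(axis=-1))   # mean band power per channel
    return np.concatenate(feats)

if __name__ == "__main__":
    print(psd_band_features(np.random.randn(20, 2560)).shape)   # (20 channels * 5 bands,)
```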
Affiliation(s)
- Ibrahim Alreshidi
- Centre for Computational Engineering Sciences, Cranfield University, Cranfield MK43 0AL, UK;
- Machine Learning and Data Analytics Laboratory, Digital Aviation Research and Technology Centre (DARTeC), Cranfield University, Cranfield MK43 0AL, UK
- College of Computer Science and Engineering, University of Ha’il, Ha’il 81451, Saudi Arabia
- Desmond Bisandu
- Centre for Computational Engineering Sciences, Cranfield University, Cranfield MK43 0AL, UK;
- Machine Learning and Data Analytics Laboratory, Digital Aviation Research and Technology Centre (DARTeC), Cranfield University, Cranfield MK43 0AL, UK
- Irene Moulitsas
- Centre for Computational Engineering Sciences, Cranfield University, Cranfield MK43 0AL, UK;
- Machine Learning and Data Analytics Laboratory, Digital Aviation Research and Technology Centre (DARTeC), Cranfield University, Cranfield MK43 0AL, UK
20
Wang H, Jiang J, Gan JQ, Wang H. Motor Imagery EEG Classification Based on a Weighted Multi-Branch Structure Suitable for Multisubject Data. IEEE Trans Biomed Eng 2023; 70:3040-3051. PMID: 37186527. DOI: 10.1109/tbme.2023.3274231.
Abstract
OBJECTIVE Electroencephalogram (EEG) signal recognition based on deep learning technology requires the support of sufficient data. However, training data scarcity usually occurs in subject-specific motor imagery tasks unless multisubject data can be used to enlarge the training set. Unfortunately, because of the large discrepancies between data distributions from different subjects, model performance can only be improved marginally, or may even worsen, by simply training on multisubject data. METHOD This article proposes a novel weighted multi-branch (WMB) structure for handling multisubject data to solve this problem, in which each branch is responsible for fitting a pair of source-target subject data and adaptive weights are used to integrate all branches, or to select the branches with the largest weights, to make the final decision. The proposed WMB structure was applied to six well-known deep learning models (EEGNet, Shallow ConvNet, Deep ConvNet, ResNet, MSFBCNN, and EEG_TCNet), and comprehensive experiments were conducted on the EEG datasets BCICIV-2a, BCICIV-2b, the high gamma dataset (HGD), and two supplementary datasets. RESULT Superior results against state-of-the-art models demonstrate the efficacy of the proposed method in subject-specific motor imagery EEG classification. For example, the proposed WMB_EEGNet achieved classification accuracies of 84.14%, 90.23%, and 97.81% on BCICIV-2a, BCICIV-2b and HGD, respectively. CONCLUSION The proposed WMB structure is capable of making good use of multisubject data with large distribution discrepancies for subject-specific EEG classification.
21
Almufareh MF, Kausar S, Humayun M, Tehsin S. Leveraging Motor Imagery Rehabilitation for Individuals with Disabilities: A Review. Healthcare (Basel) 2023; 11:2653. PMID: 37830690. PMCID: PMC10572951. DOI: 10.3390/healthcare11192653.
Abstract
Motor imagery, a cognitive process involving the mental simulation of motor actions, has emerged as a potent strategy in neuro-rehabilitation. It offers a non-invasive, economically viable way for individuals with disabilities to enhance their motor function and regain self-sufficiency. This review analyzes the significance of motor imagery in functional rehabilitation for individuals with physical impairments. It examines the fundamental mechanisms underlying motor imagery, its applications across diverse disability conditions, and the benefits it can provide. It also addresses the prevailing obstacles and future directions in this area, emphasizing the need for continued research and for new technologies that realize the potential of motor imagery in assisting disabled persons.
Collapse
Affiliation(s)
- Maram Fahaad Almufareh
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakakah 72388, Saudi Arabia
| | - Sumaira Kausar
- Center of Excellence in Artificial Intelligence COE-AI, Department of CS, Bahria University, Islamabad 44000, Pakistan; (S.K.); (S.T.)
| | - Mamoona Humayun
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakakah 72388, Saudi Arabia
| | - Samabia Tehsin
- Center of Excellence in Artificial Intelligence COE-AI, Department of CS, Bahria University, Islamabad 44000, Pakistan; (S.K.); (S.T.)
| |
Collapse
|
22
|
Jiao Y, He X, Jiao Z. Detecting slow eye movements using multi-scale one-dimensional convolutional neural network for driver sleepiness detection. J Neurosci Methods 2023; 397:109939. [PMID: 37579794 DOI: 10.1016/j.jneumeth.2023.109939] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2023] [Revised: 07/15/2023] [Accepted: 08/03/2023] [Indexed: 08/16/2023]
Abstract
BACKGROUND Slow eye movements (SEMs), which occur during eye-closed periods with a high time coverage rate during simulated driving, indicate drivers' sleep onset. NEW METHOD To capture the multi-scale characteristics of slow eye movement waveforms, we propose a multi-scale one-dimensional convolutional neural network (MS-1D-CNN) for classification. The MS-1D-CNN applies multiple down-sampling branches to the original signal and uses a local convolutional layer to extract features for each branch. RESULTS We evaluate the classification performance of this model on ten subjects' standard train-test datasets and continuous test datasets by means of subject-subject evaluation and leave-one-subject-out cross validation. For the standard train-test datasets, the overall average classification accuracies are about 99.1% and 98.6% in subject-subject evaluation and leave-one-subject-out cross validation, respectively. For the continuous test datasets, the overall average values of accuracy, precision, recall and F1-score are 99.3%, 98.9%, 99.5% and 99.1% in subject-subject evaluation, and 99.2%, 98.8%, 99.3% and 99.0% in leave-one-subject-out cross validation. COMPARISON WITH EXISTING METHOD On the standard train-test datasets, the overall average classification accuracy of the MS-1D-CNN exceeds that of the baseline method based on hand-designed features by 3.5% in both subject-subject evaluation and leave-one-subject-out cross validation. CONCLUSIONS These results suggest that multi-scale transformation in the MS-1D-CNN model can enhance the representation ability of features, thereby improving classification accuracy. Experimental results verify the good performance of the MS-1D-CNN model, even in leave-one-subject-out cross validation, thus promoting the application of SEM detection technology for driver sleepiness detection.
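A minimal sketch of the multi-scale branching idea is given below; the down-sampling factors, filter sizes, and pooling choices are assumptions rather than the published MS-1D-CNN.

```python
# Illustrative sketch only: multi-scale branches obtained by down-sampling the 1D signal,
# each with its own convolutional feature extractor, fused for binary SEM classification.
import torch
import torch.nn as nn

class MS1DCNN(nn.Module):
    def __init__(self, scales=(1, 2, 4), n_classes=2):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(1, 8, kernel_size=11, padding=5), nn.ReLU(),
                nn.AdaptiveAvgPool1d(16), nn.Flatten())
            for _ in scales)
        self.classifier = nn.Linear(8 * 16 * len(scales), n_classes)
    def forward(self, x):                           # x: (batch, 1, samples)
        feats = []
        for s, branch in zip(self.scales, self.branches):
            xs = x if s == 1 else nn.functional.avg_pool1d(x, kernel_size=s)   # down-sample
            feats.append(branch(xs))
        return self.classifier(torch.cat(feats, dim=1))

model = MS1DCNN()
print(model(torch.randn(4, 1, 1000)).shape)         # torch.Size([4, 2])
```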
Collapse
Affiliation(s)
- Yingying Jiao
- Center for Brain-like Computing and Machine Intelligence, Department of Computer Science and Engineering, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai 200240, China.
| | - Xiujin He
- Center for Brain-like Computing and Machine Intelligence, Department of Computer Science and Engineering, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai 200240, China
| | - Zhuqing Jiao
- Center for Brain-like Computing and Machine Intelligence, Department of Computer Science and Engineering, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai 200240, China
| |
Collapse
|
23
|
Ma X, Chen W, Pei Z, Liu J, Huang B, Chen J. A Temporal Dependency Learning CNN With Attention Mechanism for MI-EEG Decoding. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3188-3200. [PMID: 37498754 DOI: 10.1109/tnsre.2023.3299355] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/29/2023]
Abstract
Deep learning methods have been widely explored in motor imagery (MI)-based brain computer interface (BCI) systems to decode electroencephalography (EEG) signals. However, most studies fail to fully explore temporal dependencies among MI-related patterns generated in different stages during MI tasks, resulting in limited MI-EEG decoding performance. Apart from feature extraction, learning temporal dependencies is equally important to develop a subject-specific MI-based BCI because every subject has their own way of performing MI tasks. In this paper, a novel temporal dependency learning convolutional neural network (CNN) with attention mechanism is proposed to address MI-EEG decoding. The network first learns spatial and spectral information from multi-view EEG data via the spatial convolution block. Then, a series of non-overlapping time windows is employed to segment the output data, and the discriminative feature is further extracted from each time window to capture MI-related patterns generated in different stages. Furthermore, to explore temporal dependencies among discriminative features in different time windows, we design a temporal attention module that assigns different weights to features in various time windows and fuses them into more discriminative features. The experimental results on the BCI Competition IV-2a (BCIC-IV-2a) and OpenBMI datasets show that our proposed network outperforms the state-of-the-art algorithms and achieves an average accuracy of 79.48%, improved by 2.30% on the BCIC-IV-2a dataset. We demonstrate that learning temporal dependencies effectively improves MI-EEG decoding performance. The code is available at https://github.com/Ma-Xinzhi/LightConvNet.
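The window-segmentation-plus-temporal-attention idea can be illustrated with the following sketch; layer sizes, window count, and the attention scoring layer are assumed for demonstration and do not reproduce the released LightConvNet code.

```python
# Hedged sketch of the temporal-attention idea: a shared spatial block, non-overlapping
# time windows, and learned attention weights that fuse per-window features.
import torch
import torch.nn as nn

class TemporalAttentionNet(nn.Module):
    def __init__(self, n_ch=22, n_samp=1000, n_windows=5, n_classes=4):
        super().__init__()
        self.win_len = n_samp // n_windows
        self.spatial = nn.Sequential(                         # shared spatial/spectral block
            nn.Conv1d(n_ch, 24, kernel_size=25, padding=12), nn.BatchNorm1d(24), nn.ELU())
        self.window_enc = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten())  # per-window feature
        self.attn = nn.Linear(24, 1)                          # one attention score per window
        self.classifier = nn.Linear(24, n_classes)
    def forward(self, x):                                     # x: (batch, channels, samples)
        h = self.spatial(x)
        wins = h.split(self.win_len, dim=-1)                  # non-overlapping time windows
        feats = torch.stack([self.window_enc(w) for w in wins], dim=1)   # (batch, windows, 24)
        a = torch.softmax(self.attn(feats), dim=1)            # temporal attention weights
        fused = (a * feats).sum(dim=1)                        # weighted fusion across windows
        return self.classifier(fused)

model = TemporalAttentionNet()
print(model(torch.randn(8, 22, 1000)).shape)                  # torch.Size([8, 4])
```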
Collapse
|
24
|
Sun C, Mou C. Survey on the research direction of EEG-based signal processing. Front Neurosci 2023; 17:1203059. [PMID: 37521708 PMCID: PMC10372445 DOI: 10.3389/fnins.2023.1203059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Accepted: 06/16/2023] [Indexed: 08/01/2023] Open
Abstract
Electroencephalography (EEG) is increasingly important in Brain-Computer Interface (BCI) systems due to its portability and simplicity. In this paper, we provide a comprehensive review of research on EEG signal processing techniques since 2021, with a focus on preprocessing, feature extraction, and classification methods. We analyzed 61 research articles retrieved from academic search engines, including CNKI, PubMed, Nature, IEEE Xplore, and Science Direct. For preprocessing, we focus on innovatively proposed preprocessing methods, channel selection, and data augmentation. Data augmentation is classified into conventional methods (sliding windows, segmentation and recombination, and noise injection) and deep learning methods [Generative Adversarial Networks (GAN) and Variational AutoEncoder (VAE)]. We also pay attention to the application of deep learning and multi-method fusion approaches, including both conventional algorithm fusion and fusion between conventional algorithms and deep learning. Our analysis identifies 35 (57.4%), 18 (29.5%), and 37 (60.7%) studies in the directions of preprocessing, feature extraction, and classification, respectively. We find that preprocessing methods have become widely used in EEG classification (96.7% of reviewed papers) and comparative experiments have been conducted in some studies to validate preprocessing. We also discuss the adoption of channel selection and data augmentation and summarize several noteworthy points about data augmentation. Furthermore, deep learning methods have shown great promise in EEG classification, with Convolutional Neural Networks (CNNs) being the main structure of deep neural networks (92.3% of deep learning papers). We summarize and analyze several innovative neural networks, including CNNs and multi-structure fusion. However, we also identified several problems and limitations of current deep learning techniques in EEG classification, including inappropriate input, low cross-subject accuracy, an imbalance between parameter counts and time costs, and a lack of interpretability. Finally, we highlight the emerging trend of multi-method fusion approaches (49.2% of reviewed papers) and analyze the data and some examples. We also provide insights into some challenges of multi-method fusion. Our review lays a foundation for future studies to improve EEG classification performance.
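The three "conventional" augmentation methods the survey groups together (sliding windows, segmentation and recombination, and noise injection) can be sketched in a few lines of numpy; the window lengths, segment counts, and noise levels below are arbitrary illustrative values.

```python
# Small numpy sketch of the three conventional EEG augmentations mentioned above.
import numpy as np

rng = np.random.default_rng(0)

def sliding_windows(trial, win=256, step=64):
    """Cut one EEG trial (channels, samples) into overlapping windows."""
    return np.stack([trial[:, s:s + win]
                     for s in range(0, trial.shape[1] - win + 1, step)])

def segment_recombine(trials, n_seg=4):
    """Assumes all trials share one class; splits them into segments and splices
    segments drawn from randomly chosen trials to create new artificial trials."""
    parts = np.array_split(trials, n_seg, axis=-1)            # list of (N, channels, seg_len)
    picks = [p[rng.integers(0, len(trials), len(trials))] for p in parts]
    return np.concatenate(picks, axis=-1)

def add_noise(trials, sigma=0.05):
    """Gaussian noise injection scaled to each trial's standard deviation."""
    return trials + sigma * trials.std(axis=-1, keepdims=True) * rng.standard_normal(trials.shape)

trials = rng.standard_normal((10, 22, 512))                   # 10 trials, 22 channels, 512 samples
print(sliding_windows(trials[0]).shape, segment_recombine(trials).shape, add_noise(trials).shape)
```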
Collapse
|
25
|
Koo B, Nguyen NT, Kim J. Identification and Classification of Human Body Exercises on Smart Textile Bands by Combining Decision Tree and Convolutional Neural Networks. SENSORS (BASEL, SWITZERLAND) 2023; 23:6223. [PMID: 37448070 DOI: 10.3390/s23136223] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/30/2023] [Revised: 06/27/2023] [Accepted: 06/27/2023] [Indexed: 07/15/2023]
Abstract
In recent years, human activity recognition (HAR) has gained significant interest from researchers in the sports and fitness industries. In this study, the authors propose a cascaded method with two classification stages for classifying fitness exercises, using a decision tree as the first stage and a one-dimensional convolutional neural network (1D-CNN) as the second stage. Data acquisition was carried out by five participants performing exercises while wearing an inertial measurement unit (IMU) sensor attached to a wristband. Only data acquired along the z-axis of the IMU accelerometer was used as input to train and test the proposed model, to simplify the model and reduce training time while still achieving good performance. To examine the efficiency of the proposed method, the authors compared the performance of the cascaded model and the conventional 1D-CNN model. The results showed an overall improvement in exercise classification accuracy by the proposed model, which was approximately 92%, compared to 82.4% for the 1D-CNN model. In addition, the authors suggested and evaluated two methods to optimize the clustering outcome of the first stage in the cascaded model. This research demonstrates that the proposed model, with advantages in training time and computational cost, can classify fitness workouts with high performance. Therefore, with further development, it can be applied in various real-time HAR applications.
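A rough sketch of such a two-stage cascade is shown below; the summary statistics, tree depth, group count, and CNN layout are assumptions, and a single shared CNN stands in for the per-group classifiers used in the paper.

```python
# Sketch of a two-stage cascade: a decision tree on simple summary features routes a z-axis
# accelerometer window to a coarse group, and a small 1D-CNN then classifies the exercise.
import numpy as np
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 1, 128)).astype(np.float32)     # z-axis accelerometer windows
y_group = rng.integers(0, 2, 300)                             # coarse group label (stage 1, toy)
y_exercise = rng.integers(0, 3, 300)                          # fine exercise label (stage 2, toy)

def summary_stats(a):
    """Hand-crafted per-window statistics fed to the stage-1 tree."""
    return np.stack([a[:, 0].mean(1), a[:, 0].std(1), np.abs(a[:, 0]).max(1)], axis=1)

# Stage 1: decision tree on summary statistics.
tree = DecisionTreeClassifier(max_depth=3).fit(summary_stats(X), y_group)

# Stage 2: small 1D-CNN refining the exercise label (one shared net shown for brevity).
cnn = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(4), nn.Flatten(), nn.Linear(32, 3))

def cascade_predict(x_np):
    group = tree.predict(summary_stats(x_np))
    with torch.no_grad():
        exercise = cnn(torch.from_numpy(x_np)).argmax(1).numpy()
    return group, exercise                                    # (coarse group, exercise) per window

print(cascade_predict(X[:5]))
```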
Collapse
Affiliation(s)
- Bonhak Koo
- Department of Materials Science and Engineering, Soongsil University, Seoul 156-743, Republic of Korea
| | - Ngoc Tram Nguyen
- Department of Smart Wearable Engineering, Soongsil University, Seoul 156-743, Republic of Korea
| | - Jooyong Kim
- Department of Materials Science and Engineering, Soongsil University, Seoul 156-743, Republic of Korea
| |
Collapse
|
26
|
Proverbio AM, Pischedda F. Measuring brain potentials of imagination linked to physiological needs and motivational states. Front Hum Neurosci 2023; 17:1146789. [PMID: 37007683 PMCID: PMC10050745 DOI: 10.3389/fnhum.2023.1146789] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2023] [Accepted: 03/02/2023] [Indexed: 03/17/2023] Open
Abstract
INTRODUCTION While EEG signals reflecting motor and perceptual imagery are effectively used in brain computer interface (BCI) contexts, little is known about possible indices of motivational states. In the present study, electrophysiological markers of imagined motivational states, such as cravings and desires, were investigated. METHODS Event-related potentials (ERPs) were recorded in 31 participants during perception and imagery elicited by the presentation of 360 pictograms. Twelve micro-categories of needs, subdivided into four macro-categories, were considered as most relevant for possible BCI usage, namely: primary visceral needs (e.g., hunger, linked to the desire for food); somatosensory thermal and pain sensations (e.g., cold, linked to the desire for warmth); affective states (e.g., fear, linked to the desire for reassurance); and secondary needs (e.g., the desire to exercise or listen to music). Anterior N400 and centroparietal late positive potential (LPP) were measured and statistically analyzed. RESULTS N400 and LPP were differentially sensitive to the various volitional states, depending on their sensory, emotional and motivational poignancy. N400 was larger to imagined positive appetitive states (e.g., play, cheerfulness) than negative ones (sadness or fear). In addition, N400 was of greater amplitude during imagery of thermal and nociceptive sensations than of other motivational or visceral states. Source reconstruction of electromagnetic dipoles showed the activation of sensorimotor areas and the cerebellum for movement imagery, and of auditory and superior frontal areas for music imagery. DISCUSSION Overall, ERPs were smaller and more anteriorly distributed during imagery than perception, but showed some similarity in terms of lateralization, distribution, and category response, thus indicating some overlap in neural processing, as also demonstrated by correlation analyses. In general, the anterior frontal N400 provided clear markers of subjects' physiological needs and motivational states, especially cold, pain, and fear (but also sadness, the urge to move, etc.), that can signal life-threatening conditions. It is concluded that ERP markers might potentially allow the reconstruction of mental representations related to various motivational states through BCI systems.
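For readers unfamiliar with the generic ERP measurement step mentioned here (trial averaging followed by mean amplitude in a component latency window), a toy numpy sketch follows; the electrode indices, epoch layout, and latency windows are illustrative only and are not taken from the study.

```python
# Toy sketch of generic ERP component measurement: average epochs, then take the mean
# voltage over a set of electrodes within a latency window.
import numpy as np

fs = 512                                                      # sampling rate (Hz), assumed
epochs = np.random.default_rng(3).standard_normal((100, 126, 512))   # trials x electrodes x samples
times = np.arange(512) / fs - 0.1                             # epoch from -100 ms onward

erp = epochs.mean(axis=0)                                     # trial averaging -> ERP waveforms

def mean_amplitude(erp, electrodes, t_start, t_end):
    """Mean voltage over selected electrodes within a latency window (seconds)."""
    mask = (times >= t_start) & (times <= t_end)
    return erp[electrodes][:, mask].mean()

n400 = mean_amplitude(erp, electrodes=[10, 11, 12], t_start=0.35, t_end=0.55)   # anterior sites
lpp = mean_amplitude(erp, electrodes=[60, 61, 62], t_start=0.55, t_end=0.80)    # centroparietal sites
print(f"N400 {n400:.3f}  LPP {lpp:.3f} (arbitrary units)")
```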
Collapse
|
27
|
A compact multi-branch 1D convolutional neural network for EEG-based motor imagery classification. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104456] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
28
|
Hossain KM, Islam MA, Hossain S, Nijholt A, Ahad MAR. Status of deep learning for EEG-based brain-computer interface applications. Front Comput Neurosci 2023; 16:1006763. [PMID: 36726556 PMCID: PMC9885375 DOI: 10.3389/fncom.2022.1006763] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Accepted: 12/23/2022] [Indexed: 01/18/2023] Open
Abstract
In the previous decade, breakthroughs in central nervous system bioinformatics and computational innovation have prompted significant developments in brain-computer interfaces (BCI), elevating them to the forefront of applied science and research. BCI enables neurorehabilitation strategies for physically disabled patients (e.g., patients with hemiplegia) and patients with brain injury (e.g., patients with stroke). Different methods have been developed for electroencephalogram (EEG)-based BCI applications. Due to the lack of large EEG datasets, methods using matrix factorization and machine learning were long the most popular. This has changed recently because a number of large, high-quality EEG datasets are now publicly available and used in deep learning-based BCI applications. Deep learning is showing great promise for solving complex, relevant tasks such as motor imagery classification, epileptic seizure detection, and driver attention recognition using EEG data, and a substantial amount of current BCI research focuses on such approaches. There is therefore a clear need for a study that focuses specifically on deep learning models for EEG-based BCI applications. This study reviews recently proposed deep learning-based approaches for BCI using EEG data (from 2017 to 2022). Their main differences, merits, drawbacks, and applications are introduced. Furthermore, we point out current challenges and directions for future studies. We argue that this review will help the EEG research community in their future research.
Collapse
Affiliation(s)
- Khondoker Murad Hossain
- Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD, United States
| | - Md. Ariful Islam
- Department of Robotics and Mechatronics Engineering, University of Dhaka, Dhaka, Bangladesh
| | | | - Anton Nijholt
- Human Media Interaction, University of Twente, Enschede, Netherlands
| | - Md Atiqur Rahman Ahad
- Department of Computer Science and Digital Technology, University of East London, London, United Kingdom
| |
Collapse
|
29
|
Wang J, Ge X, Shi Y, Sun M, Gong Q, Wang H, Huang W. Dual-Modal Information Bottleneck Network for Seizure Detection. Int J Neural Syst 2023; 33:2250061. [PMID: 36599663 DOI: 10.1142/s0129065722500617] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
Abstract
In recent years, deep learning has shown very competitive performance in seizure detection. However, most of the currently used methods either convert electroencephalogram (EEG) signals into spectral images and employ 2D-CNNs, or split the one-dimensional (1D) features of EEG signals into many segments and employ 1D-CNNs. Moreover, these approaches rarely consider the temporal links between time-series segments or spectrogram images. Therefore, we propose a Dual-Modal Information Bottleneck (Dual-modal IB) network for EEG seizure detection. The network extracts EEG features from both the time-series and spectrogram dimensions, allowing information from different modalities to pass through the Dual-modal IB, which requires the model to gather and condense the most pertinent information in each modality and share only what is necessary. Specifically, we make full use of the information shared between the two modality representations to obtain key information for seizure detection and to remove irrelevant features between the two modalities. In addition, to explore the intrinsic temporal dependencies, we further introduce a bidirectional long short-term memory (BiLSTM) network into the Dual-modal IB model, which models the temporal relationships among the features extracted from each modality by the convolutional neural network (CNN). For the CHB-MIT dataset, the proposed framework achieves an average segment-based sensitivity of 97.42%, specificity of 99.32%, accuracy of 98.29%, and an average event-based sensitivity of 96.02% with a false detection rate (FDR) of 0.70/h. We release our code at https://github.com/LLLL1021/Dual-modal-IB.
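A heavily simplified sketch of the dual-branch-plus-BiLSTM layout is given below; the encoder sizes, bottleneck width, and segment shapes are assumptions, and the information-bottleneck loss terms from the paper are omitted.

```python
# Simplified sketch: a raw-waveform CNN and a spectrogram CNN each compress a segment to a
# small bottleneck vector, a BiLSTM models the segment sequence, and a head scores seizure
# vs. non-seizure. This is a structural illustration, not the released Dual-modal IB code.
import torch
import torch.nn as nn

class SegmentEncoder(nn.Module):
    def __init__(self, in_ch, bottleneck=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, bottleneck))
    def forward(self, x):
        return self.net(x)

class DualModalSeizureNet(nn.Module):
    def __init__(self, n_ch=18, n_freq=64, bottleneck=16, n_classes=2):
        super().__init__()
        self.enc_time = SegmentEncoder(n_ch, bottleneck)      # raw time-series branch
        self.enc_spec = SegmentEncoder(n_freq, bottleneck)    # spectrogram branch
        self.bilstm = nn.LSTM(2 * bottleneck, 32, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(64, n_classes)
    def forward(self, x_time, x_spec):
        # x_time: (batch, segments, channels, samples); x_spec: (batch, segments, freq, frames)
        b, s = x_time.shape[:2]
        zt = self.enc_time(x_time.flatten(0, 1)).view(b, s, -1)
        zs = self.enc_spec(x_spec.flatten(0, 1)).view(b, s, -1)
        h, _ = self.bilstm(torch.cat([zt, zs], dim=-1))       # temporal links across segments
        return self.classifier(h[:, -1])

model = DualModalSeizureNet()
out = model(torch.randn(2, 10, 18, 256), torch.randn(2, 10, 64, 32))
print(out.shape)                                              # torch.Size([2, 2])
```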
Collapse
Affiliation(s)
- Jiale Wang
- School of Information Science and Engineering, Shandong Normal University, Jinan 250358, P. R. China
| | - Xinting Ge
- School of Information Science and Engineering, Shandong Normal University, Jinan 250358, P. R. China
| | - Yunfeng Shi
- School of Information Science and Engineering, Shandong Normal University, Jinan 250358, P. R. China
| | - Mengxue Sun
- School of Information Science and Engineering, Shandong Normal University, Jinan 250358, P. R. China
| | - Qingtao Gong
- Ulsan Ship and Ocean College, Ludong University, Yantai 264025, P. R. China
| | - Haipeng Wang
- Institute of Information Fusion, Naval, Aviation University, Yantai 264001, P. R. China
| | - Wenhui Huang
- School of Information Science and Engineering, Shandong Normal University, Jinan 250358, P. R. China
| |
Collapse
|
30
|
Proverbio AM, Tacchini M, Jiang K. Event-related brain potential markers of visual and auditory perception: A useful tool for brain computer interface systems. Front Behav Neurosci 2022; 16:1025870. [PMID: 36523756 PMCID: PMC9744781 DOI: 10.3389/fnbeh.2022.1025870] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2022] [Accepted: 11/03/2022] [Indexed: 06/27/2024] Open
Abstract
OBJECTIVE A majority of BCI systems, enabling communication with patients with locked-in syndrome, are based on electroencephalogram (EEG) frequency analysis (e.g., linked to motor imagery) or P300 detection. Only recently has the use of event-related brain potentials (ERPs) received much attention, especially for face or music recognition, but neuro-engineering research into this new approach has not yet been carried out. The aim of this study was to provide a variety of reliable ERP markers of visual and auditory perception for the development of new and more complex mind-reading systems for reconstructing mental content from brain activity. METHODS A total of 30 participants were shown 280 color pictures (adult, infant, and animal faces; human bodies; written words; checkerboards; and objects) and 120 auditory files (speech, music, and affective vocalizations). This paradigm did not involve target selection, to avoid artifactual waves linked to decision-making and response preparation (e.g., P300 and motor potentials) that would mask the neural signature of semantic representation. Overall, 12,000 ERP waveforms × 126 electrode channels (1,512,000 ERP waveforms in total) were processed and artifact-rejected. RESULTS Clear and distinct category-dependent markers of perceptual and cognitive processing were identified through statistical analyses, some of which were novel to the literature. Results are discussed in view of current knowledge of ERP functional properties and with respect to machine learning classification methods previously applied to similar data. CONCLUSION Statistical analyses discriminated the perceptual categories eliciting the various electrical potentials with a high level of accuracy (p ≤ 0.01). Therefore, the ERP markers identified in this study could be significant tools for optimizing BCI systems [pattern recognition or artificial intelligence (AI) algorithms] applied to EEG/ERP signals.
Collapse
Affiliation(s)
- Alice Mado Proverbio
- Laboratory of Cognitive Electrophysiology, Department of Psychology, University of Milano-Bicocca, Milan, Italy
| | - Marta Tacchini
- Laboratory of Cognitive Electrophysiology, Department of Psychology, University of Milano-Bicocca, Milan, Italy
| | - Kaijun Jiang
- Laboratory of Cognitive Electrophysiology, Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
| |
Collapse
|
31
|
Zhang T, Tang Q, Nie F, Zhao Q, Chen W. DeepLncPro: an interpretable convolutional neural network model for identifying long non-coding RNA promoters. Brief Bioinform 2022; 23:6754194. [PMID: 36209437 DOI: 10.1093/bib/bbac447] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Revised: 09/14/2022] [Accepted: 09/17/2022] [Indexed: 12/14/2022] Open
Abstract
Long non-coding RNA (lncRNA) plays important roles in a series of biological processes. The transcription of lncRNA is regulated by its promoter. Hence, accurate identification of lncRNA promoters will be helpful for understanding their regulatory mechanisms. Since experimental techniques remain time consuming for genome-wide promoter identification, developing computational tools to identify promoters is necessary. However, only a few computational methods have been proposed for lncRNA promoter prediction, and their performance still has room for improvement. In the present work, a convolutional neural network based model, called DeepLncPro, was proposed to identify lncRNA promoters in human and mouse. Comparative results demonstrated that DeepLncPro was superior to both state-of-the-art machine learning methods and existing models for identifying lncRNA promoters. Furthermore, DeepLncPro has the ability to extract and analyze transcription factor binding motifs from lncRNAs, which makes it an interpretable model. These results indicate that DeepLncPro can serve as a powerful tool for identifying lncRNA promoters. An open-source tool for DeepLncPro is provided at https://github.com/zhangtian-yang/DeepLncPro.
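A generic sketch of the underlying recipe (one-hot encoded sequences fed to a 1D convolution whose filters act as motif scanners) follows; the sequence length, filter count, and motif width are placeholders, and this is not the released DeepLncPro architecture.

```python
# Generic sketch of a sequence-based promoter classifier: one-hot DNA input, 1D convolution
# (whose learned filters can later be inspected as motif detectors), global max pooling, and
# a single-score output for promoter vs. non-promoter.
import torch
import torch.nn as nn

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a (4, length) tensor; unknown bases stay all-zero."""
    x = torch.zeros(4, len(seq))
    for i, b in enumerate(seq):
        if b in BASES:
            x[BASES.index(b), i] = 1.0
    return x

class PromoterCNN(nn.Module):
    def __init__(self, n_filters=32, motif_len=12):
        super().__init__()
        self.conv = nn.Conv1d(4, n_filters, kernel_size=motif_len)    # filters ~ motif scanners
        self.head = nn.Sequential(nn.ReLU(), nn.AdaptiveMaxPool1d(1),
                                  nn.Flatten(), nn.Linear(n_filters, 1))
    def forward(self, x):
        return self.head(self.conv(x)).squeeze(-1)            # promoter score per sequence

model = PromoterCNN()
batch = torch.stack([one_hot("ACGT" * 20 + "A"), one_hot("TTTT" * 20 + "G")])   # toy sequences
print(torch.sigmoid(model(batch)))                            # two promoter probabilities
```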
Collapse
Affiliation(s)
- Tianyang Zhang
- School of Life Sciences, North China University of Science and Technology
| | - Qiang Tang
- School of Basic Medical Sciences, Chengdu University of Traditional Chinese Medicine
| | - Fulei Nie
- School of Life Sciences, North China University of Science and Technology
| | - Qi Zhao
- School of Computer Science and Software Engineering, University of Science and Technology Liaoning
| | - Wei Chen
- Innovative Institute of Chinese Medicine and Pharmacy, Chengdu University of Traditional Chinese Medicine
| |
Collapse
|
32
|
Luján MÁ, Mateo Sotos J, Torres A, Santos JL, Quevedo O, Borja AL. Mental Disorder Diagnosis from EEG Signals Employing Automated Leaning Procedures Based on Radial Basis Functions. J Med Biol Eng 2022; 42:853-859. [PMID: 36407571 PMCID: PMC9651124 DOI: 10.1007/s40846-022-00758-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2022] [Accepted: 10/21/2022] [Indexed: 11/13/2022]
Abstract
Purpose In this paper, a new automated procedure based on deep learning methods for schizophrenia diagnosis is presented. Methods To this aim, electroencephalogram signals obtained using a 32-channel helmet are used to analyze high temporal resolution information from the brain. The collected data are employed to evaluate class likelihoods using a neural network based on radial basis functions and a fuzzy means algorithm. Results The results obtained with real datasets validate the high accuracy of the proposed classification method, effectively characterizing the changes in EEG signals acquired from schizophrenia patients and healthy volunteers. More specifically, accuracies better than 93% have been obtained in the present research. Additionally, a comparative study with other approaches based on well-known machine learning methods shows that the proposed method provides better results than recently proposed algorithms for schizophrenia detection. Conclusion The proposed method can be used as a diagnostic tool in the detection of schizophrenia, supporting early diagnosis and treatment.
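A schematic stand-in for an RBF-based classifier of EEG feature vectors is sketched below; plain k-means replaces the paper's fuzzy-means step, and the feature dimensionality, center count, and labels are toy values.

```python
# Schematic RBF-network stand-in: Gaussian activations over learned centers with a
# least-squares output layer for two-class (patient vs. control) EEG feature vectors.
# Plain k-means is used here in place of the paper's fuzzy-means algorithm.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 32))                 # feature vector per EEG epoch (assumed)
y = rng.integers(0, 2, 200)                        # 0 = control, 1 = patient (toy labels)

centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
sigma = np.mean(np.linalg.norm(X[:, None] - centers[None], axis=-1))   # single shared RBF width

def rbf_features(A):
    """Gaussian RBF activations of each sample with respect to every center."""
    d = np.linalg.norm(A[:, None] - centers[None], axis=-1)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

Phi = rbf_features(X)
W, *_ = np.linalg.lstsq(Phi, np.eye(2)[y], rcond=None)        # linear read-out weights
pred = rbf_features(X) @ W
print("train accuracy:", (pred.argmax(1) == y).mean())
```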
Collapse
Affiliation(s)
- Miguel Ángel Luján
- Departamento de Ingeniería Eléctrica, Automática y Comunicaciones, Universidad de Castilla-La Mancha, Electrónica, 02071 Albacete, Spain
| | - Jorge Mateo Sotos
- Instituto de Tecnología, Construcción y Telecomunicaciones, Universidad de Castilla-La Mancha, 16071 Cuenca, Spain
| | - Ana Torres
- Instituto de Tecnología, Construcción y Telecomunicaciones, Universidad de Castilla-La Mancha, 16071 Cuenca, Spain
| | - José L. Santos
- Servicio de Psiquiatría, Hospital Virgen de la Luz, 16002 Cuenca, Spain
| | - Oscar Quevedo
- KTH Royal Institute of Technology, 114 28 Stockholm, Sweden
| | - Alejandro L. Borja
- Departamento de Ingeniería Eléctrica, Automática y Comunicaciones, Universidad de Castilla-La Mancha, Electrónica, 02071 Albacete, Spain
| |
Collapse
|
33
|
Wang H, Yua H, Wang H. EEG_GENet: A feature-level graph embedding method for motor imagery classification based on EEG signals. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.08.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
34
|
Lopez-Bernal D, Balderas D, Ponce P, Molina A. A State-of-the-Art Review of EEG-Based Imagined Speech Decoding. Front Hum Neurosci 2022; 16:867281. [PMID: 35558735 PMCID: PMC9086783 DOI: 10.3389/fnhum.2022.867281] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2022] [Accepted: 03/24/2022] [Indexed: 11/13/2022] Open
Abstract
Currently, the most used method to measure brain activity with a non-invasive procedure is the electroencephalogram (EEG), because of its high temporal resolution, ease of use, and safety. These signals can be used within a Brain Computer Interface (BCI) framework, which can provide a new communication channel to people who are unable to speak due to motor disabilities or other neurological diseases. Nevertheless, EEG-based BCI systems have faced challenges in real-life imagined speech recognition because EEG signals are difficult to interpret owing to their low signal-to-noise ratio (SNR). Consequently, to help researchers make informed decisions when approaching this problem, we offer a review article that summarizes the main findings of the most relevant studies on this subject since 2009. This review focuses mainly on the pre-processing, feature extraction, and classification techniques used by several authors, as well as the target vocabulary. Furthermore, we propose ideas that may be useful for future work toward a practical application of EEG-based BCI systems for imagined speech decoding.
Collapse
Affiliation(s)
- Diego Lopez-Bernal
- Tecnologico de Monterrey, National Department of Research, Mexico City, Mexico
| | | | | | | |
Collapse
|