1
Mathumitha R, Maryposonia A. Emotion analysis of EEG signals using proximity-conserving auto-encoder (PCAE) and ensemble techniques. Cogn Neurodyn 2025; 19:32. [PMID: 39866661] [PMCID: PMC11757850] [DOI: 10.1007/s11571-024-10187-w]
Abstract
Emotion recognition plays a crucial role in brain-computer interfaces (BCI), which identify and classify human emotions as positive, negative, or neutral. Emotion analysis in BCI holds substantial promise in fields such as healthcare, education, gaming, and human-computer interaction. In healthcare, emotion analysis based on electroencephalography (EEG) signals is deployed to provide personalized support for patients with autism or mood disorders. Recently, several deep learning (DL) based approaches have been developed for accurate emotion recognition, yet previous works often struggle with poor recognition accuracy, high dimensionality, and long computational time. This work designs an innovative framework named Proximity-conserving Auto-encoder (PCAE) for accurate EEG-based emotion recognition that resolves the challenges faced by traditional emotion analysis techniques. The PCAE approach preserves local structures in the EEG data while reducing dimensionality, capturing the essential features related to emotional states. The EEG data are taken from the EEG Brainwave dataset, recorded with a Muse EEG headband, and preprocessing steps are applied to enhance signal quality. The PCAE model incorporates multiple convolution and deconvolution layers for encoding and decoding and deploys a Local Proximity Preservation Layer to preserve local correlations in the latent space. In addition, a Proximity-conserving Squeeze-and-Excitation Auto-encoder (PC-SEAE) model is developed to further improve the feature extraction ability of the PCAE technique. The PCAE technique utilizes Maximum Mean Discrepancy (MMD) regularization to decrease the distribution discrepancy between the input data and the extracted features.
Moreover, an ensemble model for emotion categorization is designed that combines a one-versus-rest support vector machine (SVM), random forest (RF), and Long Short-Term Memory (LSTM) networks, exploiting each classifier's strengths to enhance classification accuracy. The performance of the PCAE model is evaluated using diverse measures, attaining outstanding results including an accuracy of 98.87%, a precision of 98.69%, and a Kappa coefficient of 0.983. This experimental validation shows that the proposed PCAE framework makes a significant contribution to accurate emotion recognition and classification systems.
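The MMD regularizer mentioned above has a standard closed-form estimator; the sketch below is a generic Gaussian-kernel version in plain Python (the kernel choice and bandwidth `sigma` are assumptions for illustration, not details from the paper):

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    # RBF kernel between two feature vectors
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq / (2 * sigma ** 2))

def mmd_squared(X, Y, sigma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between
    two samples X and Y, each a list of feature vectors."""
    m, n = len(X), len(Y)
    k_xx = sum(gaussian_kernel(a, b, sigma) for a in X for b in X) / (m * m)
    k_yy = sum(gaussian_kernel(a, b, sigma) for a in Y for b in Y) / (n * n)
    k_xy = sum(gaussian_kernel(a, b, sigma) for a in X for b in Y) / (m * n)
    return k_xx + k_yy - 2 * k_xy
```

Identical sets give an MMD of exactly zero, and the value grows as the two feature distributions drift apart, which is what such a regularizer penalizes during training.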
Affiliation(s)
- R. Mathumitha
- Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, Tamil Nadu, India
- A. Maryposonia
- Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, Tamil Nadu, India
2
Fu B, Chu W, Gu C, Liu Y. Cross-Modal Guiding Neural Network for Multimodal Emotion Recognition From EEG and Eye Movement Signals. IEEE J Biomed Health Inform 2024; 28:5865-5876. [PMID: 38917288] [DOI: 10.1109/jbhi.2024.3419043]
Abstract
Multimodal emotion recognition research is gaining attention because of the emerging trend of integrating information from different sensory modalities to improve performance. Electroencephalogram (EEG) signals are considered objective indicators of emotions and provide precise insights despite their complex data collection. In contrast, eye movement signals are more susceptible to environmental and individual differences but offer convenient data collection. Conventional emotion recognition methods typically use separate models for different modalities, potentially overlooking their inherent connections. This study introduces a cross-modal guiding neural network designed to fully leverage the strengths of both modalities. The network includes a dual-branch feature extraction module that simultaneously extracts features from EEG and eye movement signals. In addition, the network includes a feature guidance module that uses EEG features to direct eye movement feature extraction, reducing the impact of subjective factors. This study also introduces a feature reweighting module to explore emotion-related features within eye movement signals, thereby improving emotion classification accuracy. Empirical findings on both the SEED-IV dataset and our collected dataset substantiate the model's strong performance, confirming its efficacy.
3
Zhang R, Rong R, Xu Y, Wang H, Wang X. OxcarNet: sinc convolutional network with temporal and channel attention for prediction of oxcarbazepine monotherapy responses in patients with newly diagnosed epilepsy. J Neural Eng 2024; 21:056019. [PMID: 39250934] [DOI: 10.1088/1741-2552/ad788c]
Abstract
Objective. Monotherapy with antiepileptic drugs (AEDs) is the preferred strategy for the initial treatment of epilepsy. However, an inadequate response to the initially prescribed AED is a significant indicator of a poor long-term prognosis, emphasizing the importance of precisely predicting treatment outcomes with the initial AED regimen in patients with epilepsy. Approach. We introduce OxcarNet, an end-to-end neural network framework developed to predict treatment outcomes in patients undergoing oxcarbazepine monotherapy. The proposed predictive model adopts a Sinc Module in its initial layers for adaptive identification of discriminative frequency bands. The derived feature maps are then processed through a Spatial Module, which characterizes the scalp distribution patterns of the electroencephalography (EEG) signals. Subsequently, these features are fed into an attention-enhanced Temporal Module to capture temporal dynamics and discrepancies. A Channel Module with an attention mechanism is employed to reveal inter-channel dependencies within the output of the Temporal Module, ultimately yielding the response prediction. OxcarNet was rigorously evaluated on a proprietary dataset of retrospectively collected EEG data from newly diagnosed epilepsy patients at Nanjing Drum Tower Hospital, including patients who underwent long-term EEG monitoring in a clinical inpatient setting. Main results. OxcarNet demonstrated exceptional accuracy in predicting treatment outcomes for patients undergoing oxcarbazepine monotherapy. In ten-fold cross-validation the model achieved an accuracy of 97.27%, and on unseen patient data it maintained an accuracy of 89.17%, outperforming six conventional machine learning methods and three generic neural decoding networks. These findings underscore the model's effectiveness in accurately predicting treatment responses in patients with newly diagnosed epilepsy. The analysis of features extracted by the Sinc filters revealed a predominant concentration of predictive frequencies in the high-frequency range of the gamma band. Significance. Our findings offer substantial support and new insights for tailoring early AED selection, improving the accuracy of AED response prediction.
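A Sinc Module of this kind learns only a low and a high cutoff per filter; the band-pass kernel itself is the difference of two windowed sinc low-pass filters. A minimal sketch of that parametrization follows (the filter length and Hamming window are generic SincNet-style assumptions, not the paper's settings):

```python
import math

def sinc(x):
    # normalized sinc: sin(pi x) / (pi x), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def sinc_bandpass(f_low, f_high, fs, length=101):
    """FIR band-pass kernel built as the difference of two Hamming-windowed
    sinc low-pass filters; a Sinc layer would learn only f_low and f_high."""
    assert 0 < f_low < f_high < fs / 2
    half = length // 2
    kernel = []
    for n in range(-half, half + 1):
        lp_high = 2 * f_high / fs * sinc(2 * f_high / fs * n)  # low-pass at f_high
        lp_low = 2 * f_low / fs * sinc(2 * f_low / fs * n)     # low-pass at f_low
        window = 0.54 + 0.46 * math.cos(2 * math.pi * n / (length - 1))  # Hamming
        kernel.append((lp_high - lp_low) * window)
    return kernel
```

Because the kernel is a band-pass, it is symmetric and its coefficients sum to approximately zero (no DC response), which is a quick sanity check on any learned cutoff pair.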
Affiliation(s)
- Runkai Zhang
- Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, Jiangsu, People's Republic of China
- Rong Rong
- Department of Neurology, Nanjing Drum Tower Hospital, Nanjing 210008, Jiangsu, People's Republic of China
- Yun Xu
- Department of Neurology, Nanjing Drum Tower Hospital, Nanjing 210008, Jiangsu, People's Republic of China
- Haixian Wang
- Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, Jiangsu, People's Republic of China
- Xiaoyun Wang
- Department of Neurology, Nanjing Drum Tower Hospital, Nanjing 210008, Jiangsu, People's Republic of China
4
Luo H, Zhao X, Zhou T, Wang Z, Xu T, Hu H. EEG Emotion Recognition Based on 3D-CTransNet. Annu Int Conf IEEE Eng Med Biol Soc 2024; 2024:1-4. [PMID: 40031451] [DOI: 10.1109/embc53108.2024.10782401]
Abstract
Emotion recognition is of great significance for brain-computer interfaces and affective computing, and EEG plays a key role in this field. However, current deep learning models for brain-computer interfaces face algorithmic or structural constraints, and it is difficult for them to recognize the complex features of EEG signals with long-term dynamic changes. To solve this issue, this paper proposes a hybrid CNN-Transformer structure with 3D data input, named 3D-CTransNet, which addresses the performance degradation of the traditional CNN-LSTM hybrid structure on long sequence signals. At the same time, the self-attention mechanism and parallelism introduced by the Transformer improve recognition accuracy and processing speed. In addition, the 3D data feature map based on electrode position mapping effectively retains the spatial characteristics of EEG signals, allowing the CNN to better combine the time and spatial domains. Finally, valence-arousal emotion classification is trained on the public DEAP dataset, achieving a classification accuracy of 97.04%, about 5% higher than that of the hybrid CNN-LSTM model.
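The electrode-position mapping that builds the 3D (grid x time) input can be sketched as follows; the grid coordinates below are an illustrative 10-20 subset, not the paper's actual layout:

```python
# Illustrative electrode -> (row, col) map on a 5x5 scalp grid (assumed layout)
CHANNEL_GRID = {
    "Fp1": (0, 1), "Fp2": (0, 3),
    "F3":  (1, 1), "Fz": (1, 2), "F4": (1, 3),
    "C3":  (2, 1), "Cz": (2, 2), "C4": (2, 3),
    "P3":  (3, 1), "Pz": (3, 2), "P4": (3, 3),
    "O1":  (4, 1), "O2": (4, 3),
}

def to_spatial_frames(samples, rows=5, cols=5):
    """Re-arrange per-channel samples {name: [t0, t1, ...]} into a list of
    2D frames (one per time step) so a CNN sees electrode topology."""
    n_t = len(next(iter(samples.values())))
    frames = []
    for t in range(n_t):
        frame = [[0.0] * cols for _ in range(rows)]  # unused grid cells stay zero
        for name, series in samples.items():
            r, c = CHANNEL_GRID[name]
            frame[r][c] = series[t]
        frames.append(frame)
    return frames
```

Stacking the frames along time yields the 3D tensor that the CNN front end of such a hybrid model consumes.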
5
Rakhmatulin I, Dao MS, Nassibi A, Mandic D. Exploring Convolutional Neural Network Architectures for EEG Feature Extraction. Sensors (Basel) 2024; 24:877. [PMID: 38339594] [PMCID: PMC10856895] [DOI: 10.3390/s24030877]
Abstract
The main purpose of this paper is to provide information on how to create a convolutional neural network (CNN) for extracting features from EEG signals. Our task was to understand the primary aspects of creating and fine-tuning CNNs for various application scenarios. We consider the characteristics of EEG signals, coupled with an exploration of various signal processing and data preparation techniques, including noise reduction, filtering, encoding, decoding, and dimension reduction, among others. In addition, we conduct an in-depth analysis of well-known CNN architectures, categorizing them into four distinct groups: standard implementation, recurrent convolutional, decoder architecture, and combined architecture. The paper further offers a comprehensive evaluation of these architectures, covering accuracy metrics and hyperparameters, and includes an appendix with a table outlining the parameters of commonly used CNN architectures for feature extraction from EEG signals.
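Two of the data preparation steps the survey lists, noise reduction and dimension reduction, can be illustrated with a minimal smoothing-plus-decimation sketch (the window width and decimation factor here are arbitrary choices for illustration, not recommendations from the paper):

```python
def moving_average(signal, width=5):
    """Simple FIR smoothing as a stand-in for the noise-reduction step."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def downsample(signal, factor):
    """Keep every `factor`-th sample, a crude dimension-reduction step;
    real pipelines low-pass filter first to avoid aliasing."""
    return signal[::factor]
```

In practice these steps would precede the CNN, shrinking the input length while suppressing high-frequency noise.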
Affiliation(s)
- Ildar Rakhmatulin
- Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, UK
- Minh-Son Dao
- National Institute of Information and Communications Technology (NICT), Tokyo 184-0015, Japan
- Amir Nassibi
- Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, UK
- Danilo Mandic
- Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, UK
6
Jafari M, Shoeibi A, Khodatars M, Bagherzadeh S, Shalbaf A, García DL, Gorriz JM, Acharya UR. Emotion recognition in EEG signals using deep learning methods: A review. Comput Biol Med 2023; 165:107450. [PMID: 37708717] [DOI: 10.1016/j.compbiomed.2023.107450]
Abstract
Emotions are a critical aspect of daily life and serve a crucial role in human decision-making, planning, reasoning, and other mental states. As a result, they are considered a significant factor in human interactions. Human emotions can be identified through various sources, such as facial expressions, speech, behavior (gesture/position), or physiological signals. The use of physiological signals can enhance the objectivity and reliability of emotion detection. Compared with peripheral physiological signals, electroencephalogram (EEG) recordings are directly generated by the central nervous system and are closely related to human emotions. EEG signals have high temporal resolution, which facilitates the evaluation of brain function and makes them a popular modality in emotion recognition studies. Emotion recognition using EEG signals presents several challenges, including signal variability due to electrode positioning, individual differences in signal morphology, and the lack of a universal standard for EEG signal processing. Moreover, identifying the appropriate features for emotion recognition from EEG data requires further research. Finally, there is a need to develop more robust artificial intelligence (AI) methods, including conventional machine learning (ML) and deep learning (DL), to handle the complex and diverse EEG signals associated with emotional states. This paper examines the application of DL techniques to emotion recognition from EEG signals and provides a detailed discussion of relevant articles. It explores the significant challenges in emotion recognition using EEG signals, highlights the potential of DL techniques in addressing these challenges, and suggests the scope for future research. The paper concludes with a summary of its findings.
Affiliation(s)
- Mahboobeh Jafari
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Afshin Shoeibi
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Marjane Khodatars
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Sara Bagherzadeh
- Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Ahmad Shalbaf
- Department of Biomedical Engineering and Medical Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- David López García
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Juan M Gorriz
- Data Science and Computational Intelligence Institute, University of Granada, Spain; Department of Psychiatry, University of Cambridge, UK
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia
7
Ouyang G, Zhou C. Exploiting Information in Event-Related Brain Potentials from Average Temporal Waveform, Time-Frequency Representation, and Phase Dynamics. Bioengineering (Basel) 2023; 10:1054. [PMID: 37760156] [PMCID: PMC10525145] [DOI: 10.3390/bioengineering10091054]
Abstract
Characterizing the brain's dynamic pattern of response to an input in electroencephalography (EEG) is not a trivial task due to the entanglement of complex spontaneous brain activity. In this context, the brain's response can be defined as (1) the additional neural activity components generated after the input or (2) the changes in ongoing spontaneous activity induced by the input. Moreover, the response can be manifested in multiple features. Three commonly studied features are (1) the transient temporal waveform, (2) the time-frequency representation, and (3) phase dynamics. The most extensively used method, averaging event-related potentials (ERPs), captures the first, while the latter two and other more complex features are attracting increasing attention. However, there has not been much work providing systematic illustration and guidance on how to effectively exploit multifaceted features in neural cognitive research. Based on a visual oddball ERP dataset with 200 participants, this work demonstrates how the information from the above-mentioned features is complementary and how the features can be integrated based on stereotypical neural-network-based machine learning approaches to better exploit neural dynamic information in basic and applied cognitive research.
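The three features discussed, average waveform, time-frequency power, and phase, can be illustrated minimally in plain Python; the single-frequency projection below is a simplified stand-in for a full time-frequency and phase analysis, not the authors' pipeline:

```python
import math

def average_erp(trials):
    """Average temporal waveform across trials (the classic ERP feature)."""
    n = len(trials)
    return [sum(tr[i] for tr in trials) / n for i in range(len(trials[0]))]

def band_power_and_phase(signal, freq, fs):
    """Power and phase at one frequency, via projection onto a complex
    exponential (one DFT bin); a minimal stand-in for a time-frequency map."""
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    im = -sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    n = len(signal)
    return (re / n) ** 2 + (im / n) ** 2, math.atan2(im, re)
```

For a unit-amplitude cosine at the probed frequency, the projection recovers a power of A^2/4 = 0.25 and a phase of zero, which is the kind of per-trial quantity that phase-dynamics analyses then aggregate.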
Affiliation(s)
- Guang Ouyang
- Faculty of Education, The University of Hong Kong, Hong Kong
- Changsong Zhou
- Department of Physics, Centre for Nonlinear Studies, The Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong
8
Khajeh Hosseini MS, Pourmir Firoozabadi M, Badie K, Azad Fallah P. Electroencephalograph Emotion Classification Using a Novel Adaptive Ensemble Classifier Considering Personality Traits. Basic Clin Neurosci 2023; 14:687-700. [PMID: 38628840] [PMCID: PMC11016883] [DOI: 10.32598/bcn.2022.3830.2]
Abstract
Introduction: The study explores the use of electroencephalograph (EEG) signals as a means to uncover various states of the human brain, with a specific focus on emotion classification. Despite the potential of EEG signals in this domain, existing methods face challenges. Features extracted from EEG signals may not accurately represent an individual's emotional patterns due to interference from time-varying factors and noise. Additionally, higher-level cognitive factors, such as personality, mood, and past experiences, further complicate emotion recognition. The dynamic nature of EEG time series introduces variability in feature distribution and interclass discrimination across different time stages. Methods: To address these challenges, the paper proposes a novel adaptive ensemble classification method. The study introduces a new method for providing emotional stimuli, categorizing them into three groups (sadness, neutral, and happiness) based on their valence-arousal (VA) scores. The experiment involved 60 participants aged 19-30 years, and the proposed method aimed to mitigate the limitations associated with conventional classifiers. Results: The results demonstrate a significant improvement in the performance of emotion classifiers compared with conventional methods; the proposed adaptive ensemble classification method achieves a classification accuracy of 87.96%. This suggests a promising advancement in the ability to accurately classify emotions using EEG signals, overcoming the limitations outlined in the introduction. Conclusion: The paper introduces an innovative approach to emotion classification based on EEG signals, addressing key challenges associated with existing methods. By employing a new adaptive ensemble classification method and refining the process of providing emotional stimuli, the study achieves a noteworthy improvement in classification accuracy. This advancement is crucial for enhancing our understanding of the complexities of emotion recognition through EEG signals, paving the way for more effective applications in fields such as neuroinformatics and affective computing.
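The adaptive ensemble itself is more elaborate than plain voting, but the baseline fusion step that any such ensemble builds on can be sketched as a majority vote over base-classifier labels (the classifier names in the usage are placeholders, not the paper's exact base learners):

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-classifier label lists by majority vote; ties go to the
    label first encountered in classifier order."""
    n_samples = len(predictions[0])
    fused = []
    for i in range(n_samples):
        votes = [p[i] for p in predictions]
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused
```

An adaptive variant would replace the uniform vote with per-classifier weights updated as the feature distribution drifts over time.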
Affiliation(s)
- Mohammad Saleh Khajeh Hosseini
- Department of Biomedical Engineering, Faculty of Medical Sciences and Technologies, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Kambiz Badie
- Department of Content & E-Services Research, Faculty of IT Research, University of Tehran, Tehran, Iran
- Iran Telecommunication Research Center (ITRC), Tehran, Iran
- Parviz Azad Fallah
- Department of Psychology, Faculty of Humanities, Tarbiat Modares University, Tehran, Iran
9
Hosseini MSK, Firoozabadi SM, Badie K, Azadfallah P. Personality-Based Emotion Recognition Using EEG Signals with a CNN-LSTM Network. Brain Sci 2023; 13:947. [PMID: 37371425] [PMCID: PMC10296308] [DOI: 10.3390/brainsci13060947]
Abstract
The accurate detection of emotions has significant implications in healthcare, psychology, and human-computer interaction. Integrating personality information into emotion recognition can enhance its utility in various applications. The present study introduces a novel deep learning approach to emotion recognition that utilizes electroencephalography (EEG) signals and the Big Five personality traits. The study recruited 60 participants and recorded their EEG data while they viewed unique sequence stimuli designed to effectively capture the dynamic nature of human emotions and personality traits. A pre-trained convolutional neural network (CNN) was used to extract emotion-related features from the raw EEG data, and a long short-term memory (LSTM) network was used to extract features related to the Big Five personality traits; the network was able to accurately predict personality traits from EEG data. The extracted features were subsequently used in a novel network to predict emotional states within the arousal and valence dimensions. The experimental results showed that the proposed classifier outperformed common classifiers, with a high accuracy of 93.97%. The findings suggest that incorporating personality traits as features in the designed emotion-recognition network leads to higher accuracy, highlighting the significance of examining these traits in the analysis of emotions.
Affiliation(s)
- Seyed Mohammad Firoozabadi
- Department of Medical Physics, Faculty of Medicine, Tarbiat Modares University, Tehran 14117-13116, Iran
- Kambiz Badie
- Content & E-Services Research Group, IT Research Faculty, ICT Research Institute, Tehran 14399-55471, Iran
- Parviz Azadfallah
- Department of Psychology, Faculty of Humanities, Tarbiat Modares University, Tehran 14117-13116, Iran
10
Wang X, Ren Y, Luo Z, He W, Hong J, Huang Y. Deep learning-based EEG emotion recognition: Current trends and future perspectives. Front Psychol 2023; 14:1126994. [PMID: 36923142] [PMCID: PMC10009917] [DOI: 10.3389/fpsyg.2023.1126994]
Abstract
Automatic electroencephalogram (EEG) emotion recognition is a challenging component of human-computer interaction (HCI). Inspired by the powerful feature learning ability of recently emerged deep learning techniques, various advanced deep learning models have been employed increasingly to learn high-level feature representations for EEG emotion recognition. This paper aims to provide an up-to-date and comprehensive survey of EEG emotion recognition, especially of the various deep learning techniques in this area. We provide the preliminaries and basic knowledge in the literature and briefly review EEG emotion recognition benchmark data sets. We review deep learning techniques in detail, including deep belief networks, convolutional neural networks, and recurrent neural networks, and describe the state-of-the-art applications of deep learning techniques for EEG emotion recognition. Finally, we analyze the challenges and opportunities in this field and point out its future directions.
Affiliation(s)
- Xiaohu Wang
- School of Intelligent Manufacturing and Mechanical Engineering, Hunan Institute of Technology, Hengyang, China
- Yongmei Ren
- School of Electrical and Information Engineering, Hunan Institute of Technology, Hengyang, China
- Ze Luo
- School of Intelligent Manufacturing and Mechanical Engineering, Hunan Institute of Technology, Hengyang, China
- Wei He
- School of Electrical and Information Engineering, Hunan Institute of Technology, Hengyang, China
- Jun Hong
- School of Intelligent Manufacturing and Mechanical Engineering, Hunan Institute of Technology, Hengyang, China
- Yinzhen Huang
- School of Computer and Information Engineering, Hunan Institute of Technology, Hengyang, China
11
EEG Channel Selection Techniques in Motor Imagery Applications: A Review and New Perspectives. Bioengineering (Basel) 2022; 9:726. [PMID: 36550932] [PMCID: PMC9774545] [DOI: 10.3390/bioengineering9120726]
Abstract
Communication, neuro-prosthetics, and environmental control are just a few of the applications in which disabled persons use robots and manipulators driven by brain-computer interface (BCI) systems. The brain's motor imagery (MI) signal is an essential input for brain-related tasks in BCI applications. Due to their noninvasiveness, portability, and cost-effectiveness, electroencephalography (EEG) signals are the most widely used input in BCI systems. EEG data are often collected from more than 100 different locations on the scalp; channel selection techniques are therefore critical for selecting the optimum channels for a given application. When analyzing EEG data, the principal purposes of channel selection are to reduce computational complexity, improve classification accuracy by avoiding overfitting, and reduce setup time. Several channel selection assessment algorithms, both with and without classification-based methods, extract appropriate channel subsets using defined criteria. Based on an exhaustive analysis of EEG channel selection, this manuscript therefore analyses several existing studies that reduce the number of noisy channels and improve system performance. We review existing works to find the most promising MI-based EEG channel selection algorithms and associated classification methodologies on various datasets, focusing on methods that choose fewer channels with high precision. Our main finding is that a smaller channel set, typically 10-30% of the total channels, provides excellent performance compared with other existing studies.
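A classification-free channel selection criterion of the kind surveyed here can be sketched with a per-channel Fisher score; the scoring function and the 30% keep ratio below are illustrative assumptions, not a specific method from the review:

```python
def fisher_score(values_a, values_b):
    """Per-channel Fisher discriminant score between two classes:
    squared mean difference over summed within-class variance."""
    def mean(v):
        return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    denom = var(values_a) + var(values_b)
    return (mean(values_a) - mean(values_b)) ** 2 / denom if denom else 0.0

def select_channels(feats_a, feats_b, keep_ratio=0.3):
    """Keep the top fraction of channels by Fisher score, mirroring the
    10-30% subsets the review found sufficient.
    feats_a / feats_b map channel name -> per-trial feature values."""
    scored = sorted(feats_a,
                    key=lambda ch: fisher_score(feats_a[ch], feats_b[ch]),
                    reverse=True)
    k = max(1, round(len(scored) * keep_ratio))
    return scored[:k]
```

Channels whose class means separate cleanly relative to their variance rank first, and the rest are dropped before classifier training.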