1. Wang G, Wang X, Cheng H, Li H, Qin Z, Zheng F, Ye X, Sun B. Application of electroencephalogram (EEG) in the study of the influence of different contents of alcohol and Baijiu on brain perception. Food Chem 2025; 462:140969. PMID: 39197245. DOI: 10.1016/j.foodchem.2024.140969.
Abstract
The flavour of alcoholic beverages is complex and varies with alcohol content, and applying flavour-perception measurements could improve the objectivity of flavour evaluation. This study used electroencephalography (EEG) to assess brain reactions to alcohol concentrations (5%-53%) and to Baijiu's complex flavours. The findings demonstrate the brain's proficiency in discerning between alcohol concentrations, evidenced by physiological signal strength increasing in tandem with alcohol content. When contrasted with alcohol solutions of equivalent concentrations, Baijiu prompts a more significant activation of brain signals, underscoring EEG's capability to detect subtleties due to flavour complexity. Additionally, the study reveals notable correlations, with δ and α wave intensities escalating in response to alcohol stimulation, coupled with substantial activation in the frontal, parietal, and right temporal regions. These insights verify the efficacy of EEG in charting the brain's engagement with alcoholic flavours, setting the stage for more detailed exploration into the neural encoding of these sensory experiences.
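The reported δ- and α-band effects amount to band-limited power estimates per channel. As a loose illustration (not the authors' pipeline), relative band power could be computed with SciPy roughly as follows; the sampling rate, channel count, and segment length are arbitrary stand-ins.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Average power of one EEG channel within a frequency band (Welch PSD)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return np.trapz(psd[mask], freqs[mask])

# Illustrative data: 32 channels, 10 s at 256 Hz (random stand-in for a tasting trial).
fs = 256
eeg = np.random.randn(32, 10 * fs)

bands = {"delta": (1, 4), "alpha": (8, 13)}
powers = {name: np.array([band_power(ch, fs, rng) for ch in eeg])
          for name, rng in bands.items()}
print({k: v.mean() for k, v in powers.items()})
```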
Affiliation(s)
- Guangnan Wang: College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Zhejiang University, Hangzhou 310058, China; Key Laboratory of Brewing Molecular Engineering of China Light Industry, Beijing Technology and Business University, Beijing 100048, China; Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing 314102, China
- Xiaolei Wang: College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Zhejiang University, Hangzhou 310058, China; Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing 314102, China
- Huan Cheng: College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Zhejiang University, Hangzhou 310058, China; Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing 314102, China
- Hehe Li: Key Laboratory of Brewing Molecular Engineering of China Light Industry, Beijing Technology and Business University, Beijing 100048, China
- Zihan Qin: College of Food Science and Biotechnology, Zhejiang Gongshang University, Hangzhou 310018, China
- Fuping Zheng: Key Laboratory of Brewing Molecular Engineering of China Light Industry, Beijing Technology and Business University, Beijing 100048, China
- Xingqian Ye: College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Zhejiang University, Hangzhou 310058, China
- Baoguo Sun: Key Laboratory of Brewing Molecular Engineering of China Light Industry, Beijing Technology and Business University, Beijing 100048, China
2. Cheng Z, Bu X, Wang Q, Yang T, Tu J. EEG-based emotion recognition using multi-scale dynamic CNN and gated transformer. Sci Rep 2024; 14:31319. PMID: 39733023. DOI: 10.1038/s41598-024-82705-z.
Abstract
Emotions play a crucial role in human thoughts, cognitive processes, and decision-making. EEG has become a widely utilized tool in emotion recognition due to its high temporal resolution, real-time monitoring capabilities, portability, and cost-effectiveness. In this paper, we propose a novel end-to-end emotion recognition method from EEG signals, called MSDCGTNet, which is based on a Multi-Scale Dynamic 1D CNN and a Gated Transformer. First, the Multi-Scale Dynamic CNN extracts complex spatial and spectral features from raw EEG signals, which not only avoids information loss but also reduces the computational cost associated with time-frequency conversion of the signals. Second, the Gated Transformer Encoder captures global dependencies of the EEG signals; this encoder focuses on specific regions of the input sequence while reducing computational resources through parallel processing with improved multi-head self-attention mechanisms. Third, a Temporal Convolution Network extracts temporal features from the EEG signals. Finally, the extracted abstract features are fed into a classification module for emotion recognition. The proposed method was evaluated on three publicly available datasets: DEAP, SEED, and SEED_IV. Experimental results demonstrate the high accuracy and efficiency of the proposed method for emotion recognition. The approach proves to be robust and suitable for various practical applications. By addressing challenges posed by existing methods, it provides a valuable and effective solution for the field of brain-computer interfaces (BCIs).
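To make the general idea concrete, here is a minimal PyTorch sketch of parallel multi-scale temporal convolutions feeding a transformer encoder. It is not the authors' MSDCGTNet; layer sizes, kernel widths, and the mean-pooling head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleCNNTransformer(nn.Module):
    """Toy multi-scale 1D CNN followed by a transformer encoder (illustrative only)."""
    def __init__(self, n_channels=32, n_classes=3, d_model=96):
        super().__init__()
        # Three parallel temporal convolutions with different kernel sizes (multi-scale).
        self.branches = nn.ModuleList([
            nn.Conv1d(n_channels, d_model // 3, kernel_size=k, padding=k // 2)
            for k in (7, 15, 31)
        ])
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                                         # x: (batch, channels, time)
        feats = torch.cat([b(x) for b in self.branches], dim=1)   # (batch, d_model, time)
        feats = self.encoder(feats.transpose(1, 2))               # attend over time steps
        return self.head(feats.mean(dim=1))                       # average over time, classify

model = MultiScaleCNNTransformer()
dummy = torch.randn(4, 32, 512)        # 4 trials, 32 channels, 512 samples (stand-in)
print(model(dummy).shape)              # torch.Size([4, 3])
```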
Affiliation(s)
- Zhuoling Cheng: School of Electronic Information and Electrical Engineering, Yangtze University, Jingzhou, 434100, Hubei, China
- Xuekui Bu: School of Electronic Information and Electrical Engineering, Yangtze University, Jingzhou, 434100, Hubei, China
- Qingnan Wang: School of Physics, Electronics and Intelligent Manufacturing, Huaihua University, Hunan, 418000, China
- Tao Yang: Department of Neurology, Jingzhou First People's Hospital, Jingzhou, 434000, Hubei, China
- Jihui Tu: School of Electronic Information and Electrical Engineering, Yangtze University, Jingzhou, 434100, Hubei, China
3. Yousefipour B, Rajabpour V, Abdoljabbari H, Sheykhivand S, Danishvar S. An Ensemble Deep Learning Approach for EEG-Based Emotion Recognition Using Multi-Class CSP. Biomimetics (Basel) 2024; 9:761. PMID: 39727765. DOI: 10.3390/biomimetics9120761.
Abstract
In recent years, significant advancements have been made in the field of brain-computer interfaces (BCIs), particularly in the area of emotion recognition using EEG signals. The majority of earlier research in this field has overlooked the spatial-temporal characteristics of EEG signals, which are critical for accurate emotion recognition. In this study, a novel approach is presented for classifying emotions into three categories (positive, negative, and neutral) using a custom-collected dataset. The dataset was collected specifically for this purpose from 16 participants and comprises EEG recordings corresponding to the three emotional states induced by musical stimuli. A multi-class Common Spatial Pattern (MCCSP) technique was employed for the processing stage of the EEG signals. These processed signals were then fed into an ensemble model comprising three autoencoders with Convolutional Neural Network (CNN) layers. A classification accuracy of 99.44 ± 0.39% for the three emotional classes was achieved by the proposed method. This performance surpasses previous studies, demonstrating the effectiveness of the approach. The high accuracy indicates that the method could be a promising candidate for future BCI applications, providing a reliable means of emotion detection.
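CSP itself reduces to a generalized eigenvalue problem on class-wise covariance matrices. The sketch below shows a plain two-class CSP with log-variance features, which a one-vs-rest loop could extend to the multi-class setting; it is a simplified stand-in, not the paper's MCCSP implementation, and the data shapes are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=4):
    """Two-class CSP spatial filters via a generalized eigenvalue problem.
    trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Eigenvectors of ca w.r.t. ca + cb; extreme eigenvalues give the most discriminative filters.
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    picks = np.r_[order[:n_filters // 2], order[-n_filters // 2:]]
    return vecs[:, picks].T                      # (n_filters, n_channels)

def csp_features(trials, filters):
    """Log-variance of spatially filtered trials, the usual CSP feature."""
    proj = np.einsum('fc,tcs->tfs', filters, trials)
    return np.log(proj.var(axis=2))

# Illustrative use with random stand-in data (one-vs-rest would repeat this per emotion class).
rng = np.random.default_rng(0)
a, b = rng.standard_normal((20, 14, 256)), rng.standard_normal((20, 14, 256))
W = csp_filters(a, b)
print(csp_features(a, W).shape)                  # (20, 4)
```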
Affiliation(s)
- Behzad Yousefipour: Department of Electrical Engineering, Sharif University of Technology, Tehran 51666-16471, Iran
- Vahid Rajabpour: Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz 51666-16471, Iran
- Hamidreza Abdoljabbari: School of Electrical Engineering, Iran University of Science and Technology (IUST), Tehran 51666-16471, Iran
- Sobhan Sheykhivand: Department of Biomedical Engineering, University of Bonab, Bonab 55517-61167, Iran
- Sebelan Danishvar: College of Engineering, Design and Physical Sciences, Brunel University London, Uxbridge UB8 3PH, UK
4. Maza A, Goizueta S, Dolores Navarro M, Noé E, Ferri J, Naranjo V, Llorens R. EEG-based responses of patients with disorders of consciousness and healthy controls to familiar and non-familiar emotional videos. Clin Neurophysiol 2024; 168:104-120. PMID: 39486289. DOI: 10.1016/j.clinph.2024.10.010.
Abstract
Objective: To investigate the differences in the brain responses of healthy controls (HC) and patients with disorders of consciousness (DOC) to familiar and non-familiar audiovisual stimuli, and their consistency with clinical progress.
Methods: EEG responses of 19 HC and 19 patients with DOC were recorded while watching emotionally-valenced familiar and non-familiar videos. Differential entropy of the EEG recordings was used to train machine learning models aimed at distinguishing brain responses to stimulus type. The consistency of brain responses with the clinical progress of the patients was also evaluated.
Results: Models trained using data from HC outperformed those for patients. However, the performance of the models for patients was not influenced by their clinical condition. The models were successfully trained for over 75% of participants, regardless of their clinical condition. More than 75% of patients whose CRS-R scores increased post-study displayed distinguishable brain responses to both stimuli.
Conclusions: Responses to emotionally-valenced stimuli enabled classifiers that were sensitive to the familiarity of the stimuli, regardless of the clinical condition of the participants, and were consistent with their clinical progress in most cases.
Significance: EEG responses are sensitive to the familiarity of emotionally-valenced stimuli in HC and patients with DOC.
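The differential-entropy features used here have a closed form for (approximately) Gaussian band-filtered signals: DE = ½ ln(2πeσ²). A rough sketch of band-wise DE extraction, with arbitrary band edges, filter order, and stand-in data, might look like this:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def differential_entropy(x):
    """DE of a (roughly Gaussian) signal segment: 0.5 * ln(2*pi*e*var)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def band_de(eeg, fs, bands):
    """Band-wise differential entropy per channel; eeg has shape (channels, samples)."""
    feats = []
    for lo, hi in bands:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype='band')
        filtered = filtfilt(b, a, eeg, axis=1)
        feats.append([differential_entropy(ch) for ch in filtered])
    return np.array(feats).T                     # (channels, n_bands)

fs = 250
eeg = np.random.randn(19, 30 * fs)               # 19 channels, 30 s (stand-in data)
bands = [(4, 8), (8, 13), (13, 30), (30, 45)]    # theta, alpha, beta, gamma
X = band_de(eeg, fs, bands)
print(X.shape)                                   # (19, 4)
```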
Affiliation(s)
- Anny Maza: Institute for Human-Centered Technology Research, Universitat Politècnica de València, Camino de Vera s/n, Valencia 46011, Spain
- Sandra Goizueta: Institute for Human-Centered Technology Research, Universitat Politècnica de València, Camino de Vera s/n, Valencia 46011, Spain
- María Dolores Navarro: IRENEA, Instituto de Rehabilitación Neurológica, Fundación Hospitales Vithas, València, Spain
- Enrique Noé: IRENEA, Instituto de Rehabilitación Neurológica, Fundación Hospitales Vithas, València, Spain
- Joan Ferri: IRENEA, Instituto de Rehabilitación Neurológica, Fundación Hospitales Vithas, València, Spain
- Valery Naranjo: Institute for Human-Centered Technology Research, Universitat Politècnica de València, Camino de Vera s/n, Valencia 46011, Spain
- Roberto Llorens: Institute for Human-Centered Technology Research, Universitat Politècnica de València, Camino de Vera s/n, Valencia 46011, Spain
5. Qiu L, Zhong L, Li J, Feng W, Zhou C, Pan J. SFT-SGAT: A semi-supervised fine-tuning self-supervised graph attention network for emotion recognition and consciousness detection. Neural Netw 2024; 180:106643. PMID: 39186838. DOI: 10.1016/j.neunet.2024.106643.
Abstract
Emotion recognition is highly important in the field of brain-computer interfaces (BCIs). However, due to the individual variability in electroencephalogram (EEG) signals and the challenges in obtaining accurate emotional labels, traditional methods have shown poor performance in cross-subject emotion recognition. In this study, we propose a cross-subject EEG emotion recognition method based on a semi-supervised fine-tuning self-supervised graph attention network (SFT-SGAT). First, we model multi-channel EEG signals by constructing a graph structure that dynamically captures the spatiotemporal topological features of EEG signals. Second, we employ a self-supervised graph attention neural network to facilitate model training, mitigating the impact of signal noise on the model. Finally, a semi-supervised approach is used to fine-tune the model, enhancing its generalization ability in cross-subject classification. By combining supervised and unsupervised learning techniques, SFT-SGAT maximizes the utility of limited labeled data in EEG emotion recognition tasks, thereby enhancing the model's performance. Experiments based on leave-one-subject-out cross-validation demonstrate that SFT-SGAT achieves state-of-the-art cross-subject emotion recognition performance on the SEED and SEED-IV datasets, with accuracies of 92.04% and 82.76%, respectively. Furthermore, experiments conducted on a self-collected dataset comprising ten healthy subjects and eight patients with disorders of consciousness (DOCs) revealed that SFT-SGAT attains high classification performance in healthy subjects (maximum accuracy of 95.84%) and was successfully applied to DOC patients, with four patients achieving emotion recognition accuracies exceeding 60%. The experiments demonstrate the effectiveness of the proposed SFT-SGAT model in cross-subject EEG emotion recognition and its potential for assessing levels of consciousness in patients with DOC.
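As background for the graph-based modelling, the sketch below builds a channel graph from correlated band features and passes it through a single simplified graph-attention layer (Velickovic-style). It illustrates only the building block; the paper's SFT-SGAT, its dynamic spatiotemporal graph construction, and the semi-supervised fine-tuning are not reproduced, and the threshold and sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single-head graph attention layer, simplified for an EEG channel graph."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):            # x: (nodes, in_dim), adj: (nodes, nodes) 0/1 mask
        h = self.W(x)                      # (nodes, out_dim)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1), 0.2)
        e = e.masked_fill(adj == 0, float('-inf'))
        alpha = torch.softmax(e, dim=-1)   # attention over each node's neighbours
        return F.elu(alpha @ h)

# EEG channel graph: nodes = channels, features = band powers, edges = correlated pairs.
feats = torch.randn(62, 5)                          # 62 channels x 5 band features (stand-in)
adj = (torch.corrcoef(feats).abs() > 0.3).float()   # threshold correlation to get adjacency
out = GraphAttentionLayer(5, 16)(feats, adj)
print(out.shape)                                    # torch.Size([62, 16])
```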
Affiliation(s)
- Lina Qiu: School of Artificial Intelligence, South China Normal University, Guangzhou, 510630, China; Research Station in Mathematics, South China Normal University, Guangzhou, 510630, China
- Liangquan Zhong: School of Artificial Intelligence, South China Normal University, Guangzhou, 510630, China
- Jianping Li: School of Artificial Intelligence, South China Normal University, Guangzhou, 510630, China
- Weisen Feng: School of Artificial Intelligence, South China Normal University, Guangzhou, 510630, China
- Chengju Zhou: School of Artificial Intelligence, South China Normal University, Guangzhou, 510630, China
- Jiahui Pan: School of Artificial Intelligence, South China Normal University, Guangzhou, 510630, China
6. Liu M, Li T, Zhang X, Yang Y, Zhou Z, Fu T. IMH-Net: a convolutional neural network for end-to-end EEG motor imagery classification. Comput Methods Biomech Biomed Engin 2024; 27:2175-2188. PMID: 37936533. DOI: 10.1080/10255842.2023.2275244.
Abstract
As the main component of brain-computer interface (BCI) technology, classification algorithms based on EEG have developed rapidly. Previous algorithms were often based on subject-dependent settings, so the BCI had to be calibrated for new users. In this work, we propose IMH-Net, an end-to-end subject-independent model. The model first uses Inception blocks to extract the frequency-domain features of the data, then further compresses the feature vectors to extract spatial-domain features, and finally learns global information and performs classification through a Multi-Head Attention mechanism. On the OpenBMI dataset, IMH-Net obtained 73.90 ± 13.10% accuracy and a 73.09 ± 14.99% F1-score in a subject-independent manner, improving accuracy by 1.96% compared with the comparison model. On BCI Competition IV dataset 2a, the model also achieved the highest accuracy and F1-score in a subject-dependent manner. The proposed IMH-Net model can improve the accuracy of subject-independent motor imagery (MI) decoding, and the robustness of the algorithm is high, which gives it strong practical value in the field of BCI.
Affiliation(s)
- Menghao Liu: Mechanical College, Shanghai Dianji University, Shanghai, China
- Tingting Li: Department of Anesthesiology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Xu Zhang: Mechanical College, Shanghai Dianji University, Shanghai, China
- Yang Yang: Shanghai Lanhui Medical Technology Co., Ltd, Shanghai, China
- Zhiyong Zhou: Mechanical College, Shanghai Dianji University, Shanghai, China
- Tianhao Fu: Mechanical College, Shanghai Dianji University, Shanghai, China
7. Hua C, Chai L, Zhou Z, Tao J, Yan Y, Chen X, Liu J, Fu R. Detection of virtual reality motion sickness based on EEG using asymmetry of entropy and cross-frequency coupling. Physiol Behav 2024; 284:114626. PMID: 38964566. DOI: 10.1016/j.physbeh.2024.114626.
Abstract
The existence of Virtual Reality Motion Sickness (VRMS) is a key factor restricting the further development of the VR industry, and the premise for solving this problem is being able to detect its occurrence accurately and effectively. In view of the current lack of high-accuracy detection methods, this paper proposes a VRMS detection method based on the asymmetry of entropy and of cross-frequency coupling values in the EEG. First, the EEG from four selected pairs of electrodes over the two hemispheres is decomposed with Multivariate Variational Mode Decomposition (MVMD), and three types of entropy (approximate entropy, fuzzy entropy, and permutation entropy) are calculated on the low- and high-frequency components, together with three phase-amplitude coupling features between the low- and high-frequency components (mean value, standard deviation, and correlation coefficient). Second, the differences in the entropies and cross-frequency coupling features between the left and right electrodes are calculated. Finally, the final feature set is selected via t-test and fed into an SVM for classification, realizing automatic detection of VRMS. The results show that the three classification indices of this method, accuracy, sensitivity, and specificity, reach 99.5%, 99.3%, and 99.7%, respectively, and the area under the ROC curve reaches 1, demonstrating that this method can serve as an effective indicator for detecting the occurrence of VRMS.
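The coupling statistic at the core of this pipeline can be illustrated with the classic mean-vector-length (MVL) phase-amplitude coupling estimator and a left-right difference. This is a loose sketch, not the paper's MVMD-based features: band edges, electrode pairing, and data are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype='band')
    return filtfilt(b, a, x)

def pac_mvl(x, fs, low=(4, 8), high=(30, 45)):
    """Mean-vector-length phase-amplitude coupling between a low and a high band."""
    phase = np.angle(hilbert(bandpass(x, fs, *low)))
    amp = np.abs(hilbert(bandpass(x, fs, *high)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

fs = 250
left, right = np.random.randn(30 * fs), np.random.randn(30 * fs)   # e.g. T7 vs T8 (stand-in)
asymmetry = pac_mvl(left, fs) - pac_mvl(right, fs)                 # left-right coupling difference
print(asymmetry)
```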
Affiliation(s)
- Chengcheng Hua: School of Automation, C-IMER, CICAEET, Nanjing University of Information Science & Technology, Nanjing 210044, China
- Lining Chai: School of Automation, C-IMER, CICAEET, Nanjing University of Information Science & Technology, Nanjing 210044, China
- Zhanfeng Zhou: School of Automation, C-IMER, CICAEET, Nanjing University of Information Science & Technology, Nanjing 210044, China
- Jianlong Tao: School of Automation, C-IMER, CICAEET, Nanjing University of Information Science & Technology, Nanjing 210044, China
- Ying Yan: School of Automation, C-IMER, CICAEET, Nanjing University of Information Science & Technology, Nanjing 210044, China
- Xu Chen: School of Automation, C-IMER, CICAEET, Nanjing University of Information Science & Technology, Nanjing 210044, China
- Jia Liu: School of Automation, C-IMER, CICAEET, Nanjing University of Information Science & Technology, Nanjing 210044, China
- Rongrong Fu: Measurement Technology and Instrumentation Key Laboratory of Hebei Province, Department of Electrical Engineering, Yanshan University, Qinhuangdao 066000, China
8. Yu H, Xiong X, Zhou J, Qian R, Sha K. CATM: A Multi-Feature-Based Cross-Scale Attentional Convolutional EEG Emotion Recognition Model. Sensors (Basel) 2024; 24:4837. PMID: 39123882. PMCID: PMC11314657. DOI: 10.3390/s24154837.
Abstract
Existing emotion recognition methods fail to make full use of the information in the time, frequency, and spatial domains of EEG signals, which leads to low EEG emotion-classification accuracy. To address this, this paper proposes a multi-feature, multi-frequency-band cross-scale attention convolutional model (CATM). The model is mainly composed of a cross-scale attention module, a frequency-space attention module, a feature transition module, a temporal feature extraction module, and a depth classification module. First, the cross-scale attentional convolution module extracts spatial features at different scales from the preprocessed EEG signals; then the frequency-space attention module assigns higher weights to important channels and spatial locations; next, the temporal feature extraction module extracts temporal features of the EEG signals; and, finally, the depth classification module categorizes the EEG signals into emotions. We evaluated the proposed method on the DEAP dataset, with accuracies of 99.70% and 99.74% in the valence and arousal binary classification experiments, respectively; the accuracy in the valence-arousal four-class experiment was 97.27%. In addition, considering applications with fewer channels, we also conducted 5-channel experiments: the binary classification accuracies for valence and arousal were 97.96% and 98.11%, respectively, and the valence-arousal four-class accuracy was 92.86%. The experimental results show that the proposed method compares favourably with other recent methods and also performs well in few-channel experiments.
Affiliation(s)
- Jianhua Zhou (shared with co-authors H.Y., X.X., R.Q., and K.S.): Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
9. Wang Y, Chen CB, Imamura T, Tapia IE, Somers VK, Zee PC, Lim DC. A novel methodology for emotion recognition through 62-lead EEG signals: multilevel heterogeneous recurrence analysis. Front Physiol 2024; 15:1425582. PMID: 39119215. PMCID: PMC11306145. DOI: 10.3389/fphys.2024.1425582.
Abstract
Objective: Recognizing emotions from electroencephalography (EEG) signals is a challenging task due to the complex, nonlinear, and nonstationary characteristics of brain activity. Traditional methods often fail to capture these subtle dynamics, while deep learning approaches lack explainability. In this research, we introduce a novel three-phase methodology integrating manifold embedding, multilevel heterogeneous recurrence analysis (MHRA), and ensemble learning to address these limitations in EEG-based emotion recognition.
Approach: The proposed methodology was evaluated using the SJTU-SEED IV database. We first applied uniform manifold approximation and projection (UMAP) for manifold embedding of the 62-lead EEG signals into a lower-dimensional space. We then developed MHRA to characterize the complex recurrence dynamics of brain activity across multiple transition levels. Finally, we employed tree-based ensemble learning methods to classify four emotions (neutral, sad, fear, happy) based on the extracted MHRA features.
Main results: Our approach achieved high performance, with an accuracy of 0.7885 and an AUC of 0.7552, outperforming existing methods on the same dataset. Additionally, our methodology provided the most consistent recognition performance across different emotions. Sensitivity analysis revealed specific MHRA metrics that were strongly associated with each emotion, offering valuable insights into the underlying neural dynamics.
Significance: This study presents a novel framework for EEG-based emotion recognition that effectively captures the complex nonlinear and nonstationary dynamics of brain activity while maintaining explainability. The proposed methodology offers significant potential for advancing our understanding of emotional processing and developing more reliable emotion recognition systems with broad applications in healthcare and beyond.
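The first two phases can be pictured as dimensionality reduction followed by a recurrence statistic on the embedded trajectory. The sketch below, assuming the third-party umap-learn package and random stand-in data, computes only a plain recurrence rate; the paper's multilevel heterogeneous recurrence analysis is considerably richer.

```python
import numpy as np
import umap                                  # assumes the umap-learn package is installed
from scipy.spatial.distance import pdist, squareform

# Stand-in for one 62-lead EEG trial: (samples, channels).
X = np.random.randn(2000, 62)

# 1) Manifold embedding of the 62-dimensional samples into a low-dimensional space.
emb = umap.UMAP(n_components=3, random_state=0).fit_transform(X)

# 2) A very simple recurrence statistic on the embedded trajectory.
D = squareform(pdist(emb))
eps = np.quantile(D, 0.1)                    # distance threshold at the 10th percentile
recurrence_rate = (D < eps).mean()
print(recurrence_rate)
```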
Affiliation(s)
- Yujie Wang: Department of Industrial and Systems Engineering, University of Miami, Coral Gables, FL, United States
- Cheng-Bang Chen: Department of Industrial and Systems Engineering, University of Miami, Coral Gables, FL, United States
- Toshihiro Imamura: Division of Sleep Medicine, Department of Medicine, University of Pennsylvania, Philadelphia, PA, United States; Division of Pulmonary and Sleep Medicine, Children's Hospital of Philadelphia, Philadelphia, PA, United States
- Ignacio E. Tapia: Division of Pediatric Pulmonology, Miller School of Medicine, University of Miami, Miami, FL, United States
- Virend K. Somers: Department of Cardiovascular Medicine, Mayo Clinic, Rochester, MN, United States
- Phyllis C. Zee: Center for Circadian and Sleep Medicine, Department of Neurology, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States
- Diane C. Lim: Department of Medicine, Miami VA Medical Center, Miami, FL, United States; Department of Medicine, Miller School of Medicine, University of Miami, Miami, FL, United States
10. Mustapha A, Ishak I, Zaki NNM, Ismail-Fitry MR, Arshad S, Sazili AQ. Application of machine learning approach on halal meat authentication principle, challenges, and prospects: A review. Heliyon 2024; 10:e32189. PMID: 38975107. PMCID: PMC11225673. DOI: 10.1016/j.heliyon.2024.e32189.
Abstract
Meat is a source of essential amino acids necessary for human growth and development. Meat sold to consumers can come from dead or live animals and from Halal or non-Halal species, sometimes substituted intentionally for economic gain (adulteration). Sharia prohibits the consumption of pork by Muslims, and because of the activities of adulterators in recent times, consumers have become increasingly concerned about what they eat. In the past, several methods were employed for the authentication of Halal meat, but they suffer from numerous drawbacks such as a lack of flexibility, limited applicability, time consumption, and low accuracy and sensitivity. Machine Learning (ML) is the concept of learning through the development and application of algorithms from given data and making predictions or decisions without being explicitly programmed. Compared with traditional methods of Halal meat authentication, ML techniques are fast, flexible, scalable, automated, less expensive, and offer high accuracy and sensitivity. Some of the ML approaches used in Halal meat authentication have demonstrated a high percentage of accuracy, while other approaches show no evidence of Halal meat authentication for now. This paper critically highlights some of the principles, challenges, successes, and prospects of ML approaches in the authentication of Halal meat.
Affiliation(s)
- Abdul Mustapha: Halal Products Research Institute, Universiti Putra Malaysia, 43400, UPM Serdang, Selangor, Malaysia
- Iskandar Ishak: Halal Products Research Institute, Universiti Putra Malaysia, 43400, UPM Serdang, Selangor, Malaysia; Department of Computer Science, Faculty of Computer Science and Information Technology, Universiti Putra Malaysia, Serdang, 43400, Malaysia
- Nor Nadiha Mohd Zaki: Halal Products Research Institute, Universiti Putra Malaysia, 43400, UPM Serdang, Selangor, Malaysia; Department of Animal Science, Faculty of Agriculture, Universiti Putra Malaysia, 43400, UPM Serdang, Selangor, Malaysia
- Mohammad Rashedi Ismail-Fitry: Halal Products Research Institute, Universiti Putra Malaysia, 43400, UPM Serdang, Selangor, Malaysia; Department of Food Technology, Faculty of Food Science and Technology, Universiti Putra Malaysia, 43400, UPM Serdang, Selangor, Malaysia
- Syariena Arshad: Halal Products Research Institute, Universiti Putra Malaysia, 43400, UPM Serdang, Selangor, Malaysia
- Awis Qurni Sazili: Halal Products Research Institute, Universiti Putra Malaysia, 43400, UPM Serdang, Selangor, Malaysia; Department of Animal Science, Faculty of Agriculture, Universiti Putra Malaysia, 43400, UPM Serdang, Selangor, Malaysia
11. Hamzah HA, Abdalla KK. EEG-based emotion recognition systems; comprehensive study. Heliyon 2024; 10:e31485. PMID: 38818173. PMCID: PMC11137547. DOI: 10.1016/j.heliyon.2024.e31485.
Abstract
Emotion recognition through EEG signal analysis is currently a fundamental topic in artificial intelligence, with major practical implications in emotional health care, human-computer interaction, and other areas. This paper provides a comprehensive study of different methods for extracting electroencephalography (EEG) features for emotion recognition from four perspectives: time-domain features, frequency-domain features, time-frequency features, and nonlinear features. We summarize the pattern recognition methods adopted in most related works and, with the rapid development of deep learning (DL) attracting the attention of researchers in this field, pay particular attention to deep learning-based studies, analysing their characteristics, advantages, disadvantages, and applicable scenarios. Finally, the current challenges and future development directions in this field are summarized. This paper can help novice researchers gain a systematic understanding of the current status of EEG-based emotion recognition research and provide ideas for subsequent work.
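The four feature families the review covers can each be illustrated with a short estimator. The following sketch uses arbitrary stand-in data and band edges, with a basic permutation-entropy implementation standing in for the nonlinear family; it is only meant to make the taxonomy concrete.

```python
import numpy as np
from scipy.signal import welch, stft
from math import factorial

fs = 256
x = np.random.randn(10 * fs)                         # one EEG channel, 10 s (stand-in)

# Time domain: Hjorth mobility.
mobility = np.std(np.diff(x)) / np.std(x)

# Frequency domain: relative alpha power from the Welch PSD.
f, psd = welch(x, fs=fs, nperseg=2 * fs)
alpha_rel = psd[(f >= 8) & (f < 13)].sum() / psd[(f >= 1) & (f < 45)].sum()

# Time-frequency: mean short-time Fourier magnitude in the alpha band.
f_s, t_s, Z = stft(x, fs=fs, nperseg=fs)
alpha_tf = np.abs(Z[(f_s >= 8) & (f_s < 13)]).mean()

# Nonlinear: normalized permutation entropy.
def permutation_entropy(sig, order=3, delay=1):
    counts = {}
    for i in range(len(sig) - (order - 1) * delay):
        pattern = tuple(np.argsort(sig[i:i + order * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / np.log(factorial(order)))

print(mobility, alpha_rel, alpha_tf, permutation_entropy(x))
```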
Affiliation(s)
- Hussein Ali Hamzah: Electrical Engineering Department, College of Engineering, University of Babylon, Iraq
- Kasim K. Abdalla: Electrical Engineering Department, College of Engineering, University of Babylon, Iraq
12. Avola D, Cinque L, Mambro AD, Fagioli A, Marini MR, Pannone D, Fanini B, Foresti GL. Spatio-Temporal Image-Based Encoded Atlases for EEG Emotion Recognition. Int J Neural Syst 2024; 34:2450024. PMID: 38533631. DOI: 10.1142/s0129065724500242.
Abstract
Emotion recognition plays an essential role in human-human interaction since it is a key to understanding the emotional states and reactions of human beings when they are subject to events and engagements in everyday life. Moving towards human-computer interaction, the study of emotions becomes fundamental because it is at the basis of the design of advanced systems to support a broad spectrum of application areas, including forensic, rehabilitative, educational, and many others. An effective method for discriminating emotions is based on ElectroEncephaloGraphy (EEG) data analysis, which is used as input for classification systems. Collecting brain signals on several channels and for a wide range of emotions produces cumbersome datasets that are hard to manage, transmit, and use in varied applications. In this context, the paper introduces the Empátheia system, which explores a different EEG representation by encoding EEG signals into images prior to their classification. In particular, the proposed system extracts spatio-temporal image encodings, or atlases, from EEG data through the Processing and transfeR of Interaction States and Mappings through Image-based eNcoding (PRISMIN) framework, thus obtaining a compact representation of the input signals. The atlases are then classified through the Empátheia architecture, which comprises branches based on convolutional, recurrent, and transformer models designed and tuned to capture the spatial and temporal aspects of emotions. Extensive experiments were conducted on the Shanghai Jiao Tong University (SJTU) Emotion EEG Dataset (SEED) public dataset, where the proposed system significantly reduced its size while retaining high performance. The results obtained highlight the effectiveness of the proposed approach and suggest new avenues for data representation in emotion recognition from EEG signals.
Affiliation(s)
- Danilo Avola: Department of Computer Science, Sapienza University of Rome, Via Salaria 113, Rome 00198, Italy
- Luigi Cinque: Department of Computer Science, Sapienza University of Rome, Via Salaria 113, Rome 00198, Italy
- Angelo Di Mambro: Department of Computer Science, Sapienza University of Rome, Via Salaria 113, Rome 00198, Italy
- Alessio Fagioli: Department of Computer Science, Sapienza University of Rome, Via Salaria 113, Rome 00198, Italy
- Marco Raoul Marini: Department of Computer Science, Sapienza University of Rome, Via Salaria 113, Rome 00198, Italy
- Daniele Pannone: Department of Computer Science, Sapienza University of Rome, Via Salaria 113, Rome 00198, Italy
- Bruno Fanini: Institute of Heritage Science, National Research Council, Area della Ricerca Roma 1, SP35d, 9, Montelibretti 00010, Italy
- Gian Luca Foresti: Department of Computer Science, Mathematics and Physics, University of Udine, Via delle Scienze 206, Udine 33100, Italy
13. Houssein EH, Hammad A, Emam MM, Ali AA. An enhanced Coati Optimization Algorithm for global optimization and feature selection in EEG emotion recognition. Comput Biol Med 2024; 173:108329. PMID: 38513391. DOI: 10.1016/j.compbiomed.2024.108329.
Abstract
Emotion recognition based on electroencephalography (EEG) signals has garnered significant attention across diverse domains including healthcare, education, information sharing, and gaming, among others. Despite its potential, the absence of a standardized feature set poses a challenge in efficiently classifying various emotions. Addressing the issue of high dimensionality, this paper introduces an advanced variant of the Coati Optimization Algorithm (COA), called eCOA, for global optimization and for selecting the best subset of EEG features for emotion recognition. Like other metaheuristic methods, COA suffers from local optima and imbalanced exploitation abilities. The proposed eCOA incorporates the COA and RUNge Kutta Optimizer (RUN) algorithms: the Scale Factor (SF) and Enhanced Solution Quality (ESQ) mechanisms from RUN are applied to resolve these shortcomings of COA. The proposed eCOA algorithm has been extensively evaluated using the CEC'22 test suite and two EEG emotion recognition datasets, DEAP and DREAMER. Furthermore, eCOA is applied to binary and multi-class classification of emotions in the dimensions of valence, arousal, and dominance using a multi-layer perceptron neural network (MLPNN). The experimental results revealed that the eCOA algorithm has more powerful search capabilities than the original COA and seven well-known counterpart methods in terms of statistical, convergence, and diversity measures. Furthermore, eCOA can efficiently support feature selection to find the best EEG features and maximize performance on four quadrant emotion classification problems compared with its counterparts. The suggested method obtains classification accuracies of 85.17% and 95.21% in the binary classification of low and high arousal emotions in the two public datasets, DEAP and DREAMER, respectively, which are 5.58% and 8.98% higher than existing approaches working on the same datasets for different subjects.
Affiliation(s)
- Essam H Houssein: Faculty of Computers and Information, Minia University, Minia, Egypt
- Asmaa Hammad: Faculty of Computers and Information, Minia University, Minia, Egypt
- Marwa M Emam: Faculty of Computers and Information, Minia University, Minia, Egypt
- Abdelmgeid A Ali: Faculty of Computers and Information, Minia University, Minia, Egypt
14. Said A, Göker H. Spectral analysis and Bi-LSTM deep network-based approach in detection of mild cognitive impairment from electroencephalography signals. Cogn Neurodyn 2024; 18:597-614. PMID: 38699612. PMCID: PMC11061085. DOI: 10.1007/s11571-023-10010-y.
Abstract
Mild cognitive impairment (MCI) is a neuropsychological syndrome characterized by cognitive impairments. It typically affects adults 60 years of age and older and represents a noticeable decline in the cognitive function of the patient; if left untreated, it can progress to Alzheimer's disease (AD). For that reason, early diagnosis of MCI is important, as it slows down the conversion of the disease to AD. Early and accurate diagnosis of MCI requires recognition of the clinical characteristics of the disease, extensive testing, and long-term observation. These observations and tests can be subjective, expensive, incomplete, or inaccurate. Electroencephalography (EEG) is a powerful choice for the diagnosis of disease, with advantages such as being non-invasive, findings-based, low cost, and fast. In this study, a new EEG-based model is developed that can effectively detect MCI patients with high accuracy. For this purpose, a dataset consisting of EEG signals recorded from a total of 34 subjects aged 40 to 77, of whom 18 were MCI patients and 16 were controls, was used. The EEG signals were denoised using Multiscale Principal Component Analysis (MSPCA), and a Data Augmentation (DA) method was applied to increase the size of the dataset. Tenfold cross-validation was used to validate the model, and the power spectral density (PSD) of the EEG signals was extracted using three spectral analysis methods: the periodogram, Welch, and multitaper methods. The PSD graphs showed differences between the control and MCI groups, indicating that the signal power of MCI patients is lower than that of controls. To classify the subjects, a Bi-directional Long Short-Term Memory (Bi-LSTM) deep learning network was used, along with several machine learning algorithms such as decision tree (DT), support vector machine (SVM), and k-nearest neighbours (KNN). These algorithms were trained and tested using the feature vectors extracted from the control and MCI groups, and their coefficient matrices were compared and evaluated with performance evaluation metrics to determine which one performed best overall. According to the experimental results, the proposed model combining the multitaper spectral analysis approach with the Bi-LSTM deep learning algorithm correctly classified the most samples for diagnosing MCI and achieved remarkable accuracy compared with the other proposed models. The classification results of this deep learning model are 98.97% accuracy, 98.34% sensitivity, 99.67% specificity, 99.70% precision, 99.02% F1 score, and 97.94% Matthews correlation coefficient (MCC).
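Two of the three PSD estimators named here are available directly in SciPy; multitaper estimation would need an additional package (for example MNE). A rough sketch with stand-in data and arbitrary segment lengths:

```python
import numpy as np
from scipy.signal import periodogram, welch

fs = 256
x = np.random.randn(60 * fs)        # one EEG channel, 60 s (stand-in for an MCI/control recording)

# Two of the three spectral estimators named in the abstract.
f_p, psd_p = periodogram(x, fs=fs)
f_w, psd_w = welch(x, fs=fs, nperseg=4 * fs, noverlap=2 * fs)

# Band-limited mean power (e.g. alpha); the study compares such PSD-derived
# features between the MCI and control groups.
alpha_p = psd_p[(f_p >= 8) & (f_p < 13)].mean()
alpha_w = psd_w[(f_w >= 8) & (f_w < 13)].mean()
print(alpha_p, alpha_w)
```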
Affiliation(s)
- Afrah Said: Department of Electrical Electronics Engineering, Faculty of Simav Technology, Dumlupınar University, 43500 Kütahya, Turkey
- Hanife Göker: Health Services Vocational College, Gazi University, 06830 Ankara, Turkey
15. He Z, Chen L, Xu J, Lv H, Zhou RN, Hu J, Chen Y, Gao Y. Unified Convolutional Sparse Transformer for Disease Diagnosis, Monitoring, Drug Development, and Therapeutic Effect Prediction from EEG Raw Data. Biology (Basel) 2024; 13:203. PMID: 38666815. PMCID: PMC11048286. DOI: 10.3390/biology13040203.
Abstract
Electroencephalogram (EEG) analysis plays an indispensable role across contemporary medical applications, encompassing diagnosis, monitoring, drug discovery, and therapeutic assessment. This work puts forth an end-to-end deep learning framework uniquely tailored to versatile EEG analysis tasks by operating directly on raw waveform inputs. It aims to address the challenges of manual feature engineering and the neglect of spatial interrelationships in existing methodologies. Specifically, a spatial channel attention module is introduced to emphasize the critical inter-channel dependencies in EEG signals through channel statistics aggregation and multi-layer perceptron operations. Furthermore, a sparse transformer encoder leverages selective sparse attention to process long EEG sequences efficiently while reducing computational complexity. Distilling convolutional layers further concatenate the temporal features and retain only the salient patterns. Rigorously evaluated on key EEG datasets, our model consistently outperformed current approaches in detection and classification assignments. By accounting for both spatial and temporal relationships in an end-to-end paradigm, this work facilitates versatile, automated EEG understanding across diseases, subjects, and objectives through a singular yet customizable architecture. Extensive empirical validation and further architectural refinement may promote broader clinical adoption.
Affiliation(s)
- Zhengda He: Department of Computer Science and Technology, Nanjing University, Nanjing 210023, China; Laboratory of Molecular Design and Drug Discovery, China Pharmaceutical University, Nanjing 211198, China
- Linjie Chen: Laboratory of Molecular Design and Drug Discovery, China Pharmaceutical University, Nanjing 211198, China
- Jiaying Xu: Laboratory of Molecular Design and Drug Discovery, China Pharmaceutical University, Nanjing 211198, China
- Hao Lv: Laboratory of Molecular Design and Drug Discovery, China Pharmaceutical University, Nanjing 211198, China
- Rui-ning Zhou: Laboratory of Molecular Design and Drug Discovery, China Pharmaceutical University, Nanjing 211198, China
- Jianhua Hu: Laboratory of Molecular Design and Drug Discovery, China Pharmaceutical University, Nanjing 211198, China
- Yadong Chen: Laboratory of Molecular Design and Drug Discovery, China Pharmaceutical University, Nanjing 211198, China
- Yang Gao: Department of Computer Science and Technology, Nanjing University, Nanjing 210023, China
16. Bryniarska A, Ramos JA, Fernández M. Machine Learning Classification of Event-Related Brain Potentials during a Visual Go/NoGo Task. Entropy (Basel) 2024; 26:220. PMID: 38539732. PMCID: PMC11670797. DOI: 10.3390/e26030220.
Abstract
Machine learning (ML) methods are increasingly being applied to analyze biological signals. For example, ML methods have been successfully applied to the human electroencephalogram (EEG) to classify neural signals as pathological or non-pathological and to predict working memory performance in healthy and psychiatric patients. ML approaches can quickly process large volumes of data to reveal patterns that may be missed by humans. This study investigated the accuracy of ML methods at classifying the brain's electrical activity to cognitive events, i.e., event-related brain potentials (ERPs). ERPs are extracted from the ongoing EEG and represent electrical potentials in response to specific events. ERPs were evoked during a visual Go/NoGo task. The Go/NoGo task requires a button press on Go trials and response withholding on NoGo trials. NoGo trials elicit neural activity associated with inhibitory control processes. We compared the accuracy of six ML algorithms at classifying the ERPs associated with each trial type. The raw electrical signals were fed to all ML algorithms to build predictive models. The same raw data were then truncated in length and fitted to multiple dynamic state space models of order nx using a continuous-time subspace-based system identification algorithm. The 4nx numerator and denominator parameters of the transfer function of the state space model were then used as substitutes for the data. Dimensionality reduction simplifies classification, reduces noise, and may ultimately improve the predictive power of ML models. Our findings revealed that all ML methods correctly classified the electrical signal associated with each trial type with a high degree of accuracy, and accuracy remained high after parameterization was applied. We discuss the models and the usefulness of the parameterization.
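The ERP extraction step described here (cutting stimulus-locked epochs out of the continuous EEG and averaging them) can be sketched with plain NumPy. The event times, window, and baseline below are hypothetical; the study's state-space parameterization and the six ML classifiers are not shown.

```python
import numpy as np

def extract_erps(eeg, events, fs, tmin=-0.2, tmax=0.8):
    """Cut stimulus-locked epochs and baseline-correct them.
    eeg: (channels, samples); events: sample indices of stimulus onsets."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for ev in events:
        if ev - pre < 0 or ev + post > eeg.shape[1]:
            continue
        ep = eeg[:, ev - pre:ev + post]
        epochs.append(ep - ep[:, :pre].mean(axis=1, keepdims=True))  # baseline correction
    return np.stack(epochs)                       # (trials, channels, samples)

fs = 500
eeg = np.random.randn(32, 5 * 60 * fs)            # 32 channels, 5 min (stand-in)
go_events = np.arange(2 * fs, eeg.shape[1] - fs, 4 * fs)   # hypothetical Go onsets
go_epochs = extract_erps(eeg, go_events, fs)
go_erp = go_epochs.mean(axis=0)                   # ERP = average over Go trials
print(go_epochs.shape, go_erp.shape)
```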
Affiliation(s)
- Anna Bryniarska: Department of Computer Science, Opole University of Technology, 45-758 Opole, Poland
- José A. Ramos: College of Computing and Engineering, Nova Southeastern University, Fort Lauderdale, FL 33314, USA
- Mercedes Fernández: Department of Psychology and Neuroscience, Nova Southeastern University, Fort Lauderdale, FL 33314, USA
17. Çelebi M, Öztürk S, Kaplan K. An emotion recognition method based on EWT-3D-CNN-BiLSTM-GRU-AT model. Comput Biol Med 2024; 169:107954. PMID: 38183705. DOI: 10.1016/j.compbiomed.2024.107954.
Abstract
Emotion recognition has become a significant study area in recent years because of its use in brain-machine interaction (BMI). The robustness of emotion classification is one of the most basic requirements for improving the quality of emotion recognition systems. Of the two main branches of approaches to this problem, one extracts features through manual engineering, while the other is the artificial intelligence approach, which infers features from the EEG data itself. This study proposes a novel method that considers the characteristic behaviour of EEG recordings and is based on the artificial intelligence approach. The EEG signal is a noisy, non-stationary, and non-linear signal. Using the Empirical Wavelet Transform (EWT) signal decomposition method, the signal's frequency components are obtained. Then, frequency-based, linear, and non-linear features are extracted. The resulting features are mapped onto a 2-D plane according to the positions of the EEG electrodes, and by merging these 2-D images, 3-D images are constructed. In this way, the multichannel frequency content of the EEG recordings and their spatial and temporal relationships are combined. Lastly, a 3-D deep learning framework was constructed, combining a convolutional neural network (CNN), bidirectional long short-term memory (BiLSTM), and a gated recurrent unit (GRU) with self-attention (AT). This model is named EWT-3D-CNN-BiLSTM-GRU-AT. As a result, we have created a framework comprising handcrafted features cascaded into state-of-the-art deep learning models. The framework is evaluated on the DEAP recordings using a person-independent approach. The experimental findings demonstrate that the developed model can achieve classification accuracies of 90.57% and 90.59% for the valence and arousal axes, respectively, on the DEAP database. Compared with existing cutting-edge emotion classification models, the proposed framework exhibits superior results for classifying human emotions.
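The electrode-to-image mapping step can be illustrated with a small NumPy sketch: one scalar feature per channel is placed on a 2-D scalp grid, and one grid per feature/band is stacked into a 3-D volume. The grid coordinates and features below are hypothetical, not the paper's exact layout.

```python
import numpy as np

# Hypothetical 2-D grid coordinates (row, col) for a few 10-20 electrodes; a real
# montage would map all channels onto a 9 x 9 (or similar) scalp grid.
GRID = {"Fp1": (0, 3), "Fp2": (0, 5), "F3": (2, 2), "F4": (2, 6),
        "C3": (4, 2), "Cz": (4, 4), "C4": (4, 6),
        "P3": (6, 2), "P4": (6, 6), "O1": (8, 3), "O2": (8, 5)}

def features_to_image(channel_feats, grid=GRID, shape=(9, 9)):
    """Place one scalar feature per channel onto a 2-D scalp grid (unused cells stay 0)."""
    img = np.zeros(shape)
    for name, value in channel_feats.items():
        img[grid[name]] = value
    return img

# One 2-D image per frequency-band feature, stacked into a 3-D input for the CNN.
bands = ["theta", "alpha", "beta", "gamma"]
feats = {b: {ch: np.random.rand() for ch in GRID} for b in bands}   # stand-in features
volume = np.stack([features_to_image(feats[b]) for b in bands])     # (4, 9, 9)
print(volume.shape)
```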
Affiliation(s)
- Muharrem Çelebi: Electronics and Communication Engineering, Kocaeli University, Kocaeli, 41001, Turkey
- Sıtkı Öztürk: Electronics and Communication Engineering, Kocaeli University, Kocaeli, 41001, Turkey
- Kaplan Kaplan: Software Engineering, Kocaeli University, Kocaeli, 41001, Turkey
18. Yu S, Wang Z, Wang F, Chen K, Yao D, Xu P, Zhang Y, Wang H, Zhang T. Multiclass classification of motor imagery tasks based on multi-branch convolutional neural network and temporal convolutional network model. Cereb Cortex 2024; 34:bhad511. PMID: 38183186. DOI: 10.1093/cercor/bhad511.
Abstract
Motor imagery (MI) is a cognitive process wherein an individual mentally rehearses a specific movement without physically executing it. Recently, MI-based brain-computer interfaces (BCIs) have attracted widespread attention. However, accurate decoding of MI and understanding of its neural mechanisms still face huge challenges, which seriously hinder the clinical application and development of MI-based BCI systems. It is therefore necessary to develop new methods to decode MI tasks. In this work, we propose a multi-branch convolutional neural network (MBCNN) with a temporal convolutional network (TCN), an end-to-end deep learning framework to decode multi-class MI tasks. We first used the MBCNN to capture temporal- and spectral-domain information from MI electroencephalography signals through different convolutional kernels. Then, we introduced the TCN to extract more discriminative features. A within-subject cross-session strategy was used to validate the classification performance on the BCI Competition IV-2a dataset. The results showed that we achieved 75.08% average accuracy for four-class MI classification, outperforming several state-of-the-art approaches. The proposed MBCNN-TCN-Net framework successfully captures discriminative features and decodes MI tasks effectively, improving the performance of MI-BCIs. Our findings could provide significant potential for improving the clinical application and development of MI-based BCI systems.
Affiliation(s)
- Shiqi Yu: Microecology Research Center, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China; Mental Health Education Center and School of Science, Xihua University, Chengdu 610039, China
- Zedong Wang: Microecology Research Center, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Fei Wang: School of Computer and Software, Chengdu Jincheng College, Chengdu 610097, China
- Kai Chen: Mental Health Education Center and School of Science, Xihua University, Chengdu 610039, China
- Dezhong Yao: Key Laboratory for Neuroinformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
- Peng Xu: Key Laboratory for Neuroinformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
- Yong Zhang: Microecology Research Center, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Hesong Wang: Microecology Research Center, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Tao Zhang: Microecology Research Center, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China; Mental Health Education Center and School of Science, Xihua University, Chengdu 610039, China; Key Laboratory for Neuroinformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
19. Tveitstøl T, Tveter M, Pérez T AS, Hatlestad-Hall C, Yazidi A, Hammer HL, Hebold Haraldsen IRJ. Introducing Region Based Pooling for handling a varied number of EEG channels for deep learning models. Front Neuroinform 2024; 17:1272791. PMID: 38351907. PMCID: PMC10861709. DOI: 10.3389/fninf.2023.1272791.
Abstract
Introduction: A challenge when applying an artificial intelligence (AI) deep learning (DL) approach to novel electroencephalography (EEG) data is the DL architecture's lack of adaptability to changing numbers of EEG channels; that is, the number of channels can vary neither in the training data nor upon deployment. Such highly specific hardware constraints put major limitations on the clinical usability and scalability of DL models.
Methods: In this work, we propose a technique for handling varied numbers of EEG channels by splitting the EEG montage into distinct regions and merging the channels within the same region into a region representation. The solution is termed Region Based Pooling (RBP). The procedure of splitting the montage into regions is performed repeatedly with different region configurations to minimize potential loss of information. As RBP maps a varied number of EEG channels to a fixed number of region representations, both current and future DL architectures may apply RBP with ease. To demonstrate and evaluate the adequacy of RBP for handling a varied number of EEG channels, sex classification based solely on EEG was used as a test example. The DL models were trained on 129 channels and tested on 32-, 65-, and 129-channel versions of the data using the same channel-position scheme. The baselines for comparison were zero-filling the missing channels and spherical spline interpolation. Performance was estimated using 5-fold cross-validation.
Results: For the 32-channel version, the mean AUC values across the folds were: RBP (93.34%), spherical spline interpolation (93.36%), and zero-filling (76.82%). On the 65-channel version, the performances were: RBP (93.66%), spherical spline interpolation (93.50%), and zero-filling (85.58%). Finally, the 129-channel version produced the following results: RBP (94.68%), spherical spline interpolation (93.86%), and zero-filling (91.92%).
Conclusion: RBP obtained results similar to spherical spline interpolation and superior to zero-filling. We encourage further research and development of DL models in the cross-dataset setting, including the use of methods such as RBP and spherical spline interpolation to handle a varied number of EEG channels.
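The core idea, mapping whatever channels a montage happens to have onto a fixed set of region representations, can be sketched with simple mean pooling. In the paper the merge within each region is learned and several region configurations are combined, so the snippet below (with a hypothetical region split) shows only the skeleton of the idea.

```python
import numpy as np

def region_based_pooling(eeg, channel_names, regions):
    """Map a varied number of EEG channels to a fixed number of region representations
    by averaging the channels that fall inside each region."""
    pooled = []
    for region_channels in regions:
        idx = [channel_names.index(ch) for ch in region_channels if ch in channel_names]
        pooled.append(eeg[idx].mean(axis=0) if idx else np.zeros(eeg.shape[1]))
    return np.stack(pooled)             # (n_regions, samples), fixed regardless of montage

# Hypothetical region split; the paper repeats this with several configurations.
regions = [["Fp1", "F3", "F7"], ["Fp2", "F4", "F8"],
           ["C3", "P3", "O1"], ["C4", "P4", "O2"]]

montage_a = ["Fp1", "Fp2", "F3", "F4", "C3", "C4", "P3", "P4", "O1", "O2"]   # larger montage
eeg_a = np.random.randn(len(montage_a), 1000)
print(region_based_pooling(eeg_a, montage_a, regions).shape)                 # (4, 1000)

montage_b = ["Fp1", "F4", "C3", "O2"]                                        # reduced montage
eeg_b = np.random.randn(len(montage_b), 1000)
print(region_based_pooling(eeg_b, montage_b, regions).shape)                 # (4, 1000)
```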
Collapse
Affiliation(s)
- Thomas Tveitstøl
- Department of Neurology, Oslo University Hospital, Oslo, Norway
- Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Oslo, Norway
| | - Mats Tveter
- Department of Neurology, Oslo University Hospital, Oslo, Norway
- Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Oslo, Norway
| | - Ana S. Pérez T.
- Department of Neurology, Oslo University Hospital, Oslo, Norway
- Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Oslo, Norway
| | | | - Anis Yazidi
- Department of Computer Science, Oslo Metropolitan University, Oslo, Norway
| | - Hugo L. Hammer
- Department of Computer Science, Oslo Metropolitan University, Oslo, Norway
- Department of Holistic Systems, SimulaMet, Oslo, Norway
| | | |
Collapse
|
20
|
Wu M, Ouyang R, Zhou C, Sun Z, Li F, Li P. A study on the combination of functional connection features and Riemannian manifold in EEG emotion recognition. Front Neurosci 2024; 17:1345770. [PMID: 38287990 PMCID: PMC10823003 DOI: 10.3389/fnins.2023.1345770] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2023] [Accepted: 12/26/2023] [Indexed: 01/31/2024] Open
Abstract
Introduction Affective computing is central to making human-computer interfaces (HCI) more intelligent, and electroencephalogram (EEG)-based emotion recognition is one of its primary research directions. In the brain-computer interface field, Riemannian-manifold methods are highly robust and effective; however, the requirement that features be symmetric positive definite (SPD) limits their application. Methods In the present work, we introduced the Laplacian matrix to transform the functional connectivity features, i.e., phase locking value (PLV), Pearson correlation coefficient (PCC), spectral coherence (COH), and mutual information (MI), into positive semi-definite form, and applied a max operator to ensure the transformed features are positive definite. An SPD network is then employed to extract deep spatial information, and a fully connected layer is used to validate the effectiveness of the extracted features. In particular, a decision-layer fusion strategy is utilized to achieve more accurate and stable recognition results, and the differences in classification performance across feature combinations are studied. The optimal threshold value applied to the functional connectivity features is also investigated. Results The public emotional dataset SEED is adopted to test the proposed method with a subject-dependent cross-validation strategy. The average accuracies for the four features indicate that PCC outperforms the other three. The proposed model achieves its best accuracy of 91.05% for the fusion of PLV, PCC, and COH, followed by the fusion of all four features with an accuracy of 90.16%. Discussion The experimental results show that the optimal thresholds for the four functional connectivity features remained relatively stable within a fixed interval. In conclusion, the experimental results demonstrate the effectiveness of the proposed method.
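A minimal sketch of the Laplacian-plus-max-operator idea described above, assuming a Pearson-correlation connectivity graph (the function names are illustrative and the exact construction in the paper may differ):

import numpy as np

def pcc_adjacency(eeg):
    """Absolute Pearson correlation between channels as a functional
    connectivity graph (n_channels x n_channels) with a zero diagonal."""
    w = np.abs(np.corrcoef(eeg))
    np.fill_diagonal(w, 0.0)
    return w

def laplacian_spd(w, eps=1e-6):
    """The graph Laplacian L = D - W is positive semi-definite for
    non-negative weights; flooring its eigenvalues at `eps` (a max operator)
    yields a strictly positive definite matrix usable on the SPD manifold."""
    d = np.diag(w.sum(axis=1))
    lap = d - w
    vals, vecs = np.linalg.eigh(lap)
    vals = np.maximum(vals, eps)           # max operator -> strictly positive spectrum
    return (vecs * vals) @ vecs.T

eeg = np.random.default_rng(1).standard_normal((8, 512))   # 8 channels, 512 samples
spd = laplacian_spd(pcc_adjacency(eeg))
print(np.all(np.linalg.eigvalsh(spd) > 0))                 # True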
Collapse
Affiliation(s)
- Minchao Wu
- Anhui Province Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
- Key Laboratory of Flight Techniques and Flight Safety, Civil Aviation Flight University of China, Guanghan, China
| | - Rui Ouyang
- Anhui Province Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
| | - Chang Zhou
- Anhui Province Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
| | - Zitong Sun
- Anhui Province Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
| | - Fan Li
- Key Laboratory of Flight Techniques and Flight Safety, Civil Aviation Flight University of China, Guanghan, China
| | - Ping Li
- Anhui Province Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
| |
Collapse
|
21
|
Abgeena A, Garg S. S-LSTM-ATT: a hybrid deep learning approach with optimized features for emotion recognition in electroencephalogram. Health Inf Sci Syst 2023; 11:40. [PMID: 37654692 PMCID: PMC10465436 DOI: 10.1007/s13755-023-00242-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Accepted: 08/15/2023] [Indexed: 09/02/2023] Open
Abstract
Purpose Human emotion recognition using electroencephalograms (EEG) is a critical area of research in human-machine interfaces. EEG data are, however, complex and diverse, so acquiring consistent results from these signals remains challenging. The authors were therefore motivated to investigate EEG signals to identify different emotions. Methods A novel deep learning (DL) model, stacked long short-term memory with attention (S-LSTM-ATT), is proposed for emotion recognition (ER) in EEG signals. Long short-term memory (LSTM) and attention networks effectively handle time-series EEG data and recognise intrinsic connections and patterns. The model therefore combines the strengths of LSTM with an attention network to enhance its effectiveness. Optimal features were selected with the metaheuristic firefly optimisation algorithm (FFOA) to identify different emotions efficiently. Results The proposed approach recognised emotions in two publicly available standard datasets: SEED and EEG Brainwave. Accuracies of 97.83% on SEED and 98.36% on EEG Brainwave were obtained for three emotion indices: positive, neutral and negative. Aside from accuracy, a comprehensive comparison of the proposed model's precision, recall, F1 score and kappa score was performed to determine the model's applicability. When applied to the SEED and EEG Brainwave datasets, the proposed S-LSTM-ATT achieved superior results to baseline models such as Convolutional Neural Networks (CNN), Gated Recurrent Units (GRU) and LSTM. Conclusion Combining FFOA-based feature selection (FS) with S-LSTM-ATT-based classification demonstrated promising results with high accuracy. Other metrics such as precision, recall, F1 score and kappa score confirmed the suitability of the proposed model for ER in EEG signals.
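A minimal PyTorch sketch of a stacked LSTM with attention pooling, in the spirit of S-LSTM-ATT (layer sizes and the attention form are assumptions, not the authors' exact architecture; the FFOA feature-selection step is omitted):

import torch
import torch.nn as nn

class SLSTMAtt(nn.Module):
    """Stacked LSTM followed by simple additive attention pooling over time
    and a softmax classifier (three emotion classes here)."""
    def __init__(self, n_features, hidden=64, layers=2, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers, batch_first=True)
        self.score = nn.Linear(hidden, 1)        # attention score for each time step
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                        # x: (batch, time, features)
        h, _ = self.lstm(x)                      # (batch, time, hidden)
        alpha = torch.softmax(self.score(h), dim=1)   # (batch, time, 1)
        context = (alpha * h).sum(dim=1)         # attention-weighted summary
        return self.head(context)

model = SLSTMAtt(n_features=32)
logits = model(torch.randn(4, 128, 32))          # 4 trials, 128 time steps, 32 features
print(logits.shape)                              # torch.Size([4, 3])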
Collapse
Affiliation(s)
- Abgeena Abgeena
- Department of Computer Science and Engineering, Birla Institute of Technology, Mesra, Ranchi, 835215 India
| | - Shruti Garg
- Department of Computer Science and Engineering, Birla Institute of Technology, Mesra, Ranchi, 835215 India
| |
Collapse
|
22
|
Cui Z, Wu B, Blank I, Yu Y, Gu J, Zhou T, Zhang Y, Wang W, Liu Y. TastePeptides-EEG: An Ensemble Model for Umami Taste Evaluation Based on Electroencephalogram and Machine Learning. JOURNAL OF AGRICULTURAL AND FOOD CHEMISTRY 2023; 71:13430-13439. [PMID: 37639501 DOI: 10.1021/acs.jafc.3c04611] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/31/2023]
Abstract
Sensory evaluation of food still relies on human panels, but panel results are not universal, and there is the problem of deliberate misreporting. This work proposed an electroencephalography (EEG)-based analysis method that effectively enables the identification of umami/non-umami substances. First, key features were extracted using percentage conversion, standardization, and significance screening; based on these features, the top four models were selected from 19 common binary classification algorithms as submodels. Then, a support vector machine (SVM) was used to fit the outputs of these four submodels to establish TastePeptides-EEG. The model achieved a judgment accuracy of 90.2% on the validation set and 77.8% on the test set. This study identified changes in α-wave frequency during umami taste perception and, for the first time, observed a frequency-response delay in the F/RT/C areas under umami taste stimulation. The model is published at www.tastepeptides-meta.com/TastePeptides-EEG, which allows relevant researchers to speed up the analysis of umami perception and provides support for the development of the next generation of brain-computer interfaces for flavor perception.
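The submodel-plus-SVM fusion is a standard stacking setup; a minimal scikit-learn sketch is below. The four submodels chosen here are placeholders, since the abstract does not name the paper's top four algorithms, and synthetic features stand in for the screened EEG features.

from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Placeholder features standing in for the screened EEG features (umami vs non-umami).
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Four submodels (stand-ins for the paper's top four of 19 algorithms),
# fused by an SVM that learns from their outputs.
submodels = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("knn", KNeighborsClassifier()),
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("rf", RandomForestClassifier(random_state=0)),
]
ensemble = StackingClassifier(estimators=submodels, final_estimator=SVC())
ensemble.fit(X_tr, y_tr)
print(f"held-out accuracy: {ensemble.score(X_te, y_te):.3f}")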
Collapse
Affiliation(s)
- Zhiyong Cui
- Department of Food Science & Technology, School of Agriculture & Biology, Shanghai Jiao Tong University, Shanghai 200240, China
| | - Ben Wu
- Department of Food Science & Technology, School of Agriculture & Biology, Shanghai Jiao Tong University, Shanghai 200240, China
| | - Imre Blank
- Zhejiang Yiming Food Co, Ltd., Huting North Street 199, Shanghai 201615, China
| | - Yashu Yu
- Department of Food Science & Technology, School of Agriculture & Biology, Shanghai Jiao Tong University, Shanghai 200240, China
| | - Jiaming Gu
- College of Humanities and Development Studies, China Agricultural University, Beijing 100094, China
| | - Tianxing Zhou
- Department of Food Science & Technology, School of Agriculture & Biology, Shanghai Jiao Tong University, Shanghai 200240, China
- Department of Bioinformatics, Faculty of Science, The University of Melbourne, Melbourne, Victoria 3010, Australia
| | - Yin Zhang
- Key Laboratory of Meat Processing of Sichuan, Chengdu University, Chengdu 610106, China
| | - Wenli Wang
- Department of Food Science & Technology, School of Agriculture & Biology, Shanghai Jiao Tong University, Shanghai 200240, China
| | - Yuan Liu
- Department of Food Science & Technology, School of Agriculture & Biology, Shanghai Jiao Tong University, Shanghai 200240, China
| |
Collapse
|
23
|
Du X, Meng Y, Qiu S, Lv Y, Liu Q. EEG Emotion Recognition by Fusion of Multi-Scale Features. Brain Sci 2023; 13:1293. [PMID: 37759894 PMCID: PMC10526490 DOI: 10.3390/brainsci13091293] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2023] [Revised: 08/27/2023] [Accepted: 08/29/2023] [Indexed: 09/29/2023] Open
Abstract
Electroencephalogram (EEG) signals exhibit low amplitude, complex background noise, randomness, and significant inter-individual differences, which makes it difficult to extract sufficient features and can lead to information loss when emotion recognition algorithms map low-dimensional feature matrices to high-dimensional ones. In this paper, we propose a Multi-scale Deformable Convolutional Interacting Attention Network based on Residual Network (MDCNAResnet) for EEG-based emotion recognition. First, we extract differential entropy features from different channels of the EEG signals and construct a three-dimensional feature matrix based on the relative positions of the electrode channels. Second, we replace standard convolution with deformable convolution (DCN) to extract high-level abstract features, enhancing the modeling capability of the convolutional neural network for irregular targets. We then develop a Bottom-Up Feature Pyramid Network (BU-FPN) to extract multi-scale features, enabling complementary information from different levels of the neural network, while optimizing the feature extraction process with Efficient Channel Attention (ECANet). Finally, we combine MDCNAResnet with a Bidirectional Gated Recurrent Unit (BiGRU) to further capture the contextual semantic information of the EEG signals. Experimental results on the DEAP dataset demonstrate the effectiveness of our approach, achieving accuracies of 98.63% and 98.89% for the Valence and Arousal dimensions, respectively.
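A minimal sketch of the differential entropy (DE) feature and of placing per-channel DE values on a 2-D electrode grid (stacking such maps over frequency bands gives a 3-D feature matrix); the grid positions below are hypothetical and the deep MDCNAResnet layers are not reproduced.

import numpy as np

def differential_entropy(x):
    """DE of an (assumed Gaussian) band-limited signal: 0.5*ln(2*pi*e*var)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

# Hypothetical (row, col) grid positions for a few channels, mimicking the
# relative electrode layout used to build the 3-D feature matrix.
grid_pos = {"Fp1": (0, 1), "Fp2": (0, 3), "C3": (2, 1), "C4": (2, 3), "O1": (4, 1), "O2": (4, 3)}

def de_feature_map(band_signals, grid_shape=(5, 5)):
    """band_signals: dict {channel: 1-D band-filtered signal}.
    Returns a 2-D map with each channel's DE placed at its grid position;
    stacking such maps over frequency bands gives the 3-D feature matrix."""
    fmap = np.zeros(grid_shape)
    for ch, sig in band_signals.items():
        r, c = grid_pos[ch]
        fmap[r, c] = differential_entropy(sig)
    return fmap

rng = np.random.default_rng(2)
signals = {ch: rng.standard_normal(256) for ch in grid_pos}
print(de_feature_map(signals).shape)   # (5, 5)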
Collapse
Affiliation(s)
- Xiuli Du
- Communication and Network Laboratory, Dalian University, Dalian 116622, China; (Y.M.); (S.Q.); (Y.L.); (Q.L.)
- School of Information Engineering, Dalian University, Dalian 116622, China
| | - Yifei Meng
- Communication and Network Laboratory, Dalian University, Dalian 116622, China; (Y.M.); (S.Q.); (Y.L.); (Q.L.)
- School of Information Engineering, Dalian University, Dalian 116622, China
| | - Shaoming Qiu
- Communication and Network Laboratory, Dalian University, Dalian 116622, China; (Y.M.); (S.Q.); (Y.L.); (Q.L.)
- School of Information Engineering, Dalian University, Dalian 116622, China
| | - Yana Lv
- Communication and Network Laboratory, Dalian University, Dalian 116622, China; (Y.M.); (S.Q.); (Y.L.); (Q.L.)
- School of Information Engineering, Dalian University, Dalian 116622, China
| | - Qingli Liu
- Communication and Network Laboratory, Dalian University, Dalian 116622, China; (Y.M.); (S.Q.); (Y.L.); (Q.L.)
- School of Information Engineering, Dalian University, Dalian 116622, China
| |
Collapse
|
24
|
Nandini D, Yadav J, Rani A, Singh V. Design of subject independent 3D VAD emotion detection system using EEG signals and machine learning algorithms. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104894] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/08/2023]
|
25
|
Zaitseva E, Levashenko V, Rabcan J, Kvassay M. A New Fuzzy-Based Classification Method for Use in Smart/Precision Medicine. Bioengineering (Basel) 2023; 10:838. [PMID: 37508865 PMCID: PMC10376790 DOI: 10.3390/bioengineering10070838] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2023] [Revised: 07/08/2023] [Accepted: 07/13/2023] [Indexed: 07/30/2023] Open
Abstract
The development of information technology has had a significant impact on various areas of human activity, including medicine. It has led to the emergence of Industry 4.0, which, in turn, led to the development of the concept of Medicine 4.0. Medicine 4.0, or smart medicine, can be considered a structural association of areas such as AI-based medicine, telemedicine, and precision medicine. Each of these areas has its own characteristic data, along with specific requirements for their processing and analysis. Nevertheless, at present, all these types of data must be processed simultaneously in order to provide the most complete picture of the health of each individual patient. In this paper, after a brief analysis of medical data, a new classification method is proposed that allows the processing of the maximum number of data types. A distinguishing feature of this method is its use of a fuzzy classifier. Its effectiveness is confirmed by an analysis of classification results for various types of data in medical applications and health problems. As an illustration of the proposed method, a fuzzy decision tree is used as the fuzzy classifier. The classification accuracy of the proposed fuzzy-classifier-based method outperforms that of crisp classifiers.
Collapse
Affiliation(s)
- Elena Zaitseva
- Department of Informatics, Faculty of Management Science and Informatics, University of Zilina, 01026 Zilina, Slovakia
| | - Vitaly Levashenko
- Department of Informatics, Faculty of Management Science and Informatics, University of Zilina, 01026 Zilina, Slovakia
| | - Jan Rabcan
- Department of Informatics, Faculty of Management Science and Informatics, University of Zilina, 01026 Zilina, Slovakia
| | - Miroslav Kvassay
- Department of Informatics, Faculty of Management Science and Informatics, University of Zilina, 01026 Zilina, Slovakia
| |
Collapse
|
26
|
Luo G, Sun S, Qian K, Hu B, Schuller BW, Yamamoto Y. How does Music Affect Your Brain? A Pilot Study on EEG and Music Features for Automatic Analysis. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2023; 2023:1-4. [PMID: 38083758 DOI: 10.1109/embc40787.2023.10339971] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
Music can effectively induce specific emotions and is often used in clinical treatment or intervention. The electroencephalogram can help reflect the impact of music. Previous studies showed that existing methods achieve relatively good performance in predicting the emotional response to music; however, these methods tend to be time-consuming and expensive due to their complexity. To this end, this study proposes a grey wolf optimiser-based method to predict the induced emotion by fusing electroencephalogram features and music features. Experimental results show that the proposed method reaches a promising performance for predicting the emotional response to music and outperforms the alternative method. In addition, we analyse the relationship between the music features and the electroencephalogram features, and the results demonstrate that musical timbre features are significantly related to the electroencephalogram features. Clinical relevance: This study targets the automatic prediction of the human response to music. It further explores the correlation between EEG features and music features, aiming to provide a basis for extending the applications of music. The grey wolf optimiser-based method proposed in this study offers a promising avenue for predicting music-induced emotion.
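For reference, a minimal implementation of the standard grey wolf optimiser update (the paper's specific EEG-music fusion model is not reproduced; the objective below is a toy sphere function standing in for hypothetical fusion weights):

import numpy as np

def gwo_minimize(f, dim, bounds, n_wolves=20, n_iter=100, seed=0):
    """Minimal grey wolf optimiser: candidate solutions (wolves) move toward
    the three best wolves (alpha, beta, delta) under a shrinking coefficient a."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.apply_along_axis(f, 1, wolves)
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]
        a = 2 - 2 * t / n_iter                       # decreases linearly from 2 to 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += (leader - A * D) / 3.0    # average of the three pulls
            wolves[i] = np.clip(new_pos, lo, hi)
    best = wolves[np.argmin(np.apply_along_axis(f, 1, wolves))]
    return best, f(best)

best_w, best_val = gwo_minimize(lambda w: np.sum(w ** 2), dim=5, bounds=(-1, 1))
print(best_val)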
Collapse
|
27
|
Zong J, Xiong X, Zhou J, Ji Y, Zhou D, Zhang Q. FCAN-XGBoost: A Novel Hybrid Model for EEG Emotion Recognition. SENSORS (BASEL, SWITZERLAND) 2023; 23:5680. [PMID: 37420845 DOI: 10.3390/s23125680] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Revised: 06/03/2023] [Accepted: 06/15/2023] [Indexed: 07/09/2023]
Abstract
In recent years, artificial intelligence (AI) technology has promoted the development of electroencephalogram (EEG) emotion recognition. However, existing methods often overlook the computational cost of EEG emotion recognition, and there is still room for improvement in accuracy. In this study, we propose a novel EEG emotion recognition algorithm called FCAN-XGBoost, a fusion of two algorithms, FCAN and XGBoost. The FCAN module is a newly proposed feature attention network (FANet) that processes the differential entropy (DE) and power spectral density (PSD) features extracted from the four frequency bands of the EEG signal and performs feature fusion and deep feature extraction. Finally, the deep features are fed into the eXtreme Gradient Boosting (XGBoost) algorithm to classify the four emotions. We evaluated the proposed method on the DEAP and DREAMER datasets and achieved four-category emotion recognition accuracies of 95.26% and 94.05%, respectively. Additionally, the proposed method reduces the computational cost of EEG emotion recognition by at least 75.45% in computation time and 67.51% in memory occupation. FCAN-XGBoost outperforms the state-of-the-art four-category model and reduces computational costs without losing classification performance compared with other models.
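A minimal sketch of DE and PSD band features feeding an XGBoost classifier (the FCAN/FANet attention module is omitted; band edges, sampling rate, and filter settings are assumptions):

import numpy as np
from scipy.signal import welch, butter, filtfilt
from xgboost import XGBClassifier

FS = 128
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_de_psd(trial):
    """trial: (n_channels, n_samples). Returns per-channel DE and mean PSD
    for each of the four bands, concatenated into one feature vector."""
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
        filt = filtfilt(b, a, trial, axis=1)
        de = 0.5 * np.log(2 * np.pi * np.e * np.var(filt, axis=1))   # differential entropy
        f, pxx = welch(trial, fs=FS, nperseg=FS, axis=1)
        band_psd = pxx[:, (f >= lo) & (f < hi)].mean(axis=1)          # mean PSD in band
        feats.append(np.concatenate([de, band_psd]))
    return np.concatenate(feats)

rng = np.random.default_rng(3)
X = np.array([band_de_psd(rng.standard_normal((32, 512))) for _ in range(40)])
y = np.arange(40) % 4                              # four emotion classes (toy labels)
clf = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="mlogloss")
clf.fit(X, y)
print(clf.predict(X[:5]))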
Collapse
Affiliation(s)
- Jing Zong
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
| | - Xin Xiong
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
| | - Jianhua Zhou
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
| | - Ying Ji
- Graduate School, Kunming Medical University, Kunming 650500, China
| | - Diao Zhou
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
| | - Qi Zhang
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
| |
Collapse
|
28
|
Nieto Mora D, Valencia S, Trujillo N, López JD, Martínez JD. Characterizing social and cognitive EEG-ERP through multiple kernel learning. Heliyon 2023; 9:e16927. [PMID: 37484433 PMCID: PMC10361029 DOI: 10.1016/j.heliyon.2023.e16927] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Revised: 05/31/2023] [Accepted: 06/01/2023] [Indexed: 07/25/2023] Open
Abstract
EEG-ERP social-cognitive studies with healthy populations commonly fail to provide significant evidence due to low-quality data and the inherent similarity between groups. We propose a multiple kernel learning-based approach that enhances classification accuracy while keeping the features (frequency bands or regions of interest) traceable through a linear combination of kernels whose weights determine the relevance of each source of information, which is crucial for specialists. As a case study, we classify healthy ex-combatants of the Colombian armed conflict and civilians through a cognitive valence recognition task. Although previous works have reported accuracies below 80% with these groups, our proposal achieved an F1 score of 98% and revealed the most relevant bands and brain regions, which form the basis for socio-cognitive training. With this methodology, we aim to contribute to standardizing EEG analyses and strengthening their statistics.
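A minimal sketch of the traceable-kernel idea: one kernel per feature block (band or region of interest) combined linearly, with the weights exposing each block's relevance. Here the weights are fixed by hand; a full MKL solver would learn them, and prediction on new data would require the corresponding test-versus-train kernel.

import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def combined_kernel(feature_blocks, weights):
    """Linear combination of one RBF kernel per feature block (e.g. per
    frequency band or region of interest); `weights` expose the relevance
    of each block."""
    return sum(w * rbf_kernel(X, X) for w, X in zip(weights, feature_blocks))

rng = np.random.default_rng(4)
n = 60
blocks = [rng.standard_normal((n, 10)) for _ in range(3)]   # e.g. theta / alpha / beta features
y = np.arange(n) % 2

weights = np.array([0.6, 0.3, 0.1])          # fixed here; an MKL solver would learn these
K = combined_kernel(blocks, weights / weights.sum())
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))                        # training-kernel accuracy of the toy example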
Collapse
Affiliation(s)
- Daniel Nieto Mora
- Máquinas Inteligentes y Reconocimiento de Patrones, Instituto Tecnológico Metropolitano ITM - Medellín, Colombia
| | - Stella Valencia
- Grupo de Investigación Salud Mental, Facultad Nacional de Salud Pública, Universidad de Antioquia UDEA - Medellín, Colombia
- Grupo de Neurociencias de Antioquia, Facultad de Medicina, Universidad de Antioquia UDEA - Medellín, Colombia
| | - Natalia Trujillo
- Grupo de Investigación Salud Mental, Facultad Nacional de Salud Pública, Universidad de Antioquia UDEA - Medellín, Colombia
- Grupo de Neurociencias de Antioquia, Facultad de Medicina, Universidad de Antioquia UDEA - Medellín, Colombia
| | - Jose David López
- Engineering Faculty, Universidad de Antioquia UDEA - Medellín, Colombia
| | | |
Collapse
|
29
|
She Q, Shi X, Fang F, Ma Y, Zhang Y. Cross-subject EEG emotion recognition using multi-source domain manifold feature selection. Comput Biol Med 2023; 159:106860. [PMID: 37080005 DOI: 10.1016/j.compbiomed.2023.106860] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2022] [Revised: 03/01/2023] [Accepted: 03/30/2023] [Indexed: 04/22/2023]
Abstract
Recent research on emotion recognition suggests that domain adaptation, a form of transfer learning, can solve the cross-subject problem in the affective brain-computer interface (aBCI) field. However, traditional domain adaptation methods perform single-to-single domain transfer or simply merge different source domains into a larger domain, which results in negative transfer. In this study, a multi-source transfer learning framework is proposed to improve the performance of multi-source electroencephalogram (EEG) emotion recognition. The method first uses data distribution similarity ranking (DDSA) to select the appropriate source domain for each target domain offline, and reduces data drift between domains through manifold feature mapping on the Grassmann manifold. Meanwhile, the minimum redundancy maximum relevance (mRMR) algorithm is employed to select more representative manifold features; the conditional and marginal distribution discrepancies of the manifold features are minimized, and a domain-invariant classifier is learned through structural risk minimization (SRM). Finally, a weighted fusion criterion is applied to further improve recognition performance. We compared our method with several state-of-the-art domain adaptation techniques on the SEED and DEAP datasets. Compared with the conventional MEDA algorithm, the recognition accuracy of our proposed algorithm on the SEED and DEAP datasets improved by 6.74% and 5.34%, respectively. Compared with TCA, JDA, and other state-of-the-art algorithms, the performance of our proposed method also improved, with the best average accuracy of 86.59% on SEED and 64.40% on DEAP. Our results demonstrate that the proposed multi-source transfer learning framework is more effective and feasible than other state-of-the-art methods in recognizing different emotions by solving the cross-subject problem.
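A much-simplified stand-in for the pipeline above (not the authors' DDSA/Grassmann/mRMR implementation): source subjects are ranked by a crude distribution-similarity proxy, the closest one is CORAL-aligned to the target, and a linear classifier is trained on the aligned source.

import numpy as np
from scipy.linalg import sqrtm
from sklearn.linear_model import LogisticRegression

def coral_align(Xs, Xt, eps=1e-3):
    """Re-colour source features so their covariance matches the target's
    (CORAL-style alignment, a simple stand-in for manifold feature mapping)."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    return np.real(Xs @ np.linalg.inv(sqrtm(Cs)) @ sqrtm(Ct))

def rank_sources(sources, Xt):
    """Rank source domains by the distance between their mean feature vector
    and the target's (a crude proxy for distribution-similarity ranking)."""
    mt = Xt.mean(axis=0)
    return sorted(range(len(sources)),
                  key=lambda i: np.linalg.norm(sources[i][0].mean(axis=0) - mt))

rng = np.random.default_rng(5)
sources = [(rng.standard_normal((80, 16)) + s, np.arange(80) % 2) for s in (0.1, 1.0, 3.0)]
Xt = rng.standard_normal((40, 16))                      # unlabeled target subject

best = rank_sources(sources, Xt)[0]                     # pick the most similar source
Xs, ys = sources[best]
clf = LogisticRegression(max_iter=1000).fit(coral_align(Xs, Xt), ys)
print(clf.predict(Xt)[:10])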
Collapse
Affiliation(s)
- Qingshan She
- School of Automation, Hangzhou Dianzi University, Hangzhou, Zhejiang, 310018, China.
| | - Xinsheng Shi
- School of Automation, Hangzhou Dianzi University, Hangzhou, Zhejiang, 310018, China
| | - Feng Fang
- Department of Biomedical Engineering, University of Houston, Houston, TX, 77204, USA
| | - Yuliang Ma
- School of Automation, Hangzhou Dianzi University, Hangzhou, Zhejiang, 310018, China
| | - Yingchun Zhang
- Department of Biomedical Engineering, University of Houston, Houston, TX, 77204, USA.
| |
Collapse
|
30
|
Abdel-Hamid L. An Efficient Machine Learning-Based Emotional Valence Recognition Approach Towards Wearable EEG. SENSORS (BASEL, SWITZERLAND) 2023; 23:s23031255. [PMID: 36772295 PMCID: PMC9921881 DOI: 10.3390/s23031255] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/03/2022] [Revised: 01/14/2023] [Accepted: 01/17/2023] [Indexed: 05/17/2023]
Abstract
Emotion artificial intelligence (AI) is being increasingly adopted in industries such as healthcare and education. Facial expressions and tone of speech have previously been considered for emotion recognition, yet they have the drawback of being easily manipulated by subjects to mask their true emotions. Electroencephalography (EEG) has emerged as a reliable and cost-effective method to detect true human emotions. Recently, considerable research effort has been put into developing efficient wearable EEG devices for consumer use in out-of-the-lab scenarios. In this work, a subject-dependent emotional valence recognition method is implemented that is intended for use in emotion AI applications. Time and frequency features were computed from a single time series derived from the Fp1 and Fp2 channels. Several analyses were performed on the strongest valence emotions to determine the most relevant features, frequency bands, and EEG timeslots using the benchmark DEAP dataset. Binary classification experiments resulted in an accuracy of 97.42% using the alpha band, thereby outperforming several approaches from the literature by ~3-22%. Multiclass classification gave an accuracy of 95.0%. Feature computation and classification required less than 0.1 s. The proposed method thus has the advantage of reduced computational complexity because, unlike most methods in the literature, only two EEG channels are considered. In addition, the minimal feature set identified through the thorough analyses conducted in this study was sufficient to achieve state-of-the-art performance. The implemented EEG emotion recognition method thus has the merits of being reliable and easily reproducible, making it well suited to wearable EEG devices.
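A minimal sketch of two-channel alpha-band feature extraction; deriving the single series as the Fp1-Fp2 difference is an assumption (the paper does not necessarily use this derivation), and the sampling rate and feature list are illustrative.

import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 128

def alpha_features(fp1, fp2):
    """Derive one series from the two prefrontal channels (here simply their
    difference), band-pass it to the alpha band, and compute a few
    time/frequency features."""
    x = fp1 - fp2
    b, a = butter(4, [8 / (FS / 2), 13 / (FS / 2)], btype="band")
    alpha = filtfilt(b, a, x)
    f, pxx = welch(alpha, fs=FS, nperseg=FS)
    return {
        "mean_abs": np.mean(np.abs(alpha)),               # time-domain amplitude
        "std": np.std(alpha),                             # time-domain variability
        "alpha_power": pxx[(f >= 8) & (f <= 13)].sum(),   # frequency-domain power
    }

rng = np.random.default_rng(6)
print(alpha_features(rng.standard_normal(FS * 10), rng.standard_normal(FS * 10)))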
Collapse
Affiliation(s)
- Lamiaa Abdel-Hamid
- Department of Electronics & Communication, Faculty of Engineering, Misr International University (MIU), Heliopolis, Cairo P.O. Box 1, Egypt
| |
Collapse
|
31
|
Alharbi H. Identifying Thematics in a Brain-Computer Interface Research. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2023; 2023:2793211. [PMID: 36643889 PMCID: PMC9833923 DOI: 10.1155/2023/2793211] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 12/21/2022] [Accepted: 12/24/2022] [Indexed: 01/05/2023]
Abstract
This umbrella review was motivated by the aim of understanding the shift in research themes on brain-computer interfacing (BCI); it determined that research has moved away from themes focused on medical advancement and system development towards applications including education, marketing, gaming, safety, and security. The background of this review examined aspects of BCI categorisation, neuroimaging methods, brain control signal classification, applications, and ethics. The specific area of BCI software and hardware development was not examined. A search using One Search was undertaken and 92 BCI reviews were selected for inclusion. Publication demographics indicate that the average number of authors on the review papers considered was 4.2 ± 1.8. The results also indicate a rapid increase in the number of BCI reviews from 2003, with only three reviews before that period: two in 1972 and one in 1996. While BCI authors were predominantly Euro-American in early reviews, this shifted to a more global authorship, which China dominated by 2020-2022. The review revealed six disciplines associated with BCI systems, grouped into two domains: the first comprised life sciences and biomedicine (n = 42), neurosciences and neurology (n = 35), and rehabilitation (n = 20); the second centred on the theme of functionality and comprised computer science (n = 20), engineering (n = 28), and technology (n = 38). There was a thematic shift from understanding brain function and modes of interfacing BCI systems to more applied research; the newly identified areas of research surround artificial intelligence, including machine learning, pre-processing, and deep learning. As BCI systems become more invasive in the lives of "normal" individuals, it is expected that there will be a refocus and thematic shift towards increased research into ethical issues and the need for legal oversight in BCI application.
Collapse
Affiliation(s)
- Hadeel Alharbi
- Department of Information and Computer Science, College of Computer Science and Engineering, University of Ha'il, Ha'il 81481, Saudi Arabia
| |
Collapse
|
32
|
Zhong MY, Yang QY, Liu Y, Zhen BY, Zhao FD, Xie BB. EEG emotion recognition based on TQWT-features and hybrid convolutional recurrent neural network. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104211] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
|
33
|
Wu M, Teng W, Fan C, Pei S, Li P, Lv Z. An Investigation of Olfactory-Enhanced Video on EEG-Based Emotion Recognition. IEEE Trans Neural Syst Rehabil Eng 2023; 31:1602-1613. [PMID: 37028354 DOI: 10.1109/tnsre.2023.3253866] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/18/2023]
Abstract
Collecting emotional physiological signals is important for building affective human-computer interactions (HCI). However, efficiently evoking subjects' emotions in EEG-based emotional experiments remains a challenge. In this work, we developed a novel experimental paradigm that allows odors to participate dynamically in different stages of video-evoked emotions, in order to investigate the efficiency of olfactory-enhanced videos in inducing subjects' emotions. According to the period in which the odors participated, the stimuli were divided into four patterns, i.e., olfactory-enhanced video in the early/later stimulus period (OVEP/OVLP) and traditional video in the early/later stimulus period (TVEP/TVLP). The differential entropy (DE) feature and four classifiers were employed to test the efficiency of emotion recognition. The best average accuracies of the OVEP, OVLP, TVEP, and TVLP were 50.54%, 51.49%, 40.22%, and 57.55%, respectively. The experimental results indicated that the OVEP significantly outperformed the TVEP in classification performance, while there was no significant difference between the OVLP and TVLP. In addition, olfactory-enhanced videos were more efficient than traditional videos in evoking negative emotions. Moreover, we found that the neural patterns in response to emotions under the different stimulus methods were stable, and that for Fp1, Fp2, and F7 there were significant differences depending on whether odors were used.
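A minimal sketch of the evaluation step: DE-style features per trial compared across four generic classifiers with cross-validation (the abstract does not name the four classifiers, so the choices below are placeholders, and the feature matrix is synthetic):

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

# Placeholder DE feature matrix: trials x (channels * bands).
rng = np.random.default_rng(7)
X = rng.standard_normal((120, 62 * 4))
y = np.arange(120) % 3                      # e.g. positive / neutral / negative

classifiers = {
    "SVM": SVC(),
    "kNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")             # repeat per stimulus pattern (OVEP/OVLP/TVEP/TVLP)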
Collapse
Affiliation(s)
- Minchao Wu
- Anhui Province Key Laboratory of Multimodal Cognitive Computation and the School of Computer Science and Technology, Anhui University, Hefei, China
| | - Wei Teng
- Anhui Province Key Laboratory of Multimodal Cognitive Computation and the School of Computer Science and Technology, Anhui University, Hefei, China
| | - Cunhang Fan
- Anhui Province Key Laboratory of Multimodal Cognitive Computation and the School of Computer Science and Technology, Anhui University, Hefei, China
| | - Shengbing Pei
- Anhui Province Key Laboratory of Multimodal Cognitive Computation and the School of Computer Science and Technology, Anhui University, Hefei, China
| | - Ping Li
- Anhui Province Key Laboratory of Multimodal Cognitive Computation and the School of Computer Science and Technology, Anhui University, Hefei, China
| | - Zhao Lv
- Anhui Province Key Laboratory of Multimodal Cognitive Computation and the School of Computer Science and Technology, Anhui University, Hefei, China
| |
Collapse
|
34
|
Zhan Q, Wang L, Ren L, Huang X. A novel heterogeneous transfer learning method based on data stitching for the sequential coding brain computer interface. Comput Biol Med 2022; 151:106220. [PMID: 36332422 DOI: 10.1016/j.compbiomed.2022.106220] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Revised: 09/28/2022] [Accepted: 10/15/2022] [Indexed: 12/27/2022]
Abstract
OBJECTIVE For a brain-computer interface (BCI), it is necessary to collect enough electroencephalography (EEG) signals to train the classification model. When the operational dimension of the BCI is large, this places a great burden on data acquisition. Fortunately, this problem can be solved by our proposed transfer learning method. METHOD For the sequential coding experimental paradigm, the multi-band data stitching with label alignment and tangent space mapping (MDSLATSM) algorithm is proposed as a novel heterogeneous transfer learning method. After multi-band filtering, artificial signals are obtained by stitching data from the source domain, building a bridge between the source and target domains. To bring the distributions of the two domains closer, their covariance matrices are aligned by label alignment. After mapping to the tangent space, features are extracted from the Riemannian manifold. Finally, classification results are obtained with feature selection and classification. RESULTS Our dataset includes EEG signals from 16 subjects. For cross-label heterogeneous transfer learning, the average classification accuracy is 78.28%. MDSLATSM is also tested for cross-subject transfer, with an average classification accuracy of 64.01%, which is better than existing methods. SIGNIFICANCE Combining multi-band filtering, data stitching, label alignment and tangent space mapping yields a novel heterogeneous transfer learning method with superior performance, which promotes the practical application of BCI systems.
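A minimal sketch of the tangent space mapping step on covariance matrices (the multi-band stitching and label alignment steps are omitted, and the arithmetic mean is used as the reference point rather than the Riemannian mean):

import numpy as np
from scipy.linalg import sqrtm, logm

def tangent_space(covs, ref=None):
    """Map SPD covariance matrices to Euclidean tangent-space vectors:
    S_i -> upper-triangular part of logm(ref^{-1/2} S_i ref^{-1/2}).
    The reference here is the arithmetic mean of the covariances (the
    Riemannian mean is more common, but this keeps the sketch short)."""
    ref = np.mean(covs, axis=0) if ref is None else ref
    w = np.real(np.linalg.inv(sqrtm(ref)))       # whitening by the reference
    iu = np.triu_indices(covs.shape[1])
    return np.array([np.real(logm(w @ c @ w))[iu] for c in covs])

rng = np.random.default_rng(8)
trials = rng.standard_normal((20, 8, 256))            # 20 trials, 8 channels
covs = np.array([np.cov(t) for t in trials])          # SPD covariance per trial
feats = tangent_space(covs)
print(feats.shape)                                    # (20, 36) = 8*(8+1)/2 features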
Collapse
Affiliation(s)
- Qianqian Zhan
- School of Electronics and Communication, Guangzhou University, Guangzhou, 510006, China
| | - Li Wang
- School of Electronics and Communication, Guangzhou University, Guangzhou, 510006, China.
| | - Lingling Ren
- School of Electronics and Communication, Guangzhou University, Guangzhou, 510006, China
| | - Xuewen Huang
- School of Electronics and Communication, Guangzhou University, Guangzhou, 510006, China
| |
Collapse
|
35
|
Kaklauskas A, Abraham A, Ubarte I, Kliukas R, Luksaite V, Binkyte-Veliene A, Vetloviene I, Kaklauskiene L. A Review of AI Cloud and Edge Sensors, Methods, and Applications for the Recognition of Emotional, Affective and Physiological States. SENSORS (BASEL, SWITZERLAND) 2022; 22:7824. [PMID: 36298176 PMCID: PMC9611164 DOI: 10.3390/s22207824] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Revised: 09/28/2022] [Accepted: 10/12/2022] [Indexed: 06/16/2023]
Abstract
The detection and recognition of affective, emotional, and physiological states (AFFECT) by capturing human signals is a fast-growing area that has been applied across numerous domains. The aim of this research is to review publications on how techniques using brain and biometric sensors can be applied to AFFECT recognition, consolidate the findings, provide a rationale for the current methods, compare the effectiveness of existing methods, and quantify how likely they are to address the issues/challenges in the field. In efforts to better achieve the key goals of Society 5.0, Industry 5.0, and human-centered design, the recognition of emotional, affective, and physiological states is becoming an increasingly important matter and offers tremendous growth of knowledge and progress in these and other related fields. In this research, a review of AFFECT recognition brain and biometric sensors, methods, and applications was performed, based on Plutchik's wheel of emotions. Due to the immense variety of existing sensors and sensing systems, this study aimed to analyse the available sensors that can be used to define human AFFECT and to classify them based on the type of sensing area and their efficiency in real implementations. Based on statistical and multiple-criteria analysis across 169 nations, our outcomes indicate a connection between a nation's success, its number of Web of Science articles published, and its frequency of citation on AFFECT recognition. The principal conclusions present how this research contributes to the big picture in the field under analysis and explore forthcoming study trends.
Collapse
Affiliation(s)
- Arturas Kaklauskas
- Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
| | - Ajith Abraham
- Machine Intelligence Research Labs, Scientific Network for Innovation and Research Excellence, Auburn, WA 98071, USA
| | - Ieva Ubarte
- Institute of Sustainable Construction, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
| | - Romualdas Kliukas
- Department of Applied Mechanics, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
| | - Vaida Luksaite
- Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
| | - Arune Binkyte-Veliene
- Institute of Sustainable Construction, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
| | - Ingrida Vetloviene
- Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
| | - Loreta Kaklauskiene
- Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
| |
Collapse
|
36
|
Evaluating Ensemble Learning Methods for Multi-Modal Emotion Recognition Using Sensor Data Fusion. SENSORS 2022; 22:s22155611. [PMID: 35957167 PMCID: PMC9371233 DOI: 10.3390/s22155611] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/14/2022] [Revised: 07/19/2022] [Accepted: 07/19/2022] [Indexed: 01/27/2023]
Abstract
Automatic recognition of human emotions is not a trivial process. There are many factors affecting emotions both internally and externally. Emotions can also be expressed in many ways, such as text, speech, and body gestures, or even through physiological body responses. Emotion detection enables many applications such as adaptive user interfaces, interactive games, human-robot interaction, and many more. The availability of advanced technologies such as mobiles, sensors, and data analytics tools has made it possible to collect data from various sources, enabling researchers to predict human emotions accurately; however, most current research collects such data in laboratory experiments. In this work, we use direct and real-time sensor data to construct a subject-independent (generic) multi-modal emotion prediction model. This research integrates on-body physiological markers, surrounding sensory data, and emotion measurements to achieve the following goals: (1) collecting a multi-modal data set including environmental, body-response, and emotion data; (2) creating subject-independent predictive models of emotional states based on fusing environmental and physiological variables; (3) assessing ensemble learning methods for creating a generic subject-independent model for emotion recognition with high accuracy, and comparing the results with previous similar research. To achieve this, we conducted a real-world study “in the wild” with physiological and mobile sensors, collecting the dataset from participants walking around Minia University campus to create accurate predictive models. Various ensemble learning models (bagging, boosting, and stacking) were used, combining K-Nearest Neighbor (KNN), Decision Tree (DT), Random Forest (RF), and Support Vector Machine (SVM) as base learners and DT as a meta-classifier. The results showed that the stacking ensemble technique gave the best accuracy of 98.2%, compared with 96.4% for bagging and 96.6% for boosting.
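A minimal scikit-learn sketch of the bagging/boosting/stacking comparison with the listed base learners and a DT meta-classifier; synthetic features stand in for the fused environmental and physiological data, and hyperparameters are defaults rather than the study's settings.

from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, AdaBoostClassifier,
                              StackingClassifier, RandomForestClassifier)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder features standing in for the fused environmental + physiological data.
X, y = make_classification(n_samples=400, n_features=25, n_classes=3,
                           n_informative=10, random_state=0)

base = [("knn", KNeighborsClassifier()),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("svm", SVC(probability=True))]

models = {
    "bagging": BaggingClassifier(random_state=0),      # bootstrap-aggregated trees
    "boosting": AdaBoostClassifier(random_state=0),    # sequentially boosted trees
    "stacking": StackingClassifier(estimators=base,
                                   final_estimator=DecisionTreeClassifier(random_state=0)),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))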
Collapse
|
37
|
EEG-Based Empathic Safe Cobot. MACHINES 2022. [DOI: 10.3390/machines10080603] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/10/2022]
Abstract
An empathic collaborative robot (cobot) was realized through the transmission of fear from a human agent to a robot agent. Such empathy was induced through an electroencephalographic (EEG) sensor worn by the human agent, thus realizing an empathic safe brain-computer interface (BCI). The empathic safe cobot reacts to the fear and in turn transmits it to the human agent, forming a social circle of empathy and safety. A first randomized, controlled experiment involved two groups of 50 healthy subjects (100 subjects in total) to measure the EEG signal in the presence or absence of a frightening event. A second randomized, controlled experiment on two groups of 50 different healthy subjects (100 subjects in total) exposed the subjects to comfortable and uncomfortable movements of a collaborative robot (cobot) while the subjects’ EEG signal was acquired. The result was that a spike in the subject’s EEG signal was observed in the presence of uncomfortable movement. Questionnaires distributed to the subjects confirmed the results of the EEG signal measurements. In the controlled laboratory setting, all experiments were found to be statistically significant. In the first experiment, the peak EEG signal measured just after the activating event was greater than the resting EEG signal (p < 10⁻³). In the second experiment, the peak EEG signal measured just after the uncomfortable movement of the cobot was greater than the EEG signal measured under conditions of comfortable movement of the cobot (p < 10⁻³). In conclusion, within the isolated and constrained experimental environment, the results were satisfactory.
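A minimal sketch of the kind of paired comparison reported above, assuming a paired one-sided t-test (the paper does not state the exact test used) and made-up amplitude values:

import numpy as np
from scipy.stats import ttest_rel

# Synthetic example: per-subject resting EEG amplitude vs the peak just after
# the frightening event (values are illustrative, not the study's data).
rng = np.random.default_rng(9)
resting = rng.normal(10.0, 2.0, size=50)          # 50 subjects, arbitrary units
peak = resting + rng.normal(3.0, 1.5, size=50)    # peaks elevated after the event

t_stat, p_value = ttest_rel(peak, resting, alternative="greater")
print(f"t = {t_stat:.2f}, p = {p_value:.1e}")     # significance criterion: p < 1e-3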
Collapse
|