1
Şekerci Y, Kahraman MU, Özturan Ö, Çelik E, Ayan SŞ. Neurocognitive responses to spatial design behaviors and tools among interior architecture students: a pilot study. Sci Rep 2024; 14:4454. [PMID: 38396070] [PMCID: PMC10891056] [DOI: 10.1038/s41598-024-55182-7]
Abstract
Emotions strongly shape human behavior, and the ability to recognize people's feelings has a wide range of practical applications, including education. Here, educational methods and tools are calibrated according to data obtained from electroencephalogram (EEG) signals. Which design tools will be ideal for the future of interior architecture education remains an open question, so it is important to measure students' emotional states while they use manual and digital design tools and to determine their differing impacts. Brain-computer interfaces have made it possible to monitor emotional states conveniently and economically. EEG signals have been widely employed in emotion recognition research, and the resulting literature describes both basic emotions and complex states formed by combinations of them. The objective of this study is to investigate the emotional states and degrees of attachment experienced by interior architecture students during their design processes. This includes examining the use of 2D and 3D tools, whether manual or digital, and identifying changes in design-tool usage and behavior that may be influenced by different teaching techniques. Accordingly, hierarchical clustering, a data-analysis technique that groups objects into a hierarchy of clusters based on their similarity, was conducted.
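As a rough illustration of the clustering step described in this abstract, the minimal sketch below applies agglomerative hierarchical clustering (SciPy, Ward linkage) to a hypothetical matrix of EEG-derived features per student; the feature values and cluster count are placeholders, not the study's actual data or settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical matrix: one row per student/session, columns are EEG-derived
# features (e.g., band-power or engagement indices); values are illustrative.
rng = np.random.default_rng(0)
features = rng.normal(size=(24, 8))

# Agglomerative (hierarchical) clustering with Ward linkage on Euclidean distance.
Z = linkage(features, method="ward")

# Cut the dendrogram into, e.g., three groups of similar emotional/behavioral profiles.
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)
```

Cutting the resulting dendrogram with `fcluster` yields group labels that could then be compared against the design tools and behaviors each group favored.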
Affiliation(s)
- Yaren Şekerci
- Interior Architecture and Environmental Design, Antalya Bilim University, Antalya, 07190, Turkey.
- Mehmet Uğur Kahraman
- Interior Architecture and Environmental Design, Antalya Bilim University, Antalya, 07190, Turkey
- Özgü Özturan
- Akdeniz University, Interior Architecture, Antalya, 07070, Turkey
- Ertuğrul Çelik
- Electrical and Computer Engineering, Antalya Bilim University, Antalya, 07190, Turkey
- Sevgi Şengül Ayan
- Industrial Engineering, Antalya Bilim University, Antalya, 07190, Turkey
2
Pan L, Tang Z, Wang S, Song A. Cross-subject emotion recognition using hierarchical feature optimization and support vector machine with multi-kernel collaboration. Physiol Meas 2023; 44:125006. [PMID: 38029444] [DOI: 10.1088/1361-6579/ad10c6]
Abstract
Objective. Owing to individual differences, identifying multiple types of emotion across subjects is highly challenging. Approach. In this research, a hierarchical feature optimization method is proposed to represent emotional states effectively based on peripheral physiological signals. First, sparse learning combined with binary search is employed to select features from single signals. Then an improved fast correlation-based filter is proposed to fuse and optimize multi-channel signal features. To overcome the limitation of the support vector machine (SVM), which makes decisions with a single kernel function, a multi-kernel collaboration strategy is proposed to improve its classification performance. Main results. The effectiveness of the proposed method is verified on the DEAP dataset. Experimental results show that the proposed method achieves competitive performance for four cross-subject types of emotion identification, with accuracies of 84% (group 1) and 85.07% (group 2). Significance. The proposed model, combining hierarchical feature optimization with multi-kernel SVM collaboration, demonstrates superior emotion recognition accuracy compared with state-of-the-art techniques. In addition, the analysis based on the composition characteristics of the DEAP dataset offers a novel perspective for exploring emotion recognition more objectively and comprehensively.
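The multi-kernel collaboration idea can be sketched, under assumptions, as a weighted combination of standard kernels passed to an SVM via a precomputed Gram matrix; the kernel choices, weights, and random data below are illustrative, not the authors' actual configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel, linear_kernel

rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(100, 32)), rng.integers(0, 2, 100)
X_test = rng.normal(size=(20, 32))

def multi_kernel(A, B, weights=(0.5, 0.3, 0.2)):
    # Convex combination of RBF, polynomial, and linear kernels (weights sum to 1).
    w1, w2, w3 = weights
    return (w1 * rbf_kernel(A, B, gamma=0.1)
            + w2 * polynomial_kernel(A, B, degree=2)
            + w3 * linear_kernel(A, B))

# SVM with a precomputed Gram matrix: fit on (train x train), predict on (test x train).
clf = SVC(kernel="precomputed")
clf.fit(multi_kernel(X_train, X_train), y_train)
pred = clf.predict(multi_kernel(X_test, X_train))
print(pred)
```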
Affiliation(s)
- Lizheng Pan
- School of Mechanical Engineering and Rail Transit, Changzhou University, Changzhou 213164, People's Republic of China
- Ziqin Tang
- School of Mechanical Engineering and Rail Transit, Changzhou University, Changzhou 213164, People's Republic of China
- Shunchao Wang
- School of Mechanical Engineering and Rail Transit, Changzhou University, Changzhou 213164, People's Republic of China
- Aiguo Song
- School of Instrument Science and Engineering, Southeast University, Nanjing 210096, People's Republic of China
3
Zhong X, Gu Y, Luo Y, Zeng X, Liu G. Bi-hemisphere asymmetric attention network: recognizing emotion from EEG signals based on the transformer. Appl Intell 2022. [DOI: 10.1007/s10489-022-04228-2]
4
Akter S, Prodhan RA, Pias TS, Eisenberg D, Fresneda Fernandez J. M1M2: Deep-Learning-Based Real-Time Emotion Recognition from Neural Activity. Sensors (Basel) 2022; 22:8467. [PMID: 36366164] [PMCID: PMC9654596] [DOI: 10.3390/s22218467]
Abstract
Emotion recognition, or the ability of computers to interpret people's emotional states, is a very active research area with vast applications to improve people's lives. However, most image-based emotion recognition techniques are flawed, as humans can intentionally hide their emotions by changing facial expressions. Consequently, brain signals are being used to detect human emotions with improved accuracy, but most proposed systems demonstrate poor performance because EEG signals are difficult to classify using standard machine learning and deep learning techniques. This paper proposes two convolutional neural network (CNN) models (M1: a heavily parameterized CNN, and M2: a lightly parameterized CNN) coupled with elegant feature extraction methods for effective recognition. In this study, the most popular EEG benchmark dataset, DEAP, is utilized with two of its labels, valence and arousal, for binary classification. We use the fast Fourier transform to extract frequency-domain features, convolutional layers for deep features, and complementary features to represent the dataset. The M1 and M2 CNN models achieve nearly perfect accuracies of 99.89% and 99.22%, respectively, outperforming every previous state-of-the-art model. We empirically demonstrate that the M2 model requires only 2 seconds of EEG signal for 99.22% accuracy, and that it can achieve over 96% accuracy with only 125 milliseconds of EEG data for valence classification. Moreover, the proposed M2 model achieves 96.8% accuracy on valence using only 10% of the training dataset, demonstrating the proposed system's effectiveness. Documented implementation code for every experiment is published for reproducibility.
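A minimal sketch of the general pipeline (FFT band-power features from a DEAP-style EEG segment fed to a small CNN for binary valence classification) follows; the band definitions, network shape, and random input are assumptions for illustration, not the published M1/M2 architectures.

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical trial segment: 32 channels x 256 samples (2 s at 128 Hz, DEAP-style).
eeg = np.random.randn(32, 256)

# FFT-based band power per channel for four classic bands (limits in Hz).
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
freqs = np.fft.rfftfreq(eeg.shape[1], d=1 / 128)
power = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
feat = np.stack([power[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                 for lo, hi in bands.values()], axis=1)      # shape (32 channels, 4 bands)

# Small CNN over the channel x band "image", with a binary low/high valence output.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 32 * 4, 2),
)
logits = model(torch.tensor(feat, dtype=torch.float32).reshape(1, 1, 32, 4))
print(logits.shape)  # (1, 2) -> scores for low vs. high valence
```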
Affiliation(s)
- Sumya Akter
- Martin Tuchman School of Management, New Jersey Institute of Technology, Newark, NJ 07102, USA
- Rumman Ahmed Prodhan
- Martin Tuchman School of Management, New Jersey Institute of Technology, Newark, NJ 07102, USA
- Tanmoy Sarkar Pias
- Department of Computer Science, Virginia Tech, Blacksburg, VA 24061, USA
- David Eisenberg
- Department of Information Systems, Ying Wu College of Computing, New Jersey Institute of Technology, Newark, NJ 07102, USA
5
A universal emotion recognition method based on feature priority evaluation and classifier reinforcement. Int J Mach Learn Cybern 2022. [DOI: 10.1007/s13042-022-01590-y]
6
Olamat A, Ozel P, Atasever S. Deep Learning Methods for Multi-Channel EEG-Based Emotion Recognition. Int J Neural Syst 2022; 32:2250021. [DOI: 10.1142/s0129065722500216]
Abstract
Fourier-based, wavelet-based, and Hilbert-based time-frequency techniques have generated considerable interest in classification studies for emotion recognition in human-computer interface investigations. Empirical mode decomposition (EMD), one of the Hilbert-based time-frequency techniques, has been developed as a tool for adaptive signal processing, and its multi-variate extension captures the common oscillatory structure of a multi-channel signal by exploiting shared notions of instantaneous frequency and bandwidth. Electroencephalographic (EEG) signals, in turn, are strongly preferred for studying emotion recognition in human-machine interaction. This study presents an emotion detection design based on EEG signal decomposition using multi-variate empirical mode decomposition (MEMD). For emotion recognition, the SJTU emotion EEG dataset (SEED) is classified using deep learning methods: convolutional neural networks (AlexNet, DenseNet-201, ResNet-101, and ResNet-50) and AutoKeras architectures are selected for image classification. The proposed framework reaches 99% and 100% classification accuracy with transfer learning and with AutoKeras, respectively.
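The transfer-learning step can be illustrated roughly as follows: a pretrained ResNet-50 backbone is frozen and only a new final layer is trained for the three SEED classes (negative, neutral, positive). The random tensor below stands in for a batch of MEMD-derived time-frequency images; this is a sketch of the general technique, not the paper's exact setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse ImageNet ResNet-50 features; retrain only the final layer for 3 SEED classes.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                     # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 3)   # new trainable classifier head

images = torch.randn(4, 3, 224, 224)            # stand-in for MEMD-derived image batch
logits = model(images)
print(logits.shape)                             # (4, 3)
```

Only `model.fc` would receive gradient updates during fine-tuning, which is what keeps the transfer-learning variant cheap relative to training a CNN from scratch.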
Affiliation(s)
- Ali Olamat
- Biomedical Engineering Department, Yildiz Technical University, Istanbul, Turkey
- Pinar Ozel
- Biomedical Engineering Department, Nevsehir Hacı Bektas Veli University, Nevsehir, Turkey
- Sema Atasever
- Computer Engineering Department, Nevsehir Hacı Bektas Veli University, Nevsehir, Turkey
7
Li S, Lyu X, Zhao L, Chen Z, Gong A, Fu Y. Identification of Emotion Using Electroencephalogram by Tunable Q-Factor Wavelet Transform and Binary Gray Wolf Optimization. Front Comput Neurosci 2021; 15:732763. [PMID: 34566614] [PMCID: PMC8455931] [DOI: 10.3389/fncom.2021.732763]
Abstract
The emotional brain-computer interface based on electroencephalography (EEG) is an active topic in human-computer interaction and an important part of affective computing, and recognizing emotion-induced EEG is a key problem within it. First, the preprocessed EEG is decomposed by the tunable-Q wavelet transform. Second, the sample entropy, second-order differential mean, normalized second-order differential mean, and Hjorth parameters (mobility and complexity) of each sub-band are extracted. Then, the binary gray wolf optimization algorithm is used to optimize the feature matrix. Finally, a support vector machine is used to train the classifier. The proposed algorithm identifies five types of emotion signal samples from the 32 subjects of the Database for Emotion Analysis using Physiological Signals (DEAP). After 6-fold cross-validation, the maximum recognition accuracy is 90.48%, the sensitivity is 70.25%, the specificity is 82.01%, and the Kappa coefficient is 0.603. The results show that the proposed method performs well in recognizing multiple types of EEG emotion signals and improves on traditional methods.
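Two of the named features, Hjorth mobility and complexity, are simple to compute; the sketch below extracts them from hypothetical EEG segments and trains a plain RBF-kernel SVM. The TQWT decomposition and binary gray wolf feature selection are omitted, so this is only a partial, assumed illustration of the pipeline, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def hjorth(x):
    # Hjorth mobility and complexity of a 1-D signal (a TQWT sub-band in the paper).
    dx = np.diff(x)
    ddx = np.diff(dx)
    var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
    mobility = np.sqrt(var_dx / var_x)
    complexity = np.sqrt(var_ddx / var_dx) / mobility
    return mobility, complexity

rng = np.random.default_rng(2)
# Hypothetical: 40 EEG segments of 512 samples each, with binary emotion labels.
segments = rng.normal(size=(40, 512))
labels = rng.integers(0, 2, 40)
features = np.array([hjorth(s) for s in segments])

clf = SVC(kernel="rbf").fit(features, labels)
print(clf.score(features, labels))
```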
Affiliation(s)
- Siyu Li
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China; Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
- Xiaotong Lyu
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China; Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
- Lei Zhao
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China; Faculty of Science, Kunming University of Science and Technology, Kunming, China
- Zhuangfei Chen
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China; School of Medicine, Center for Brain Science and Visual Cognition, Kunming University of Science and Technology, Kunming, China
- Anmin Gong
- College of Information Engineering, Engineering University of PAP, Xi'an, China
- Yunfa Fu
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China; Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China; School of Medicine, Center for Brain Science and Visual Cognition, Kunming University of Science and Technology, Kunming, China; Computer Technology Application Key Lab of Yunnan Province, Kunming University of Science and Technology, Kunming, China
8
Rahul J, Sora M, Sharma LD, Bohat VK. An improved cardiac arrhythmia classification using an RR interval-based approach. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.04.004]
9
Khan MU, Aziz S, Akram T, Amjad F, Iqtidar K, Nam Y, Khan MA. Expert Hypertension Detection System Featuring Pulse Plethysmograph Signals and Hybrid Feature Selection and Reduction Scheme. Sensors (Basel) 2021; 21:247. [PMID: 33401652] [PMCID: PMC7794944] [DOI: 10.3390/s21010247]
Abstract
Hypertension is an antecedent to cardiac disorders. According to the World Health Organization (WHO), the number of people affected by hypertension will reach around 1.56 billion by 2025. Early detection of hypertension is imperative to prevent the complications caused by cardiac abnormalities. Hypertension usually presents no apparent detectable symptoms; hence, the control rate is significantly low. Computer-aided diagnosis based on machine learning and signal analysis has recently been applied to identify biomarkers for the accurate prediction of hypertension. This research proposes a new expert hypertension detection system (EHDS) that categorizes pulse plethysmograph (PuPG) signals as normal or hypertensive. The PuPG signal data set, containing rich information on cardiac activity, was acquired from healthy and hypertensive subjects. The raw PuPG signals were preprocessed through empirical mode decomposition (EMD), which decomposes a signal into its constituent components. A combination of multi-domain features was extracted from the preprocessed PuPG signal. The features exhibiting highly discriminative characteristics were selected and reduced through a proposed hybrid feature selection and reduction (HFSR) scheme. The selected features were subjected to various classification methods in a comparative fashion, in which the best performance of 99.4% accuracy, 99.6% sensitivity, and 99.2% specificity was achieved by weighted k-nearest neighbor (KNN-W). The performance of the proposed EHDS was thoroughly assessed by tenfold cross-validation. The proposed EHDS achieved better detection performance than other electrocardiogram (ECG) and photoplethysmograph (PPG)-based methods.
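The classifier reported as best, a distance-weighted k-nearest-neighbor model evaluated with tenfold cross-validation, can be sketched with scikit-learn as below; the feature matrix, labels, and value of k are hypothetical placeholders rather than the actual PuPG data or the authors' tuned settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# Hypothetical feature matrix after selection/reduction: 200 subjects x 12 features.
X = rng.normal(size=(200, 12))
y = rng.integers(0, 2, 200)   # 0 = normotensive, 1 = hypertensive (illustrative labels)

# Distance-weighted KNN, roughly in the spirit of the paper's KNN-W classifier.
knn_w = KNeighborsClassifier(n_neighbors=5, weights="distance")
print(cross_val_score(knn_w, X, y, cv=10).mean())   # tenfold cross-validation
```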
Affiliation(s)
- Muhammad Umar Khan
- Department of Electronics Engineering, University of Engineering and Technology Taxila, Taxila 47050, Pakistan
- Sumair Aziz
- Department of Electronics Engineering, University of Engineering and Technology Taxila, Taxila 47050, Pakistan
- Tallha Akram
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Wah Campus, Wah Cantonment, Islamabad 45550, Pakistan
- Fatima Amjad
- Department of Electronics Engineering, University of Engineering and Technology Taxila, Taxila 47050, Pakistan
- Khushbakht Iqtidar
- Department of Computer and Software Engineering, College of Electrical and Mechanical Engineering, National University of Sciences and Technology, Islamabad 44000, Pakistan
- Yunyoung Nam
- Department of Computer Science and Engineering, Soonchunhyang University, Asan 31538, Korea
10
Wang J, Wang M. Review of the emotional feature extraction and classification using EEG signals. Cognitive Robotics 2021. [DOI: 10.1016/j.cogr.2021.04.001]
11
Tanabe S. How can artificial intelligence and humans work together to fight against cancer? Artif Intell Cancer 2020; 1:45-50. [DOI: 10.35713/aic.v1.i3.45]
Abstract
This editorial focuses on the growth of artificial intelligence (AI) and its utilization in human cancer therapy. Databases and big data related to genomes, genes, proteins, and molecular networks, in which information on human diseases including cancer and infection resides, are rapidly increasing worldwide. To overcome these diseases, prevention strategies and therapeutics are being developed from this abundant data analyzed with AI. AI has great potential for handling such large volumes of data, but its use requires direction and ambition, and appropriate interpretation of AI output is essential for understanding disease mechanisms and finding targets for prevention and therapeutics. Collaboration with AI to extract the essence of cancer data and to model intelligent networks is explored. AI can provide humans with predictive insight into disease mechanisms, treatment, and prevention.
Affiliation(s)
- Shihori Tanabe
- Division of Risk Assessment, Center for Biological Safety and Research, National Institute of Health Sciences, Kawasaki 210-9501, Kanagawa, Japan