1
Ortiz-Garcés I, Govea J, Sánchez-Viteri S, Villegas-Ch. W. CyberEduPlatform: an educational tool to improve cybersecurity through anomaly detection with Artificial Intelligence. PeerJ Comput Sci 2024;10:e2041. PMID: 38983228; PMCID: PMC11232618; DOI: 10.7717/peerj-cs.2041.
Abstract
Cybersecurity has become a central concern in the contemporary digital era due to the exponential increase in cyber threats. These threats, ranging from simple malware to advanced persistent attacks, put individuals and organizations at risk. This study explores the potential of artificial intelligence to detect anomalies in network traffic in a university environment. The effectiveness of automatic detection of unconventional activities was evaluated through extensive simulations and advanced artificial intelligence models. In addition, the importance of cybersecurity awareness and education is highlighted, introducing CyberEduPlatform, a tool designed to improve users' cyber awareness. The results indicate that, while AI models show high precision in detecting anomalies, complementary education and awareness play a crucial role in fortifying the first lines of defense against cyber threats. This research highlights the need for an integrated approach to cybersecurity, combining advanced technological solutions with robust educational strategies.
Affiliation(s)
- Iván Ortiz-Garcés
- Escuela de Ingeniería en Ciberseguridad, Facultad de Ingenierías y Ciencias Aplicadas, Universidad de Las Américas, Quito, Pichincha, Ecuador
- Jaime Govea
- Escuela de Ingeniería en Ciberseguridad, Facultad de Ingenierías y Ciencias Aplicadas, Universidad de Las Américas, Quito, Pichincha, Ecuador
- Santiago Sánchez-Viteri
- Departamento de Sistemas, Universidad Internacional del Ecuador, Quito, Pichincha, Ecuador
- William Villegas-Ch.
- Escuela de Ingeniería en Ciberseguridad, Facultad de Ingenierías y Ciencias Aplicadas, Universidad de Las Américas, Quito, Pichincha, Ecuador
2
Hamzah HA, Abdalla KK. EEG-based emotion recognition systems; comprehensive study. Heliyon 2024;10:e31485. PMID: 38818173; PMCID: PMC11137547; DOI: 10.1016/j.heliyon.2024.e31485.
Abstract
Emotion recognition technology through EEG signal analysis is currently a fundamental concept in artificial intelligence, with major practical implications in emotional health care, human-computer interaction, and related areas. This paper provides a comprehensive study of methods for extracting electroencephalography (EEG) features for emotion recognition from four perspectives: time-domain features, frequency-domain features, time-frequency features, and nonlinear features. We summarize the pattern recognition methods adopted in most related works and, given the rapid development of deep learning (DL) and the attention it has attracted in this field, pay particular attention to deep learning-based studies, analysing their characteristics, advantages, disadvantages, and applicable scenarios. Finally, current challenges and future development directions in this field are summarized. This paper can help novice researchers gain a systematic understanding of the current status of EEG-based emotion recognition research and provides ideas for subsequent related work.
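The four feature families the survey covers can be illustrated with a minimal NumPy sketch; the sampling rate, band edges, and specific features below are illustrative assumptions, not drawn from the paper:

```python
import numpy as np

FS = 128  # assumed sampling rate in Hz (illustrative)

def time_features(x):
    """Time-domain features: mean, variance, and Hjorth mobility."""
    mobility = np.sqrt(np.var(np.diff(x)) / np.var(x))
    return {"mean": x.mean(), "var": x.var(), "mobility": mobility}

def band_power(x, fs, lo, hi):
    """Frequency-domain feature: mean periodogram power in [lo, hi) Hz."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return psd[(freqs >= lo) & (freqs < hi)].mean()

# Toy "EEG" segment: a 10 Hz (alpha-band) oscillation plus noise.
rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

feats = time_features(x)
alpha = band_power(x, FS, 8, 13)
beta = band_power(x, FS, 13, 30)
print(alpha > beta)  # the alpha band dominates for a 10 Hz tone
```

Time-frequency and nonlinear features (e.g., wavelet coefficients, entropy measures) follow the same pattern of mapping a windowed signal to a fixed-length vector.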
Affiliation(s)
- Hussein Ali Hamzah
- Electrical Engineering Department, College of Engineering, University of Babylon, Iraq
- Kasim K. Abdalla
- Electrical Engineering Department, College of Engineering, University of Babylon, Iraq
3
Laufer I, Mizrahi D, Zuckerman I. Enhancing EEG-based attachment style prediction: unveiling the impact of feature domains. Front Psychol 2024;15:1326791. PMID: 38318079; PMCID: PMC10838989; DOI: 10.3389/fpsyg.2024.1326791.
Abstract
Introduction: Attachment styles are crucial in human relationships and have been explored through neurophysiological responses and EEG data analysis. This study investigates the potential of EEG data in predicting and differentiating secure and insecure attachment styles, contributing to the understanding of the neural basis of interpersonal dynamics.
Methods: We engaged 27 participants in our study, employing an XGBoost classifier to analyze EEG data across various feature domains, including time-domain, complexity-based, and frequency-based attributes.
Results: The study found significant differences in the precision of attachment style prediction: a high precision rate of 96.18% for predicting insecure attachment, and a lower precision of 55.34% for secure attachment. Balanced accuracy metrics indicated an overall model accuracy of approximately 84.14%, taking into account dataset imbalances.
Discussion: These results highlight the challenges in using EEG patterns for attachment style prediction due to the complex nature of attachment insecurities. Individuals with heightened perceived insecurity predominantly aligned with the insecure attachment category, suggesting a link to their increased emotional reactivity and sensitivity to social cues. The study underscores the importance of time-domain features in prediction accuracy, followed by complexity-based features, while noting the lesser impact of frequency-based features. Our findings advance the understanding of the neural correlates of attachment and pave the way for future research, including expanding demographic diversity and integrating multimodal data to refine predictive models.
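The balanced-accuracy metric the study reports, which averages per-class recall to compensate for the secure/insecure class imbalance, can be sketched as follows; the labels below are toy values, not the study's data, and any classifier (the study uses XGBoost) could supply the predictions:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; robust to class imbalance, unlike plain accuracy."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

# Toy imbalanced labels: 9 "insecure" (1) vs. 3 "secure" (0).
y_true = np.array([1] * 9 + [0] * 3)
y_pred = np.array([1] * 9 + [0, 1, 1])  # classifier biased toward the majority class

plain = float(np.mean(y_true == y_pred))      # 10/12 ≈ 0.833
balanced = balanced_accuracy(y_true, y_pred)  # (1.0 + 1/3) / 2 ≈ 0.667
print(round(plain, 3), round(balanced, 3))
```

The gap between the two numbers is exactly the effect the study's balanced-accuracy figure is meant to expose.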
Affiliation(s)
- Dor Mizrahi
- Department of Industrial Engineering and Management, Ariel University, Ariel, Israel
4
Vieira JC, Guedes LA, Santos MR, Sanchez-Gendriz I. Using Explainable Artificial Intelligence to Obtain Efficient Seizure-Detection Models Based on Electroencephalography Signals. Sensors (Basel) 2023;23:9871. PMID: 38139715; PMCID: PMC10747117; DOI: 10.3390/s23249871.
Abstract
Epilepsy is a condition that affects 50 million individuals globally, significantly impacting their quality of life. Epileptic seizures, a transient occurrence, are characterized by a spectrum of manifestations, including alterations in motor function and consciousness. These events impose restrictions on the daily lives of those affected, frequently resulting in social isolation and psychological distress. In response, numerous efforts have been directed towards the detection and prevention of epileptic seizures through EEG signal analysis, employing machine learning and deep learning methodologies. This study presents a methodology that reduces the number of features and channels required by simpler classifiers, leveraging Explainable Artificial Intelligence (XAI) for the detection of epileptic seizures. The proposed approach achieves performance metrics exceeding 95% in accuracy, precision, recall, and F1-score by utilizing merely six features and five channels in a temporal domain analysis, with a time window of 1 s. The model demonstrates robust generalization across the patient cohort included in the database, suggesting that feature reduction in simpler models-without resorting to deep learning-is adequate for seizure detection. The research underscores the potential for substantial reductions in the number of attributes and channels, advocating for the training of models with strategically selected electrodes, and thereby supporting the development of effective mobile applications for epileptic seizure detection.
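The core idea — rank features with an attribution method, then retrain a simpler model on only the top-ranked ones — can be sketched with impurity-based importances as a generic stand-in for the XAI attribution the paper uses; the synthetic data, feature count, and model choice are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 400
# Ten candidate features; only features 0 and 3 carry the class signal.
X = rng.standard_normal((n, 10))
y = (X[:, 0] + X[:, 3] > 0).astype(int)

# Rank features by importance, then keep only the top two.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]
top = ranking[:2]

# Retrain a reduced model on the selected features only.
reduced = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, top], y)
print(sorted(top.tolist()))
```

The same selection logic applies to channels: score each electrode's contribution, then train on the top-scoring subset.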
Affiliation(s)
- Jusciaane Chacon Vieira
- Department of Computer Engineering and Automation—DCA, Federal University of Rio Grande do Norte—UFRN, Natal 59078-900, RN, Brazil; (L.A.G.); (M.R.S.); (I.S.-G.)
5
Prasad R, Tarai S, Bit A. Investigation of frequency components embedded in EEG recordings underlying neuronal mechanism of cognitive control and attentional functions. Cogn Neurodyn 2023;17:1321-1344. PMID: 37786663; PMCID: PMC10542063; DOI: 10.1007/s11571-022-09888-x.
Abstract
Attentional cognitive control regulates perception to enhance human behaviour. The current study examines attentional mechanisms in terms of the time and frequency characteristics of EEG signals. The cognitive load is higher for processing a local attentional stimulus, demanding a longer response time (RT) with lower response accuracy (RA). Global attentional mechanisms, on the other hand, broadly promote perception while demanding a low cognitive load, with faster RT and higher RA. Attentional mechanisms refer to perceptual systems that allocate adaptive behaviours to prioritize the processing of relevant stimuli based on local and global features. The early sensory component C1, associated with the local attentional mechanism, showed higher amplitudes than the global attentional mechanisms in parieto-occipital regions. The local attentional mechanisms were also sustained in the N2 and P3 components, showing higher amplitude on the left and right hemispheric sides of the temporal regions (T7 and T8). The theta frequency band showed higher power spectral density (PSD) values while processing local attentional mechanisms, whereas the contribution of the other frequency bands was negligible. Hence, integrating the attentional mechanisms in terms of ERP and frequency signatures, a hybrid custom weight allocation model (CWAM) was built to assess and predict the contribution of insignificant channels to significant ones. The CWAM model was formulated from computational linear regression derivatives, which derive a significance score while ranking the hierarchical performance of each channel with respect to frequent and deviant occurrences of the global-local stimulus. This model enables us to map the neural dynamics of the cognitive allocation of resources across different locations of the human brain during attentional processing.
CWAM is reported to be the first model to evaluate the performance of non-significant channels in enhancing the response of significant channels. Its findings suggest that the brain's performance may be determined by the underlying contribution of the non-significant channels. Supplementary Information: The online version contains supplementary material available at 10.1007/s11571-022-09888-x.
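The band-wise PSD comparison underlying the theta-band finding can be sketched with SciPy's Welch estimator; the sampling rate and the toy 6 Hz signal are assumptions for illustration, not the study's data:

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate in Hz
rng = np.random.default_rng(2)
t = np.arange(4 * FS) / FS
# Toy trial with a strong 6 Hz (theta-band) oscillation plus noise.
x = 2.0 * np.sin(2 * np.pi * 6 * t) + rng.standard_normal(t.size)

freqs, psd = welch(x, fs=FS, nperseg=FS)

def band_psd(lo, hi):
    """Mean Welch PSD in the band [lo, hi) Hz."""
    return psd[(freqs >= lo) & (freqs < hi)].mean()

theta, alpha = band_psd(4, 8), band_psd(8, 13)
print(theta > alpha)  # theta power dominates for this signal
```

Comparing such band-limited PSD values per channel and condition is the standard first step before any channel-weighting model like CWAM.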
Affiliation(s)
- Shashikanta Tarai
- Department of Humanities and Social Sciences, NIT Raipur, Raipur, India
- Arindam Bit
- Department of Biomedical Engineering, NIT Raipur, Raipur, India
6
Xu H, Cao K, Chen H, Abudusalamu A, Wu W, Xue Y. Emotional brain network decoded by biological spiking neural network. Front Neurosci 2023;17:1200701. PMID: 37496741; PMCID: PMC10366476; DOI: 10.3389/fnins.2023.1200701.
Abstract
Introduction: Emotional disorders are essential manifestations of many neurological and psychiatric diseases. Researchers are now exploring bi-directional brain-computer interface techniques to help patients. However, the related functional brain areas and biological markers are still unclear, and the dynamic connection mechanism is unknown.
Methods: To find effective regions related to the recognition of, and intervention in, different emotions, our research focuses on identifying emotional EEG brain networks using a spiking neural network algorithm with binary coding. We collected EEG data while human participants watched emotional videos (fear, sadness, happiness, and neutrality), and analyzed the dynamic connections between the electrodes and the biological rhythms of different emotions.
Results: The analysis showed that the local high-activation brain network of fear and sadness is mainly in the parietal lobe area, while that of happiness is in the prefrontal-temporal lobe-central area. Furthermore, the α frequency band could effectively represent negative emotions, while the α frequency band could be used as a biological marker of happiness. The decoding accuracy of the three emotions reached 86.36%, 95.18%, and 89.09%, respectively, fully reflecting the excellent emotional decoding performance of the spiking neural network with self-backpropagation.
Discussion: The introduction of the self-backpropagation mechanism effectively improves the performance of the spiking neural network model. Different emotions exhibit distinct EEG networks and neuro-oscillatory biological markers. These emotional brain networks and biological markers may provide important hints for brain-computer interface exploration to aid recovery from related brain diseases.
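The basic unit of a spiking neural network like the one used here is an integrate-and-fire neuron that emits the kind of binary code the study relies on. A minimal leaky integrate-and-fire (LIF) sketch follows; all constants are illustrative, and the study's self-backpropagation mechanism is not modeled:

```python
import numpy as np

def lif_spikes(current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: integrate input, spike and reset at threshold."""
    v, spikes = 0.0, []
    for i in current:
        v += (dt / tau) * (-v + i)   # leaky membrane integration
        if v >= v_thresh:            # threshold crossing emits a binary spike
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant supra-threshold drive produces a regular binary spike train.
spike_train = lif_spikes(np.full(500, 2.0))
print(spike_train.sum() > 0, set(spike_train.tolist()) <= {0, 1})
```

In a full model, each EEG electrode drives such neurons, and co-activation patterns across them define the decoded brain network.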
Affiliation(s)
- Hubo Xu
- National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence, Peking University, Beijing, China
- Department of Pharmacology, School of Basic Medical Sciences, Peking University, Beijing, China
- Kexin Cao
- National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence, Peking University, Beijing, China
- Department of Pharmacology, School of Basic Medical Sciences, Peking University, Beijing, China
- Hongguang Chen
- NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Peking University Institute of Mental Health, Peking University Sixth Hospital, Beijing, China
- Awuti Abudusalamu
- National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence, Peking University, Beijing, China
- Department of Pharmacology, School of Basic Medical Sciences, Peking University, Beijing, China
- Wei Wu
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Yanxue Xue
- National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence, Peking University, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Key Laboratory for Neuroscience, Ministry of Education/National Health Commission, Peking University, Beijing, China
7
Nieto Mora D, Valencia S, Trujillo N, López JD, Martínez JD. Characterizing social and cognitive EEG-ERP through multiple kernel learning. Heliyon 2023;9:e16927. PMID: 37484433; PMCID: PMC10361029; DOI: 10.1016/j.heliyon.2023.e16927.
Abstract
EEG-ERP social-cognitive studies with healthy populations commonly fail to provide significant evidence due to low-quality data and the inherent similarity between groups. We propose a multiple kernel learning-based approach that enhances classification accuracy while keeping the features traceable (to frequency bands or regions of interest) as a linear combination of kernels. The kernel weights determine the relevance of each source of information, which is crucial for specialists. As a case study, we classify healthy ex-combatants of the Colombian armed conflict and civilians through a cognitive valence recognition task. Although previous works have shown accuracies below 80% with these groups, our proposal achieved an F1 score of 98%, revealing the most relevant bands and brain regions, which form the basis for socio-cognitive training. With this methodology, we aim to contribute to standardizing EEG analyses and strengthening their statistical power.
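The traceable linear combination of kernels can be sketched as a weighted sum of per-source Gram matrices fed to a precomputed-kernel SVM. The two feature "sources", the weights, and the data below are illustrative assumptions; the study learns the weights via multiple kernel learning rather than fixing them:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(3)
n = 120
# Two hypothetical feature sources (e.g., one per frequency band or ROI).
X_theta = rng.standard_normal((n, 5))
X_alpha = rng.standard_normal((n, 5))
y = (X_alpha[:, 0] > 0).astype(int)  # only the "alpha" source is informative

# Fixed kernel weights standing in for the learned MKL weights.
weights = {"theta": 0.2, "alpha": 0.8}
K = weights["theta"] * rbf_kernel(X_theta) + weights["alpha"] * rbf_kernel(X_alpha)

clf = SVC(kernel="precomputed").fit(K, y)
acc = clf.score(K, y)  # training accuracy on the combined kernel
```

Because the decision function depends on the sources only through the weighted sum, the weights directly report how much each band or region contributes.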
Affiliation(s)
- Daniel Nieto Mora
- Máquinas Inteligentes y Reconocimiento de Patrones, Instituto Tecnológico Metropolitano ITM - Medellín, Colombia
- Stella Valencia
- Grupo de Investigación Salud Mental, Facultad Nacional de Salud Pública, Universidad de Antioquia UDEA - Medellín, Colombia
- Grupo de Neurociencias de Antioquia, Facultad de Medicina, Universidad de Antioquia UDEA - Medellín, Colombia
- Natalia Trujillo
- Grupo de Investigación Salud Mental, Facultad Nacional de Salud Pública, Universidad de Antioquia UDEA - Medellín, Colombia
- Grupo de Neurociencias de Antioquia, Facultad de Medicina, Universidad de Antioquia UDEA - Medellín, Colombia
- Jose David López
- Engineering Faculty, Universidad de Antioquia UDEA - Medellín, Colombia
8
Yuvaraj R, Baranwal A, Prince AA, Murugappan M, Mohammed JS. Emotion Recognition from Spatio-Temporal Representation of EEG Signals via 3D-CNN with Ensemble Learning Techniques. Brain Sci 2023;13:685. PMID: 37190650; DOI: 10.3390/brainsci13040685.
Abstract
The recognition of emotions is one of the most challenging issues in human-computer interaction (HCI). EEG signals are widely adopted as a method for recognizing emotions because of their ease of acquisition, mobility, and convenience. Deep neural networks (DNN) have provided excellent results in emotion recognition studies. Most studies, however, use other methods to extract handcrafted features, such as Pearson correlation coefficient (PCC), Principal Component Analysis, Higuchi Fractal Dimension (HFD), etc., even though DNN is capable of generating meaningful features. Furthermore, most earlier studies largely ignored spatial information between the different channels, focusing mainly on time domain and frequency domain representations. This study utilizes a pre-trained 3D-CNN MobileNet model with transfer learning on the spatio-temporal representation of EEG signals to extract features for emotion recognition. In addition to fully connected layers, hybrid models were explored using other decision layers such as multilayer perceptron (MLP), k-nearest neighbor (KNN), extreme learning machine (ELM), XGBoost (XGB), random forest (RF), and support vector machine (SVM). Additionally, this study investigates the effects of post-processing or filtering output labels. Extensive experiments were conducted on the SJTU Emotion EEG Dataset (SEED) (three classes) and SEED-IV (four classes) datasets, and the results obtained were comparable to the state-of-the-art. Based on the conventional 3D-CNN with ELM classifier, SEED and SEED-IV datasets showed a maximum accuracy of 89.18% and 81.60%, respectively. Post-filtering improved the emotional classification performance in the hybrid 3D-CNN with ELM model for SEED and SEED-IV datasets to 90.85% and 83.71%, respectively. Accordingly, spatial-temporal features extracted from the EEG, along with ensemble classifiers, were found to be the most effective in recognizing emotions compared to state-of-the-art methods.
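Among the decision layers compared, the extreme learning machine (ELM) is the least standard: a fixed random hidden layer with output weights solved in closed form. A minimal sketch on toy stand-in features (not the study's 3D-CNN features) is:

```python
import numpy as np

rng = np.random.default_rng(5)

class ELM:
    """Extreme learning machine: fixed random hidden layer, least-squares readout."""
    def __init__(self, n_hidden=100):
        self.n_hidden = n_hidden

    def fit(self, X, y):
        self.W = rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        T = np.eye(int(y.max()) + 1)[y]    # one-hot targets
        self.beta = np.linalg.pinv(H) @ T  # closed-form solve, no iterative training
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

# Toy stand-in for CNN-extracted feature vectors.
X = rng.standard_normal((200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
acc = float((ELM().fit(X, y).predict(X) == y).mean())
```

The closed-form solve is what makes ELM attractive as a lightweight decision layer on top of deep features: no backpropagation through the readout is needed.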
Affiliation(s)
- Rajamanickam Yuvaraj
- National Institute of Education, Nanyang Technological University, Singapore 637616, Singapore
- Arapan Baranwal
- Department of Computer Science and Information Systems, BITS Pilani, Sancoale 403726, Goa, India
- A Amalin Prince
- Department of Electrical and Electronics Engineering, BITS Pilani, Sancoale 403726, Goa, India
- M Murugappan
- Intelligent Signal Processing (ISP) Research Lab, Department of Electronics and Communication Engineering, Kuwait College of Science and Technology, Block 4, Doha 13133, Kuwait
- Department of Electronics and Communication Engineering, Faculty of Engineering, Vels Institute of Sciences, Technology, and Advanced Studies, Chennai 600117, Tamilnadu, India
- Centre for Excellence in Unmanned Aerial Systems (CoEUAS), Universiti Malaysia Perlis, Kangar 02600, Perlis, Malaysia
- Javeed Shaikh Mohammed
- Department of Biomedical Technology, College of Applied Medical Sciences, Prince Sattam bin Abdulaziz University, Al Kharj 11942, Saudi Arabia
9
Inventive deep convolutional neural network classifier for emotion identification in accordance with EEG signals. Social Network Analysis and Mining 2023. DOI: 10.1007/s13278-023-01035-6.
10
Abdel-Hamid L. An Efficient Machine Learning-Based Emotional Valence Recognition Approach Towards Wearable EEG. Sensors (Basel) 2023;23:1255. PMID: 36772295; PMCID: PMC9921881; DOI: 10.3390/s23031255.
Abstract
Emotion artificial intelligence (AI) is being increasingly adopted in industries such as healthcare and education. Facial expressions and tone of speech have previously been considered for emotion recognition, yet they have the drawback of being easily manipulated by subjects to mask their true emotions. Electroencephalography (EEG) has emerged as a reliable and cost-effective method to detect true human emotions. Recently, substantial research effort has been devoted to developing efficient wearable EEG devices for consumer use in out-of-the-lab scenarios. In this work, a subject-dependent emotional valence recognition method is implemented, intended for use in emotion AI applications. Time and frequency features were computed from a single time series derived from the Fp1 and Fp2 channels. Several analyses were performed on the strongest valence emotions to determine the most relevant features, frequency bands, and EEG timeslots using the benchmark DEAP dataset. Binary classification experiments resulted in an accuracy of 97.42% using the alpha band, thereby outperforming several approaches from the literature by ~3-22%. Multiclass classification gave an accuracy of 95.0%. Feature computation and classification required less than 0.1 s. The proposed method thus has the advantage of reduced computational complexity as, unlike most methods in the literature, only two EEG channels were considered. In addition, the minimal feature set concluded from the thorough analyses conducted in this study was used to achieve state-of-the-art performance. The implemented EEG emotion recognition method thus has the merits of being reliable and easily reproducible, making it well-suited for wearable EEG devices.
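The two-channel pipeline can be sketched as: derive one series from Fp1/Fp2, then compute a time-domain and an alpha-band feature from it. How the paper combines the two channels is not specified here, so the channel difference used below is an assumption, as are the sampling rate and toy signals:

```python
import numpy as np

FS = 128  # DEAP recordings are commonly downsampled to 128 Hz

def fuse_channels(fp1, fp2):
    """Single time series from the two prefrontal channels (assumed: difference)."""
    return fp1 - fp2

def features(x, fs=FS):
    """One time-domain and one alpha-band frequency-domain feature."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    alpha = psd[(freqs >= 8) & (freqs < 13)].mean()
    return {"std": float(x.std()), "alpha_power": float(alpha)}

rng = np.random.default_rng(6)
t = np.arange(4 * FS) / FS
fp1 = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
fp2 = 0.1 * rng.standard_normal(t.size)
feats = features(fuse_channels(fp1, fp2))
```

Restricting the pipeline to two prefrontal channels is what keeps the feature computation under the reported 0.1 s budget.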
Affiliation(s)
- Lamiaa Abdel-Hamid
- Department of Electronics & Communication, Faculty of Engineering, Misr International University (MIU), Heliopolis, Cairo P.O. Box 1, Egypt
11
Wang X, Ren Y, Luo Z, He W, Hong J, Huang Y. Deep learning-based EEG emotion recognition: Current trends and future perspectives. Front Psychol 2023;14:1126994. PMID: 36923142; PMCID: PMC10009917; DOI: 10.3389/fpsyg.2023.1126994.
Abstract
Automatic electroencephalogram (EEG) emotion recognition is a challenging component of human-computer interaction (HCI). Inspired by the powerful feature learning ability of recently emerged deep learning techniques, various advanced deep learning models have been employed increasingly to learn high-level feature representations for EEG emotion recognition. This paper aims to provide an up-to-date and comprehensive survey of EEG emotion recognition, especially of the deep learning techniques used in this area. We provide the preliminaries and basic knowledge from the literature and briefly review the EEG emotion recognition benchmark data sets. We then review deep learning techniques in detail, including deep belief networks, convolutional neural networks, and recurrent neural networks, and describe their state-of-the-art applications to EEG emotion recognition. Finally, we analyze the challenges and opportunities in this field and point out its future directions.
Affiliation(s)
- Xiaohu Wang
- School of Intelligent Manufacturing and Mechanical Engineering, Hunan Institute of Technology, Hengyang, China
- Yongmei Ren
- School of Electrical and Information Engineering, Hunan Institute of Technology, Hengyang, China
- Ze Luo
- School of Intelligent Manufacturing and Mechanical Engineering, Hunan Institute of Technology, Hengyang, China
- Wei He
- School of Electrical and Information Engineering, Hunan Institute of Technology, Hengyang, China
- Jun Hong
- School of Intelligent Manufacturing and Mechanical Engineering, Hunan Institute of Technology, Hengyang, China
- Yinzhen Huang
- School of Computer and Information Engineering, Hunan Institute of Technology, Hengyang, China
12
Alsubai S. Emotion Detection Using Deep Normalized Attention-Based Neural Network and Modified-Random Forest. Sensors (Basel) 2022;23:225. PMID: 36616823; PMCID: PMC9823734; DOI: 10.3390/s23010225.
Abstract
In the contemporary world, human emotion detection is finding broad application in areas such as biometric security and human-computer interaction (HCI). Emotions can be detected by integrating information from facial expressions, gestures, speech, and other modalities. Although such physical depictions contribute to emotion detection, EEG (electroencephalogram) signals have gained significant attention due to their sensitivity to alterations in emotional states, and can thus reveal significant emotional-state features. However, manual inspection of EEG signals is time-consuming. With the evolution of artificial intelligence, researchers have applied various data mining algorithms to emotion detection from EEG signals, but with limited accuracy. To resolve this, the present study proposes a DNA-RCNN (Deep Normalized Attention-based Residual Convolutional Neural Network) to extract appropriate features based on a discriminative feature representation. The proposed network also explores salient features through the proposed attention modules, leading to consistent performance. Finally, classification is performed by the proposed M-RF (modified random forest) with an empirical loss function, in which learning weights on the data subset reduce the loss between predicted values and ground truth, assisting precise classification. Performance and comparative analyses confirm the effectiveness of the proposed system in detecting emotions from EEG signals.
Affiliation(s)
- Shtwai Alsubai
- College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
13
Cui D, Xuan H, Liu J, Gu G, Li X. Emotion Recognition on EEG Signal Using ResNeXt Attention 2D-3D Convolution Neural Networks. Neural Process Lett 2022. DOI: 10.1007/s11063-022-11120-0.
14
Zhong X, Gu Y, Luo Y, Zeng X, Liu G. Bi-hemisphere asymmetric attention network: recognizing emotion from EEG signals based on the transformer. Appl Intell 2022. DOI: 10.1007/s10489-022-04228-2.
15
Flower XL, Poonguzhali S. Performance improvement and complexity reduction in the classification of EMG signals with mRMR-based CNN-KNN combined model. Journal of Intelligent & Fuzzy Systems 2022. DOI: 10.3233/jifs-220811.
Abstract
For real-time applications, movement classification performance should be as high as possible and computational complexity low. This paper focuses on the classification of five upper-arm movements that can serve as controls for human-machine interface (HMI) applications. Conventional machine learning algorithms are used for classification with both time- and frequency-domain features, and k-nearest neighbor (KNN) outperforms the others. To further improve classification accuracy, pretrained CNN architectures are employed, which increases computational complexity and memory requirements. To overcome this, a deep convolutional neural network (CNN) model with three convolutional layers is introduced. To further improve performance, which is the key requirement of real-time applications, a hybrid CNN-KNN model is proposed. Although its performance is high, the computational cost of the hybrid method is greater. Minimum redundancy maximum relevance (mRMR), a feature selection method, is therefore applied to reduce the feature dimension. As a result, the proposed CNN-KNN with mRMR achieves better performance with reduced computational complexity and memory requirements, reaching a mean prediction accuracy of about 99.05±0.25% with 100 features.
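The mRMR step can be sketched as a greedy search that balances relevance to the target against redundancy with already-selected features. This correlation-based variant and the synthetic data are illustrative only; mRMR is classically defined with mutual information:

```python
import numpy as np

def mrmr(X, y, k):
    """Greedy mRMR: pick features maximizing |corr(f, y)| minus mean |corr| with picks."""
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    selected = [int(np.argmax(rel))]
    while len(selected) < k:
        best, best_score = -1, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            if rel[j] - red > best_score:
                best, best_score = j, rel[j] - red
        selected.append(best)
    return selected

rng = np.random.default_rng(4)
f0 = rng.standard_normal(300)
X = np.column_stack([
    f0,                                    # relevant
    f0 + 0.01 * rng.standard_normal(300),  # relevant but redundant with f0
    rng.standard_normal(300),              # irrelevant
    rng.standard_normal(300),              # relevant and non-redundant (see y)
])
y = f0 + 0.5 * X[:, 3]
sel = mrmr(X, y, 2)  # skips the near-copy of f0 in favor of feature 3
```

Pruning redundant inputs this way is what lets the hybrid CNN-KNN keep its accuracy while shrinking the feature vector to 100 entries.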
Affiliation(s)
- X. Little Flower
- Department of Electronics and Communication Engineering, College of Engineering Guindy (CEG), Anna University, Chennai, India
- S. Poonguzhali
- Department of Electronics and Communication Engineering, College of Engineering Guindy (CEG), Anna University, Chennai, India
16
Akter S, Prodhan RA, Pias TS, Eisenberg D, Fresneda Fernandez J. M1M2: Deep-Learning-Based Real-Time Emotion Recognition from Neural Activity. Sensors (Basel) 2022;22:8467. PMID: 36366164; PMCID: PMC9654596; DOI: 10.3390/s22218467.
Abstract
Emotion recognition, or the ability of computers to interpret people's emotional states, is a very active research area with vast applications to improve people's lives. However, most image-based emotion recognition techniques are flawed, as humans can intentionally hide their emotions by changing facial expressions. Consequently, brain signals are being used to detect human emotions with improved accuracy, but most proposed systems demonstrate poor performance, as EEG signals are difficult to classify using standard machine learning and deep learning techniques. This paper proposes two convolutional neural network (CNN) models (M1: a heavily parameterized CNN and M2: a lightly parameterized CNN) coupled with elegant feature extraction methods for effective recognition. In this study, the most popular EEG benchmark dataset, DEAP, is utilized with two of its labels, valence and arousal, for binary classification. We use the Fast Fourier Transform to extract frequency-domain features, convolutional layers for deep features, and complementary features to represent the dataset. The M1 and M2 CNN models achieve nearly perfect accuracies of 99.89% and 99.22%, respectively, outperforming every previous state-of-the-art model. We empirically demonstrate that the M2 model requires only 2 seconds of EEG signal for 99.22% accuracy, and it can achieve over 96% accuracy with only 125 milliseconds of EEG data for valence classification. Moreover, the proposed M2 model achieves 96.8% accuracy on valence using only 10% of the training dataset, demonstrating our proposed system's effectiveness. Documented implementation code for every experiment is published for reproducibility.
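The frequency-domain feature step described above can be sketched in a few lines. The band edges below are the conventional EEG ranges, not necessarily the authors' settings, and `band_powers` is a hypothetical helper name.

```python
import numpy as np

def band_powers(signal, fs, bands=None):
    """Average FFT power of an EEG epoch in standard frequency bands."""
    if bands is None:
        bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}

# Synthetic 1 s epoch at 128 Hz (DEAP's downsampled rate) dominated by a
# 10 Hz oscillation, so the alpha band should carry the most power.
fs = 128
t = np.arange(fs) / fs
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(1).standard_normal(fs)
feats = band_powers(epoch, fs)
print(max(feats, key=feats.get))  # → alpha
```

The resulting band-power vector is the kind of compact frequency-domain representation a small CNN such as M2 can classify from very short windows.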
Affiliation(s)
- Sumya Akter, Martin Tuchman School of Management, New Jersey Institute of Technology, Newark, NJ 07102, USA
- Rumman Ahmed Prodhan, Martin Tuchman School of Management, New Jersey Institute of Technology, Newark, NJ 07102, USA
- Tanmoy Sarkar Pias, Department of Computer Science, Virginia Tech, Blacksburg, VA 24061, USA
- David Eisenberg, Department of Information Systems, Ying Wu College of Computing, New Jersey Institute of Technology, Newark, NJ 07102, USA
17
Kaklauskas A, Abraham A, Ubarte I, Kliukas R, Luksaite V, Binkyte-Veliene A, Vetloviene I, Kaklauskiene L. A Review of AI Cloud and Edge Sensors, Methods, and Applications for the Recognition of Emotional, Affective and Physiological States. Sensors (Basel) 2022; 22:7824. [PMID: 36298176 PMCID: PMC9611164 DOI: 10.3390/s22207824] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/18/2022] [Revised: 09/28/2022] [Accepted: 10/12/2022] [Indexed: 06/16/2023]
Abstract
Affective, emotional, and physiological state (AFFECT) detection and recognition by capturing human signals is a fast-growing area that has been applied across numerous domains. The aim of this research is to review publications on how techniques using brain and biometric sensors can be applied to AFFECT recognition, consolidate the findings, provide a rationale for current methods, compare the effectiveness of existing methods, and quantify how likely they are to address the issues and challenges in the field. As Society 5.0, Industry 5.0, and human-centered design pursue their key goals, the recognition of emotional, affective, and physiological states is becoming progressively more important and offers tremendous growth of knowledge and progress in these and related fields. In this research, a review of AFFECT-recognition brain and biometric sensors, methods, and applications was performed, based on Plutchik's wheel of emotions. Given the immense variety of existing sensors and sensing systems, this study aimed to analyze the available sensors that can be used to determine human AFFECT and to classify them by type of sensing area and efficiency in real implementations. Based on statistical and multiple-criteria analysis across 169 nations, our results indicate a connection between a nation's success, its number of published Web of Science articles, and its citation frequency on AFFECT recognition. The principal conclusions present how this research contributes to the big picture in the field and explore forthcoming research trends.
Affiliation(s)
- Arturas Kaklauskas, Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Ajith Abraham, Machine Intelligence Research Labs, Scientific Network for Innovation and Research Excellence, Auburn, WA 98071, USA
- Ieva Ubarte, Institute of Sustainable Construction, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Romualdas Kliukas, Department of Applied Mechanics, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Vaida Luksaite, Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Arune Binkyte-Veliene, Institute of Sustainable Construction, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Ingrida Vetloviene, Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Loreta Kaklauskiene, Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
18
Li C, Wang B, Zhang S, Liu Y, Song R, Cheng J, Chen X. Emotion recognition from EEG based on multi-task learning with capsule network and attention mechanism. Comput Biol Med 2022; 143:105303. [PMID: 35217341 DOI: 10.1016/j.compbiomed.2022.105303] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Received: 09/11/2021] [Revised: 01/20/2022] [Accepted: 01/20/2022] [Indexed: 11/25/2022]
Abstract
Deep learning (DL) technologies have recently shown great potential in emotion recognition based on electroencephalography (EEG). However, existing DL-based EEG emotion recognition methods are built on single-task learning, i.e., learning arousal, valence, and dominance individually, which may ignore the complementary information across tasks. In addition, single-task learning requires a new round of training every time a new task appears, which is time-consuming. To this end, we propose a novel method for EEG-based emotion recognition based on multi-task learning with a capsule network (CapsNet) and an attention mechanism. First, multi-task learning learns multiple tasks simultaneously while exploiting commonalities and differences across tasks; it can also draw on more data from the different tasks, which improves generalization and robustness. Second, the innovative structure of the CapsNet enables it to effectively characterize the intrinsic relationships among the various EEG channels. Finally, the attention mechanism adjusts the weights of different channels to extract important information. On the DEAP dataset, the average accuracy reached 97.25%, 97.41%, and 98.35% on arousal, valence, and dominance, respectively; on the DREAMER dataset, it reached 94.96%, 95.54%, and 95.52%. Experimental results demonstrate the efficiency of the proposed method for EEG emotion recognition.
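The contrast the abstract draws between single-task and multi-task learning can be illustrated with a minimal shared-trunk network: one shared representation feeds three task heads, so a single loss delivers gradient signal from all three labels at once. This NumPy sketch deliberately omits the capsule network and attention mechanism; all layer sizes and names are illustrative assumptions, and only the forward pass and loss are shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared trunk: one hidden layer learned jointly by all tasks.
# Task heads: separate linear classifiers for arousal, valence, dominance.
n_features, n_hidden = 32, 16
W_shared = rng.standard_normal((n_features, n_hidden)) * 0.1
heads = {t: rng.standard_normal((n_hidden, 1)) * 0.1
         for t in ("arousal", "valence", "dominance")}

def forward(x):
    h = np.tanh(x @ W_shared)              # shared representation
    return {t: 1 / (1 + np.exp(-(h @ w)))  # per-task sigmoid output
            for t, w in heads.items()}

def multi_task_loss(x, labels):
    # Sum of binary cross-entropies: one training step updates the shared
    # trunk with gradient signal from all three tasks simultaneously.
    preds = forward(x)
    return sum(-(labels[t] * np.log(p) + (1 - labels[t]) * np.log(1 - p)).mean()
               for t, p in preds.items())

x = rng.standard_normal((4, n_features))   # mini-batch of 4 EEG feature vectors
labels = {t: rng.integers(0, 2, (4, 1)) for t in heads}
print(round(float(multi_task_loss(x, labels)), 3))
```

A single-task setup would instead train three separate networks, tripling training time and discarding the shared structure the paper exploits.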
Affiliation(s)
- Chang Li, Department of Biomedical Engineering, Hefei University of Technology, Hefei, 230009, China; Anhui Province Key Laboratory of Measuring Theory and Precision Instrument, Hefei University of Technology, Hefei, 230009, China
- Bin Wang, Department of Biomedical Engineering, Hefei University of Technology, Hefei, 230009, China
- Silin Zhang, Reproductive Medical Center, Renmin Hospital of Wuhan University, Wuhan, 430060, China
- Yu Liu, Department of Biomedical Engineering, Hefei University of Technology, Hefei, 230009, China
- Rencheng Song, Department of Biomedical Engineering, Hefei University of Technology, Hefei, 230009, China
- Juan Cheng, Department of Biomedical Engineering, Hefei University of Technology, Hefei, 230009, China
- Xun Chen, Department of Neurosurgery, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230001, Anhui, China; Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, 230026, China
19
Physical Exercise Effects on University Students’ Attention: An EEG Analysis Approach. Electronics 2022. [DOI: 10.3390/electronics11050770] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Indexed: 01/31/2023]
Abstract
Physically active breaks (AB) are currently proposed as an interesting tool to improve students’ attention. Reviews and meta-analyses confirm their effect on attention, but also warn about the sparse evidence regarding vigilance and university students. Therefore, this pilot study aimed (a) to determine the effects of AB, in comparison with passive breaks, on university students’ vigilance, and (b) to validate an analysis model based on machine learning algorithms in conjunction with a multiparametric model built on electroencephalography (EEG) signal features. In a counterbalanced within-subject experimental design, six university students (two female; mean age = 25.67, SD = 3.61) had their vigilance performance (i.e., response time in the Psychomotor Vigilance Task) and EEG measured before and after a lecture with an AB and another lecture with a passive break. A multiparametric model based on spectral power, signal entropy, and response time was developed. This model, combined with different machine learning algorithms, shows significant differences in the recorded signals after the lecture with AB, implying an improvement in attention. The differences are most noticeable with the SVM with an RBF kernel and with ANNs, which reach F1-scores of 85% and 88%, respectively. In conclusion, students performed better on vigilance after the lecture with AB. Although limited, the evidence found could help researchers be more accurate in their EEG analyses and help lecturers and teachers improve their students’ attention.
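The "signal entropy" feature in the multiparametric model can plausibly be computed as the Shannon entropy of the normalized power spectrum. The abstract does not specify the authors' estimator, so the helper below is one common realization, not their implementation: a concentrated spectrum (e.g., a strong single-frequency rhythm) yields low entropy, while broadband activity yields high entropy.

```python
import numpy as np

def spectral_entropy(signal):
    """Shannon entropy (bits) of the normalized FFT power spectrum."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    p = power / power.sum()        # treat the spectrum as a distribution
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A pure 10 Hz tone concentrates all power in one bin (low entropy);
# white noise spreads power across all bins (high entropy).
fs = 128
t = np.arange(4 * fs) / fs
pure_tone = np.sin(2 * np.pi * 10 * t)
noise = np.random.default_rng(2).standard_normal(4 * fs)
print(spectral_entropy(pure_tone) < spectral_entropy(noise))  # → True
```

Combined with band spectral power and response time, such a scalar gives the SVM/ANN classifiers a compact description of each EEG epoch.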
20
Maithri M, Raghavendra U, Gudigar A, Samanth J, Murugappan M, Chakole Y, Acharya UR. Automated emotion recognition: Current trends and future perspectives. Comput Methods Programs Biomed 2022; 215:106646. [PMID: 35093645 DOI: 10.1016/j.cmpb.2022.106646] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Received: 08/08/2021] [Revised: 12/25/2021] [Accepted: 01/16/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND Human emotions greatly affect a person's actions. Automated emotion recognition has applications in multiple domains such as health care, e-learning, and surveillance. The development of computer-aided diagnosis (CAD) tools has enabled the automated recognition of human emotions. OBJECTIVE This review provides insight into the various methods employed using electroencephalogram (EEG), facial, and speech signals, together with multi-modal emotion recognition techniques. In this work, we reviewed most of the state-of-the-art papers published on this topic. METHOD This study considered the emotion recognition (ER) models proposed between 2016 and 2021. The papers were analyzed by method employed, classifier used, and performance obtained. RESULTS There has been a significant rise in the application of deep learning techniques for ER. They have been widely applied to EEG, speech, facial-expression, and multimodal features to develop accurate ER models. CONCLUSION Our study reveals that most of the proposed machine and deep learning-based systems yield good performance for automated ER in controlled environments. However, high ER performance still needs to be achieved in uncontrolled environments.
Affiliation(s)
- M Maithri, Department of Mechatronics, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- U Raghavendra, Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Anjan Gudigar, Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Jyothi Samanth, Department of Cardiovascular Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Murugappan Murugappan, Department of Electronics and Communication Engineering, Kuwait College of Science and Technology, 13133, Kuwait
- Yashas Chakole, Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- U Rajendra Acharya, School of Engineering, Ngee Ann Polytechnic, Clementi 599489, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore
21
Cai J, Xiao R, Cui W, Zhang S, Liu G. Application of Electroencephalography-Based Machine Learning in Emotion Recognition: A Review. Front Syst Neurosci 2021; 15:729707. [PMID: 34887732 PMCID: PMC8649925 DOI: 10.3389/fnsys.2021.729707] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 06/23/2021] [Accepted: 11/08/2021] [Indexed: 11/13/2022]
Abstract
Emotion recognition has become increasingly prominent in the medical field and in human-computer interaction. When people’s emotions change under external stimuli, various physiological signals of the human body fluctuate. Electroencephalography (EEG) is closely related to brain activity, making it possible to judge a subject’s emotional changes through EEG signals. Meanwhile, machine learning algorithms, which excel at extracting data features from a statistical perspective and making judgments, have developed by leaps and bounds. Therefore, using machine learning to extract feature vectors related to emotional states from EEG signals and constructing classifiers that separate emotions into discrete states has broad development prospects. Following the progress of EEG-based machine learning algorithms for emotion recognition, this paper introduces the acquisition, preprocessing, feature extraction, and classification of EEG signals in sequence, and may help beginners in this area understand the current state of the field. The selected articles were all retrieved from the Web of Science platform, and most were published between 2016 and 2021.
Affiliation(s)
- Jing Cai, College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Ruolan Xiao, College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Wenjie Cui, College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Shang Zhang, College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Guangda Liu, College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
22
Rahman MM, Sarkar AK, Hossain MA, Hossain MS, Islam MR, Hossain MB, Quinn JMW, Moni MA. Recognition of human emotions using EEG signals: A review. Comput Biol Med 2021; 136:104696. [PMID: 34388471 DOI: 10.1016/j.compbiomed.2021.104696] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Received: 01/22/2021] [Revised: 07/23/2021] [Accepted: 07/23/2021] [Indexed: 10/20/2022]
Abstract
Assessment of the cognitive functions and state of clinical subjects is an important aspect of e-health care delivery and of the development of novel human-machine interfaces. A subject can display a range of emotions that significantly influence cognition, and emotion classification through the analysis of physiological signals is a key means of detecting emotion. Electroencephalography (EEG) signals have become a common focus of such development compared to other physiological signals because EEG employs simple, subject-acceptable methods for obtaining data that can be used for emotion analysis. We have therefore reviewed published studies that have used EEG signal data to identify possible interconnections between emotion and brain activity. We then describe the theoretical conceptualization of basic emotions and interpret the prevailing techniques adopted for feature extraction, selection, and classification. Finally, we compare the outcomes of these recent studies and discuss the likely future directions and main challenges for researchers developing EEG-based emotion analysis methods.
Affiliation(s)
- Md Mustafizur Rahman, Department of Electrical and Electronic Engineering, Jashore University of Science & Technology, Jashore, 7408, Bangladesh
- Ajay Krishno Sarkar, Department of Electrical and Electronic Engineering, Rajshahi University of Engineering & Technology, Rajshahi, 6204, Bangladesh
- Md Amzad Hossain, Department of Electrical and Electronic Engineering, Jashore University of Science & Technology, Jashore, 7408, Bangladesh
- Md Selim Hossain, Department of Electrical and Electronic Engineering, Rajshahi University of Engineering & Technology, Rajshahi, 6204, Bangladesh
- Md Rabiul Islam, Department of Electrical and Electronic Engineering, Khulna University of Engineering & Technology, Khulna, 9203, Bangladesh
- Md Biplob Hossain, Department of Electrical and Electronic Engineering, Jashore University of Science & Technology, Jashore, 7408, Bangladesh
- Julian M W Quinn, Healthy Ageing Theme, Garvan Institute of Medical Research, Darlinghurst, NSW, 2010, Australia
- Mohammad Ali Moni, Healthy Ageing Theme, Garvan Institute of Medical Research, Darlinghurst, NSW, 2010, Australia; School of Health and Rehabilitation Sciences, Faculty of Health and Behavioural Sciences, The University of Queensland, St Lucia, QLD 4072, Australia
23
Nandi A, Xhafa F, Subirats L, Fort S. Real-Time Emotion Classification Using EEG Data Stream in E-Learning Contexts. Sensors (Basel) 2021; 21:1589. [PMID: 33668757 PMCID: PMC7956809 DOI: 10.3390/s21051589] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Received: 01/27/2021] [Revised: 02/11/2021] [Accepted: 02/19/2021] [Indexed: 11/19/2022]
Abstract
In both face-to-face and online learning, emotions and emotional intelligence have an influence and play an essential role. Learners’ emotions are crucial for e-learning systems because they can promote or inhibit learning. Many researchers have investigated the impact of emotions on enhancing and maximizing e-learning outcomes, and several machine learning and deep learning approaches have been proposed to achieve this goal. All such approaches suit an offline mode, where the data for emotion classification are stored and can be accessed repeatedly. However, these offline approaches are inappropriate for real-time emotion classification, where the data arrive as a continuous stream and each sample can be seen by the model only once. Real-time responses according to the emotional state are also needed. For this, we propose a real-time emotion classification system (RECS) based on Logistic Regression (LR) trained online using the Stochastic Gradient Descent (SGD) algorithm. The proposed RECS can classify emotions in real time by training the model online on an EEG signal stream. To validate its performance, we used the DEAP dataset, the most widely used benchmark dataset for emotion classification. The results show that the proposed approach can effectively classify emotions in real time from the EEG data stream, achieving better accuracy and F1-score than other offline and online approaches. The developed real-time emotion classification system is analyzed in an e-learning context scenario.
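The stream-learning setup the abstract describes, logistic regression updated one sample at a time by SGD, can be sketched from scratch. This is an illustration of the idea, not the authors' code: feature extraction from the raw EEG stream is omitted, the class and parameter names are assumptions, and the stream below is synthetic.

```python
import numpy as np

class OnlineLogisticRegression:
    """Logistic regression updated one sample at a time with SGD."""
    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        return 1 / (1 + np.exp(-(x @ self.w + self.b)))

    def partial_fit(self, x, y):
        # One SGD step on the log-loss for a single streamed sample.
        err = self.predict_proba(x) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err

# Simulate a feature-vector stream whose label depends on the first feature,
# and evaluate prequentially (test-then-train) after a warm-up period.
rng = np.random.default_rng(0)
model = OnlineLogisticRegression(n_features=8)
correct = total = 0
for _ in range(2000):
    x = rng.standard_normal(8)
    y = int(x[0] > 0)
    if total >= 500:
        correct += int((model.predict_proba(x) > 0.5) == y)
    model.partial_fit(x, y)
    total += 1
print(f"prequential accuracy: {correct / 1500:.2f}")
```

Because each sample is seen exactly once and then discarded, this loop matches the single-pass constraint of the EEG data stream that rules out the offline approaches discussed above.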
Affiliation(s)
- Arijit Nandi, Department of Computer Science, Universitat Politècnica de Catalunya (BarcelonaTech), 08034 Barcelona, Spain; Eurecat, Centre Tecnològic de Catalunya, 08005 Barcelona, Spain
- Fatos Xhafa, Department of Computer Science, Universitat Politècnica de Catalunya (BarcelonaTech), 08034 Barcelona, Spain (corresponding author)
- Laia Subirats, Eurecat, Centre Tecnològic de Catalunya, 08005 Barcelona, Spain; ADaS Lab, Universitat Oberta de Catalunya, 08018 Barcelona, Spain
- Santi Fort, Eurecat, Centre Tecnològic de Catalunya, 08005 Barcelona, Spain