1
Wang D, Lian J, Cheng H, Zhou Y. Music-evoked emotions classification using vision transformer in EEG signals. Front Psychol 2024; 15:1275142. [PMID: 38638516] [PMCID: PMC11024288] [DOI: 10.3389/fpsyg.2024.1275142]
Abstract
Introduction: Electroencephalogram (EEG)-based emotion recognition has received significant attention and is widely used in both human-computer interaction and therapeutic settings. Manually analyzing EEG signals is time-consuming and labor-intensive, and while machine learning methods have shown promising results in classifying emotions from EEG data, extracting discriminative features from these signals remains difficult. Methods: This study presents a deep learning model that incorporates an attention mechanism to extract spatial and temporal information from emotional EEG recordings, addressing this gap. Emotion EEG classification is implemented with a global average pooling layer and a fully connected layer, which exploit the learned features. To assess the proposed method, a dataset of EEG recordings of music-induced emotions was first collected. Experiments: Comparative tests were then run between state-of-the-art algorithms and the proposed method on this proprietary dataset, and a publicly accessible dataset was included in subsequent comparative trials. Discussion: The experimental findings show that the proposed method outperforms existing approaches in classifying emotion EEG signals in both binary (positive and negative) and ternary (positive, negative, and neutral) scenarios.
Affiliation(s)
- Dong Wang
  - School of Intelligence Engineering, Shandong Management University, Jinan, China
- Jian Lian
  - School of Intelligence Engineering, Shandong Management University, Jinan, China
- Hebin Cheng
  - School of Intelligence Engineering, Shandong Management University, Jinan, China
- Yanan Zhou
  - School of Arts, Beijing Foreign Studies University, Beijing, China
2
Gouveia C, Soares B, Albuquerque D, Barros F, Soares SC, Pinho P, Vieira J, Brás S. Remote Emotion Recognition Using Continuous-Wave Bio-Radar System. Sensors (Basel) 2024; 24:1420. [PMID: 38474953] [DOI: 10.3390/s24051420]
Abstract
The Bio-Radar is presented here as a non-contact radar system able to capture vital signs remotely, without requiring any physical contact with the subject. In this work, the system's suitability for emotion recognition is verified by comparing its performance in identifying fear, happiness, and a neutral condition against certified contact-based measuring equipment. For this purpose, machine learning algorithms were applied to the respiratory and cardiac signals captured simultaneously by the radar and the reference contact-based system. Following a multiclass identification strategy, both systems showed comparable performance, with the radar even outperforming the reference under specific conditions. Emotion recognition proved possible using the radar system, with an accuracy of 99.7% and an F1-score of 99.9%. These results demonstrate that the Bio-Radar system, which can be operated remotely, is well suited to this purpose: it avoids the subject's awareness of being monitored and thus elicits more authentic reactions.
Affiliation(s)
- Carolina Gouveia
  - Instituto de Engenharia Electrónica e Telemática de Aveiro, Departamento de Electrónica, Telecomunicações e Informática, Intelligent Systems Associate Laboratory, University of Aveiro, 3810-193 Aveiro, Portugal
  - Colab Almascience, Madan Parque, 2829-516 Caparica, Portugal
- Beatriz Soares
  - Instituto de Telecomunicações, 3810-193 Aveiro, Portugal
  - Departamento de Electrónica, Telecomunicações e Informática, University of Aveiro, 3810-193 Aveiro, Portugal
- Daniel Albuquerque
  - Instituto de Engenharia Electrónica e Telemática de Aveiro, Departamento de Electrónica, Telecomunicações e Informática, Intelligent Systems Associate Laboratory, University of Aveiro, 3810-193 Aveiro, Portugal
  - Instituto de Telecomunicações, 3810-193 Aveiro, Portugal
  - Escola Superior de Tecnologia e Gestão de Águeda, University of Aveiro, 3810-193 Aveiro, Portugal
- Filipa Barros
  - Center for Health Technology and Services Research, Department of Education and Psychology, University of Aveiro, 3810-193 Aveiro, Portugal
  - William James Center for Research, Department of Education and Psychology, University of Aveiro, 3810-193 Aveiro, Portugal
- Sandra C Soares
  - William James Center for Research, Department of Education and Psychology, University of Aveiro, 3810-193 Aveiro, Portugal
- Pedro Pinho
  - Instituto de Telecomunicações, 3810-193 Aveiro, Portugal
  - Departamento de Electrónica, Telecomunicações e Informática, University of Aveiro, 3810-193 Aveiro, Portugal
- José Vieira
  - Instituto de Engenharia Electrónica e Telemática de Aveiro, Departamento de Electrónica, Telecomunicações e Informática, Intelligent Systems Associate Laboratory, University of Aveiro, 3810-193 Aveiro, Portugal
  - Instituto de Telecomunicações, 3810-193 Aveiro, Portugal
- Susana Brás
  - Instituto de Engenharia Electrónica e Telemática de Aveiro, Departamento de Electrónica, Telecomunicações e Informática, Intelligent Systems Associate Laboratory, University of Aveiro, 3810-193 Aveiro, Portugal
3
Lopez-Aguilar AA, Bustamante-Bello MR, Navarro-Tuch SA, Molina A. Development of a Framework for the Communication System Based on KNX for an Interactive Space for UX Evaluation. Sensors (Basel) 2023; 23:9570. [PMID: 38067942] [PMCID: PMC10708817] [DOI: 10.3390/s23239570]
Abstract
Domotics (home automation) aims to improve people's quality of life by integrating intelligent systems within inhabited spaces. While traditionally associated with smart home systems, these technologies also have potential for user experience (UX) research: by emulating environments in which to test products and services, and by integrating non-invasive user-monitoring tools for emotion recognition, an objective UX evaluation can be performed. To this end, a testing booth was built and instrumented with devices based on KNX, an international standard for home automation, to conduct experiments and ensure replicability. A Python-based framework was designed to synchronize the KNX systems with emotion recognition tools; synchronizing these data makes it possible to find patterns during the interaction process. To evaluate the framework, an experiment was conducted in a simulated laundry room within the testing booth, analyzing participants' emotional responses while they interacted with prototypes of new detergent bottles. Emotional responses were contrasted with traditional questionnaires to determine the viability of non-invasive methods. Using emulated environments alongside non-invasive monitoring tools provided an immersive experience for participants. These results indicate that the testing booth can support a robust UX evaluation methodology.
Affiliation(s)
- Ariel A. Lopez-Aguilar
  - School of Engineering and Sciences, Tecnologico de Monterrey, Mexico City 14380, Mexico
- M. Rogelio Bustamante-Bello
  - School of Engineering and Sciences, Tecnologico de Monterrey, Mexico City 14380, Mexico
4
Zafar K, Siddiqui HUR, Majid A, Rustam F, Alfarhood S, Safran M, Ashraf I. Enhancing Diagnosis of Anterior and Inferior Myocardial Infarctions Using UWB Radar and AI-Driven Feature Fusion Approach. Sensors (Basel) 2023; 23:7756. [PMID: 37765813] [PMCID: PMC10537523] [DOI: 10.3390/s23187756]
Abstract
Despite significant improvements in prognosis, myocardial infarction (MI) remains a major cause of morbidity and mortality around the globe. MI is a life-threatening cardiovascular condition that requires prompt diagnosis and appropriate treatment. The primary objective of this research is to identify instances of anterior and inferior MI using data obtained with ultra-wideband (UWB) radar from hospital patients with these conditions. The collected data are preprocessed to extract spectral features, and a novel feature engineering approach is designed to fuse temporal features with class-prediction-probability features derived from the spectral feature set. Several well-known machine learning models are implemented and fine-tuned for optimal performance in detecting anterior and inferior MI. The results demonstrate that integrating the fused feature set with machine learning models yields a notable improvement in both the accuracy and the precision of MI detection; random forest (RF) and k-nearest neighbors performed best, with an accuracy of 98.8%. To demonstrate the models' capacity to generalize, k-fold cross-validation was carried out, in which RF achieved a mean accuracy of 99.1%. Furthermore, an analysis of computational cost indicates that the approach is computationally efficient.
Affiliation(s)
- Kainat Zafar
  - Institute of Computer Science, Khwaja Fareed University of Engineering and Information Technology, Abu Dhabi Road, Rahim Yar Khan 64200, Punjab, Pakistan
- Hafeez Ur Rehman Siddiqui
  - Institute of Computer Science, Khwaja Fareed University of Engineering and Information Technology, Abu Dhabi Road, Rahim Yar Khan 64200, Punjab, Pakistan
- Abdul Majid
  - Cardiology Department, Sheikh Zayed Medical College & Hospital, Rahim Yar Khan 64200, Punjab, Pakistan
- Furqan Rustam
  - School of Computer Science, University College Dublin, D04 V1W8 Dublin, Ireland
- Sultan Alfarhood
  - Department of Computer Science, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Mejdl Safran
  - Department of Computer Science, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Imran Ashraf
  - Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
5
Obuseh M, Cavuoto L, Stefanidis D, Yu D. A sensor-based framework for layout and workflow assessment in operating rooms. Appl Ergon 2023; 112:104059. [PMID: 37311305] [DOI: 10.1016/j.apergo.2023.104059]
Abstract
Because of their large size and the impediments they pose to personnel workflows, integrating robotic technologies into existing operating rooms (ORs) is a challenge. In this study, we developed an ultra-wideband sensor-based human-machine-environment framework for layout and workflow assessment within the OR. In addition to providing best practices for use of the framework, we demonstrated its effectiveness in understanding layout and workflow inefficiencies in 12 robotic-assisted surgeries (RAS) across 4 surgical specialties. We found avoidable movement: the circulating nurse covers at least twice the distance of any other OR personnel before the patient cart (robot) is docked. We also identified areas of OR congestion and undesirable personnel-pair proximities across RAS phases that impose additional non-technical-skill challenges. Our findings highlight several implications of the added complexity of integrating robotic technologies into the OR and can drive objective, evidence-based recommendations to combat RAS OR layout and workflow inefficiencies.
Affiliation(s)
- Marian Obuseh
  - School of Industrial Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Lora Cavuoto
  - Department of Industrial and Systems Engineering, University at Buffalo, Buffalo, NY, 14260, USA
- Dimitrios Stefanidis
  - Department of Surgery, Indiana University School of Medicine, Indianapolis, IN, 46202, USA
- Denny Yu
  - School of Industrial Engineering, Purdue University, West Lafayette, IN, 47907, USA
6
Kaklauskas A, Abraham A, Ubarte I, Kliukas R, Luksaite V, Binkyte-Veliene A, Vetloviene I, Kaklauskiene L. A Review of AI Cloud and Edge Sensors, Methods, and Applications for the Recognition of Emotional, Affective and Physiological States. Sensors (Basel) 2022; 22:7824. [PMID: 36298176] [PMCID: PMC9611164] [DOI: 10.3390/s22207824]
Abstract
Detection and recognition of affective, emotional, and physiological states (AFFECT) from captured human signals is a fast-growing area that has been applied across numerous domains. The aim of this research is to review publications on how techniques using brain and biometric sensors can be applied to AFFECT recognition, consolidate the findings, provide a rationale for the current methods, compare their effectiveness, and assess how well they address the open issues and challenges in the field. In pursuit of the key goals of Society 5.0, Industry 5.0, and human-centered design, the recognition of emotional, affective, and physiological states is becoming progressively more important and offers substantial opportunities for progress in these and related fields. This review of AFFECT recognition brain and biometric sensors, methods, and applications is organized around Plutchik's wheel of emotions. Given the immense variety of existing sensors and sensing systems, the study analyzes the sensors available for measuring human AFFECT and classifies them by sensing area and efficiency in real implementations. Based on statistical and multiple-criteria analysis across 169 nations, the results identify a connection between a nation's success, the number of Web of Science articles it has published on AFFECT recognition, and how frequently those articles are cited. The principal conclusions situate this research within the broader field and outline forthcoming study trends.
Affiliation(s)
- Arturas Kaklauskas
  - Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Ajith Abraham
  - Machine Intelligence Research Labs, Scientific Network for Innovation and Research Excellence, Auburn, WA 98071, USA
- Ieva Ubarte
  - Institute of Sustainable Construction, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Romualdas Kliukas
  - Department of Applied Mechanics, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Vaida Luksaite
  - Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Arune Binkyte-Veliene
  - Institute of Sustainable Construction, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Ingrida Vetloviene
  - Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Loreta Kaklauskiene
  - Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
7
A Review of Image Processing Techniques for Deepfakes. Sensors (Basel) 2022; 22:4556. [PMID: 35746333] [PMCID: PMC9230855] [DOI: 10.3390/s22124556]
Abstract
Deep learning is used to address a wide range of challenging problems, including large-scale data analysis, image processing, object detection, and autonomous control. At the same time, deep learning techniques are also used to develop software that poses a danger to privacy, democracy, and national security. Fake content in the form of images and videos produced by digital manipulation with artificial intelligence (AI) has become widespread in recent years. Deepfakes, whether audio, images, or video, have become a major concern: aided by AI, they swap one person's face for another's and generate hyper-realistic videos. Amplified by the speed of social media, deepfakes can immediately reach millions of people and can be exploited for fake news, hoaxes, and fraud. Besides well-known movie stars, politicians have been victims of deepfakes, notably US presidents Barack Obama and Donald Trump; however, the public at large can also be targeted. To address the challenge of deepfake identification and mitigate its impact, substantial efforts have been devoted to novel methods for detecting face manipulation. This study also discusses how to counter the threats from deepfake technology and alleviate its impact. The findings suggest that, despite posing a serious threat to society, business, and political institutions, deepfakes can be combated through appropriate policies, regulation, individual action, training, and education, and that continued technological progress is needed for deepfake identification, content authentication, and prevention. Prior studies have performed deepfake detection with machine learning and deep learning techniques such as support vector machines, random forests, multilayer perceptrons, k-nearest neighbors, and convolutional neural networks with and without long short-term memory. This study aims to highlight recent research on deepfake image and video detection, covering deepfake creation, detection algorithms evaluated on self-made datasets, and existing benchmark datasets.
8
Tiberi G, Ghavami M. Ultra-Wideband (UWB) Systems in Biomedical Sensing. Sensors (Basel) 2022; 22:4403. [PMID: 35746186] [PMCID: PMC9231255] [DOI: 10.3390/s22124403]
Abstract
The extremely low transmission power of ultra-wideband (UWB) technology, alongside its advantageously large bandwidth, makes it a prime candidate for numerous healthcare scenarios that require short-range, high-data-rate communications and safe radar-based applications [...].
Affiliation(s)
- Gianluigi Tiberi
  - School of Engineering, London South Bank University, London SE1 0AA, UK
  - UBT—Umbria Bioengineering Technologies, 06081 Perugia, Italy
- Mohammad Ghavami
  - School of Engineering, London South Bank University, London SE1 0AA, UK