1
Claret AF, Casali KR, Cunha TS, Moraes MC. Automatic Classification of Emotions Based on Cardiac Signals: A Systematic Literature Review. Ann Biomed Eng 2023; 51:2393-2414. PMID: 37543539. DOI: 10.1007/s10439-023-03341-8.
Abstract
Emotions play a pivotal role in human cognition, exerting influence across diverse domains of individuals' lives. The widespread adoption of artificial intelligence and machine learning has spurred interest in systems capable of automatically recognizing and classifying emotions and affective states. However, the accurate identification of human emotions remains a formidable challenge, as they are influenced by various factors and accompanied by physiological changes. Numerous solutions have emerged to enable emotion recognition by characterizing biological signals, including cardiac signals acquired from low-cost and wearable sensors. The objective of this work was to comprehensively investigate current trends in the field by conducting a Systematic Literature Review (SLR) focused specifically on the detection, recognition, and classification of emotions based on cardiac signals, to gain insights into the prevailing techniques for signal acquisition, the extracted features, the elicitation process, and the classification methods employed in these studies. The SLR was conducted using four research databases, and articles were assessed against the proposed research questions. Twenty-seven articles met the selection criteria and were assessed for the feasibility of using cardiac signals, acquired from low-cost and wearable devices, for emotion recognition. The review identified the emotional elicitation methods used in the literature, the algorithms applied for automatic classification, and the key challenges associated with emotion recognition relying solely on cardiac signals. This study extends the current body of knowledge and enables future research by providing insights into suitable techniques for designing automatic emotion recognition applications. It emphasizes the importance of utilizing low-cost, wearable, and unobtrusive devices to acquire cardiac signals for accurate and accessible emotion recognition.
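To make the kind of cardiac-feature extraction surveyed in this review concrete, below is a minimal, illustrative sketch (not drawn from any of the reviewed studies) that computes two common time-domain heart-rate-variability features, SDNN and RMSSD, from a series of R-R intervals; the millisecond units and the example values are assumptions.

```python
import numpy as np

def hrv_time_domain_features(rr_ms: np.ndarray) -> dict:
    """Compute basic time-domain HRV features from R-R intervals given in milliseconds."""
    diffs = np.diff(rr_ms)                            # successive R-R differences
    return {
        "mean_rr": float(np.mean(rr_ms)),             # average R-R interval
        "sdnn": float(np.std(rr_ms, ddof=1)),         # overall variability (std of R-R intervals)
        "rmssd": float(np.sqrt(np.mean(diffs ** 2))), # short-term variability
    }

# Example: R-R intervals (ms) as they might come from a wearable ECG/PPG sensor
rr = np.array([812, 798, 845, 830, 790, 805, 820], dtype=float)
print(hrv_time_domain_features(rr))
```

Features like these would typically be fed to a downstream classifier alongside frequency-domain and nonlinear HRV measures.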
Affiliation(s)
- Anderson Faria Claret
- Institute of Science and Technology, Federal University of São Paulo, São José dos Campos, Brazil
- Karina Rabello Casali
- Institute of Science and Technology, Federal University of São Paulo, São José dos Campos, Brazil
- Tatiana Sousa Cunha
- Institute of Science and Technology, Federal University of São Paulo, São José dos Campos, Brazil
- Matheus Cardoso Moraes
- Institute of Science and Technology, Federal University of São Paulo, São José dos Campos, Brazil
2
Liao J, Li X, Gan Y, Han S, Rong P, Wang W, Li W, Zhou L. Artificial intelligence assists precision medicine in cancer treatment. Front Oncol 2023; 12:998222. PMID: 36686757. PMCID: PMC9846804. DOI: 10.3389/fonc.2022.998222.
Abstract
Cancer is a major medical problem worldwide. Because of its high heterogeneity, the same drugs or surgical methods may have different curative effects in patients with the same tumor, creating a need for more accurate tumor treatments and personalized care for patients. Precise treatment of tumors is essential, which makes it urgent to obtain an in-depth understanding of the changes that tumors undergo, including changes in their genes, proteins, and cancer cell phenotypes, in order to develop targeted treatment strategies for patients. Artificial intelligence (AI) based on big data can extract the hidden patterns, important information, and corresponding knowledge behind enormous amounts of data. For example, machine learning and deep learning, subsets of AI, can be used to mine deep-level information in genomics, transcriptomics, proteomics, radiomics, digital pathology images, and other data, helping clinicians understand tumors synthetically and comprehensively. In addition, AI can find new biomarkers in the data to assist tumor screening, detection, diagnosis, treatment, and prognosis prediction, so as to provide the best treatment for individual patients and improve their clinical outcomes.
Affiliation(s)
- Jinzhuang Liao
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Xiaoying Li
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Yu Gan
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Shuangze Han
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Pengfei Rong
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Cell Transplantation and Gene Therapy Institute, The Third Xiangya Hospital, Central South University, Changsha, Hunan, China
- Wei Wang
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Cell Transplantation and Gene Therapy Institute, The Third Xiangya Hospital, Central South University, Changsha, Hunan, China
- Wei Li
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Cell Transplantation and Gene Therapy Institute, The Third Xiangya Hospital, Central South University, Changsha, Hunan, China
- Li Zhou
- Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Cell Transplantation and Gene Therapy Institute, The Third Xiangya Hospital, Central South University, Changsha, Hunan, China
- Department of Pathology, The Xiangya Hospital of Central South University, Changsha, Hunan, China
3
Dang X, Chen Z, Hao Z, Ga M, Han X, Zhang X, Yang J. Wireless Sensing Technology Combined with Facial Expression to Realize Multimodal Emotion Recognition. Sensors (Basel) 2022; 23:338. PMID: 36616935. PMCID: PMC9823763. DOI: 10.3390/s23010338.
Abstract
Emotions significantly impact human physical and mental health, and emotion recognition has therefore been a popular research area in neuroscience, psychology, and medicine. In this paper, we preprocess the raw signals acquired by millimeter-wave radar to obtain high-quality heartbeat and respiration signals. We then propose a deep learning model that combines a convolutional neural network and a gated recurrent unit neural network with facial expression images. The model achieves a recognition accuracy of 84.5% in person-dependent experiments and 74.25% in person-independent experiments. The experiments show that the proposed model outperforms both a single deep learning model and traditional machine learning algorithms.
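As a rough illustration of this kind of fusion architecture (not the authors' implementation), the sketch below combines a 1-D CNN over radar-derived heartbeat/respiration channels with a GRU and concatenates the result with a pre-extracted facial-feature vector; the layer sizes, the 128-dimensional face embedding, and the 7-class output are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RadarFaceEmotionNet(nn.Module):
    def __init__(self, n_classes: int = 7, face_dim: int = 128):
        super().__init__()
        # CNN branch: two 1-D conv blocks over the radar vital-sign signals
        # (2 channels: heartbeat and respiration).
        self.cnn = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        # GRU branch: models temporal dynamics of the CNN feature sequence.
        self.gru = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        # Fusion head: concatenate the GRU state with the facial-expression features.
        self.head = nn.Linear(64 + face_dim, n_classes)

    def forward(self, radar, face_feat):
        # radar: (batch, 2, time); face_feat: (batch, face_dim)
        x = self.cnn(radar)                 # (batch, 32, time')
        x = x.permute(0, 2, 1)              # (batch, time', 32) for the GRU
        _, h = self.gru(x)                  # h: (1, batch, 64)
        fused = torch.cat([h[-1], face_feat], dim=1)
        return self.head(fused)             # emotion-class logits
```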
4
Islam SMM. Radar-based remote physiological sensing: Progress, challenges, and opportunities. Front Physiol 2022; 13:955208. PMID: 36304581. PMCID: PMC9592800. DOI: 10.3389/fphys.2022.955208.
Abstract
Modern microwave Doppler radar-based physiological sensing is playing an important role in healthcare applications, and during the last decade there has been significant advancement in this non-contact respiration sensing technology. The advantages of contactless, unobtrusive respiration monitoring have drawn interest for various medical applications such as sleep apnea, sudden infant death syndrome (SIDS), and remote respiratory monitoring of burn victims and COVID-19 patients. This paper provides a perspective on recent advances in biomedical and healthcare applications of Doppler radar, which detects the tiny movements of the chest surface to extract heartbeat and respiration signals and their associated vital-sign parameters (tidal volume, heart rate variability (HRV), and so on) from a human subject. It also highlights the challenges and opportunities of this remote physiological sensing technology and lays out several future research directions for deploying the sensor technology in day-to-day life.
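As a simplified illustration of how respiration and heartbeat rates can be read from a radar-derived chest-displacement signal (a generic sketch, not a method from this paper), the code below band-pass filters the signal into assumed respiration (0.1-0.5 Hz) and heartbeat (0.8-2.0 Hz) bands and reports the dominant frequency of each; the sampling rate and simulated signal are illustrative.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def dominant_rate(signal: np.ndarray, fs: float, band: tuple) -> float:
    """Band-pass filter a displacement signal and return the dominant rate per minute."""
    sos = butter(4, [band[0], band[1]], btype="band", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, signal)
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1 / fs)
    return freqs[np.argmax(spectrum)] * 60.0   # Hz -> breaths or beats per minute

# Simulated chest displacement: 0.25 Hz respiration + 1.2 Hz heartbeat + noise
fs = 50.0
t = np.arange(0, 30, 1 / fs)
chest = 5.0 * np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.sin(2 * np.pi * 1.2 * t)
chest += 0.05 * np.random.randn(len(t))

print("respiration rate:", dominant_rate(chest, fs, (0.1, 0.5)))  # ~15 breaths/min
print("heart rate:", dominant_rate(chest, fs, (0.8, 2.0)))        # ~72 beats/min
```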
5
Evaluating Ensemble Learning Methods for Multi-Modal Emotion Recognition Using Sensor Data Fusion. Sensors (Basel) 2022; 22:5611. PMID: 35957167. PMCID: PMC9371233. DOI: 10.3390/s22155611.
Abstract
Automatic recognition of human emotions is not a trivial process. Many internal and external factors affect emotions, and emotions can be expressed in many ways, such as text, speech, body gestures, or physiological responses. Emotion detection enables many applications, such as adaptive user interfaces, interactive games, and human-robot interaction. The availability of advanced technologies such as mobile devices, sensors, and data analytics tools has made it possible to collect data from various sources, enabling researchers to predict human emotions accurately. Most current research collects such data in laboratory experiments. In this work, we use direct, real-time sensor data to construct a subject-independent (generic) multi-modal emotion prediction model. This research integrates on-body physiological markers, surrounding sensory data, and emotion measurements to achieve the following goals: (1) collecting a multi-modal data set including environmental data, body responses, and emotions; (2) creating subject-independent predictive models of emotional states based on fusing environmental and physiological variables; and (3) assessing ensemble learning methods, comparing their performance for creating a generic subject-independent model for emotion recognition with high accuracy, and comparing the results with previous similar research. To achieve this, we conducted a real-world study "in the wild" with physiological and mobile sensors, collecting the data set from participants walking around the Minia University campus to create accurate predictive models. Various ensemble learning models (bagging, boosting, and stacking) were used, combining K-nearest neighbors (KNN), decision tree (DT), random forest (RF), and support vector machine (SVM) as base learners and DT as a meta-classifier. The results showed that the stacking ensemble technique gave the best accuracy of 98.2% compared with the other variants of ensemble learning methods, while bagging and boosting reached accuracy levels of 96.4% and 96.6%, respectively.
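A minimal sketch of the stacking setup described above, assuming scikit-learn, a fused feature matrix X_train/X_test of physiological and environmental variables, and emotion labels y_train/y_test; the base learners and DT meta-classifier follow the abstract, but all hyperparameters are illustrative.

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Base learners: KNN, DT, RF, and SVM (scaling added where the learner is distance-based).
base_learners = [
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
    ("dt", DecisionTreeClassifier(max_depth=10)),
    ("rf", RandomForestClassifier(n_estimators=200)),
    ("svm", make_pipeline(StandardScaler(), SVC(kernel="rbf"))),
]

# Decision tree as the meta-classifier, trained on out-of-fold base predictions.
stacked = StackingClassifier(
    estimators=base_learners,
    final_estimator=DecisionTreeClassifier(max_depth=5),
    cv=5,
)

# Usage (with a prepared train/test split):
# stacked.fit(X_train, y_train)
# print(stacked.score(X_test, y_test))
```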
6
Teoh L, Ihalage AA, Harp S, Al-Khateeb ZF, Michael-Titus AT, Tremoleda JL, Hao Y. Deep learning for behaviour classification in a preclinical brain injury model. PLoS One 2022; 17:e0268962. PMID: 35704595. PMCID: PMC9200342. DOI: 10.1371/journal.pone.0268962.
Abstract
The early detection of traumatic brain injuries can directly impact the prognosis and survival of patients. Previous attempts to automate the detection and severity assessment of traumatic brain injury continue to be based on clinical diagnostic methods, with limited tools for assessing disease outcomes in large populations. Despite advances in machine and deep learning tools, current approaches still use simple trends of statistical analysis which lack generality. The effectiveness of deep learning at extracting information from large subsets of data can be further emphasised through the use of more elaborate architectures. We therefore explore the use of a multiple-input architecture integrating a convolutional neural network and long short-term memory (LSTM) network for traumatic injury detection, predicting the presence of brain injury in a murine preclinical model dataset. We investigated the effectiveness and validity of traumatic brain injury detection in the proposed model against various other machine learning algorithms, such as the support vector machine, the random forest classifier, and the feedforward neural network. Our dataset was acquired using a home cage automated (HCA) system to assess the individual behaviour of mice with traumatic brain injury or non-central nervous system (non-CNS) injured controls while housed in their cages. Their distance travelled, body temperature, separation from other mice, and movement were recorded every 15 minutes, for 72 hours weekly, for 5 weeks following intervention. The HCA behavioural data were used to train a deep learning model, which then predicts whether the animals were subjected to a brain injury or just a sham intervention without brain damage. We also explored and evaluated different ways to handle the class imbalance present in the uninjured class of our training data. We then evaluated our models with leave-one-out cross-validation. Our proposed deep learning model achieved the best performance and showed promise in its capability to detect the presence of brain trauma in mice.
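A minimal sketch (not the published model) of a CNN + LSTM classifier for home-cage behavioural time series; the four input channels (distance travelled, body temperature, separation, movement), the window length, and the layer sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CageBehaviourNet(nn.Module):
    def __init__(self, n_channels: int = 4, hidden: int = 64):
        super().__init__()
        # Convolutions extract local patterns from the 15-minute-binned series.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        # LSTM captures longer-range temporal structure across the recording window.
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        # Binary output: brain injury vs. sham intervention.
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, n_channels, time)
        feats = self.cnn(x).permute(0, 2, 1)   # (batch, time, 32)
        _, (h, _) = self.lstm(feats)           # h: (1, batch, hidden)
        return self.fc(h[-1]).squeeze(1)       # logit for injury probability
```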
Affiliation(s)
- Lucas Teoh
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End, London, United Kingdom
- Achintha Avin Ihalage
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End, London, United Kingdom
- Srooley Harp
- Centre for Neuroscience, Surgery and Trauma, The Blizard Institute, Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
- Zahra F. Al-Khateeb
- Centre for Neuroscience, Surgery and Trauma, The Blizard Institute, Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
- Adina T. Michael-Titus
- Centre for Neuroscience, Surgery and Trauma, The Blizard Institute, Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
- Jordi L. Tremoleda
- Centre for Neuroscience, Surgery and Trauma, The Blizard Institute, Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
- Yang Hao
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End, London, United Kingdom
7
Khan MM, Hossain S, Mozumdar P, Akter S, Ashique RH. A review on machine learning and deep learning for various antenna design applications. Heliyon 2022; 8:e09317. PMID: 35520616. PMCID: PMC9061263. DOI: 10.1016/j.heliyon.2022.e09317.
Abstract
The next generation of wireless communication networks will rely heavily on machine learning and deep learning. In comparison to traditional ground-based systems, the development of various communication-based applications is projected to increase coverage and spectrum efficiency. Machine learning and deep learning can be used to optimize solutions in a variety of applications, including antennas, and have grown popular for obtaining effective solutions thanks to high computational processing power, clean data, and large data storage capability. In this research, machine learning and deep learning for various antenna design applications are discussed in detail. The general concepts of machine learning and deep learning are introduced, but the main focus is on various antenna applications, such as millimeter wave, body-centric, terahertz, satellite, unmanned aerial vehicle, global positioning system, and textile antennas. The advantages over conventional methods are highlighted, including the feasibility of these antenna applications, acceleration of the antenna design process, a reduced number of simulations, and better computational feasibility. Overall, machine learning and deep learning provide satisfactory results for antenna design.
Affiliation(s)
- Mohammad Monirujjaman Khan
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka 1229, Bangladesh
- Sazzad Hossain
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka 1229, Bangladesh
- Puezia Mozumdar
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka 1229, Bangladesh
- Shamima Akter
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka 1229, Bangladesh
- Ratil H. Ashique
- Department of Electrical Engineering, Green University Bangladesh, Dhaka, Bangladesh