1. Tsintotas KA, Kansizoglou I, Pastra K, Aloimonos Y, Gasteratos A, Sirakoulis GC, Sandini G. Editorial: Enhanced human modeling in robotics for socially-aware place navigation. Front Robot AI 2024; 11:1348022. PMID: 38495301; PMCID: PMC10940522; DOI: 10.3389/frobt.2024.1348022.
Affiliation(s)
- Giulio Sandini
- Italian Institute of Technology (IIT), Genova, Liguria, Italy
2. Sapidis GM, Kansizoglou I, Naoum MC, Papadopoulos NA, Chalioris CE. A Deep Learning Approach for Autonomous Compression Damage Identification in Fiber-Reinforced Concrete Using Piezoelectric Lead Zirconate Titanate Transducers. Sensors (Basel) 2024; 24:386. PMID: 38257479; PMCID: PMC10818412; DOI: 10.3390/s24020386.
Abstract
Effective damage identification is paramount to evaluating safety conditions and preventing catastrophic failures of concrete structures. Although various methods have been introduced in the literature, developing robust and reliable structural health monitoring (SHM) procedures remains an open research challenge. This study proposes a new approach that utilizes a 1-D convolutional neural network to identify the formation of cracks from the raw electromechanical impedance (EMI) signature of externally bonded piezoelectric lead zirconate titanate (PZT) transducers. Externally bonded PZT transducers were used to determine the EMI signature of fiber-reinforced concrete specimens subjected to monotonic and repeated compression loading. A leave-one-specimen-out cross-validation scenario was adopted to provide a stricter and more realistic validation of the proposed SHM approach. The experimental study and the obtained results clearly demonstrate the capacity of the introduced approach to provide autonomous and reliable damage identification in a PZT-enabled SHM system, with a mean accuracy of 95.24% and a standard deviation of 5.64%.
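As a rough illustration of the kind of classifier described in this abstract, the following is a minimal sketch of a 1-D CNN that maps a raw EMI signature to a damage/no-damage decision. It assumes PyTorch; the layer sizes, signature length, and class count are illustrative placeholders, not the architecture reported by the authors.

```python
# Minimal 1-D CNN sketch for classifying raw EMI signatures as "healthy" vs. "damaged".
# Layer sizes, signature length, and class count are illustrative assumptions.
import torch
import torch.nn as nn

class EMI1DCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global pooling keeps the head independent of input length
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, signature_length) raw EMI signature from a PZT transducer
        return self.classifier(self.features(x).squeeze(-1))

model = EMI1DCNN()
dummy = torch.randn(4, 1, 1000)   # four synthetic signatures of length 1000
print(model(dummy).shape)         # torch.Size([4, 2])
```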
Affiliation(s)
- George M. Sapidis
- Laboratory of Reinforced Concrete and Seismic Design of Structures, Structural Engineering Science Division, Civil Engineering Department, School of Engineering, Democritus University of Thrace, 67100 Xanthi, Greece
- Ioannis Kansizoglou
- Department of Production and Management Engineering, School of Engineering, Democritus University of Thrace, V. Sofias 12, 67132 Xanthi, Greece
- Maria C. Naoum
- Laboratory of Reinforced Concrete and Seismic Design of Structures, Structural Engineering Science Division, Civil Engineering Department, School of Engineering, Democritus University of Thrace, 67100 Xanthi, Greece
- Nikos A. Papadopoulos
- Laboratory of Reinforced Concrete and Seismic Design of Structures, Structural Engineering Science Division, Civil Engineering Department, School of Engineering, Democritus University of Thrace, 67100 Xanthi, Greece
- Constantin E. Chalioris
- Laboratory of Reinforced Concrete and Seismic Design of Structures, Structural Engineering Science Division, Civil Engineering Department, School of Engineering, Democritus University of Thrace, 67100 Xanthi, Greece
3. Moutsis SN, Tsintotas KA, Gasteratos A. PIPTO: Precise Inertial-Based Pipeline for Threshold-Based Fall Detection Using Three-Axis Accelerometers. Sensors (Basel) 2023; 23:7951. PMID: 37766008; PMCID: PMC10534597; DOI: 10.3390/s23187951.
Abstract
Falls are the second leading cause of accidental death after traffic-related incidents, with the highest incidence among the elderly. To address this problem, the research community has developed methods built upon different sensors (wearable, ambient, or hybrid) and various techniques, both machine-learning-based and heuristic. Machine-learning models classify the input data as fall or no fall and require inputs of specific dimensions, whereas heuristic techniques, mainly based on thresholds, can be combined with such models to reduce the computational cost. To this end, this article presents a pipeline for detecting falls through a threshold-based technique over the data provided by a three-axis accelerometer. In this way, we propose a low-complexity system that can be adopted with any acceleration sensor operating at different sampling frequencies. Moreover, the input lengths can differ, and the pipeline can detect multiple falls in a time series of sum vector magnitudes, providing the specific time range of each fall. As evaluated on several datasets, our pipeline reaches high performance, with 90.40% and 91.56% sensitivity on MMsys and KFall, respectively, and specificity of 93.96% and 85.90%. Lastly, to facilitate the research community, our framework, entitled PIPTO (drawing inspiration from the Greek verb "πίπτω", signifying "to fall"), is open sourced in Python and C.
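To make the core idea concrete, here is a minimal Python sketch of threshold-based fall detection over the sum vector magnitude of a three-axis accelerometer stream. It is an illustrative example, not the PIPTO implementation: the threshold value, refractory window, and synthetic data are assumptions.

```python
# Illustrative threshold-based fall detector over a three-axis accelerometer stream.
# The threshold (in g) and refractory window are demonstration values, not calibrated parameters.
import numpy as np

def sum_vector_magnitude(acc: np.ndarray) -> np.ndarray:
    """acc: (N, 3) array of ax, ay, az samples in g. Returns the SVM time series."""
    return np.sqrt((acc ** 2).sum(axis=1))

def detect_falls(acc: np.ndarray, fs: float, threshold: float = 2.5,
                 refractory_s: float = 1.0) -> list[int]:
    """Return sample indices where the SVM exceeds the threshold, skipping a
    refractory window after each detection so one impact is not reported twice."""
    svm = sum_vector_magnitude(acc)
    refractory = int(refractory_s * fs)
    falls, last = [], -refractory
    for i, value in enumerate(svm):
        if value > threshold and i - last >= refractory:
            falls.append(i)
            last = i
    return falls

# Synthetic example: quiet standing (SVM ~ 1 g) with one impact spike at sample 500.
fs = 50.0
acc = np.tile([0.0, 0.0, 1.0], (1000, 1))
acc[500] = [1.5, 1.8, 2.0]
print(detect_falls(acc, fs))   # -> [500]
```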
Affiliation(s)
- Stavros N. Moutsis
- Department of Production and Management Engineering, Democritus University of Thrace, 12 Vas. Sophias, GR-671 32 Xanthi, Greece
4. Moreno Escobar JJ, Morales Matamoros O, Aguilar del Villar EY, Quintana Espinosa H, Chanona Hernández L. DS-CNN: Deep Convolutional Neural Networks for Facial Emotion Detection in Children with Down Syndrome during Dolphin-Assisted Therapy. Healthcare (Basel) 2023; 11:2295. PMID: 37628493; PMCID: PMC10454875; DOI: 10.3390/healthcare11162295.
Abstract
In Mexico, according to data from the General Directorate of Health Information (2018), there is an annual incidence of 689 newborns with Trisomy 21, well known as Down Syndrome. Worldwide, this incidence is estimated at approximately 1 in every 1,000 newborns. This work therefore focuses on the detection and analysis of facial emotions in children with Down Syndrome in order to predict their emotions throughout a dolphin-assisted therapy. Two databases are used: Exploratory Data Analysis, with a total of 20,214 images, and the Down's Syndrome Dataset (DSDS), with 1445 images, for training, validation, and testing of the neural network models. Two architectures based on a deep convolutional neural network are constructed, achieving an efficiency of 79% when tested on a large reference image database. The architecture that achieves the better results is then trained, validated, and tested on the small database of facial emotions of children with Down Syndrome, obtaining an efficiency of 72%. This increases by 9% when the brain activity of the child is included in the training, resulting in an average precision of 81%. Using electroencephalogram (EEG) signals in a convolutional neural network (CNN) along with the DSDS has promising advantages in the field of brain-computer interfaces: EEG provides direct access to the electrical activity of the brain, allowing for real-time monitoring and analysis of cognitive states, and integrating EEG signals into a CNN architecture can enhance learning and decision-making capabilities. It is important to note that this work has the primary objective of addressing a doubly vulnerable population, as these children also have a disability.
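The following is a minimal sketch of the fusion idea described above: a small image CNN for facial features combined with a dense encoder for EEG-derived features before emotion classification. It assumes PyTorch, 48x48 grayscale face crops, a 64-dimensional EEG feature vector, and seven emotion classes, all of which are illustrative choices rather than the DS-CNN architecture.

```python
# Sketch of late fusion between facial-image features and EEG features before emotion
# classification. Branch sizes, input shapes, and class count are illustrative assumptions.
import torch
import torch.nn as nn

class FaceEEGFusionNet(nn.Module):
    def __init__(self, n_eeg_features: int = 64, n_emotions: int = 7):
        super().__init__()
        self.face_branch = nn.Sequential(            # small CNN over 48x48 grayscale faces
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 12 * 12, 128), nn.ReLU(),
        )
        self.eeg_branch = nn.Sequential(              # dense encoder for precomputed EEG features
            nn.Linear(n_eeg_features, 32), nn.ReLU(),
        )
        self.head = nn.Linear(128 + 32, n_emotions)   # late fusion by concatenation

    def forward(self, face: torch.Tensor, eeg: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([self.face_branch(face), self.eeg_branch(eeg)], dim=1))

model = FaceEEGFusionNet()
logits = model(torch.randn(2, 1, 48, 48), torch.randn(2, 64))
print(logits.shape)   # torch.Size([2, 7])
```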
Affiliation(s)
- Jesús Jaime Moreno Escobar
- Escuela Superior de Ingeniería Mecánica y Eléctrica, Unidad Zacatenco, Instituto Politécnico Nacional, Ciudad de México 07340, Mexico
5. Cîrneanu AL, Popescu D, Iordache D. New Trends in Emotion Recognition Using Image Analysis by Neural Networks, A Systematic Review. Sensors (Basel) 2023; 23:7092. PMID: 37631629; PMCID: PMC10458371; DOI: 10.3390/s23167092.
Abstract
Facial emotion recognition (FER) is a computer vision process aimed at detecting and classifying human emotional expressions. FER systems are currently used in a vast range of applications in areas such as education, healthcare, or public safety; therefore, detection and recognition accuracies are very important. Similar to any computer vision task based on image analysis, FER solutions are also suitable for integration with artificial intelligence solutions represented by different neural network varieties, especially deep neural networks, which have shown great potential in recent years due to their feature extraction capabilities and computational efficiency over large datasets. In this context, this paper reviews the latest developments in the FER area, with a focus on recent neural network models that implement specific facial image analysis algorithms to detect and recognize facial emotions. The paper's scope is to present, from historical and conceptual perspectives, the evolution of the neural network architectures that have produced significant results in the FER area. It weighs convolutional neural network (CNN)-based architectures against other neural network architectures, such as recurrent neural networks or generative adversarial networks, highlighting the key elements and performance of each architecture and the advantages and limitations of the models proposed in the analyzed papers. Additionally, the paper presents the datasets currently available for emotion recognition from facial expressions and micro-expressions. The usage of FER systems is also highlighted in various domains such as healthcare, education, security, and social IoT. Finally, open issues and possible future developments in the FER area are identified.
Affiliation(s)
- Andrada-Livia Cîrneanu
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
- Dan Popescu
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
- Dragoș Iordache
- The National Institute for Research & Development in Informatics-ICI Bucharest, 011455 Bucharest, Romania
6. Hosseini MSK, Firoozabadi SM, Badie K, Azadfallah P. Personality-Based Emotion Recognition Using EEG Signals with a CNN-LSTM Network. Brain Sci 2023; 13:947. PMID: 37371425; PMCID: PMC10296308; DOI: 10.3390/brainsci13060947.
Abstract
The accurate detection of emotions has significant implications in healthcare, psychology, and human-computer interaction. Integrating personality information into emotion recognition can enhance its utility in various applications. The present study introduces a novel deep learning approach to emotion recognition, which utilizes electroencephalography (EEG) signals and the Big Five personality traits. The study recruited 60 participants and recorded their EEG data while they viewed unique sequence stimuli designed to effectively capture the dynamic nature of human emotions and personality traits. A pre-trained convolutional neural network (CNN) was used to extract emotion-related features from the raw EEG data. Additionally, a long short-term memory (LSTM) network was used to extract features related to the Big Five personality traits. The network was able to accurately predict personality traits from EEG data. The extracted features were subsequently used in a novel network to predict emotional states within the arousal and valence dimensions. The experimental results showed that the proposed classifier outperformed common classifiers, with a high accuracy of 93.97%. The findings suggest that incorporating personality traits as features in the designed network, for emotion recognition, leads to higher accuracy, highlighting the significance of examining these traits in the analysis of emotions.
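As a rough illustration of the pipeline described above, the sketch below combines CNN-extracted EEG features with LSTM-derived features used to predict Big Five trait scores, then fuses both for arousal-valence classification. It assumes PyTorch; the channel count, window length, layer sizes, and four-quadrant output are illustrative assumptions rather than the authors' configuration.

```python
# Sketch of a CNN-LSTM network that predicts Big Five trait scores from EEG and reuses
# them as features for arousal-valence classification. All hyperparameters are assumptions.
import torch
import torch.nn as nn

class PersonalityEmotionNet(nn.Module):
    def __init__(self, n_channels: int = 32, n_traits: int = 5):
        super().__init__()
        self.cnn = nn.Sequential(                        # emotion-related features from raw EEG
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=32, batch_first=True)
        self.trait_head = nn.Linear(32, n_traits)        # Big Five trait scores from LSTM features
        self.emotion_head = nn.Linear(64 + n_traits, 4)  # 4 quadrants of arousal x valence

    def forward(self, eeg: torch.Tensor):
        # eeg: (batch, n_channels, window_length)
        emo_feat = self.cnn(eeg)
        _, (h, _) = self.lstm(eeg.transpose(1, 2))       # LSTM over the time axis
        traits = self.trait_head(h[-1])
        return self.emotion_head(torch.cat([emo_feat, traits], dim=1)), traits

model = PersonalityEmotionNet()
emotion_logits, trait_scores = model(torch.randn(2, 32, 128))
print(emotion_logits.shape, trait_scores.shape)          # torch.Size([2, 4]) torch.Size([2, 5])
```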
Affiliation(s)
- Seyed Mohammad Firoozabadi
- Department of Medical Physics, Faculty of Medicine, Tarbiat Modares University, Tehran 14117-13116, Iran
- Kambiz Badie
- Content & E-Services Research Group, IT Research Faculty, ICT Research Institute, Tehran 14399-55471, Iran
- Parviz Azadfallah
- Department of Psychology, Faculty of Humanities, Tarbiat Modares University, Tehran 14117-13116, Iran
7. He Q. Human-computer interaction based on background knowledge and emotion certainty. PeerJ Comput Sci 2023; 9:e1418. PMID: 37346639; PMCID: PMC10280641; DOI: 10.7717/peerj-cs.1418.
Abstract
To address the lack of background knowledge and the inconsistent responses of robots in current human-computer interaction systems, we propose a human-computer interaction model based on a knowledge graph ripple network. The model simulates the natural human communication process to realize a more natural and intelligent human-computer interaction system. This study makes three contributions. First, the affective friendliness of human-computer interaction is obtained by calculating the affective evaluation value and the emotional measurement of the interaction. Second, an external knowledge graph is introduced as the robot's background knowledge, and the conversation entity is embedded into the ripple network of the knowledge graph to obtain the potential entity content of interest to the participant. Finally, the robot replies based on emotional friendliness and content friendliness. The experimental results show that, compared with the baseline models, background knowledge and emotional measurement improve the robot's emotional friendliness and coherence and increase response accuracy by at least 5.5% during human-computer interaction.
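Purely as a toy illustration of combining the two friendliness scores when selecting a reply, here is a short Python sketch. The scoring functions and weighting are placeholders, not the paper's formulation of affective evaluation or ripple-network relevance.

```python
# Toy reply ranking by a weighted sum of emotional friendliness and content friendliness.
# The scorers and the weight alpha are placeholders for illustration only.
def rank_replies(candidates, emotional_score, content_score, alpha=0.5):
    """candidates: list of reply strings; each score function maps a reply to [0, 1]."""
    scored = [
        (alpha * emotional_score(r) + (1 - alpha) * content_score(r), r)
        for r in candidates
    ]
    return [reply for _, reply in sorted(scored, reverse=True)]

# Dummy scorers standing in for the affective evaluation and the knowledge-graph relevance.
emotional = lambda r: 1.0 if "glad" in r else 0.3
content = lambda r: 1.0 if "weather" in r else 0.5

print(rank_replies(["I'm glad you asked about the weather.", "Please rephrase."],
                   emotional, content))
```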
8. Zhou S, Wu X, Jiang F, Huang Q, Huang C. Emotion Recognition from Large-Scale Video Clips with Cross-Attention and Hybrid Feature Weighting Neural Networks. Int J Environ Res Public Health 2023; 20:1400. PMID: 36674161; PMCID: PMC9859118; DOI: 10.3390/ijerph20021400.
Abstract
Human emotion is an important indicator or reflection of mental state, e.g., satisfaction or stress, and recognizing or detecting emotion from different media is essential for sequence analysis and for applications such as mental health assessment, job stress level estimation, and tourist satisfaction assessment. Emotion recognition based on computer vision techniques, as an important method of detecting emotion from visual media (e.g., images or videos) of human behaviors with the use of plentiful emotional cues, has been extensively investigated because of its significant applications. However, most existing models neglect inter-feature interaction and use simple concatenation for feature fusion, failing to capture the crucial complementary gains between face and context information in video clips, which is significant in addressing the problems of emotion confusion and emotion misunderstanding. Accordingly, to fully exploit the complementary information between face and context features, this paper presents a novel cross-attention and hybrid feature weighting network for accurate emotion recognition from large-scale video clips. The proposed model consists of a dual-branch encoding (DBE) network, a hierarchical-attention encoding (HAE) network, and a deep fusion (DF) block. Specifically, the face and context encoding blocks in the DBE network generate the respective shallow features. The HAE network then uses the cross-attention (CA) block to investigate and capture the complementarity between facial expression features and their contexts via a cross-channel attention operation. The element recalibration (ER) block is introduced to revise the feature map of each channel by embedding global information. Moreover, the adaptive-attention (AA) block in the HAE network is developed to infer the optimal feature fusion weights and obtain the adaptive emotion features via a hybrid feature weighting operation. Finally, the DF block integrates these adaptive emotion features to predict an individual emotional state. Extensive experimental results on the CAER-S dataset demonstrate the effectiveness of our method, exhibiting its potential in the analysis of tourist reviews with video clips, estimation of job stress levels with visual emotional evidence, or mental health assessments with visual media.
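To illustrate the cross-attention fusion idea in isolation, the sketch below lets face features attend to context features and vice versa, then learns adaptive weights for fusing the two branches. It assumes PyTorch and uses nn.MultiheadAttention as a stand-in; the dimensions and gating scheme are assumptions, not the paper's CA, ER, or AA blocks.

```python
# Sketch of bidirectional cross-attention between face and context feature sequences,
# followed by a learned adaptive weighting of the two pooled branches.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, n_heads: int = 4, n_emotions: int = 7):
        super().__init__()
        self.face_to_ctx = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.ctx_to_face = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, 2), nn.Softmax(dim=-1))  # adaptive fusion weights
        self.head = nn.Linear(dim, n_emotions)

    def forward(self, face: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        # face, ctx: (batch, seq_len, dim) feature sequences from the two encoding branches
        f, _ = self.face_to_ctx(face, ctx, ctx)   # face queries attend to context
        c, _ = self.ctx_to_face(ctx, face, face)  # context queries attend to face
        f, c = f.mean(dim=1), c.mean(dim=1)       # pool each branch over the sequence
        w = self.gate(torch.cat([f, c], dim=-1))  # learn how much each branch contributes
        fused = w[:, :1] * f + w[:, 1:] * c
        return self.head(fused)

model = CrossAttentionFusion()
print(model(torch.randn(2, 16, 256), torch.randn(2, 16, 256)).shape)  # torch.Size([2, 7])
```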
Affiliation(s)
- Qionghao Huang
- Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua 321004, China