1
Paola Patricia AC, Rosberg PC, Butt-Aziz S, Marlon Alberto PM, Roberto-Cesar MO, Miguel UT, Naz S. Semi-supervised ensemble learning for human activity recognition in CASAS Kyoto dataset. Heliyon 2024;10:e29398. PMID: 38655356; PMCID: PMC11035997; DOI: 10.1016/j.heliyon.2024.e29398. Received 07/22/2023; revised 03/31/2024; accepted 04/08/2024.
Abstract
The automatic identification of human physical activities, commonly referred to as Human Activity Recognition (HAR), has garnered significant interest across various sectors, including entertainment, sports, and notably health. Within health, a wide range of applications exists, depending on the nature of the experimentation, the activities under scrutiny, and the methodology employed for data acquisition. This diversity enables multifaceted applications, including support for the well-being and safeguarding of elderly individuals afflicted with neurodegenerative diseases, especially in the context of smart homes. In the existing literature, numerous datasets from both indoor and outdoor environments have emerged, significantly contributing to activity identification. One prominent example, the CASAS project developed by Washington State University (WSU), encompasses experiments conducted in indoor settings and facilitates the identification of a range of activities, such as cleaning, cooking, eating, washing hands, and making phone calls. This article introduces a model founded on the principles of semi-supervised ensemble learning that harnesses the potential of distance-based clustering analysis. This technique identifies distinct clusters, each encapsulating unique activity characteristics, and these clusters serve as inputs for a subsequent classification stage that leverages supervised techniques. The outcomes of this approach are promising: analysis of the quality metrics shows favorable results compared to existing state-of-the-art methods. This integrated framework not only contributes to the field of HAR but also holds potential for enhancing the capabilities of smart homes and related applications.
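The paper's own code is not reproduced here, but the cluster-then-classify idea the abstract describes (distance-based clustering of all windows, then using the clusters to classify from only a few labels) can be sketched roughly as follows. This is a minimal illustration on hypothetical synthetic data; the clustering method, features, and initialization are assumptions, not the authors' implementation.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means with deterministic farthest-point initialization."""
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        # assign each point to its nearest centroid (distance-based clustering)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = X[assign == j].mean(axis=0)
    return centroids, assign

# Hypothetical "activity" feature windows: two well-separated groups.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 4)),
               rng.normal(3.0, 0.3, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

# Step 1 (unsupervised): cluster ALL windows, labeled or not.
centroids, assign = kmeans(X, k=2)

# Step 2 (semi-supervised): label each cluster by majority vote over the
# few windows whose labels are known (here: 5 per class).
labeled_idx = np.concatenate([np.arange(5), np.arange(50, 55)])
cluster_label = {c: np.bincount(y[labeled_idx][assign[labeled_idx] == c]).argmax()
                 for c in np.unique(assign)}

# Step 3 (supervised): a new window inherits the label of its nearest cluster.
def predict(x):
    c = np.linalg.norm(centroids - x, axis=1).argmin()
    return cluster_label[c]

preds = np.array([predict(x) for x in X])
print((preds == y).mean())  # accuracy on the synthetic data
```

The point of the sketch is only the division of labor: clustering consumes all the data, while the scarce labels are spent on naming clusters rather than on training a classifier from scratch.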
Affiliation(s)
- Pacheco-Cuentas Rosberg
- Universidad de la Costa, Department of Computer Science and Electronics, Barranquilla, Colombia
- Shariq Butt-Aziz
- School of Systems and Technology, Department of Computer Science, University of Management and Technology, Lahore, Pakistan
- Urina-Triana Miguel
- Universidad Simón Bolívar, Faculty of Health Sciences, Barranquilla, Colombia
- Sumera Naz
- Department of Mathematics, Division of Science and Technology, University of Education, Lahore, Pakistan
2
Yin M, Li J, Wang T. A Low-Cost Inertial Measurement Unit Motion Capture System for Operation Posture Collection and Recognition. Sensors (Basel) 2024;24:686. PMID: 38276378; PMCID: PMC11154322; DOI: 10.3390/s24020686. Received 12/11/2023; revised 01/12/2024; accepted 01/19/2024.
Abstract
In factories, human posture recognition facilitates human-machine collaboration, human risk management, and workflow improvement. Compared to optical sensors, inertial sensors have the advantages of portability and resistance to obstruction, making them suitable for factories. However, existing product-level inertial sensing solutions are generally expensive. This paper proposes a low-cost human motion capture system based on the BMI160, a six-axis inertial measurement unit (IMU). The data, collected over Wi-Fi communication, are processed to obtain the rotation angles of human joints around the X, Y, and Z axes and the displacements along those axes; the human skeleton hierarchy is then used to calculate the real-time human posture. Furthermore, a digital human model was established in Unity3D to visualize and present human movements synchronously. We simulated assembly operations in a virtual reality environment for human posture data collection and posture recognition experiments. Six inertial sensors were placed on the chest, the waist, and the knee and ankle joints of both legs. A total of 16,067 labeled samples were obtained for training the posture recognition model, with the accumulated displacement and rotation angle of the six joints in the three directions used as input features. A bi-directional long short-term memory (BiLSTM) model was used to identify seven common operation postures: standing, slightly bending, deep bending, half-squatting, squatting, sitting, and supine, with an average accuracy of 98.24%. According to the experimental results, the proposed method could be used to develop a low-cost and effective solution to human posture recognition for factory operations.
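One processing step the abstract mentions, accumulating a joint's rotation angle per axis from gyroscope rates, reduces to numerical integration. The sketch below uses hypothetical sample data and a bare cumulative sum; a real BMI160 pipeline would also involve calibration, drift correction, and sensor fusion, none of which is shown here.

```python
import numpy as np

# Hypothetical gyroscope samples (deg/s about X, Y, Z) at 100 Hz.
fs = 100.0
dt = 1.0 / fs
gyro = np.zeros((200, 3))
gyro[:100, 2] = 90.0   # rotate about Z at 90 deg/s for 1 s
gyro[100:, 2] = -45.0  # then rotate back at -45 deg/s for 1 s

# Accumulated rotation angle per axis = cumulative sum of rate * dt.
angles = np.cumsum(gyro * dt, axis=0)

print(angles[99])   # after 1 s: ~90 degrees about Z
print(angles[-1])   # after 2 s: ~45 degrees about Z
```

Per-joint angle and displacement series of exactly this kind, stacked over the six sensors and three axes, are the sort of features the paper feeds to its BiLSTM classifier.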
Affiliation(s)
- Mingyue Yin
- School of Mechatronics Engineering, Harbin Institute of Technology, Harbin 150001, China
- Jianguang Li
- School of Mechatronics Engineering, Harbin Institute of Technology, Harbin 150001, China
- Tiancong Wang
- School of Astronautics, Harbin Institute of Technology, Harbin 150001, China
3
Jaramillo IE, Jeong JG, Lopez PR, Lee CH, Kang DY, Ha TJ, Oh JH, Jung H, Lee JH, Lee WH, Kim TS. Real-Time Human Activity Recognition with IMU and Encoder Sensors in Wearable Exoskeleton Robot via Deep Learning Networks. Sensors (Basel) 2022;22:9690. PMID: 36560059; PMCID: PMC9783602; DOI: 10.3390/s22249690. Received 10/31/2022; revised 12/02/2022; accepted 12/08/2022.
Abstract
Wearable exoskeleton robots have become a promising technology for supporting human motion in multiple tasks. Real-time activity recognition provides useful information for enhancing the robot's control assistance in daily tasks. This work implements a real-time activity recognition system based on the signals of an inertial measurement unit (IMU) and a pair of rotary encoders integrated into the exoskeleton robot. Five deep learning models were trained and evaluated for activity recognition. A subset of optimized deep learning models was then transferred to an edge device for real-time evaluation in a continuous-action environment using eight common human tasks: stand, bend, crouch, walk, sit down, sit up, and ascend and descend stairs. These eight wearer activities are recognized with an average accuracy of 97.35% in real-time tests, with an inference time under 10 ms and an overall latency of 0.506 s per recognition on the selected edge device.
Affiliation(s)
- Ismael Espinoza Jaramillo
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Jin Gyun Jeong
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Do-Yeon Kang
- Hyundai Rotem, Uiwang-si 16082, Republic of Korea
- Tae-Jun Ha
- Hyundai Rotem, Uiwang-si 16082, Republic of Korea
- Ji-Heon Oh
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Hwanseok Jung
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Jin Hyuk Lee
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Won Hee Lee
- Department of Software Convergence, Kyung Hee University, Yongin 17104, Republic of Korea
- Tae-Seong Kim
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
4
Improving the Performance and Explainability of Indoor Human Activity Recognition in the Internet of Things Environment. Symmetry (Basel) 2022. DOI: 10.3390/sym14102022.
Abstract
Traditional indoor human activity recognition (HAR) has been defined as a time-series classification problem that requires feature extraction. Current indoor HAR systems still lack transparent, interpretable, and explainable approaches that can generate human-understandable information. This paper proposes a new approach, called Human Activity Recognition on Signal Images (HARSI), which recasts the HAR problem as an image classification problem to improve both explainability and recognition accuracy. The proposed HARSI method collects sensor data from the Internet of Things (IoT) environment and transforms the raw signal data into visually understandable images, taking advantage of the strengths of convolutional neural networks (CNNs) in handling image data. This study focuses on the recognition of symmetric human activities, including walking, jogging, moving downstairs, moving upstairs, standing, and sitting. Experimental results on a real-world dataset showed that the proposed HARSI model achieved a significant improvement (13.72%) over traditional machine learning models. The results also showed that our method (98%) outperformed state-of-the-art methods (90.94%) in terms of classification accuracy.
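The abstract does not specify HARSI's exact image encoding, but one common way to turn a 1-D sensor window into a 2-D image that a CNN can consume is a recurrence plot. The sketch below is that generic technique on a hypothetical signal, not the paper's method; the window, threshold, and size are assumptions.

```python
import numpy as np

def recurrence_plot(signal, eps=0.2):
    """Binary recurrence plot: pixel (i, j) is 1 when samples i and j are close."""
    d = np.abs(signal[:, None] - signal[None, :])
    return (d < eps).astype(np.uint8)

# Hypothetical 1-D accelerometer window (a periodic "walking"-like pattern).
t = np.linspace(0, 4 * np.pi, 64)
window = np.sin(t)

img = recurrence_plot(window)
print(img.shape)  # (64, 64) image, ready to feed a 2-D CNN
```

Periodic activities such as walking produce visibly regular diagonal texture in such images, which is also what makes the image form more inspectable than the raw time series.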
5
Accurate Detection of Electricity Theft Using Classification Algorithms and Internet of Things in Smart Grid. Arabian Journal for Science and Engineering 2022. DOI: 10.1007/s13369-021-06313-z.
6
Ariza-Colpas PP, Vicario E, Oviedo-Carrascal AI, Butt Aziz S, Piñeres-Melo MA, Quintero-Linero A, Patara F. Human Activity Recognition Data Analysis: History, Evolutions, and New Trends. Sensors (Basel) 2022;22:3401. PMID: 35591091; PMCID: PMC9103712; DOI: 10.3390/s22093401. Received 10/31/2021; revised 03/31/2022; accepted 04/04/2022.
Abstract
Ambient Assisted Living (AAL) research focuses on generating innovative technology, products, and services that provide assistance, medical care, and rehabilitation to older adults, extending the time these people can live independently, whether they suffer from neurodegenerative diseases or some disability. This important area is responsible for the development of Activity Recognition Systems (ARS), valuable tools for identifying the type of activity an older adult is carrying out so that assistance can be provided, allowing them to perform their daily activities with complete normality. This article reviews the literature and the evolution of techniques for processing this type of data, covering supervised, unsupervised, ensemble, deep, reinforcement, and transfer learning as well as metaheuristic approaches applied to this sector of health science, and reports the metrics of recent experiments for researchers in this area of knowledge. As a result of this review, models based on reinforcement or transfer learning can be identified as a promising line of work for the processing and analysis of human activity recognition data.
Affiliation(s)
- Paola Patricia Ariza-Colpas
- Department of Computer Science and Electronics, Universidad de la Costa CUC, Barranquilla 080002, Colombia
- Faculty of Engineering in Information and Communication Technologies, Universidad Pontificia Bolivariana, Medellín 050031, Colombia
- Correspondence:
- Enrico Vicario
- Department of Information Engineering, University of Florence, 50139 Firenze, Italy
- Ana Isabel Oviedo-Carrascal
- Faculty of Engineering in Information and Communication Technologies, Universidad Pontificia Bolivariana, Medellín 050031, Colombia
- Shariq Butt Aziz
- Department of Computer Science and IT, University of Lahore, Lahore 44000, Pakistan
- Fulvio Patara
- Department of Information Engineering, University of Florence, 50139 Firenze, Italy