1. Javeed M, Abdelhaq M, Algarni A, Jalal A. Biosensor-Based Multimodal Deep Human Locomotion Decoding via Internet of Healthcare Things. Micromachines 2023;14:2204. PMID: 38138373; PMCID: PMC10745656; DOI: 10.3390/mi14122204.
Abstract
Multiple Internet of Healthcare Things (IoHT)-based devices have been utilized as sensing methodologies for human locomotion decoding to aid in applications related to e-healthcare. Different measurement conditions affect daily routine monitoring, including the sensor type, wearing style, data retrieval method, and processing model. Several existing models in this domain combine a variety of techniques for pre-processing, descriptor extraction and reduction, and classification of data captured from multiple sensors. However, models built from multi-subject data using disparate techniques can degrade locomotion-decoding accuracy. Therefore, this study proposes a deep neural network model that applies a state-of-the-art quaternion-based filtration technique to motion and ambient data, along with background subtraction and skeleton modeling for video-based data, and learns important descriptors from novel graph-based representations and Gaussian Markov random-field mechanisms. Due to the non-linear nature of the data, these descriptors are further used to extract a codebook via a Gaussian mixture regression model, and the codebook is provided to a recurrent neural network to classify the activities for the locomotion-decoding system. We validate the proposed model on two publicly available datasets, HWU-USP and LARa. The proposed model significantly improves on previous systems, achieving accuracies of 82.22% and 82.50% on the HWU-USP and LARa datasets, respectively. The proposed IoHT-based locomotion-decoding model is useful for unobtrusive human activity recognition over extended periods in e-healthcare facilities.
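As a minimal illustration of the codebook step described above, the sketch below maps each per-frame descriptor to the index of its nearest codeword, turning a descriptor sequence into the symbol sequence a recurrent classifier would consume. The paper fits its codebook with a Gaussian mixture regression model; plain nearest-centroid assignment is substituted here for brevity, and all names and values are illustrative.

```python
# Simplified stand-in for the codebook step: each per-frame descriptor is
# replaced by the index of its nearest codeword, so a variable-length
# descriptor sequence becomes a symbol sequence for the recurrent classifier.

def assign_codewords(descriptors, codebook):
    """Map each descriptor vector to the index of its nearest codeword."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda k: sq_dist(d, codebook[k]))
            for d in descriptors]

# Toy 2-D codebook with three codewords and four frame descriptors.
codebook = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
frames = [(0.1, -0.2), (0.9, 1.1), (4.8, 5.2), (0.2, 0.1)]
print(assign_codewords(frames, codebook))  # [0, 1, 2, 0]
```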
Affiliation(s)
- Madiha Javeed: Department of Computer Science, Air University, Islamabad 44000, Pakistan
- Maha Abdelhaq: Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Asaad Algarni: Department of Computer Sciences, Faculty of Computing and Information Technology, Northern Border University, Rafha 91911, Saudi Arabia
- Ahmad Jalal: Department of Computer Science, Air University, Islamabad 44000, Pakistan
2. Ahmed S, Irfan S, Kiran N, Masood N, Anjum N, Ramzan N. Remote Health Monitoring Systems for Elderly People: A Survey. Sensors (Basel) 2023;23:7095. PMID: 37631632; PMCID: PMC10458487; DOI: 10.3390/s23167095.
Abstract
This paper addresses the growing demand for healthcare systems, particularly among the elderly population. The need for these systems arises from the desire to enable patients and seniors to live independently in their homes without relying heavily on their families or caretakers. Achieving substantial improvements in healthcare requires the continuous development and availability of information technologies tailored explicitly to patients and elderly individuals. The primary objective of this study is to comprehensively review the latest remote health monitoring systems, with a specific focus on those designed for older adults. We categorize these remote monitoring systems, provide an overview of their general architectures, describe the standards used in their development, and highlight the challenges encountered throughout the development process. The paper also identifies several potential areas for future research; addressing these gaps can drive progress and innovation, enhancing the quality of healthcare services available to elderly individuals and empowering them to lead more independent and fulfilling lives in the comfort and familiarity of their own homes.
Affiliation(s)
- Salman Ahmed: Department of Computer Science, Capital University of Science and Technology, Islamabad 44000, Pakistan
- Saad Irfan: Department of Information Engineering Technology, National Skills University, Islamabad 44000, Pakistan
- Nasira Kiran: School of Computing, Engineering and Physical Sciences, University of the West of Scotland, Paisley PA1 2BE, UK
- Nayyer Masood: Department of Computer Science, Capital University of Science and Technology, Islamabad 44000, Pakistan
- Nadeem Anjum: Department of Computer Science, Capital University of Science and Technology, Islamabad 44000, Pakistan
- Naeem Ramzan: School of Computing, Engineering and Physical Sciences, University of the West of Scotland, Paisley PA1 2BE, UK
3. Guerra BMV, Ramat S, Beltrami G, Schmid M. Recurrent Network Solutions for Human Posture Recognition Based on Kinect Skeletal Data. Sensors (Basel) 2023;23:5260. PMID: 37299986; DOI: 10.3390/s23115260.
Abstract
Ambient Assisted Living (AAL) systems are designed to provide unobtrusive and user-friendly support in daily life and can be used for monitoring frail people with various types of sensors, including wearables and cameras. Although cameras can be perceived as intrusive in terms of privacy, low-cost RGB-D devices (e.g., the Kinect V2) that extract skeletal data can partially overcome these limits. In addition, deep learning-based algorithms, such as Recurrent Neural Networks (RNNs), can be trained on skeletal tracking data to automatically identify different human postures in the AAL domain. In this study, we investigate the performance of two RNN models (2BLSTM and 3BGRU) in identifying daily living postures and potentially dangerous situations in a home monitoring system, based on 3D skeletal data acquired with the Kinect V2. We tested the RNN models with two different feature sets: one consisting of eight human-crafted kinematic features selected by a genetic algorithm, and another consisting of 52 features: the ego-centric 3D coordinates of the considered skeleton joints plus the subject's distance from the Kinect V2. To improve the generalization ability of the 3BGRU model, we also applied a data augmentation method to balance the training dataset. With this last solution, we reached an accuracy of 88%, our best result so far.
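A hedged sketch of how the second feature set might be assembled: joints are re-expressed relative to a reference joint (ego-centric coordinates) and the subject's distance from the sensor is appended. The joint count, choice of reference joint, and coordinate values below are illustrative assumptions, not taken from the paper.

```python
# Ego-centric skeleton features: every joint expressed relative to a
# reference joint (spine base assumed here), plus the reference joint's
# distance from the camera origin (the sensor).

import math

def egocentric_features(joints, ref_index=0):
    """joints: list of (x, y, z) positions in camera coordinates."""
    rx, ry, rz = joints[ref_index]
    feats = []
    for (x, y, z) in joints:
        feats.extend((x - rx, y - ry, z - rz))
    feats.append(math.sqrt(rx * rx + ry * ry + rz * rz))
    return feats

# Toy 3-joint skeleton standing about 2 m from the sensor.
skeleton = [(0.0, 0.8, 2.0), (0.1, 1.2, 2.1), (-0.1, 1.2, 2.0)]
v = egocentric_features(skeleton)
print(len(v))  # 3 joints * 3 coords + 1 distance = 10
```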
Affiliation(s)
- Bruna Maria Vittoria Guerra, Stefano Ramat, Giorgio Beltrami, Micaela Schmid: Laboratory of Bioengineering, Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
4. Bhola G, Vishwakarma DK. A review of vision-based indoor HAR: state-of-the-art, challenges, and future prospects. Multimed Tools Appl 2023:1-41. PMID: 37362688; PMCID: PMC10173923; DOI: 10.1007/s11042-023-15443-5.
Abstract
With the advent of technology, we have grown more comfortable using gadgets, cameras, and related devices, and Artificial Intelligence has become an integral part of many tasks we perform throughout the day. In this scenario, cameras and vision-based sensors offer a way to address many real-time problems and challenges. One major application of these vision-based systems is indoor Human Activity Recognition (HAR), which serves a variety of scenarios ranging from smart homes, elderly care, assisted living, and human behavior pattern analysis to abnormal activity recognition such as falling, slipping, and domestic violence. The real-time impact of HAR has made indoor activity recognition an area heavily explored by industry across multiple product domains. Considering these aspects of HAR, this work proposes a detailed survey on indoor HAR. We highlight recent methodologies and their performance in indoor activity recognition, and discuss the challenges, a detailed study of approaches with real-world applications of indoor HAR, the datasets available for indoor activity recognition, and their technical details. We propose a taxonomy for indoor HAR and highlight the state of the art and future prospects by identifying research gaps and the shortcomings of recent surveys with respect to our work.
Affiliation(s)
- Geetanjali Bhola, Dinesh Kumar Vishwakarma: Biometric Research Laboratory, Department of Information Technology, Delhi Technological University, Bawana Road, Delhi 110042, India
5. Tiribelli S, Monnot A, Shah SFH, Arora A, Toong PJ, Kong S. Ethics Principles for Artificial Intelligence-Based Telemedicine for Public Health. Am J Public Health 2023;113:577-584. PMID: 36893365; PMCID: PMC10088937; DOI: 10.2105/ajph.2023.307225.
Abstract
The use of artificial intelligence (AI) in the field of telemedicine has grown exponentially over the past decade, along with the adoption of AI-based telemedicine to support public health systems. Although AI-based telemedicine can open up novel opportunities for the delivery of clinical health and care and become a strong aid to public health systems worldwide, it also comes with ethical risks that should be detected, prevented, or mitigated for the responsible use of AI-based telemedicine in and for public health. However, despite the current proliferation of AI ethics frameworks, thus far, none have been developed for the design of AI-based telemedicine, especially for the adoption of AI-based telemedicine in and for public health. We aimed to fill this gap by mapping the most relevant AI ethics principles for AI-based telemedicine for public health and by showing the need to revise them via major ethical themes emerging from bioethics, medical ethics, and public health ethics toward the definition of a unified set of 6 AI ethics principles for the implementation of AI-based telemedicine. (Am J Public Health. Published online ahead of print March 9, 2023:e1-e8. https://doi.org/10.2105/AJPH.2022.307225).
Affiliation(s)
- Simona Tiribelli: Department of Political Sciences, Communication, and International Relations, University of Macerata, Macerata, Italy; Institute for Technology and Global Health, Cambridge, MA
- Annabelle Monnot: Polygeia, Global Health Think Tank, Cambridge, UK
- Syed F H Shah: School of Clinical Medicine, University of Cambridge, Cambridge
- Anmol Arora: School of Clinical Medicine, University of Cambridge, Cambridge
- Ping J Toong: Department of Pathology, University of Cambridge
- Sokanha Kong: Department of Medical Genetics, University of Cambridge
6. Sardari S, Sharifzadeh S, Daneshkhah A, Nakisa B, Loke SW, Palade V, Duncan MJ. Artificial Intelligence for skeleton-based physical rehabilitation action evaluation: A systematic review. Comput Biol Med 2023;158:106835. PMID: 37019012; DOI: 10.1016/j.compbiomed.2023.106835.
Abstract
Performing prescribed physical exercises during home-based rehabilitation programs plays an important role in regaining muscle strength and improving balance for people with different physical disabilities. However, patients attending these programs are unable to assess their own performance in the absence of a medical expert. Recently, vision-based sensors capable of capturing accurate skeleton data have been deployed in the activity monitoring domain, and there have been significant advancements in Computer Vision (CV) and Deep Learning (DL) methodologies. These factors have enabled solutions for automatic patient activity monitoring, and improving such systems' performance to assist patients and physiotherapists has attracted wide interest from the research community. This paper provides a comprehensive and up-to-date literature review of the different stages of skeleton data acquisition for physiotherapy exercise monitoring. It then reviews previously reported Artificial Intelligence (AI)-based methodologies for skeleton data analysis, in particular feature learning from skeleton data, evaluation, and feedback generation for rehabilitation monitoring, along with the challenges associated with these processes. Finally, the paper puts forward several suggestions for future research directions in this area.
7. Islam MM, Nooruddin S, Karray F, Muhammad G. Human activity recognition using tools of convolutional neural networks: A state of the art review, data sets, challenges, and future prospects. Comput Biol Med 2022;149:106060. DOI: 10.1016/j.compbiomed.2022.106060.
8.
9. Shang M, De Raedt W, Varon C, Vanrumste B. Are Gyroscopes an Added Value in Leave-One-Subject-Out Activity Recognition with IMUs? Annu Int Conf IEEE Eng Med Biol Soc 2022;2022:2399-2402. PMID: 36085705; DOI: 10.1109/embc48229.2022.9871845.
Abstract
Inertial sensors have played a key role in the development of Human Activity Recognition (HAR) systems. Adding gyroscopes to HAR systems increases battery and processing demands, so it is important to assess their added value compared with using accelerometers only. This study evaluates the added value of gyroscopes in activity recognition on two publicly available datasets recorded with both accelerometers and gyroscopes: the UCI HAR dataset (walking, walking upstairs, walking downstairs, sitting, standing, and laying) and the WISDM dataset (18 hand-oriented and non-hand-oriented activities). Several machine learning models were applied to both datasets and evaluated with leave-one-subject-out (LOSO) cross-validation, in which the training and test sets come from different subjects. For the UCI HAR dataset, the multilayer perceptron (MLP) model obtained the highest f1-scores, and adding a gyroscope on the waist significantly improved the f1-scores for sitting and laying. For the WISDM dataset, the support vector machine (SVM) model obtained the highest f1-scores; a gyroscope on the wrist improved hand-oriented activities, while a gyroscope in the pocket improved non-hand-oriented activities. The results showed that adding gyroscopes improved recognition performance, but the improvement depended on the type of activity and the mounting place of the gyroscope. Clinical relevance: gyroscopes are common sensors for activity recognition in wearable healthcare systems, and this study demonstrates their added value at different mounting places.
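The leave-one-subject-out protocol mentioned above can be sketched in a few lines: each fold holds out every sample of one subject, so the training and test sets never share a subject. The subject labels below are illustrative.

```python
# Leave-one-subject-out (LOSO) cross-validation: one fold per subject,
# with all of that subject's samples held out for testing.

def loso_splits(subject_ids):
    """Yield (train_indices, test_indices) pairs, one per subject."""
    for held_out in sorted(set(subject_ids)):
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        yield train, test

# Five samples from three subjects.
subjects = ["A", "A", "B", "C", "B"]
for train, test in loso_splits(subjects):
    print(test)  # [0, 1] then [2, 4] then [3]
```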
10. D'Arco L, Wang H, Zheng H. Assessing Impact of Sensors and Feature Selection in Smart-Insole-Based Human Activity Recognition. Methods Protoc 2022;5:45. PMID: 35736546; PMCID: PMC9230734; DOI: 10.3390/mps5030045.
Abstract
Human Activity Recognition (HAR) is increasingly used in a variety of applications, including health care, fitness tracking, and rehabilitation. Wearable technologies have advanced over the years to reduce the impact on users' daily activities. In this study, an improved smart-insole-based HAR system is proposed, and the impact of data segmentation, the sensors used, and feature selection on HAR is fully investigated. The Support Vector Machine (SVM), a supervised learning algorithm, is used to recognise six ambulation activities: downstairs, sit-to-stand, sitting, standing, upstairs, and walking. Given the impact data segmentation can have on classification, the sliding window size was optimised, identifying a window length of 10 s with 50% overlap as the best performing. The inertial and pressure sensors embedded in the smart insoles were assessed to determine the importance of each in the classification. A feature selection technique was applied to reduce the number of features from 272 to 227, to improve the robustness of the proposed system and to investigate the importance of features in the dataset. According to the findings, the inertial sensors are reliable for recognising dynamic activities, while the pressure sensors are reliable for stationary activities; however, the highest accuracy (94.66%) was achieved by combining both types of sensors.
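The segmentation step described above can be sketched as fixed-length sliding windows with 50% overlap. At the paper's 10 s window length, the sample count per window depends on the sampling rate (not stated in the abstract), so the toy values below are illustrative.

```python
# Fixed-length sliding-window segmentation with configurable overlap.
# With 10 s windows at an assumed 100 Hz, window_len would be 1000
# samples and the step 500.

def sliding_windows(samples, window_len, overlap=0.5):
    step = max(1, int(window_len * (1 - overlap)))
    return [samples[i:i + window_len]
            for i in range(0, len(samples) - window_len + 1, step)]

data = list(range(10))
print(sliding_windows(data, 4))  # [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```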
11.
12. Liu L, He J, Ren K, Lungu J, Hou Y, Dong R. An Information Gain-Based Model and an Attention-Based RNN for Wearable Human Activity Recognition. Entropy 2021;23:1635. PMID: 34945941; PMCID: PMC8700115; DOI: 10.3390/e23121635.
Abstract
Wearable sensor-based human activity recognition (HAR) is a popular approach to human activity perception. However, because there is no unified human activity model, the number and positions of sensors differ across existing wearable HAR systems, which hinders their promotion and application. In this paper, an information gain-based human activity model is established, and an attention-based recurrent neural network (Attention-RNN) for human activity recognition is designed. The Attention-RNN, which combines bidirectional long short-term memory (BiLSTM) with an attention mechanism, was tested on the UCI OPPORTUNITY challenge dataset. Experiments show that the proposed human activity model provides guidance on sensor placement and a basis for choosing the number of sensors, reducing the number of sensors needed to achieve the same classification performance. In addition, the proposed Attention-RNN achieves F1 scores of 0.898 and 0.911 on the Modes of Locomotion (ML) and Gesture Recognition (GR) tasks, respectively.
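A minimal sketch of the attention step this abstract describes: per-timestep scores are softmax-normalised and used to form a weighted sum of the recurrent hidden states, so that informative timesteps dominate the sequence summary. In the paper the scores and hidden states are learned by the BiLSTM; the toy values below are illustrative.

```python
# Attention pooling over a sequence of hidden states: softmax the scores,
# then take the score-weighted sum of the states as the sequence summary.

import math

def attention_pool(hidden_states, scores):
    exps = [math.exp(s - max(scores)) for s in scores]  # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(hidden_states[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden_states))
            for d in range(dim)]

# Three 2-D hidden states; the last timestep gets the largest score.
h = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
context = attention_pool(h, scores=[0.1, 0.1, 2.0])
print([round(c, 3) for c in context])  # [0.885, 0.885]
```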
Affiliation(s)
- Leyuan Liu: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Jian He: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Engineering Research Center for IOT Software and Systems, Beijing University of Technology, Beijing 100124, China (corresponding author)
- Keyan Ren: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Engineering Research Center for IOT Software and Systems, Beijing University of Technology, Beijing 100124, China (corresponding author)
- Jonathan Lungu: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Yibin Hou: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Engineering Research Center for IOT Software and Systems, Beijing University of Technology, Beijing 100124, China
- Ruihai Dong: School of Computer Science, University College Dublin, D04 V1W8 Dublin 4, Ireland
13. Review of Wearable Devices and Data Collection Considerations for Connected Health. Sensors (Basel) 2021;21:5589. PMID: 34451032; PMCID: PMC8402237; DOI: 10.3390/s21165589.
Abstract
Wearable sensor technology has gradually extended its usability into a wide range of well-known applications. Wearable sensors can typically assess and quantify the wearer's physiology and are commonly employed for human activity detection and quantified self-assessment. They are increasingly utilised to monitor patient health, rapidly assist with disease diagnosis, and help predict and often improve patient outcomes. Clinicians use various self-report questionnaires and well-known tests to report patient symptoms and assess functional ability. These assessments are time-consuming and costly, depend on subjective patient recall, and may not accurately reflect the patient's functional ability at home. Wearable sensors can be used to detect and quantify specific movements in different applications, although the volume of data collected during long-term assessment of ambulatory movement can become immense. This paper discusses current techniques used to track and record various human body movements, as well as techniques used to measure activity and sleep from long-term data collected by wearable technology devices.
14. Abdelkawy H, Ayari N, Chibani A, Amirat Y, Attal F. Spatio-Temporal Convolutional Networks and N-Ary Ontologies for Human Activity-Aware Robotic System. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2020.3047780.
15. Modeling Two-Person Segmentation and Locomotion for Stereoscopic Action Identification: A Sustainable Video Surveillance System. Sustainability 2021. DOI: 10.3390/su13020970.
Abstract
Due to the constantly increasing demand for automatic tracking and recognition systems, there is a need for more proficient, intelligent and sustainable human activity tracking. The main purpose of this study is to develop an accurate and sustainable human action tracking system that is capable of error-free identification of human movements irrespective of the environment in which those actions are performed. Therefore, in this paper we propose a stereoscopic Human Action Recognition (HAR) system based on the fusion of RGB (red, green, blue) and depth sensors. These sensors give an extra depth of information which enables the three-dimensional (3D) tracking of each and every movement performed by humans. Human actions are tracked according to four features, namely, (1) geodesic distance; (2) 3D Cartesian-plane features; (3) joints Motion Capture (MOCAP) features and (4) way-points trajectory generation. In order to represent these features in an optimized form, Particle Swarm Optimization (PSO) is applied. After optimization, a neuro-fuzzy classifier is used for classification and recognition. Extensive experimentation is performed on three challenging datasets: A Nanyang Technological University (NTU) RGB+D dataset; a UoL (University of Lincoln) 3D social activity dataset and a Collective Activity Dataset (CAD). Evaluation experiments on the proposed system proved that a fusion of vision sensors along with our unique features is an efficient approach towards developing a robust HAR system, having achieved a mean accuracy of 93.5% with the NTU RGB+D dataset, 92.2% with the UoL dataset and 89.6% with the Collective Activity dataset. The developed system can play a significant role in many computer vision-based applications, such as intelligent homes, offices and hospitals, and surveillance systems.
16. Otebolaku A, Enamamu T, Alfoudi A, Ikpehai A, Marchang J, Lee GM. Deep Sensing: Inertial and Ambient Sensing for Activity Context Recognition Using Deep Convolutional Neural Networks. Sensors (Basel) 2020;20:3803. PMID: 32646025; PMCID: PMC7374292; DOI: 10.3390/s20133803.
Abstract
With the widespread use of the embedded sensing capabilities of mobile devices, there has been unprecedented development of context-aware solutions, allowing the proliferation of various intelligent applications, such as those for remote health and lifestyle monitoring and intelligent personalized services. However, activity context recognition based on multivariate time series signals obtained from mobile devices in unconstrained conditions is naturally prone to class imbalance problems: recognition models tend to predict the classes with the most samples while ignoring classes with the fewest samples, resulting in poor generalization. To address this problem, we propose augmenting the time series signals from inertial sensors with signals from ambient sensing to train deep convolutional neural network (DCNN) models, which capture the local dependency and scale invariance of these combined sensor signals. We developed one DCNN model using only inertial sensor signals and another combining signals from both inertial and ambient sensors, to investigate the class imbalance problem and improve the recognition model's performance. Evaluation and analysis of the proposed system on data with imbalanced classes show that the system achieved better recognition accuracy when data from inertial sensors were combined with data from ambient sensors, such as environmental noise level and illumination, with an overall improvement of 5.3% in accuracy.
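The inertial-plus-ambient fusion described above can be sketched as a simple per-window concatenation: the inertial samples of one window are flattened and the ambient readings (environmental noise level and illumination, as named in the abstract) are appended. The shapes, field names, and values below are illustrative assumptions, not the paper's actual input layout.

```python
# Per-window fusion of inertial and ambient channels: flatten the
# accelerometer samples of one window, then append the ambient scalars,
# giving one combined input vector per window.

def fuse_window(inertial_window, noise_level, illumination):
    """inertial_window: list of (ax, ay, az) samples for one time window."""
    flat = [v for sample in inertial_window for v in sample]
    return flat + [noise_level, illumination]

# Toy window of two accelerometer samples plus two ambient readings.
window = [(0.01, -0.98, 0.12), (0.02, -0.97, 0.11)]
x = fuse_window(window, noise_level=0.35, illumination=0.80)
print(len(x))  # 2 samples * 3 axes + 2 ambient channels = 8
```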
Affiliation(s)
- Abayomi Otebolaku: Department of Computing, Sheffield Hallam University, Sheffield S1 2NU, UK
- Timibloudi Enamamu: Department of Computing, Sheffield Hallam University, Sheffield S1 2NU, UK
- Ali Alfoudi: College of Computer Science & Information Technology, University of Al-Qadisiyah, Al Diwaniyah 58002, Iraq
- Augustine Ikpehai: Department of Engineering and Mathematics, Sheffield Hallam University, Sheffield S1 2NU, UK
- Jims Marchang: Department of Computing, Sheffield Hallam University, Sheffield S1 2NU, UK
- Gyu Myoung Lee: Department of Computer Science, Liverpool John Moores University, Liverpool L3 3AF, UK