1
Zhang Y, Tang H, Wu Y, Wang B, Yang D. FMCW Radar Human Action Recognition Based on Asymmetric Convolutional Residual Blocks. Sensors (Basel) 2024; 24:4570. PMID: 39065968; PMCID: PMC11281001; DOI: 10.3390/s24144570.
Abstract
Human action recognition based on optical and infrared video data is greatly affected by the environment, and feature extraction in traditional machine learning classification methods is complex; therefore, this paper proposes a method for human action recognition using Frequency Modulated Continuous Wave (FMCW) radar based on an asymmetric convolutional residual network. First, the radar echo data are analyzed and processed to extract the micro-Doppler time domain spectrograms of different actions. Second, a strategy combining asymmetric convolution and the Mish activation function is adopted in the residual block of the ResNet18 network to address the limitations of linear and nonlinear transformations in the residual block for micro-Doppler spectrum recognition. This approach aims to enhance the network's ability to learn features effectively. Finally, the Improved Convolutional Block Attention Module (ICBAM) is integrated into the residual block to enhance the model's attention and comprehension of input data. The experimental results demonstrate that the proposed method achieves a high accuracy of 98.28% in action recognition and classification within complex scenes, surpassing classic deep learning approaches. Moreover, this method significantly improves the recognition accuracy for actions with similar micro-Doppler features and demonstrates excellent anti-noise recognition performance.
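The first processing step described in the abstract, extracting a micro-Doppler spectrogram from the radar echo, can be sketched with a short-time Fourier transform. The simulated echo model, sampling rate, and window parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import stft

# Simulated slow-time radar return: a body return at a fixed Doppler shift
# plus a sinusoidal micro-Doppler modulation mimicking periodic limb motion.
fs = 1000.0                      # slow-time sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)    # 2 s observation window
f_body, f_limb, mod = 60.0, 2.0, 40.0
phase = 2 * np.pi * f_body * t + (mod / f_limb) * np.sin(2 * np.pi * f_limb * t)
echo = np.exp(1j * phase)        # unit-amplitude complex echo

# Short-time Fourier transform -> micro-Doppler time-frequency map
f, tau, Z = stft(echo, fs=fs, nperseg=128, noverlap=96, return_onesided=False)
spectrogram_db = 20 * np.log10(np.abs(Z) + 1e-12)
print(spectrogram_db.shape)
```

Images like `spectrogram_db` (one per recorded action) are the kind of input a ResNet-style classifier would then be trained on.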
Affiliation(s)
- Yuan Zhang
- School of Information Science and Technology, North China University of Technology, Beijing 100144, China
- Haotian Tang
- School of Information Science and Technology, North China University of Technology, Beijing 100144, China
- Ye Wu
- Intel Intelligent Edge Computing Joint Research Institute, Nanjing 211100, China
- Bolun Wang
- School of Information Science and Technology, North China University of Technology, Beijing 100144, China
- Dalin Yang
- School of Information Science and Technology, North China University of Technology, Beijing 100144, China
2
Krauss D, Engel L, Ott T, Bräunig J, Richer R, Gambietz M, Albrecht N, Hille EM, Ullmann I, Braun M, Dabrock P, Kölpin A, Koelewijn AD, Eskofier BM, Vossiek M. A Review and Tutorial on Machine Learning-Enabled Radar-Based Biomedical Monitoring. IEEE Open Journal of Engineering in Medicine and Biology 2024; 5:680-699. PMID: 39193041; PMCID: PMC11348957; DOI: 10.1109/ojemb.2024.3397208.
Abstract
Radio detection and ranging-based (radar) sensing offers unique opportunities for biomedical monitoring and can help overcome the limitations of currently established solutions. Due to its contactless and unobtrusive measurement principle, it can facilitate the longitudinal recording of human physiology and can help to bridge the gap from laboratory to real-world assessments. However, radar sensors typically yield complex and multidimensional data that are hard to interpret without domain expertise. Machine learning (ML) algorithms can be trained to extract meaningful information from radar data for medical experts, not only enhancing diagnostic capabilities but also contributing to advancements in disease prevention and treatment. However, until now, the two aspects of radar-based data acquisition and ML-based data processing have mostly been addressed individually and not as part of a holistic and end-to-end data analysis pipeline. For this reason, we present a tutorial on radar-based ML applications for biomedical monitoring that equally emphasizes both dimensions. We highlight the fundamentals of radar and ML theory, data acquisition and representation, and outline categories of clinical relevance. Since the contactless and unobtrusive nature of radar-based sensing also raises novel ethical concerns regarding biomedical monitoring, we additionally present a discussion that carefully addresses the ethical aspects of this novel technology, particularly regarding data privacy, ownership, and potential biases in ML algorithms.
Affiliation(s)
- Daniel Krauss
- Machine Learning and Data Analytics LabFriedrich-Alexander-Universität Erlangen-Nürnberg91054ErlangenGermany
| | - Lukas Engel
- Institute of Microwaves and PhotonicsFriedrich-Alexander-Universität Erlangen-Nürnberg91054ErlangenGermany
| | - Tabea Ott
- Chair of Systematic Theology II (Ethics)Friedrich-Alexander-Universität Erlangen-Nürnberg91054ErlangenGermany
| | - Johanna Bräunig
- Institute of Microwaves and PhotonicsFriedrich-Alexander-Universität Erlangen-Nürnberg91054ErlangenGermany
| | - Robert Richer
- Machine Learning and Data Analytics LabFriedrich-Alexander-Universität Erlangen-Nürnberg91054ErlangenGermany
| | - Markus Gambietz
- Machine Learning and Data Analytics LabFriedrich-Alexander-Universität Erlangen-Nürnberg91054ErlangenGermany
| | - Nils Albrecht
- Institute of High-Frequency TechnologyTechnische Universität Hamburg21073HamburgGermany
| | - Eva M. Hille
- Chair of Social EthicsUniversity of Bonn53113BonnGermany
| | - Ingrid Ullmann
- Institute of Microwaves and PhotonicsFriedrich-Alexander-Universität Erlangen-Nürnberg91054ErlangenGermany
| | - Matthias Braun
- Chair of Social EthicsUniversity of Bonn53113BonnGermany
| | - Peter Dabrock
- Chair of Systematic Theology II (Ethics)Friedrich-Alexander-Universität Erlangen-Nürnberg91054ErlangenGermany
| | - Alexander Kölpin
- Institute of High-Frequency TechnologyTechnische Universität Hamburg21073HamburgGermany
| | - Anne D. Koelewijn
- Machine Learning and Data Analytics LabFriedrich-Alexander-Universität Erlangen-Nürnberg91054ErlangenGermany
| | - Bjoern M. Eskofier
- Machine Learning and Data Analytics LabFriedrich-Alexander-Universität Erlangen-Nürnberg91054ErlangenGermany
- Translational Digital Health Group, Institute of AI for HealthHelmholtz Zentrum München—German Research Center for Environmental Health85764NeuherbergGermany
| | - Martin Vossiek
- Institute of Microwaves and PhotonicsFriedrich-Alexander-Universität Erlangen-Nürnberg91054ErlangenGermany
| |
3
Wang K, Ghafurian M, Chumachenko D, Cao S, Butt ZA, Salim S, Abhari S, Morita PP. Application of artificial intelligence in active assisted living for aging population in real-world setting with commercial devices - A scoping review. Comput Biol Med 2024; 173:108340. PMID: 38555702; DOI: 10.1016/j.compbiomed.2024.108340.
Abstract
BACKGROUND The aging population is steadily increasing, posing new challenges and opportunities for healthcare systems worldwide. Technological advancements, particularly in commercially available Active Assisted Living devices, offer a promising alternative. These readily accessible products, ranging from smartwatches to home automation systems, are often equipped with Artificial Intelligence capabilities that can monitor health metrics, predict adverse events, and facilitate a safer living environment. However, no review has explored how Artificial Intelligence has been integrated into commercially available Active Assisted Living technologies, or how these devices monitor health metrics and provide healthcare solutions in real-world environments for healthy aging. This review fills that knowledge gap and identifies key issues that need to be addressed in future studies. OBJECTIVE The aim of this review is to outline current understanding, identify potential research opportunities, and highlight research gaps in published studies on the use of Artificial Intelligence in commercially available Active Assisted Living technologies that assist older individuals aging at home. METHODS A comprehensive search was conducted in six databases (PubMed, CINAHL, IEEE Xplore, Scopus, ACM Digital Library, and Web of Science) to identify relevant studies published over the past decade, from 2013 to 2024. Our methodology adhered to the PRISMA extension for scoping reviews to ensure rigor and transparency throughout the review process. After applying predefined inclusion and exclusion criteria to 825 retrieved articles, a total of 64 papers were included for analysis and synthesis. RESULTS Several trends emerged from our analysis of the 64 selected papers.
A majority of the work (39/64, 61%) was published after 2020. Geographically, most of the studies originated from East Asia and North America (36/64, 56%). The primary application of Artificial Intelligence in the reviewed literature was activity recognition (34/64, 53%), followed by daily monitoring (10/64, 16%). Methodologically, tree-based and neural network-based approaches were the most prevalent Artificial Intelligence algorithms (32/64, 50% and 31/64, 48%, respectively). A notable proportion of the studies (32/64, 50%) carried out their research in specially designed smart-home testbeds that simulate real-world conditions. Moreover, ambient technology was a common thread (49/64, 77%), with occupancy-related data (such as motion and electrical-appliance usage logs) and environmental sensors (indicators such as temperature and humidity) being the most frequently used. CONCLUSION Our results suggest that Artificial Intelligence has been increasingly deployed in real-world Active Assisted Living contexts over the past decade, offering a variety of applications aimed at healthy aging and independent living for older adults. A wide range of smart-home indicators were leveraged for comprehensive data analysis, exploring and enhancing the potential and effectiveness of solutions. However, our review identified multiple research gaps that need further investigation. First, most research has been conducted in controlled testbed environments, leaving a lack of real-world applications that could validate the technologies' efficacy and scalability. Second, there is a noticeable absence of research leveraging cloud technology, an essential tool for large-scale deployment and standardized data collection and management. Future work should prioritize these areas to maximize the potential benefits of Artificial Intelligence in Active Assisted Living settings.
Affiliation(s)
- Kang Wang
- School of Public Health Sciences, University of Waterloo, Waterloo, ON, Canada
- Moojan Ghafurian
- Department of Systems Design Engineering, University of Waterloo, ON, Canada
- Dmytro Chumachenko
- National Aerospace University "Kharkiv Aviation Institute", Kharkiv, Ukraine
- Shi Cao
- Department of Systems Design Engineering, University of Waterloo, ON, Canada
- Zahid A Butt
- School of Public Health Sciences, University of Waterloo, Waterloo, ON, Canada
- Shahan Salim
- School of Public Health Sciences, University of Waterloo, Waterloo, ON, Canada
- Shahabeddin Abhari
- School of Public Health Sciences, University of Waterloo, Waterloo, ON, Canada
- Plinio P Morita
- School of Public Health Sciences, University of Waterloo, Waterloo, ON, Canada; Department of Systems Design Engineering, University of Waterloo, ON, Canada; Centre for Digital Therapeutics, Techna Institute, University Health Network, Toronto, ON, Canada; Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada
4
Deepa K, Bacanin N, Askar SS, Abouhawwash M. Elderly and visually impaired indoor activity monitoring based on Wi-Fi and Deep Hybrid convolutional neural network. Sci Rep 2023; 13:22470. PMID: 38110422; PMCID: PMC10728209; DOI: 10.1038/s41598-023-48860-5.
Abstract
A drop in physical activity and a deterioration in the capacity to undertake daily life activities are both connected with ageing and have negative effects on physical and mental health. An Elderly and Visually Impaired Human Activity Monitoring (EVHAM) system that keeps tabs on a person's routine and steps in when behaviour changes or a crisis occurs could greatly help an elderly or visually impaired person. These individuals may find greater freedom with the help of an EVHAM system. As the backbone of human-centric applications like actively supported living and in-home monitoring for the elderly and visually impaired, an EVHAM system is essential. Big data-driven product design is flourishing in this age of 5G and the IoT. Recent advancements in processing power and software architectures have also contributed to the emergence and development of artificial intelligence (AI). In this context, the digital twin has emerged as a state-of-the-art technology that bridges the gap between the real and virtual worlds by evaluating data from several sensors using artificial intelligence algorithms. Although promising findings have been reported by Wi-Fi-based human activity identification techniques so far, their effectiveness is vulnerable to environmental variations. Using the environment-independent fingerprints generated from the Wi-Fi channel state information (CSI), we introduce Wi-Sense, a human activity identification system that employs a Deep Hybrid convolutional neural network (DHCNN). The proposed system begins by collecting the CSI with a regular Wi-Fi Network Interface Controller. Wi-Sense uses the CSI ratio technique to lessen the effect of noise and the phase offset. t-Distributed Stochastic Neighbor Embedding (t-SNE) is then used to eliminate unnecessary data. In this process, the data dimension is decreased and the negative effects of the environment are removed.
The resulting spectrogram of the processed data exposes the activity's micro-Doppler fingerprints as a function of both time and location. These spectrograms are used to train the DHCNN. Based on our findings, EVHAM can accurately identify these actions 99% of the time.
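The CSI-ratio step mentioned in this abstract exploits the fact that random phase offsets (from carrier/sampling frequency residue) are shared across receive antennas, so the ratio of two antennas' CSI cancels them. A toy numerical illustration, with packet counts, subcarrier numbers, and the noise model all assumed for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy CSI for two receive antennas: 500 packets x 30 subcarriers (assumed sizes).
n_pkt, n_sub = 500, 30
h1 = rng.normal(size=(n_pkt, n_sub)) + 1j * rng.normal(size=(n_pkt, n_sub))
h2 = rng.normal(size=(n_pkt, n_sub)) + 1j * rng.normal(size=(n_pkt, n_sub))

# Inject a random common phase offset per packet; because it is identical
# on both antennas, the CSI ratio between antennas cancels it.
offset = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(n_pkt, 1)))
csi_ratio = (h1 * offset) / (h2 * offset)
```

Numerically, `csi_ratio` matches `h1 / h2` regardless of the per-packet phase noise, which is the denoising property the paper's pipeline relies on before dimensionality reduction.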
Affiliation(s)
- K Deepa
- Department of Computer Science and Engineering, K. Ramakrishnan College of Technology, Trichy, 621112, India
- S S Askar
- Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, 11451, Riyadh, Saudi Arabia
- Mohamed Abouhawwash
- Department of Computational Mathematics, Science and Engineering (CMSE), College of Engineering, Michigan State University, East Lansing, MI, 48824, USA
- Department of Mathematics, Faculty of Science, Mansoura University, Mansoura, 35516, Egypt
5
Yuan L, He Z, Wang Q, Xu L, Ma X. Improving Small-Scale Human Action Recognition Performance Using a 3D Heatmap Volume. Sensors (Basel) 2023; 23:6364. PMID: 37514658; PMCID: PMC10383990; DOI: 10.3390/s23146364.
Abstract
In recent years, skeleton-based human action recognition has garnered significant research attention, with proposed recognition or segmentation methods typically validated on large-scale coarse-grained action datasets. However, there remains a lack of research on the recognition of small-scale fine-grained human actions using deep learning methods, which have greater practical significance. To address this gap, we propose a novel approach based on heatmap-based pseudo videos and a unified, general model applicable to all modality datasets. Leveraging anthropometric kinematics as prior information, we extract common human motion features among datasets through an ad hoc pre-trained model. To overcome joint mismatch issues, we partition the human skeleton into five parts, a simple yet effective technique for information sharing. Our approach is evaluated on two datasets, including the public Nursing Activities and our self-built Tai Chi Action dataset. Results from linear evaluation protocol and fine-tuned evaluation demonstrate that our pre-trained model effectively captures common motion features among human actions and achieves steady and precise accuracy across all training settings, while mitigating network overfitting. Notably, our model outperforms state-of-the-art models in recognition accuracy when fusing joint and limb modality features along the channel dimension.
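The heatmap-based pseudo-video idea above renders skeleton joints into image sequences so a video model can consume them. A minimal sketch of that rendering, with image size, Gaussian width, and the random joint trajectories all illustrative assumptions rather than the paper's settings:

```python
import numpy as np

# Render T frames of J 2D joints into a (T, H, W) heatmap volume of
# Gaussian blobs (all joints summed into one channel for brevity).
T, J, H, W, sigma = 16, 17, 64, 64, 2.0
rng = np.random.default_rng(0)
joints = rng.uniform(8, 56, size=(T, J, 2))   # (x, y) per joint per frame

ys, xs = np.mgrid[0:H, 0:W]
volume = np.zeros((T, H, W))
for ti in range(T):
    for x, y in joints[ti]:
        volume[ti] += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))

# Normalise so the pseudo video behaves like an image sequence in [0, 1].
volume /= volume.max()
```

In practice one channel per joint (or per body part, as in the paper's five-part partition) would be kept rather than summing them.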
Affiliation(s)
- Lin Yuan
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Zhen He
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Qiang Wang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Leiyang Xu
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Xiang Ma
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
6
Diraco G, Rescio G, Caroppo A, Manni A, Leone A. Human Action Recognition in Smart Living Services and Applications: Context Awareness, Data Availability, Personalization, and Privacy. Sensors (Basel) 2023; 23:6040. PMID: 37447889; DOI: 10.3390/s23136040.
Abstract
Smart living, an increasingly prominent concept, entails incorporating sophisticated technologies in homes and urban environments to elevate the quality of life for citizens. A critical success factor for smart living services and applications, from energy management to healthcare and transportation, is the efficacy of human action recognition (HAR). HAR, rooted in computer vision, seeks to identify human actions and activities using visual data and various sensor modalities. This paper extensively reviews the literature on HAR in smart living services and applications, amalgamating key contributions and challenges while providing insights into future research directions. The review delves into the essential aspects of smart living, the state of the art in HAR, and the potential societal implications of this technology. Moreover, the paper meticulously examines the primary application sectors in smart living that stand to gain from HAR, such as smart homes, smart healthcare, and smart cities. By underscoring the significance of the four dimensions of context awareness, data availability, personalization, and privacy in HAR, this paper offers a comprehensive resource for researchers and practitioners striving to advance smart living services and applications. The methodology for this literature review involved conducting targeted Scopus queries to ensure a comprehensive coverage of relevant publications in the field. Efforts have been made to thoroughly evaluate the existing literature, identify research gaps, and propose future research directions. The comparative advantages of this review lie in its comprehensive coverage of the dimensions essential for smart living services and applications, addressing the limitations of previous reviews and offering valuable insights for researchers and practitioners in the field.
Affiliation(s)
- Giovanni Diraco
- National Research Council of Italy, Institute for Microelectronics and Microsystems, 73100 Lecce, Italy
- Gabriele Rescio
- National Research Council of Italy, Institute for Microelectronics and Microsystems, 73100 Lecce, Italy
- Andrea Caroppo
- National Research Council of Italy, Institute for Microelectronics and Microsystems, 73100 Lecce, Italy
- Andrea Manni
- National Research Council of Italy, Institute for Microelectronics and Microsystems, 73100 Lecce, Italy
- Alessandro Leone
- National Research Council of Italy, Institute for Microelectronics and Microsystems, 73100 Lecce, Italy
7
Mazurek P. Application of Feedforward and Recurrent Neural Networks for Fusion of Data from Radar and Depth Sensors Applied for Healthcare-Oriented Characterisation of Persons' Gait. Sensors (Basel) 2023; 23:1457. PMID: 36772497; PMCID: PMC9919234; DOI: 10.3390/s23031457.
Abstract
In this paper, the usability of feedforward and recurrent neural networks for fusion of data from impulse-radar sensors and depth sensors, in the context of healthcare-oriented monitoring of elderly persons, is investigated. Two methods of data fusion are considered, viz., one based on a multilayer perceptron and one based on a nonlinear autoregressive network with exogenous inputs. These two methods are compared with a reference method with respect to their capacity for decreasing the uncertainty of estimation of a monitored person's position, as well as the uncertainty of estimation of several parameters enabling medical personnel to make useful inferences about the health condition of that person, viz., the number of turns made during walking, the travelled distance, and the mean walking speed. Both artificial neural networks were trained on synthetic data. The numerical experiments show the superiority of the method based on the nonlinear autoregressive network with exogenous inputs. This may be explained by the fact that, for this type of network, the prediction of the person's position at each time instant is based on the position of that person at the previous time instants.
Affiliation(s)
- Paweł Mazurek
- Warsaw University of Technology, Faculty of Electronics and Information Technology, Institute of Radioelectronics and Multimedia Technology, ul. Nowowiejska 15/19, 00-665 Warsaw, Poland
8
Unsupervised Learning-Based Non-Invasive Fetal ECG Multi-Level Signal Quality Assessment. Bioengineering (Basel) 2023; 10:66. PMID: 36671638; PMCID: PMC9854747; DOI: 10.3390/bioengineering10010066.
Abstract
OBJECTIVE To monitor fetal health and growth, fetal heart rate is a critical indicator. The non-invasive fetal electrocardiogram (ECG) is a widely employed measurement for fetal heart rate estimation, extracted from electrodes placed on the surface of the maternal abdomen. The quality of fetal ECG recordings, however, is frequently affected by noise from various interference sources. In general, fetal heart rate estimates are unreliable when low-quality fetal ECG signals are used, which makes accurate fetal heart rate estimation a challenging task. Signal quality assessment of the fetal ECG records is therefore an essential step before fetal heart rate estimation: low-quality fetal ECG signal segments can be detected and removed so as to improve the accuracy of fetal heart rate estimation. A few supervised learning-based fetal ECG signal quality assessment approaches have been introduced and shown to accurately classify high- and low-quality fetal ECG signal segments, but these methods require large fetal ECG datasets with quality annotations, and labeled fetal ECG datasets are limited. PROPOSED METHODS An unsupervised learning-based multi-level fetal ECG signal quality assessment approach is proposed in this paper for identifying three levels of fetal ECG signal quality. We extracted features associated with signal quality, including entropy-based features, statistical features, and ECG signal quality indices. Additionally, an autoencoder-based feature is calculated, related to the reconstruction error of the spectrograms generated from fetal ECG signal segments. The high-, medium-, and low-quality fetal ECG signal segments are classified by feeding these features into a self-organizing map.
MAIN RESULTS The experimental results showed that the proposed approach achieved a weighted average F1-score of 90% in three-level fetal ECG signal quality classification. Moreover, with the acceptable removal of detected low-quality signal segments, the errors of fetal heart rate estimation were reduced to a certain extent.
9
Skeleton-based Tai Chi action segmentation using trajectory primitives and content. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-08185-2.
10
Ahmed S, Lee Y, Lim YH, Cho SH, Park HK, Cho SH. Noncontact assessment for fatigue based on heart rate variability using IR-UWB radar. Sci Rep 2022; 12:14211. PMID: 35987815; PMCID: PMC9392064; DOI: 10.1038/s41598-022-18498-w.
Abstract
Physical fatigue can be assessed using heart rate variability (HRV). We measured HRV at rest and in a fatigued state using impulse-radio ultra-wideband (IR-UWB) radar in a noncontact fashion and compared the measurements with those obtained using electrocardiography (ECG) to assess the reliability and validity of the radar measurements. HRV was measured in 15 subjects using radar and ECG simultaneously before (rest for 10 min before exercise) and after a 20-min exercise session (fatigue level 1 for 0–9 min; fatigue level 2 for 10–19 min; recovery for ≥ 20 min after exercise). HRV was analysed in the frequency domain, including the low-frequency component (LF), the high-frequency component (HF) and the LF/HF ratio. The LF/HF ratio measured using radar highly agreed with that measured using ECG during rest (ICC = 0.807), fatigue-1 (ICC = 0.712), fatigue-2 (ICC = 0.741) and recovery (ICC = 0.764) in analyses using intraclass correlation coefficients (ICCs). The change pattern in the LF/HF ratio during the experiment was similar between radar and ECG. The subject's body fat percentage was linearly associated with the time to recovery from physical fatigue (R2 = 0.96, p < 0.001). Our results demonstrated that fatigue and rest states can be distinguished accurately based on HRV measurements using IR-UWB radar in a noncontact fashion.
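The frequency-domain HRV quantities used in this study (LF, HF, and their ratio) are computed from a resampled RR-interval tachogram. A toy sketch with a synthetic tachogram; the modulation amplitudes, resampling rate, and Welch settings are assumptions for illustration, not the paper's processing parameters:

```python
import numpy as np
from scipy.signal import welch

# Synthetic RR tachogram at ~70 bpm with an LF (0.10 Hz) and a stronger
# HF (0.25 Hz) modulation.
n_beats, rr_mean = 300, 0.857
beat_times = np.arange(n_beats) * rr_mean
rr = (rr_mean
      + 0.02 * np.sin(2 * np.pi * 0.10 * beat_times)   # LF component
      + 0.03 * np.sin(2 * np.pi * 0.25 * beat_times))  # HF component

# Resample the tachogram onto a uniform 4 Hz grid for spectral estimation.
fs = 4.0
t_uni = np.arange(beat_times[0], beat_times[-1], 1 / fs)
rr_uni = np.interp(t_uni, beat_times, rr)

# Welch PSD, then integrate the standard LF (0.04-0.15 Hz) and
# HF (0.15-0.40 Hz) bands.
f, psd = welch(rr_uni - rr_uni.mean(), fs=fs, nperseg=256)
df = f[1] - f[0]
lf = psd[(f >= 0.04) & (f < 0.15)].sum() * df
hf = psd[(f >= 0.15) & (f < 0.40)].sum() * df
lf_hf_ratio = lf / hf
```

With the HF modulation deliberately larger than the LF one, the resulting ratio comes out below 1, mirroring how the band powers trade off in rest-versus-fatigue comparisons.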
11
Werthen-Brabants L, Bhavanasi G, Couckuyt I, Dhaene T, Deschrijver D. Split BiRNN for real-time activity recognition using radar and deep learning. Sci Rep 2022; 12:7436. PMID: 35523811; PMCID: PMC9076655; DOI: 10.1038/s41598-022-08240-x.
Abstract
Radar systems can be used to perform human activity recognition in a privacy preserving manner. This can be achieved by using Deep Neural Networks, which are able to effectively process the complex radar data. Often these networks are large and do not scale well when processing a large amount of radar streams at once, for example when monitoring multiple rooms in a hospital. This work presents a framework that splits the processing of data in two parts. First, a forward Recurrent Neural Network (RNN) calculation is performed on an on-premise device (usually close to the radar sensor) which already gives a prediction of what activity is performed, and can be used for time-sensitive use-cases. Next, a part of the calculation and the prediction is sent to a more capable off-premise machine (most likely in the cloud or a data center) where a backward RNN calculation is performed that improves the previous prediction sent by the on-premise device. This enables fast notifications to staff if troublesome activities occur (such as falling) by the on-premise device, while the off-premise device captures activities missed or misclassified by the on-premise device.
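The split described above, a causal forward pass on-premise for a fast provisional prediction, and a backward pass off-premise that fuses both directions into a refined prediction, can be sketched with a tiny untrained numpy RNN. All sizes and weights are illustrative assumptions; the point is the data flow, not accuracy.

```python
import numpy as np

rng = np.random.default_rng(3)
T, d_in, d_h, n_cls = 20, 8, 16, 4            # toy sizes, assumed

# Random (untrained) weights for forward/backward cells and two heads.
Wf = rng.normal(scale=0.1, size=(d_h, d_h + d_in))
Wb = rng.normal(scale=0.1, size=(d_h, d_h + d_in))
Wo_fwd = rng.normal(scale=0.1, size=(n_cls, d_h))      # on-premise head
Wo_bi = rng.normal(scale=0.1, size=(n_cls, 2 * d_h))   # off-premise head
x = rng.normal(size=(T, d_in))                          # radar feature frames

def rnn_pass(W, seq):
    """Simple tanh RNN; returns the hidden state at every step."""
    h, out = np.zeros(d_h), []
    for frame in seq:
        h = np.tanh(W @ np.concatenate([h, frame]))
        out.append(h)
    return np.array(out)

# On-premise: causal forward pass -> immediate, time-sensitive prediction.
h_fwd = rnn_pass(Wf, x)
quick_logits = h_fwd @ Wo_fwd.T

# Off-premise: backward pass over the same frames, then fuse both
# directions to refine the earlier prediction.
h_bwd = rnn_pass(Wb, x[::-1])[::-1]
refined_logits = np.concatenate([h_fwd, h_bwd], axis=1) @ Wo_bi.T
```

Only `h_fwd` and `quick_logits` need to leave the on-premise device, which is what lets the latency-critical path stay local while the bidirectional refinement runs in the cloud.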
Affiliation(s)
- Ivo Couckuyt
- Ghent University, IDLab - imec, 9000, Ghent, Belgium
- Tom Dhaene
- Ghent University, IDLab - imec, 9000, Ghent, Belgium