1. Ahmed S, Yoon S, Cho SH. A public dataset of dogs vital signs recorded with ultra wideband radar and reference sensors. Sci Data 2024; 11:107. PMID: 38253685; PMCID: PMC10803748; DOI: 10.1038/s41597-024-02947-4.
Abstract
Recently, radar sensors have been used extensively for vital sign monitoring in dogs owing to their noncontact, noninvasive nature. However, no public dataset of dog vital signs has been available, since capturing data from dogs requires special training and approval. This work presents the first ultra wideband radar-based dog vital sign (UWB-DVS) dataset, captured in two independent scenarios. In the first scenario, clinical reference sensors are attached to anesthetized dogs, and data from the UWB radar and reference sensors are captured synchronously. In the second scenario, the dogs can move freely, and video recordings are provided as a reference for movement detection and breathing extraction. For technical validation, a high correlation, above 0.9, is found between the radar and the clinical reference sensors for both heart rate and breathing rate measurements in scenario 1. In scenario 2, the vital signs and movement of the dogs are shown as dashboards, demonstrating the long-term monitoring capability of the radar sensor.
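The validation described above boils down to correlating two synchronized per-epoch rate series. A minimal sketch of that check, assuming hypothetical, already-extracted heart-rate series rather than the dataset's actual file formats:

```python
# Minimal sketch: correlating radar-derived vital-sign estimates with a
# clinical reference, as in the validation described above. The input
# arrays are hypothetical; the dataset ships its own formats.
import numpy as np

def agreement(radar_bpm: np.ndarray, reference_bpm: np.ndarray) -> float:
    """Pearson correlation between two aligned per-epoch rate series."""
    if radar_bpm.shape != reference_bpm.shape:
        raise ValueError("series must be synchronized and equally long")
    return float(np.corrcoef(radar_bpm, reference_bpm)[0, 1])

# Synthetic data standing in for one recording session.
rng = np.random.default_rng(0)
reference_hr = 80 + 5 * np.sin(np.linspace(0, 6, 300))  # reference HR (bpm)
radar_hr = reference_hr + rng.normal(0, 1.0, 300)       # radar estimate + noise
print(f"HR correlation: {agreement(radar_hr, reference_hr):.3f}")  # ~0.9+
```

The same check applies to the breathing rate; with real recordings, the radar series would come from a vital-sign extraction pipeline rather than synthetic noise.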
Affiliation(s)
- Shahzad Ahmed, Department of Electronic Engineering, Hanyang University, Seoul, 04763, South Korea
- Seongkwon Yoon, Department of Electronic Engineering, Hanyang University, Seoul, 04763, South Korea
- Sung Ho Cho, Department of Electronic Engineering, Hanyang University, Seoul, 04763, South Korea
2. Eren C, Karamzadeh S, Kartal M. Radar human breathing dataset for applications of ambient assisted living and search and rescue operations. Data Brief 2023; 51:109757. PMID: 38053604; PMCID: PMC10694063; DOI: 10.1016/j.dib.2023.109757.
Abstract
This dataset consists of signatures of human vital signs recorded by ultra-wideband radar and lidar sensors. The data acquisition scenes cover different human postures (supine, lateral, and facedown), different radar antenna angles toward the subject, a set of distances, and different operational radar characteristics (bandwidth selection and mean power). The raw lidar and radar files and the processed data files are presented separately in the data repository. The lidar sensor serves as the reference sensor. There are 432 data records, with eight trials per scene. A homogeneous wooden table was placed in the scene to mimic clutter. The dataset therefore covers applications in search and rescue operations, sleep monitoring, and ambient assisted living (AAL).
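As a sanity check on the stated counts, the sketch below enumerates one hypothetical parameter grid that yields 54 scenes and, at eight trials each, the stated 432 records; the actual factor levels are documented in the data repository and may differ:

```python
# Hypothetical scene grid consistent with "432 records, 8 trials per scene".
# Only the postures are named in the abstract; every other level below is
# an assumption for illustration.
from itertools import product

postures = ["supine", "lateral", "facedown"]   # 3 (named in the abstract)
antenna_angles_deg = [0, 30, 60]               # 3 (assumed values)
distances_m = [1.0, 2.0, 3.0]                  # 3 (assumed values)
radar_settings = ["wideband", "narrowband"]    # 2 (assumed values)
trials_per_scene = 8

scenes = list(product(postures, antenna_angles_deg, distances_m, radar_settings))
print(len(scenes), "scenes ->", len(scenes) * trials_per_scene, "records")
# 54 scenes -> 432 records
```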
Affiliation(s)
- Cansu Eren, Satellite Communication and Remote Sensing, Department of Communication Systems, Informatics Institute, Istanbul Technical University, Istanbul, Türkiye
- Saeid Karamzadeh, Millimeter Wave Technologies, Intelligent Wireless System, Silicon Austria Labs (SAL), 4040 Linz, Austria; Electrical and Electronics Engineering Department, Faculty of Engineering and Natural Sciences, Bahçeşehir University, 34349 Istanbul, Türkiye
- Mesut Kartal, Department of Electronics and Communication Engineering, Istanbul Technical University, Istanbul, Türkiye
3. Sluÿters A, Lambot S, Vanderdonckt J, Vatavu RD. RadarSense: Accurate Recognition of Mid-Air Hand Gestures with Radar Sensing and Few Training Examples. ACM Trans Interact Intell Syst 2023. DOI: 10.1145/3589645.
Abstract
Microwave radars bring many benefits to mid-air gesture sensing due to their large field of view and their independence from environmental conditions such as ambient light and occlusion. However, radar signals are highly dimensional and usually require complex deep learning approaches. To understand this landscape, we report results from a systematic literature review of N = 118 scientific papers on radar sensing, unveiling a large variety of radar technologies with different operating frequencies, bandwidths, and antenna configurations, as well as various gesture recognition techniques. Although highly accurate, these techniques require a large amount of training data that depends on the type of radar; the training results therefore cannot be easily transferred to other radars. To address this aspect, we introduce a new gesture recognition pipeline that implements advanced full-wave electromagnetic modeling and inversion to retrieve physical characteristics of gestures that are radar independent, i.e., independent of the source, antennas, and radar-hand interactions. Inversion of radar signals further reduces the size of the dataset by several orders of magnitude while preserving the essential information. This approach is compatible with conventional gesture recognizers, such as those based on template matching, which need only a few training examples to deliver high recognition accuracy. To evaluate our gesture recognition pipeline, we conducted user-dependent and user-independent evaluations on a dataset of 16 gesture types collected with the Walabot, a low-cost off-the-shelf array radar. We contrast these results with those obtained for the same gesture types collected with an ultra-wideband radar made of a vector network analyzer with a single horn antenna and with a computer vision sensor, respectively. Based on our findings, we suggest design implications to support future development in radar-based gesture recognition.
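The inversion step requires full-wave electromagnetic modeling, which is beyond a short example, but the downstream recognizer class named above, template matching, is easy to illustrate. A minimal sketch, assuming 1-D feature sequences produced by some upstream stage (none of this is the paper's code):

```python
# Minimal template-matching recognizer (1-NN with dynamic time warping),
# the class of few-example recognizers the pipeline above targets.
# Feature sequences are assumed to be 1-D arrays produced upstream.
import numpy as np

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def classify(query, templates):
    """templates: list of (label, sequence); returns label of nearest template."""
    return min(templates, key=lambda t: dtw(query, t[1]))[0]

# One training example per class is often enough for template matching.
templates = [("swipe", np.sin(np.linspace(0, 3, 50))),
             ("push",  np.linspace(0, 1, 50))]
print(classify(np.sin(np.linspace(0, 3, 55)) + 0.05, templates))  # -> "swipe"
```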
4. A Method for UWB Localization Based on CNN-SVM and Hybrid Locating Algorithm. Information 2023. DOI: 10.3390/info14010046.
Abstract
In this paper, aiming at the severe problems of UWB positioning under NLOS interference, a complete method is proposed for LOS/NLOS classification, NLOS identification and mitigation, and a final accurate UWB coordinate solution, through the integration of two machine learning algorithms with a hybrid localization algorithm; the result is called the C-T-CNN-SVM algorithm. The algorithm consists of three basic processes: LOS/NLOS signal classification based on an SVM, NLOS signal recognition and error elimination based on a CNN, and an accurate coordinate solution based on hybrid weighting of the Chan and Taylor methods. Finally, the validity and accuracy of the C-T-CNN-SVM algorithm are demonstrated through comparison with traditional and state-of-the-art methods. (i) Focusing on four main prediction errors (range measurements, maxNoise, stdNoise, and rangeError), the standard deviation decreases from 13.65 cm to 4.35 cm, the mean error decreases from 3.65 cm to 0.27 cm, and the errors become approximately normally distributed, demonstrating that after training an SVM for LOS/NLOS signal classification and a CNN for NLOS recognition and mitigation, the accuracy of UWB range measurements can be greatly increased. (ii) After target positioning, the proposed method achieves one-dimensional X-axis and Y-axis accuracy within 175 mm, Z-axis accuracy within 200 mm, 2D (X, Y) accuracy within 200 mm, and 3D accuracy within 200 mm, with most errors falling within (100 mm, 100 mm, 100 mm). (iii) Compared with traditional algorithms, the proposed C-T-CNN-SVM algorithm performs better in terms of location accuracy, the cumulative distribution function (CDF) of the positioning error, and root-mean-square error (RMSE): the 1D, 2D, and 3D accuracy of the proposed method is 2.5 times that of the traditional methods. When the location error is less than 10 cm, the CDF of the proposed algorithm reaches only a value of 0.17; when the positioning error reaches 30 cm, only the CDF of the proposed algorithm remains in an acceptable range, and its RMSE remains acceptable when the distance error is greater than 30 cm. These results, and the idea of combining machine learning methods with classical locating algorithms for improved UWB positioning under NLOS interference, could meet the growing need for wireless indoor locating and communication and indicate the feasibility of practical deployment of such a method in the future.
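The first stage of the pipeline, SVM-based LOS/NLOS classification, can be sketched on synthetic range-measurement features as follows; the feature names mirror those in the abstract, while the data distributions and hyperparameters are assumptions, not the authors' configuration:

```python
# First stage of the pipeline sketched above: SVM LOS/NLOS classification
# on range-measurement features. Synthetic data; hyperparameters assumed.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 500
# Columns: range (m), maxNoise, stdNoise, rangeError (names from the abstract).
los = np.column_stack([rng.uniform(1, 10, n), rng.normal(5, 1, n),
                       rng.normal(1, 0.2, n), rng.normal(0.05, 0.02, n)])
nlos = np.column_stack([rng.uniform(1, 10, n), rng.normal(9, 2, n),
                        rng.normal(2.5, 0.6, n), rng.normal(0.40, 0.15, n)])
X = np.vstack([los, nlos])
y = np.array([0] * n + [1] * n)  # 0 = LOS, 1 = NLOS

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[::2], y[::2])                                    # even rows: train
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))   # odd rows: test
```

In the full method, samples flagged as NLOS would then go to the CNN for error mitigation before the Chan and Taylor coordinate solution.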
5. Brishtel I, Krauss S, Chamseddine M, Rambach JR, Stricker D. Driving Activity Recognition Using UWB Radar and Deep Neural Networks. Sensors (Basel) 2023; 23:818. PMID: 36679616; PMCID: PMC9862485; DOI: 10.3390/s23020818.
Abstract
In-car activity monitoring is a key enabler of various automotive safety functions. Existing approaches are largely based on vision systems; radar, however, can provide a low-cost, privacy-preserving alternative. To date, such radar-based systems remain sparsely researched. In our work, we introduce a novel approach that uses the Doppler signal of an ultra-wideband (UWB) radar as the input to deep neural networks for the classification of driving activities. In contrast to previous work in the domain, we focus on generalization to unseen persons and make a new radar driving activity dataset (RaDA) available to the scientific community to encourage comparison and benchmarking of future methods.
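A common way to obtain such a Doppler input is to select a range bin from the radar's slow-time matrix and apply a short-time Fourier transform along slow time. A minimal sketch, assuming an illustrative matrix layout and frame rate (this is not the RaDA preprocessing code):

```python
# Illustrative Doppler extraction for a UWB radar: STFT along slow time
# at one range bin. Matrix layout and frame rate are assumptions.
import numpy as np
from scipy.signal import stft

frame_rate_hz = 100            # slow-time sampling rate (assumed)
frames, range_bins = 1000, 64
rng = np.random.default_rng(2)
# Synthetic slow-time matrix with a 12 Hz micro-motion at range bin 20.
data = rng.normal(0, 0.1, (frames, range_bins))
t = np.arange(frames) / frame_rate_hz
data[:, 20] += np.sin(2 * np.pi * 12 * t)

slow_time = data[:, 20] - data[:, 20].mean()   # remove static clutter (DC)
f, seg_t, Z = stft(slow_time, fs=frame_rate_hz, nperseg=128, noverlap=96)
spectrogram = np.abs(Z)                        # |STFT|: Doppler vs. time
print("peak Doppler bin:", f[spectrogram.mean(axis=1).argmax()], "Hz")  # ~12 Hz
```

A spectrogram of this kind is what a 2-D convolutional classifier would then consume.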
Affiliation(s)
- Iuliia Brishtel, Department of Augmented Vision, German Research Center for Artificial Intelligence, Trippstadter Str. 122, 67663 Kaiserslautern, Germany; Department of Computer Science, RPTU, Erwin-Schrödinger-Str. 57, 67663 Kaiserslautern, Germany
- Stephan Krauss, Department of Augmented Vision, German Research Center for Artificial Intelligence, Trippstadter Str. 122, 67663 Kaiserslautern, Germany
- Mahdi Chamseddine, Department of Augmented Vision, German Research Center for Artificial Intelligence, Trippstadter Str. 122, 67663 Kaiserslautern, Germany
- Jason Raphael Rambach, Department of Augmented Vision, German Research Center for Artificial Intelligence, Trippstadter Str. 122, 67663 Kaiserslautern, Germany
- Didier Stricker, Department of Augmented Vision, German Research Center for Artificial Intelligence, Trippstadter Str. 122, 67663 Kaiserslautern, Germany; Department of Computer Science, RPTU, Erwin-Schrödinger-Str. 57, 67663 Kaiserslautern, Germany
6. Bocus MJ, Li W, Vishwakarma S, Kou R, Tang C, Woodbridge K, Craddock I, McConville R, Santos-Rodriguez R, Chetty K, Piechocki R. OPERAnet, a multimodal activity recognition dataset acquired from radio frequency and vision-based sensors. Sci Data 2022; 9:474. PMID: 35922418; PMCID: PMC9349197; DOI: 10.1038/s41597-022-01573-2.
Abstract
This paper presents a comprehensive dataset intended to evaluate passive Human Activity Recognition (HAR) and localization techniques with measurements obtained from synchronized Radio-Frequency (RF) devices and vision-based sensors. The dataset consists of RF data, including Channel State Information (CSI) extracted from a WiFi Network Interface Card (NIC), Passive WiFi Radar (PWR) built upon a Software Defined Radio (SDR) platform, and Ultra-Wideband (UWB) signals acquired via commercial off-the-shelf hardware, as well as vision/infrared data acquired from Kinect sensors. Approximately 8 hours of annotated measurements are provided, collected across two rooms from 6 participants performing 6 daily activities. This dataset can be exploited to advance WiFi- and vision-based HAR, for example using pattern recognition, skeletal representation, deep learning algorithms, or other novel approaches to accurately recognize human activities. Furthermore, it can potentially be used to passively track a human in an indoor environment. Such datasets are key tools for the development of new algorithms and methods in the context of smart homes, elderly care, and surveillance applications.
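A typical first step with such a dataset is to cut the synchronized, annotated streams into fixed-length labeled windows for training. A minimal sketch of that step, assuming a hypothetical data layout rather than the dataset's documented file formats:

```python
# Hypothetical windowing step for an annotated HAR recording: cut one
# synchronized sensor stream into fixed-length labeled windows.
import numpy as np

def windows(stream, labels, fs, win_s=2.0, hop_s=1.0):
    """stream: (samples, channels); labels: per-sample activity ids."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    out_x, out_y = [], []
    for start in range(0, len(stream) - win + 1, hop):
        seg = labels[start:start + win]
        if (seg == seg[0]).all():          # keep windows with a single activity
            out_x.append(stream[start:start + win])
            out_y.append(seg[0])
    return np.stack(out_x), np.array(out_y)

fs = 50                                     # assumed sampling rate (Hz)
stream = np.random.default_rng(3).normal(size=(fs * 60, 30))  # 1 min, 30 ch
labels = np.repeat([0, 1, 2], fs * 20)      # three 20 s activity segments
X, y = windows(stream, labels, fs)
print(X.shape, np.bincount(y))              # (57, 100, 30) [19 19 19]
```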
Affiliation(s)
- Mohammud J Bocus, School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol, Bristol, BS8 1UB, UK
- Wenda Li, Department of Security and Crime Science, University College London, London, WC1H 9EZ, UK
- Shelly Vishwakarma, Department of Security and Crime Science, University College London, London, WC1H 9EZ, UK
- Roget Kou, School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol, Bristol, BS8 1UB, UK
- Chong Tang, Department of Security and Crime Science, University College London, London, WC1H 9EZ, UK
- Karl Woodbridge, Department of Security and Crime Science, University College London, London, WC1H 9EZ, UK
- Ian Craddock, School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol, Bristol, BS8 1UB, UK
- Ryan McConville, School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol, Bristol, BS8 1UB, UK
- Raul Santos-Rodriguez, School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol, Bristol, BS8 1UB, UK
- Kevin Chetty, Department of Security and Crime Science, University College London, London, WC1H 9EZ, UK
- Robert Piechocki, School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol, Bristol, BS8 1UB, UK
7. Bian S, Liu M, Zhou B, Lukowicz P. The State-of-the-Art Sensing Techniques in Human Activity Recognition: A Survey. Sensors (Basel) 2022; 22:4596. PMID: 35746376; PMCID: PMC9229953; DOI: 10.3390/s22124596.
Abstract
Human activity recognition (HAR) has become an intensive research topic in the past decade because of its pervasive user scenarios and the rapid development of advanced algorithms and novel sensing approaches. Previous HAR-related surveys focused either on a specific branch, such as wearable sensing or video-based sensing, or on a full-stack presentation of both sensing and data processing techniques, leaving HAR-related sensing techniques weakly covered. This work presents a thorough, in-depth survey of state-of-the-art sensing modalities for HAR tasks to give newcomers to the community a solid understanding of the various sensing principles. First, we categorize HAR-related sensing modalities into five classes: mechanical kinematic sensing, field-based sensing, wave-based sensing, physiological sensing, and hybrid/others. Specific sensing modalities are then presented in each category, with a thorough description of the sensing principles and the latest related work. We also discuss the strengths and weaknesses of each modality across the categorization so that newcomers can gain a better overview of the characteristics of each sensing modality for HAR tasks and choose the proper approach for their specific application. Finally, we summarize the presented sensing techniques with a comparison across selected performance metrics and offer a few outlooks on future sensing techniques for HAR tasks.
Affiliation(s)
- Sizhen Bian, German Research Centre for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany (affiliation shared with M.L., B.Z., and P.L.)
8. Kim J, Lee WH, Kim SH, Na JY, Lim YH, Cho SH, Cho SH, Park HK. Preclinical trial of noncontact anthropometric measurement using IR-UWB radar. Sci Rep 2022; 12:8174. PMID: 35581250; PMCID: PMC9112269; DOI: 10.1038/s41598-022-12209-1.
Abstract
Anthropometric profiles are important indices for assessing medical conditions, including malnutrition, obesity, and growth disorders. Noncontact methods for estimating these parameters could have considerable value in many practical situations, such as the assessment of young, uncooperative infants or children and the prevention of infectious disease transmission. The purpose of this study was to investigate the feasibility of noncontact anthropometric measurement using impulse-radio ultra-wideband (IR-UWB) radar. A total of 45 healthy adults were enrolled, and a convolutional neural network (CNN) was implemented to analyze data extracted from the IR-UWB radar. The differences (root-mean-square error, RMSE) between the radar values and bioelectrical impedance analysis (BIA) as a reference were 2.78 for height, 5.31 for weight, and 2.25 for body mass index (BMI); the radar predictions agreed strongly with BIA, with intraclass correlation coefficients (ICCs) of 0.93, 0.94, and 0.83, respectively. In conclusion, IR-UWB radar can provide accurate noncontact estimates of anthropometric parameters; this study is the first to support the radar sensor as an applicable method in such clinical situations.
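The reported agreement metrics, RMSE and the ICC, are straightforward to compute for paired measurements. A minimal sketch using the Shrout-Fleiss ICC(2,1) (two-way random, absolute agreement, single measures) on synthetic stand-in data, not the study data:

```python
# RMSE and ICC(2,1) for paired radar/reference measurements, as reported
# above. The arrays below are synthetic stand-ins, not the study data.
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

def icc_2_1(a, b):
    """Shrout-Fleiss ICC(2,1) for an n-subjects x 2-raters design."""
    Y = np.column_stack([a, b]).astype(float)
    n, k = Y.shape
    grand = Y.mean()
    ms_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    ss_err = ((Y - Y.mean(1, keepdims=True) - Y.mean(0) + grand) ** 2).sum()
    ms_err = ss_err / ((n - 1) * (k - 1))
    return float((ms_rows - ms_err) /
                 (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n))

rng = np.random.default_rng(4)
bia_height = rng.normal(170, 8, 45)                  # reference values
radar_height = bia_height + rng.normal(0, 2.8, 45)   # radar estimate + error
print(f"RMSE: {rmse(radar_height, bia_height):.2f}")
print(f"ICC(2,1): {icc_2_1(radar_height, bia_height):.2f}")
```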
Affiliation(s)
- Jinsup Kim, Department of Pediatrics, Hanyang University College of Medicine, Seoul, 04763, Republic of Korea
- Won Hyuk Lee, Department of Electronics and Computer Engineering, Hanyang University, Seoul, 04763, Republic of Korea
- Seung Hyun Kim, Department of Pediatrics, Hanyang University College of Medicine, Seoul, 04763, Republic of Korea
- Jae Yoon Na, Department of Pediatrics, Hanyang University College of Medicine, Seoul, 04763, Republic of Korea
- Young-Hyo Lim, Division of Cardiology, Department of Internal Medicine, Hanyang University College of Medicine, Seoul, 04763, Republic of Korea
- Seok Hyun Cho, Department of Otorhinolaryngology, Hanyang University College of Medicine, Seoul, 04763, Republic of Korea
- Sung Ho Cho, Department of Electronics and Computer Engineering, Hanyang University, Seoul, 04763, Republic of Korea
- Hyun-Kyung Park, Department of Pediatrics, Hanyang University College of Medicine, Seoul, 04763, Republic of Korea