1
Li Z, Chen K, Xie Y. A Deep Learning Method for Human Sleeping Pose Estimation with Millimeter Wave Radar. Sensors (Basel) 2024;24:5900. PMID: 39338645; PMCID: PMC11435949; DOI: 10.3390/s24185900. Received 2024-08-01; Revised 2024-09-03; Accepted 2024-09-09.
Abstract
Recognizing sleep posture is crucial for monitoring people with sleep disorders. Existing contact-based systems may interfere with sleep, while camera-based systems raise privacy concerns. In contrast, radar-based sensors offer a promising solution, with high penetration ability and the capability to detect vital bio-signals. This study proposes a deep learning method for recognizing human sleep postures from signals acquired by a single-antenna Frequency-Modulated Continuous Wave (FMCW) radar device. To capture both frequency and sequential features, we introduce ResTCN, an effective architecture combining residual blocks with a Temporal Convolutional Network (TCN), which recognizes different sleeping postures from augmented statistical motion features of the radar time series. We rigorously evaluated our method on an experimentally acquired dataset containing sleeping radar sequences from 16 volunteers and report an average classification accuracy of 82.74%, outperforming state-of-the-art methods.
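The core building block this abstract names (a residual connection wrapped around a dilated causal temporal convolution) can be sketched minimally. This is an illustrative NumPy sketch, not the authors' implementation; the 3-tap kernel, dilation, and single-feature input are assumptions.

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation):
    """Causal dilated 1-D convolution: the output at time t only
    sees x[<= t]. x: (T,) signal, w: (k,) kernel."""
    k = len(w)
    pad = (k - 1) * dilation          # left-pad so output length == T
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

def residual_tcn_block(x, w, dilation):
    """One ResTCN-style block: dilated causal conv + ReLU, with a
    residual (skip) connection adding the input back."""
    h = causal_dilated_conv1d(x, w, dilation)
    return np.maximum(h, 0.0) + x     # ReLU non-linearity + residual add

# Toy radar-like time series (assumed data, not from the paper)
x = np.sin(np.linspace(0, 6.28, 16))
w = np.array([0.5, 0.3, 0.2])         # assumed 3-tap kernel
y = residual_tcn_block(x, w, dilation=2)
print(y.shape)  # (16,) -- length preserved by causal padding
```

Stacking such blocks with growing dilations is what gives a TCN its long temporal receptive field while the residual path keeps gradients stable.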
Affiliation(s)
- Zisheng Li
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- University of Chinese Academy of Sciences, Beijing 100190, China
- Ken Chen
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yaoqin Xie
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
2
Lai DKH, Tam AYC, So BPH, Chan ACH, Zha LW, Wong DWC, Cheung JCW. Deciphering Optimal Radar Ensemble for Advancing Sleep Posture Prediction through Multiview Convolutional Neural Network (MVCNN) Approach Using Spatial Radio Echo Map (SREM). Sensors (Basel) 2024;24:5016. PMID: 39124063; PMCID: PMC11314943; DOI: 10.3390/s24155016. Received 2024-07-03; Revised 2024-08-01; Accepted 2024-08-01.
Abstract
Assessing sleep posture, a critical component of sleep tests, is essential for understanding an individual's sleep quality and identifying potential sleep disorders. However, monitoring sleep posture has traditionally posed significant challenges due to factors such as low light conditions and obstructions like blankets. Radar technology could be a potential solution. The objective of this study was to identify the optimal quantity and placement of radar sensors for accurate sleep posture estimation. We invited 70 participants to assume nine different sleep postures under blankets of varying thicknesses, in a setting equipped with a baseline of eight radars: three positioned at the headboard and five along the side. We proposed a novel technique for generating radar maps, the Spatial Radio Echo Map (SREM), designed specifically for data fusion across multiple radars. Sleep posture estimation was conducted using a Multiview Convolutional Neural Network (MVCNN), which served as the overarching framework for a comparative evaluation of deep feature extractors, including ResNet-50, EfficientNet-50, DenseNet-121, PHResNet-50, Attention-50, and the Swin Transformer. Among these, DenseNet-121 achieved the highest accuracy, scoring 0.534 and 0.804 for nine-class coarse- and four-class fine-grained classification, respectively. This led to further analysis of the optimal radar ensemble. Among the radars positioned at the head, a single left-located radar proved both essential and sufficient, achieving an accuracy of 0.809. When only one central head radar was used, omitting the central side radar and retaining only the three upper-body radars resulted in accuracies of 0.779 and 0.753, respectively. This study establishes a foundation for determining the optimal sensor configuration in this application, while also exploring the trade-off between accuracy and using fewer sensors.
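The multiview idea described here (one feature extractor per radar view, with the views fused before classification) can be sketched minimally. The shared linear "extractor" and max-pooling fusion below are illustrative assumptions standing in for the paper's CNN backbones (e.g. DenseNet-121) and its SREM inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(view, W):
    """Stand-in per-view feature extractor (the paper uses CNN
    backbones): a shared linear map followed by ReLU."""
    return np.maximum(view.ravel() @ W, 0.0)

def mvcnn_fuse(views, W):
    """MVCNN-style fusion: run the shared extractor on every radar
    view, then element-wise max-pool across views."""
    feats = np.stack([extract_features(v, W) for v in views])
    return feats.max(axis=0)          # one fused descriptor per scene

# Eight assumed 16x16 radar maps (SREM-like views), 32-d features
views = [rng.normal(size=(16, 16)) for _ in range(8)]
W = rng.normal(size=(256, 32))        # shared weights across views
fused = mvcnn_fuse(views, W)
logits = fused @ rng.normal(size=(32, 9))   # nine posture classes
print(logits.shape)  # (9,)
```

Because the extractor weights are shared and the fusion is a symmetric max, the classifier is invariant to the ordering of the radar views, which is what makes dropping individual radars (as in the ensemble analysis above) straightforward to test.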
Affiliation(s)
- Derek Ka-Hei Lai
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Andy Yiu-Chau Tam
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Bryan Pak-Hei So
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Andy Chi-Ho Chan
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Li-Wen Zha
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Duo Wai-Chi Wong
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China
- James Chung-Wai Cheung
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Research Institute of Smart Ageing, The Hong Kong Polytechnic University, Hong Kong 999077, China
3
Hu D, Gao W, Ang KK, Hu M, Chuai G, Huang R. Smart Sleep Monitoring: Sparse Sensor-Based Spatiotemporal CNN for Sleep Posture Detection. Sensors (Basel) 2024;24:4833. PMID: 39123879; PMCID: PMC11314976; DOI: 10.3390/s24154833. Received 2024-06-25; Revised 2024-07-20; Accepted 2024-07-24.
Abstract
Sleep quality is heavily influenced by sleep posture: research indicates that a supine posture can worsen obstructive sleep apnea (OSA), while lateral postures promote better sleep. For patients confined to bed, regular changes in posture are crucial to prevent the development of pressure ulcers and bedsores. This study presents a novel sparse sensor-based spatiotemporal convolutional neural network (S3CNN) for detecting sleep posture. The S3CNN holistically combines a pair of spatial convolutional neural networks that capture cardiorespiratory activity maps with a pair of temporal convolutional neural networks that capture the heart rate and respiratory rate. Sleep data were collected under real sleep conditions from 22 subjects using a sparse sensor array. The S3CNN was then trained to capture the spatial pressure distribution from the cardiorespiratory activity and the temporal cardiopulmonary variability from the heart and respiratory data. Its performance was evaluated using three rounds of 10-fold cross-validation on the 8583 data samples collected from the subjects. The results yielded 91.96% recall, 92.65% precision, and 93.02% accuracy, comparable to state-of-the-art methods that use significantly more sensors for only marginally higher accuracy. The proposed S3CNN therefore shows promise for sleep posture monitoring with sparse sensors, offering a more cost-effective approach.
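The paired-branch design described here (a spatial branch over the pressure map, a temporal branch over heart/respiratory-rate series, with features combined for classification) can be sketched as follows. The pooling "branches", grid size, and window length are illustrative assumptions, not the S3CNN layers.

```python
import numpy as np

def spatial_branch(pressure_map, k=4):
    """Stand-in spatial branch: k x k average pooling over the sparse
    sensor pressure map, flattened into a feature vector."""
    h, w = pressure_map.shape
    pooled = pressure_map[:h - h % k, :w - w % k] \
        .reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    return pooled.ravel()

def temporal_branch(series, win=5):
    """Stand-in temporal branch: sliding-window means over a heart-
    or respiratory-rate series."""
    return np.convolve(series, np.ones(win) / win, mode="valid")

def s3cnn_features(pressure_map, hr, rr):
    """Concatenate spatial and temporal features, mirroring the
    dual-pair branch structure the abstract describes."""
    return np.concatenate([spatial_branch(pressure_map),
                           temporal_branch(hr), temporal_branch(rr)])

rng = np.random.default_rng(1)
feat = s3cnn_features(rng.random((8, 16)),      # assumed sensor grid
                      60 + rng.random(20),      # heart rate (bpm)
                      15 + rng.random(20))      # respiratory rate
print(feat.shape)  # (40,) = 8 spatial + 16 + 16 temporal features
```

A classifier head on this concatenated vector would then predict the posture class; in the paper that role is played by the CNN's fully connected layers.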
Affiliation(s)
- Dikun Hu
- School of Information and Communication Engineering, Beijing University of Posts and Telecommunications (BUPT), No. 10 Xitucheng Road, Haidian District, Beijing 100876, China
- Weidong Gao
- School of Information and Communication Engineering, Beijing University of Posts and Telecommunications (BUPT), No. 10 Xitucheng Road, Haidian District, Beijing 100876, China
- Kai Keng Ang
- Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #21-01 Connexis (South Tower), Singapore 138632, Singapore
- College of Computing and Data Science, Nanyang Technological University, 50 Nanyang Ave., Singapore 639798, Singapore
- Mengjiao Hu
- Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #21-01 Connexis (South Tower), Singapore 138632, Singapore
- Gang Chuai
- School of Information and Communication Engineering, Beijing University of Posts and Telecommunications (BUPT), No. 10 Xitucheng Road, Haidian District, Beijing 100876, China
- Rong Huang
- Department of Respiratory and Critical Care Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 1 Shuaifuyuan Wangfujing, Beijing 100730, China
4
Tsujimoto M, Hisajima T, Matsuda S, Tanaka S, Suzuki K, Shimokakimoto T, Toyama Y. Exploratory analysis of swallowing behaviour in community-dwelling older adults using a wearable device: Differences by age and ingestant under different task loads. Digit Health 2024;10:20552076241264640. PMID: 39070893; PMCID: PMC11282566; DOI: 10.1177/20552076241264640. Received 2024-04-08; Accepted 2024-06-10. Open access.
Abstract
Objective: To develop a new method of evaluating swallowing behaviour. Methods: Sixty-nine healthy participants were divided into a younger group (16 males and 16 females, mean age 39.09 ± 12.16 years) and an older group (18 males and 19 females, mean age 71.43 ± 5.50 years). The participants ingested water and yoghurt twice (directed and free swallowing), at rest and after performing simple daily life tasks (calculation and exercise). To measure swallowing frequency, we employed a smartphone-based, portable, neck-worn swallowing-sound-monitoring device, which monitors swallowing behaviour continuously by collecting biological sounds from the neck without imposing behavioural restrictions. A deep-learning neural network model for swallowing-sound identification was used for the subsequent evaluation. The device was used to obtain two types of saliva-swallowing sounds associated with the different ingestants, at rest and after performing a stimulating task. Furthermore, we assessed the associated subjective psychological states. Results: The younger group showed a higher directed swallowing frequency (for both water and yoghurt) than the older group. Regarding the type of ingestant, the swallowing frequency for yoghurt was higher during free swallowing in both groups. 'Feeling calm' was reported significantly more often in the older group after swallowing yoghurt following exercise. Conclusions: Swallowing status in daily life was measured non-invasively using a wearable mobile device. The type of ingestant, daily living activities, and age should all be considered when assessing swallowing.
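As a deliberately simplified stand-in for the paper's deep-learning sound classifier, the idea of counting swallowing events from a neck-sound recording can be illustrated with a short-time energy envelope and a threshold detector; the sampling rate, threshold, and synthetic signal below are all assumptions.

```python
import numpy as np

def count_events(signal, fs, win_s=0.1, thresh=0.5):
    """Count sound bursts: compute a short-time energy envelope,
    then count rising crossings of a fixed threshold."""
    win = max(1, int(win_s * fs))
    energy = np.convolve(signal ** 2, np.ones(win) / win, mode="same")
    above = energy > thresh
    rises = np.flatnonzero(~above[:-1] & above[1:])  # rising edges
    return len(rises)

fs = 1000                          # assumed 1 kHz sampling rate
t = np.arange(0, 10, 1 / fs)       # 10 s of synthetic "recording"
sig = 0.05 * np.sin(2 * np.pi * 5 * t)     # quiet baseline hum
for start in (2.0, 5.0, 8.0):      # three synthetic swallow bursts
    i = int(start * fs)
    sig[i:i + 200] += 2.0
events = count_events(sig, fs)
print(events)  # 3 detected bursts
```

Swallowing frequency is then simply the event count divided by the recording duration; the paper's neural network replaces the threshold rule with a learned classifier robust to real-world noise.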
Affiliation(s)
- Masashi Tsujimoto
- National Center for Geriatrics and Gerontology, Innovation Center for Translational Research, Obu, Japan
- Seiya Tanaka
- National Center for Geriatrics and Gerontology, Innovation Center for Translational Research, Obu, Japan
- Keisuke Suzuki
- National Center for Geriatrics and Gerontology, Innovation Center for Translational Research, Obu, Japan
5
Kang JH, Hsieh EH, Lee CY, Sun YM, Lee TY, Hsu JBK, Chang TH. Assessing Non-Specific Neck Pain through Pose Estimation from Images Based on Ensemble Learning. Life (Basel) 2023;13:2292. PMID: 38137893; PMCID: PMC10744896; DOI: 10.3390/life13122292. Received 2023-10-20; Revised 2023-11-27; Accepted 2023-11-28. Open access.
Abstract
BACKGROUND Mobile phones, laptops, and computers have become an indispensable part of our lives in recent years. Workers may adopt an incorrect posture when using a computer for prolonged periods, and using these products with an incorrect posture can lead to neck pain. However, there are limited data on postures in real-life situations. METHODS In this study, we used a common camera to record images of subjects carrying out three different tasks on a computer (a typing task, a gaming task, and a video-watching task). Different artificial intelligence (AI)-based pose estimation approaches were applied to extract the head's yaw, pitch, and roll and the coordinates of the eyes, nose, neck, and shoulders from the images. We used machine learning models such as random forest, XGBoost, and logistic regression, together with ensemble learning, to build a model predicting whether a subject had neck pain from their posture while using the computer. RESULTS After feature selection and adjustment of the predictive models, nested cross-validation was applied to evaluate the models and fine-tune the hyperparameters. Finally, the ensemble learning approach was used to construct a model via bagging, which achieved 87% accuracy, 92% precision, 80.3% recall, 95.5% specificity, and an AUROC of 0.878. CONCLUSIONS We developed a predictive model for identifying non-specific neck pain from 2D video images without the need for costly devices, advanced environment settings, or extra sensors. This method could provide an effective way to clinically evaluate poor posture during real-world computer usage.
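The bagging step named here (bootstrap-resample the training set, fit one base model per resample, majority-vote the predictions) can be sketched with a trivial threshold "stump" as the base learner. The stump and the synthetic one-dimensional data are assumptions standing in for the paper's random forest / XGBoost / logistic-regression bases and pose features.

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_stump(x, y):
    """Base learner: pick the threshold on a single feature that best
    separates the two classes on this bootstrap sample."""
    best_t, best_acc = x[0], 0.0
    for t in x:
        acc = np.mean((x > t) == y)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def bagging_fit(x, y, n_models=25):
    """Bagging: train each stump on a bootstrap resample (sampling
    with replacement) of the training data."""
    thresholds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(x), len(x))
        thresholds.append(fit_stump(x[idx], y[idx]))
    return thresholds

def bagging_predict(thresholds, x):
    """Majority vote across the ensemble's per-stump predictions."""
    votes = np.stack([x > t for t in thresholds])
    return votes.mean(axis=0) > 0.5

# Synthetic 1-D "posture feature": the pain class has larger values
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
y = np.concatenate([np.zeros(100, bool), np.ones(100, bool)])
model = bagging_fit(x, y)
acc = np.mean(bagging_predict(model, x) == y)
print(acc)
```

Averaging many high-variance base learners trained on resamples is what makes bagging reduce overfitting relative to any single base model.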
Affiliation(s)
- Jiunn-Horng Kang
- Department of Physical Medicine and Rehabilitation, Taipei Medical University Hospital, Taipei 110, Taiwan
- Graduate Institute of Nanomedicine and Medical Engineering, Taipei Medical University, Taipei 110, Taiwan
- En-Han Hsieh
- Graduate Institute of Biomedical Informatics, Taipei Medical University, Taipei 110, Taiwan
- Cheng-Yang Lee
- Graduate Institute of Biomedical Informatics, Taipei Medical University, Taipei 110, Taiwan
- Tzong-Yi Lee
- Institute of Bioinformatics and Systems Biology, National Yang Ming Chiao Tung University, Hsinchu 300, Taiwan
- Justin Bo-Kai Hsu
- Department of Computer Science and Engineering, Yuan Ze University, Taoyuan 320, Taiwan
- Tzu-Hao Chang
- Graduate Institute of Biomedical Informatics, Taipei Medical University, Taipei 110, Taiwan
- Clinical Big Data Research Center, Taipei Medical University Hospital, Taipei 110, Taiwan
6
Lai DKH, Yu ZH, Leung TYN, Lim HJ, Tam AYC, So BPH, Mao YJ, Cheung DSK, Wong DWC, Cheung JCW. Vision Transformers (ViT) for Blanket-Penetrating Sleep Posture Recognition Using a Triple Ultra-Wideband (UWB) Radar System. Sensors (Basel) 2023;23:2475. PMID: 36904678; PMCID: PMC10006965; DOI: 10.3390/s23052475. Received 2023-01-19; Revised 2023-02-16; Accepted 2023-02-20.
Abstract
Sleep posture has a crucial impact on the incidence and severity of obstructive sleep apnea (OSA); therefore, the surveillance and recognition of sleep postures could facilitate OSA assessment. Existing contact-based systems might interfere with sleeping, while camera-based systems introduce privacy concerns. Radar-based systems might overcome these challenges, especially when individuals are covered with blankets. The aim of this research was to develop a nonobstructive, multiple ultra-wideband (UWB) radar sleep posture recognition system based on machine learning models. We evaluated three single-radar configurations (top, side, and head), three dual-radar configurations (top + side, top + head, and side + head), and one tri-radar configuration (top + side + head), together with machine learning models including CNN-based networks (ResNet50, DenseNet121, and EfficientNetV2) and vision-transformer-based networks (a traditional vision transformer and Swin Transformer V2). Thirty participants (n = 30) were invited to perform four recumbent postures (supine, left side-lying, right side-lying, and prone). Data from eighteen participants were randomly chosen for model training, data from another six participants (n = 6) for model validation, and data from the remaining six participants (n = 6) for model testing. The Swin Transformer with the side-plus-head radar configuration achieved the highest prediction accuracy (0.808). Future research may consider applying synthetic aperture radar techniques.
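The vision-transformer pipeline the abstract relies on (split the radar "image" into patches, embed them as tokens, apply self-attention, pool, classify) can be sketched with one single-head attention layer. The frame size, patch size, dimensions, and random weights are assumptions for illustration, not the paper's trained networks.

```python
import numpy as np

rng = np.random.default_rng(7)

def patchify(img, p=8):
    """Split an H x W radar map into flattened p x p patch tokens."""
    h, w = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(0, h, p) for j in range(0, w, p)])

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over patch tokens:
    every patch attends to every other patch."""
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

# Assumed 32x32 single-channel radar frame, four posture classes
img = rng.normal(size=(32, 32))
tokens = patchify(img)                          # (16, 64) patch tokens
d = 32
Wq, Wk, Wv = (rng.normal(size=(64, d)) for _ in range(3))
attended = self_attention(tokens, Wq, Wk, Wv)   # (16, 32)
logits = attended.mean(axis=0) @ rng.normal(size=(d, 4))
print(logits.shape)  # (4,) -- one score per recumbent posture
```

The global all-patches-to-all-patches attention is the key difference from CNN backbones, whose receptive fields grow only gradually with depth; Swin additionally restricts attention to shifted local windows for efficiency.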
Affiliation(s)
- Derek Ka-Hei Lai
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Zi-Han Yu
- School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan 430074, China
- Tommy Yau-Nam Leung
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Hyo-Jung Lim
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Andy Yiu-Chau Tam
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Bryan Pak-Hei So
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Ye-Jiao Mao
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Daphne Sze Ki Cheung
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Duo Wai-Chi Wong
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China
- James Chung-Wai Cheung
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Research Institute of Smart Ageing, The Hong Kong Polytechnic University, Hong Kong 999077, China