1. Gleason A, Richter F, Beller N, Arivazhagan N, Feng R, Holmes E, Glicksberg BS, Morton SU, La Vega-Talbott M, Fields M, Guttmann K, Nadkarni GN, Richter F. Accurate prediction of neurologic changes in critically ill infants using pose AI. medRxiv 2024:2024.04.17.24305953. PMID: 38699362; PMCID: PMC11064996; DOI: 10.1101/2024.04.17.24305953.
Abstract
Infant alertness and neurologic changes can reflect life-threatening pathology but are assessed by exam, which can be intermittent and subjective. Reliable, continuous methods are needed. We hypothesized that our computer vision method to track movement, pose AI, could predict neurologic changes in the neonatal intensive care unit (NICU). We collected 4,705 hours of video linked to electroencephalograms (EEG) from 115 infants. We trained a deep learning pose algorithm that accurately predicted anatomic landmarks in three evaluation sets (ROC-AUCs 0.83-0.94), showing feasibility of applying pose AI in an ICU. We then trained classifiers on landmarks from pose AI and observed high performance for sedation (ROC-AUCs 0.87-0.91) and cerebral dysfunction (ROC-AUCs 0.76-0.91), demonstrating that an EEG diagnosis can be predicted from video data alone. Taken together, deep learning with pose AI may offer a scalable, minimally invasive method for neuro-telemetry in the NICU.
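The pipeline summarized above (pose landmarks in, an EEG-derived label out) can be illustrated with a minimal sketch. Everything below is synthetic and hypothetical: the motion descriptor, the track generator, and the hand-rolled logistic-regression classifier are illustrative stand-ins, not the authors' model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def movement_features(landmarks):
    """Summarize a (frames, joints, 2) landmark track as per-joint
    mean frame-to-frame displacement -- a crude motion descriptor."""
    disp = np.linalg.norm(np.diff(landmarks, axis=0), axis=-1)  # (frames-1, joints)
    return disp.mean(axis=0)                                    # (joints,)

def make_track(scale):
    """Synthetic random-walk landmark track; `scale` controls motion amount."""
    steps = rng.normal(scale=scale, size=(300, 9, 2))
    return np.cumsum(steps, axis=0)

# Toy cohort: "sedated" tracks (small steps) vs "active" tracks (large steps).
X = np.array([movement_features(make_track(s))
              for s in [0.2] * 40 + [1.0] * 40])
y = np.array([1] * 40 + [0] * 40)  # 1 = sedated, 0 = active

# Minimal logistic regression trained by gradient descent on z-scored features.
Xn = (X - X.mean(0)) / X.std(0)
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(Xn @ w + b)))
    g = p - y
    w -= 0.1 * Xn.T @ g / len(y)
    b -= 0.1 * g.mean()

scores = 1 / (1 + np.exp(-(Xn @ w + b)))
acc = ((scores > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

On this cleanly separated toy data the classifier separates the two groups easily; the paper's reported ROC-AUCs come from far noisier real video.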
2. Reich C, Prangemeier T, Francani AO, Koeppl H. An instance segmentation dataset of yeast cells in microstructures. Annu Int Conf IEEE Eng Med Biol Soc 2023;2023:1-4. PMID: 38083295; DOI: 10.1109/embc40787.2023.10340268.
Abstract
Extracting single-cell information from microscopy data requires accurate instance-wise segmentations. Obtaining pixel-wise segmentations from microscopy imagery remains a challenging task, especially with the added complexity of microstructured environments. This paper presents a novel dataset for segmenting yeast cells in microstructures. We offer pixel-wise instance segmentation labels for both cells and trap microstructures. In total, we release 493 densely annotated microscopy images. To facilitate a unified comparison between novel segmentation algorithms, we propose a standardized evaluation strategy for our dataset. The aim of the dataset and evaluation strategy is to facilitate the development of new cell segmentation approaches. The dataset is publicly available at https://christophreich1996.github.io/yeast_in_microstructures_dataset/.
3. Peng Z, Kommers D, Liang RH, Long X, Cottaar W, Niemarkt H, Andriessen P, van Pul C. Continuous sensing and quantification of body motion in infants: a systematic review. Heliyon 2023;9:e18234. PMID: 37501976; PMCID: PMC10368857; DOI: 10.1016/j.heliyon.2023.e18234.
Abstract
Abnormal body motion in infants may be associated with neurodevelopmental delay or critical illness. In contrast to the continuous monitoring of basic vital signs, infant body motion is assessed only through discrete, periodic clinical observations by caregivers, leaving infants unobserved for long stretches of time. One step toward filling this gap is to introduce and compare sensing technologies suitable for continuous quantification of infant body motion. We therefore conducted a systematic review of infant body motion quantification following the PRISMA method (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). In this review, we introduce and compare several sensing technologies used for motion quantification in different clinical applications, and we discuss the pros and cons of each. Additionally, we highlight the clinical value and prospects of infant motion monitoring. Finally, we provide suggestions addressing specific needs in clinical practice, which clinical users can consult for implementation. Our findings suggest that motion quantification can improve the performance of vital sign monitoring and can add clinical value to the diagnosis of complications in infants.
Affiliation(s)
- Zheng Peng
- Department of Applied Physics, Eindhoven University of Technology, Eindhoven, the Netherlands
- Department of Clinical Physics, Máxima Medical Centre, Veldhoven, the Netherlands
- Deedee Kommers
- Department of Applied Physics, Eindhoven University of Technology, Eindhoven, the Netherlands
- Department of Neonatology, Máxima Medical Centre, Veldhoven, the Netherlands
- Rong-Hao Liang
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
- Department of Industrial Design, Eindhoven University of Technology, Eindhoven, the Netherlands
- Xi Long
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
- Philips Research, Eindhoven, the Netherlands
- Ward Cottaar
- Department of Applied Physics, Eindhoven University of Technology, Eindhoven, the Netherlands
- Hendrik Niemarkt
- Department of Applied Physics, Eindhoven University of Technology, Eindhoven, the Netherlands
- Department of Neonatology, Máxima Medical Centre, Veldhoven, the Netherlands
- Peter Andriessen
- Department of Applied Physics, Eindhoven University of Technology, Eindhoven, the Netherlands
- Department of Neonatology, Máxima Medical Centre, Veldhoven, the Netherlands
- Carola van Pul
- Department of Applied Physics, Eindhoven University of Technology, Eindhoven, the Netherlands
- Department of Clinical Physics, Máxima Medical Centre, Veldhoven, the Netherlands
4. Gleichauf J, Hennemann L, Fahlbusch FB, Hofmann O, Niebler C, Koelpin A. Sensor fusion for the robust detection of facial regions of neonates using neural networks. Sensors (Basel) 2023;23:4910. PMID: 37430829; PMCID: PMC10223875; DOI: 10.3390/s23104910.
Abstract
The monitoring of vital signs and increasing patient comfort are cornerstones of modern neonatal intensive care. Commonly used monitoring methods are based on skin contact, which can cause irritation and discomfort in preterm neonates. Therefore, non-contact approaches are the subject of current research aiming to resolve this dichotomy. Robust neonatal face detection is essential for the reliable detection of heart rate, respiratory rate and body temperature. While solutions for adult face detection are established, the unique proportions of neonates require a tailored approach; additionally, sufficient open-source data of neonates in the NICU is lacking. We set out to train neural networks with thermal-RGB-fusion data of neonates. We propose a novel indirect fusion approach comprising the sensor fusion of a thermal and an RGB camera based on a 3D time-of-flight (ToF) camera. Unlike other approaches, this method is tailored to the close distances encountered in neonatal incubators. Two neural networks were trained on the fusion data and compared to RGB-only and thermal-only networks. For the class "head" we reached average precision values of 0.9958 (RetinaNet) and 0.9455 (YOLOv3) on the fusion data. Compared with the literature, similar precision was achieved, but we are the first to train a neural network with fusion data of neonates. The advantage of this approach lies in calculating the detection area for both the RGB and thermal modalities directly from the fusion image, which increases data efficiency by 66%. Our results will facilitate the future development of non-contact monitoring to further improve the standard of care for preterm neonates.
Affiliation(s)
- Lukas Hennemann
- Nuremberg Institute of Technology, 90489 Nuremberg, Germany
- Fabian B. Fahlbusch
- Division of Neonatology and Pediatric Intensive Care, Department of Pediatrics and Adolescent Medicine, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany
- University Children’s Hospital Augsburg, Neonatal and Pediatric Intensive Care Unit, 86156 Augsburg, Germany
- Oliver Hofmann
- Nuremberg Institute of Technology, 90489 Nuremberg, Germany
5. Voss F, Brechmann N, Lyra S, Rixen J, Leonhardt S, Hoog Antink C. Multi-modal body part segmentation of infants using deep learning. Biomed Eng Online 2023;22:28. PMID: 36949491; PMCID: PMC10031929; DOI: 10.1186/s12938-023-01092-0.
Abstract
BACKGROUND Monitoring the body temperature of premature infants is vital, as it allows optimal temperature control and may provide early warning signs of severe diseases such as sepsis. Thermography may be a non-contact, wireless alternative to state-of-the-art, cable-based methods. For monitoring use in clinical practice, automatic segmentation of the different body regions is necessary due to the movement of the infant. METHODS This work presents and evaluates algorithms for automatic segmentation of infant body parts using deep learning methods. Based on a U-Net architecture, three neural networks were developed and compared. While the first two used only one imaging modality each (visible light or thermography), the third applied a feature fusion of both. For training and evaluation, a dataset containing 600 visible light and 600 thermography images from 20 recordings of infants was created and manually labeled. In addition, we used transfer learning on publicly available datasets of adults, in combination with data augmentation, to improve the segmentation results. RESULTS Individual optimization of the three deep learning models revealed that transfer learning and data augmentation improved segmentation regardless of the imaging modality. The fusion model achieved the best results in the final evaluation, with a mean Intersection-over-Union (mIoU) of 0.85, closely followed by the RGB model. Only the thermography model achieved a lower accuracy (mIoU of 0.75). The per-class results showed that all body parts were well segmented; only accuracy on the torso was inferior, since the models struggle when only small areas of skin are visible. CONCLUSION The presented multi-modal neural networks represent a new approach to the problem of infant body segmentation with limited available data. Robust results were obtained by applying feature fusion, cross-modality transfer learning and classical augmentation strategies.
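The mIoU figures quoted above are a standard segmentation score. As a reference point, a self-contained sketch of the metric (evaluated on a toy two-class mask, not the paper's data) looks like this:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union across classes, the usual score for
    semantic segmentation; inputs are class-indexed integer masks."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                       # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 masks with 2 classes: background (0) and skin (1).
target = np.array([[0, 0, 1, 1]] * 4)
pred   = np.array([[0, 1, 1, 1]] * 4)
print(round(mean_iou(pred, target, 2), 3))  # IoU 0.5 for class 0, 2/3 for class 1
```

Averaging per-class IoU (rather than per-pixel accuracy) is what makes small classes like the torso region drag the overall score down, as the results above show.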
Affiliation(s)
- Florian Voss
- Chair of Medical Information Technology, Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Aachen, Germany
- Noah Brechmann
- Chair of Medical Information Technology, Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Aachen, Germany
- Fraunhofer Institute for Microelectronic Circuits and Systems, Duisburg, Germany
- Simon Lyra
- Chair of Medical Information Technology, Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Aachen, Germany
- Jöran Rixen
- Chair of Medical Information Technology, Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Aachen, Germany
- Steffen Leonhardt
- Chair of Medical Information Technology, Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Aachen, Germany
- Christoph Hoog Antink
- Chair of Medical Information Technology, Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Aachen, Germany
- KIS*MED (AI Systems in Medicine), Department of Electrical Engineering and Information Technology, Technische Universität Darmstadt, Darmstadt, Germany
6. Lyra S, Mustafa A, Rixen J, Borik S, Lueken M, Leonhardt S. Conditional generative adversarial networks for data augmentation of a neonatal image dataset. Sensors (Basel) 2023;23:999. PMID: 36679796; PMCID: PMC9864455; DOI: 10.3390/s23020999.
Abstract
In today's neonatal intensive care units, monitoring vital signs such as heart rate and respiration is fundamental for neonatal care. However, the attached sensors and electrodes restrict movement and can cause medical-adhesive-related skin injuries due to the immature skin of preterm infants, which may lead to serious complications. Thus, unobtrusive camera-based monitoring techniques in combination with image processing algorithms based on deep learning have the potential to allow cable-free vital sign measurements. Since the accuracy of deep-learning-based methods depends on the amount of training data, proper validation of the algorithms is difficult due to the limited image data of neonates. In order to enlarge such datasets, this study investigates the application of a conditional generative adversarial network for data augmentation, using edge detection frames from neonates to create RGB images. Different edge detection algorithms were used to validate the effect of the input images on the adversarial network's generator. The state-of-the-art network architecture Pix2PixHD was adapted, and several hyperparameters were optimized. The quality of the generated RGB images was evaluated using a Mechanical-Turk-like multistage survey conducted by 30 volunteers and the Fréchet Inception Distance (FID) score. In a fake-only stage, 23% of the images were categorized as real. A direct comparison of generated and real (manually augmented) images revealed that 28% of the fake data were evaluated as more realistic. An FID score of 103.82 was achieved. Therefore, the conducted study shows promising results for the training and application of conditional generative adversarial networks to augment highly limited neonatal image datasets.
Affiliation(s)
- Simon Lyra
- Medical Information Technology, Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, 52074 Aachen, Germany
- Arian Mustafa
- Medical Information Technology, Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, 52074 Aachen, Germany
- Jöran Rixen
- Medical Information Technology, Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, 52074 Aachen, Germany
- Stefan Borik
- Department of Electromagnetic and Biomedical Engineering, Faculty of Electrical Engineering and Information Technology, University of Zilina, 010 26 Zilina, Slovakia
- Markus Lueken
- Medical Information Technology, Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, 52074 Aachen, Germany
- Steffen Leonhardt
- Medical Information Technology, Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, 52074 Aachen, Germany
7. Microfeature segmentation algorithm for biological images using improved density peak clustering. Comput Math Methods Med 2022;2022:8630449. PMID: 36035280; PMCID: PMC9410864; DOI: 10.1155/2022/8630449.
Abstract
To address the problem of low precision in feature segmentation of biological images with heavy noise, a microfeature segmentation algorithm for biological images using improved density peak clustering is proposed. First, the center pixels and edge information of a biological image are obtained to remove redundant information. A three-dimensional space of the image is constructed, and a coordinate system is used to describe every superpixel of the image. Second, the symmetry and reversibility of the image are used to obtain the stopping positions of pixels, adjacent points are used to obtain the current color and shape information, and additional vectors are used to express density, completing the image preprocessing. Finally, the improved density peak clustering method is used to cluster the image, and the clustered pixels together with the remaining pixels are distributed evenly in the space to segment the image, completing the microfeature segmentation of the biological image. The results show that the proposed algorithm improves segmentation efficiency, segmentation integrity rate, and segmentation accuracy: the time consumed is always less than 2 minutes, and the segmentation integrity rate reaches more than 90%. Furthermore, the proposed algorithm reduces missing regions and noise in the segmented image and improves the feature segmentation effect.
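For readers unfamiliar with density peak clustering, the base method the paper improves on, here is a toy numpy sketch of the classic rho/delta formulation (local density, distance to the nearest denser point, centres by the rho*delta product). The cutoff distance, data, and centre-selection rule are illustrative choices, not the improved algorithm from the paper.

```python
import numpy as np

def density_peak_cluster(X, dc, n_clusters):
    """Toy density-peak clustering: rho = neighbours within cutoff dc,
    delta = distance to the nearest point of higher density; the
    n_clusters points maximising rho * delta become cluster centres."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    rho = (d < dc).sum(axis=1) - 1                  # exclude self
    n = len(X)
    delta = np.empty(n)
    nearest_higher = np.full(n, -1)
    order = np.argsort(-rho)                        # descending density
    delta[order[0]] = d[order[0]].max()             # densest point: max distance
    for rank, i in enumerate(order[1:], 1):
        higher = order[:rank]                       # all denser points
        j = higher[np.argmin(d[i, higher])]
        delta[i], nearest_higher[i] = d[i, j], j
    centres = np.argsort(-(rho * delta))[:n_clusters]
    labels = np.full(n, -1)
    labels[centres] = np.arange(n_clusters)
    for i in order:                                 # assign in density order, so the
        if labels[i] == -1:                         # nearest denser point is labeled
            labels[i] = labels[nearest_higher[i]]
    return labels

# Two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
labels = density_peak_cluster(X, dc=0.5, n_clusters=2)
```

For widely separated blobs, the two centres fall one per blob, since the densest point of each blob has a large delta while all other points lie close to a denser neighbour.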
8. Peng Z, van de Sande D, Lorato I, Long X, Liang RH, Andriessen P, Cottaar W, Stuijk S, van Pul C. A comparison of video-based methods for neonatal body motion detection. Annu Int Conf IEEE Eng Med Biol Soc 2022;2022:3047-3050. PMID: 36086375; DOI: 10.1109/embc48229.2022.9871700.
Abstract
Preterm infants in a neonatal intensive care unit (NICU) are continuously monitored for vital signs such as heart rate and oxygen saturation, while body motion patterns are documented only intermittently by clinical observation. Changing motion patterns in preterm infants are associated with maturation and with clinical events such as late-onset sepsis and seizures; however, continuous motion monitoring in the NICU setting is not yet performed. Video-based motion monitoring is a promising method due to its non-contact, and therefore unobtrusive, nature. This study aims to determine the feasibility of simple video-based methods for infant body motion detection. We investigated and compared four methods to detect motion in videos of infants, using two datasets acquired with different types of cameras. The thermal dataset contains 32 hours of annotated videos from 13 infants in open beds. The RGB dataset contains 9 hours of annotated videos from 5 infants in incubators. The compared methods are background subtraction (BS), sparse optical flow (SOF), dense optical flow (DOF), and oriented FAST and rotated BRIEF (ORB). Detection performance and computation time were evaluated by the area under the receiver operating characteristic curve (AUC) and run time. We conducted experiments to detect motion and gross motion, respectively. In the thermal dataset, the best performance in both experiments is achieved by BS, with mean (standard deviation) AUCs of 0.86 (0.03) and 0.93 (0.03). In the RGB dataset, SOF outperforms the other methods in both experiments, with AUCs of 0.82 (0.10) and 0.91 (0.05). All methods are efficient enough to be integrated into a camera system when using low-resolution thermal cameras.
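Of the four compared methods, background subtraction is the simplest to sketch. The toy example below (synthetic frames, a running-average background, and an arbitrary threshold; an illustration of the general technique, not the paper's implementation) derives a per-frame motion score:

```python
import numpy as np

def motion_score(frames, bg_alpha=0.05, thresh=25):
    """Per-frame motion score via simple background subtraction: keep a
    running-average background; the score is the fraction of pixels whose
    absolute deviation from the background exceeds `thresh`."""
    bg = frames[0].astype(float)
    scores = []
    for f in frames[1:]:
        diff = np.abs(f.astype(float) - bg)
        scores.append((diff > thresh).mean())
        bg = (1 - bg_alpha) * bg + bg_alpha * f   # slowly adapt the background
    return np.array(scores)

# Synthetic clip: static sensor noise, then a bright "limb" patch moves through.
rng = np.random.default_rng(0)
frames = rng.integers(0, 10, size=(20, 64, 64)).astype(np.uint8)
for t in range(10, 20):
    frames[t, 20:30, t:t + 10] = 200   # moving bright block from frame 10 on

scores = motion_score(frames)
print(scores[:9].max() < scores[9:].min())  # still frames score below moving ones
```

Thresholding such a score over time yields the binary motion/no-motion decision that the AUCs above evaluate.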
9. Lyra S, Rixen J, Heimann K, Karthik S, Joseph J, Jayaraman K, Orlikowsky T, Sivaprakasam M, Leonhardt S, Hoog Antink C. Camera fusion for real-time temperature monitoring of neonates using deep learning. Med Biol Eng Comput 2022;60:1787-1800. PMID: 35505175; PMCID: PMC9079037; DOI: 10.1007/s11517-022-02561-9.
Abstract
The continuous monitoring of vital signs is a crucial aspect of medical care in neonatal intensive care units. Since cable-based sensors pose a potential risk for the immature skin of preterm infants, unobtrusive monitoring techniques using camera systems are increasingly investigated. The combination of deep learning-based algorithms and camera modalities such as RGB and infrared thermography can improve the development of cable-free methods for the extraction of vital parameters. In this study, a real-time approach for local extraction of temperatures on the body surface of neonates using a multi-modal clinical dataset was implemented. A trained deep learning-based keypoint detector was used for body landmark prediction in RGB. Image registration was conducted to transfer the RGB points to the corresponding thermographic recordings. These landmarks were used to extract the body surface temperature in various regions to determine the central-peripheral temperature difference. A validation of the keypoint detector showed a mean average precision of 0.82. The registration resulted in mean absolute errors of 16.4 px (8.2 mm) for x and 22.4 px (11.2 mm) for y. The evaluation of the temperature extraction revealed a mean absolute error of 0.55 °C. A final performance of 31 fps was observed on the NVIDIA Jetson Xavier NX module, which proves real-time capability on an embedded GPU system. As a result, the approach can perform real-time temperature extraction on a low-cost GPU module.
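The registration step described above maps RGB landmarks into thermal pixel coordinates. A minimal sketch, assuming a plain 2x3 affine model with made-up camera geometry (the scale/offset numbers and the least-squares fit are illustrative, not the paper's registration method):

```python
import numpy as np

def transfer_points(points, A):
    """Map (N, 2) RGB keypoints into the thermal frame with a 2x3 affine A."""
    homog = np.hstack([points, np.ones((len(points), 1))])  # (N, 3)
    return homog @ A.T                                      # (N, 2)

def fit_affine(src, dst):
    """Least-squares 2x3 affine from >= 3 non-collinear point correspondences."""
    homog = np.hstack([src, np.ones((len(src), 1))])
    A_T, *_ = np.linalg.lstsq(homog, dst, rcond=None)
    return A_T.T

# Hypothetical geometry: thermal image is 0.5x the RGB scale, shifted (40, 25) px.
A_true = np.array([[0.5, 0.0, 40.0],
                   [0.0, 0.5, 25.0]])

rgb_pts = np.array([[100.0, 60.0], [220.0, 180.0],
                    [50.0, 200.0], [300.0, 90.0]])   # e.g. head/torso landmarks
thermal_pts = transfer_points(rgb_pts, A_true)

# Given matched landmark pairs, the affine is recoverable by least squares.
A_est = fit_affine(rgb_pts, thermal_pts)
print(np.allclose(A_est, A_true))
```

Real registration must also handle lens distortion and parallax, which is part of why the paper reports pixel-level registration errors rather than an exact mapping.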
10. Asano H, Hirakawa E, Hayashi H, Hamada K, Asayama Y, Oohashi M, Uchiyama A, Higashino T. A method for improving semantic segmentation using thermographic images in infants. BMC Med Imaging 2022;22:1. PMID: 34979965; PMCID: PMC8721998; DOI: 10.1186/s12880-021-00730-0.
Abstract
Background Regulation of temperature is clinically important in the care of neonates because it has a significant impact on prognosis. Although probes that make contact with the skin are widely used to monitor temperature and provide spot central and peripheral temperature information, they do not provide details of the temperature distribution around the body. Although it is possible to obtain detailed temperature distributions using multiple probes, this is not clinically practical. Thermographic techniques have been reported for measurement of temperature distribution in infants. However, as these methods require manual selection of the regions of interest (ROIs), they are not suitable for introduction into clinical settings in hospitals. Here, we describe a method for segmentation of thermal images that enables continuous quantitative contactless monitoring of the temperature distribution over the whole body of neonates. Methods The semantic segmentation method, U-Net, was applied to thermal images of infants. The optimal combination of Weight Normalization, Group Normalization, and Flexible Rectified Linear Unit (FReLU) was evaluated. U-Net Generative Adversarial Network (U-Net GAN) was applied to thermal images, and a Self-Attention (SA) module was finally applied to U-Net GAN (U-Net GAN + SA) to improve precision. The semantic segmentation performance of these methods was evaluated. Results The optimal semantic segmentation performance was obtained with application of FReLU and Group Normalization to U-Net, showing accuracy of 92.9% and Mean Intersection over Union (mIoU) of 64.5%. U-Net GAN improved the performance, yielding accuracy of 93.3% and mIoU of 66.9%, and U-Net GAN + SA showed further improvement with accuracy of 93.5% and mIoU of 70.4%. Conclusions FReLU and Group Normalization are appropriate semantic segmentation methods for application to neonatal thermal images. U-Net GAN and U-Net GAN + SA significantly improved the mIoU of segmentation.
Affiliation(s)
- Hidetsugu Asano
- Technical Department, Atom Medical Corporation, 2-2-1, Dojo, Sakura-ku, Saitama City, Saitama, 338-0835, Japan
- Eiji Hirakawa
- Department of Neonatology, Nagasaki Harbor Medical Center, 6-39, Shinchi-machi, Nagasaki City, Nagasaki, 850-8555, Japan
- Department of Neonatology, Kagoshima City Hospital, 37-1 Uearata-cho, Kagoshima City, Kagoshima, 890-8760, Japan
- Hayato Hayashi
- Technical Department, Atom Medical Corporation, 2-2-1, Dojo, Sakura-ku, Saitama City, Saitama, 338-0835, Japan
- Keisuke Hamada
- Department of Clinical Engineering, Nagasaki Harbor Medical Center, 6-39, Shinchi-machi, Nagasaki City, Nagasaki, 850-8555, Japan
- Department of Comprehensive Community Care Education, Nagasaki University Graduate School of Biomedical Sciences, 1-14, Bunkyo-machi, Nagasaki City, Nagasaki, 852-8521, Japan
- Yuto Asayama
- Technical Department, Atom Medical Corporation, 2-2-1, Dojo, Sakura-ku, Saitama City, Saitama, 338-0835, Japan
- Masaaki Oohashi
- Technical Department, Atom Medical Corporation, 2-2-1, Dojo, Sakura-ku, Saitama City, Saitama, 338-0835, Japan
- Akira Uchiyama
- Mobile Computing Laboratory, Graduate School of Information Science and Technology, Osaka University, 1-5, Yamadaoka, Suita, Osaka, 565-0871, Japan
- Teruo Higashino
- Mobile Computing Laboratory, Graduate School of Information Science and Technology, Osaka University, 1-5, Yamadaoka, Suita, Osaka, 565-0871, Japan
11. Cheng CH, Wong KL, Chin JW, Chan TT, So RHY. Deep learning methods for remote heart rate measurement: a review and future research agenda. Sensors (Basel) 2021;21:6296. PMID: 34577503; PMCID: PMC8473186; DOI: 10.3390/s21186296.
Abstract
Heart rate (HR) is one of the essential vital signs used to indicate the physiological health of the human body. While traditional HR monitors usually require contact with skin, remote photoplethysmography (rPPG) enables contactless HR monitoring by capturing subtle light changes of skin through a video camera. Given the vast potential of this technology in the future of digital healthcare, remote monitoring of physiological signals has gained significant traction in the research community. In recent years, the success of deep learning (DL) methods for image and video analysis has inspired researchers to apply such techniques to various parts of the remote physiological signal extraction pipeline. In this paper, we discuss several recent advances of DL-based methods specifically for remote HR measurement, categorizing them based on model architecture and application. We further detail relevant real-world applications of remote physiological monitoring and summarize various common resources used to accelerate related research progress. Lastly, we analyze the implications of research findings and discuss research gaps to guide future explorations.
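As a concrete reference for the frequency-domain end of such rPPG pipelines, here is a minimal sketch of HR estimation from a pulse-like trace. The signal is synthetic (a 72 bpm sinusoid plus noise standing in for a green-channel skin-region average per frame); the frame rate and band limits are illustrative choices, not prescribed by any particular paper.

```python
import numpy as np

fs = 30.0                      # assumed camera frame rate (Hz)
t = np.arange(0, 20, 1 / fs)   # 20 s clip

# Synthetic rPPG trace: weak pulse at 1.2 Hz (72 bpm) buried in noise.
rng = np.random.default_rng(0)
signal = 0.05 * np.sin(2 * np.pi * 1.2 * t) + 0.02 * rng.standard_normal(t.size)

# Classic frequency-domain HR estimate: restrict the spectrum to a
# plausible heart-rate band (0.7-4 Hz, i.e. 42-240 bpm), take the peak.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
band = (freqs >= 0.7) & (freqs <= 4.0)
hr_bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {hr_bpm:.1f} bpm")
```

The DL methods surveyed above typically replace the hand-picked skin region and this fixed-band spectral peak with learned spatial attention and learned temporal filters, but the target quantity is the same.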
Affiliation(s)
- Chun-Hong Cheng
- Department of Computer Science, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- PanopticAI, Hong Kong Science and Technology Parks, New Territories, Hong Kong, China
- Kwan-Long Wong
- PanopticAI, Hong Kong Science and Technology Parks, New Territories, Hong Kong, China
- Department of Bioengineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Jing-Wei Chin
- PanopticAI, Hong Kong Science and Technology Parks, New Territories, Hong Kong, China
- Department of Industrial Engineering and Decision Analytics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Tsz-Tai Chan
- PanopticAI, Hong Kong Science and Technology Parks, New Territories, Hong Kong, China
- Department of Industrial Engineering and Decision Analytics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Richard H. Y. So
- PanopticAI, Hong Kong Science and Technology Parks, New Territories, Hong Kong, China
- Department of Industrial Engineering and Decision Analytics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
12. Lyra S, Mayer L, Ou L, Chen D, Timms P, Tay A, Chan PY, Ganse B, Leonhardt S, Hoog Antink C. A deep learning-based camera approach for vital sign monitoring using thermography images for ICU patients. Sensors (Basel) 2021;21:1495. PMID: 33670066; PMCID: PMC7926634; DOI: 10.3390/s21041495.
Abstract
Infrared thermography for camera-based skin temperature measurement is increasingly used in medical practice, e.g., to detect fevers and infections, such as recently in the COVID-19 pandemic. This contactless method is a promising technology to continuously monitor the vital signs of patients in clinical environments. In this study, we investigated both skin temperature trend measurement and the extraction of respiration-related chest movements to determine the respiratory rate, using low-cost hardware in combination with advanced algorithms. In addition, the frequency of medical examinations or visits to the patients was extracted. We implemented a deep learning-based algorithm for real-time vital sign extraction from thermography images. A clinical trial was conducted to record data from patients in an intensive care unit. The YOLOv4-Tiny object detector was applied to extract image regions containing vital signs (head and chest). The infrared frames were manually labeled for evaluation. Validation was performed on a hold-out test dataset of 6 patients and revealed good detector performance (0.75 intersection over union, 0.94 mean average precision). An optical flow algorithm was used to extract the respiratory rate from the chest region. The results show a mean absolute error of 2.69 bpm. We observed a computational performance of 47 fps on an NVIDIA Jetson Xavier NX module for YOLOv4-Tiny, which proves real-time capability on an embedded GPU system. In conclusion, the proposed method can perform real-time vital sign extraction on a low-cost system-on-module and may thus be a useful method for future contactless vital sign measurements.
Affiliation(s)
- Simon Lyra
- Medical Information Technology, Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, 52074 Aachen, Germany
- Leon Mayer
- Medical Information Technology, Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, 52074 Aachen, Germany
- Liyang Ou
- Medical Information Technology, Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, 52074 Aachen, Germany
- David Chen
- Eastern Health Clinical School, Monash University Melbourne, Box Hill, VIC 3128, Australia
- Paddy Timms
- Eastern Health Clinical School, Monash University Melbourne, Box Hill, VIC 3128, Australia
- Andrew Tay
- Eastern Health Clinical School, Monash University Melbourne, Box Hill, VIC 3128, Australia
- Peter Y. Chan
- Eastern Health Clinical School, Monash University Melbourne, Box Hill, VIC 3128, Australia
- Bergita Ganse
- Research Centre for Musculoskeletal Science and Sports Medicine, Manchester Metropolitan University, Manchester M1 5GD, UK
- Steffen Leonhardt
- Medical Information Technology, Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, 52074 Aachen, Germany
- Christoph Hoog Antink
- Medical Information Technology, Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, 52074 Aachen, Germany
- Biomedical Engineering, Electrical Engineering and Information Technology, TU Darmstadt, 64289 Darmstadt, Germany