1
Hausmann J, Salekin MS, Zamzmi G, Mouton PR, Prescott S, Ho T, Sun Y, Goldgof D. Accurate Neonatal Face Detection for Improved Pain Classification in the Challenging NICU Setting. IEEE Access 2024;12:49122-49133. PMID: 38994038; PMCID: PMC11238607; DOI: 10.1109/ACCESS.2024.3383789.
Abstract
There is a tendency for object detection systems using off-the-shelf algorithms to fail when deployed in complex scenes. The present work describes a case for detecting facial expression in post-surgical neonates (newborns) as a modality for predicting and classifying severe pain in the Neonatal Intensive Care Unit (NICU). Our initial testing showed that both an off-the-shelf face detector and a machine learning algorithm trained on adult faces failed to detect facial expression of neonates in the NICU. We improved accuracy in this complex scene by training a state-of-the-art "You-Only-Look-Once" (YOLO) face detection model using the USF-MNPAD-I dataset of neonate faces. At run-time our trained YOLO model showed a difference of 8.6% mean Average Precision (mAP) and 21.2% Area under the ROC Curve (AUC) for automatic classification of neonatal pain compared with manual pain scoring by NICU nurses. Given the challenges, time and effort associated with collecting ground truth from the faces of post-surgical neonates, here we share the weights from training our YOLO model with these facial expression data. These weights can facilitate the further development of accurate strategies for detecting facial expression, which can be used to predict the time to pain onset in combination with other sensory modalities (body movements, crying frequency, vital signs). Reliable predictions of time to pain onset in turn create a therapeutic window of time wherein NICU nurses and providers can implement safe and effective strategies to mitigate severe pain in this vulnerable patient population.
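As an editorial aside (not part of the paper), the mAP figure quoted above rests on matching predicted face boxes to annotated ones by intersection-over-union (IoU). A minimal sketch of that criterion, using the usual mAP@0.5 cutoff; the box format and threshold are assumptions, not details from the paper:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def is_true_positive(pred, gt, thresh=0.5):
    """A detection counts as correct when its IoU with the annotated
    box meets the threshold (0.5 is the common mAP@0.5 cutoff)."""
    return iou(pred, gt) >= thresh
```

mAP then averages precision over recall levels across detections ranked by confidence; the matching rule above is the part that determines which detections count.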
Affiliation(s)
- Jacqueline Hausmann
- Department of Computer Science and Engineering, College of Engineering, University of South Florida, Tampa, FL 33620, USA
- Md Sirajus Salekin
- Department of Computer Science and Engineering, College of Engineering, University of South Florida, Tampa, FL 33620, USA
- Ghada Zamzmi
- Department of Computer Science and Engineering, College of Engineering, University of South Florida, Tampa, FL 33620, USA
- Stephanie Prescott
- College of Nursing, USF Health, University of South Florida, Tampa, FL 33620, USA
- Thao Ho
- Department of Pediatrics, College of Medicine, University of South Florida, Tampa, FL 33606, USA
- Yu Sun
- Department of Computer Science and Engineering, College of Engineering, University of South Florida, Tampa, FL 33620, USA
- Dmitry Goldgof
- Department of Computer Science and Engineering, College of Engineering, University of South Florida, Tampa, FL 33620, USA
2
Shah STH, Shah SAH, Qureshi SA, Di Terlizzi A, Deriu MA. Automated facial characterization and image retrieval by convolutional neural networks. Front Artif Intell 2023;6:1230383. PMID: 38174109; PMCID: PMC10761416; DOI: 10.3389/frai.2023.1230383.
Abstract
Introduction: Developing efficient methods to infer relations among different faces spanning numerous expressions, or on the same face at different times (e.g., disease progression), is an open issue in imaging-related research. In this study, we present a novel method for facial feature extraction, characterization, and identification based on classical computer vision coupled with deep learning, specifically convolutional neural networks.
Methods: We describe the hybrid face characterization system named FRetrAIval (FRAI), a hybrid of the GoogleNet and AlexNet neural network (NN) models. Images analyzed by the FRAI network are preprocessed with computer vision techniques, such as an oriented-gradient-based algorithm that extracts only the face region from any kind of picture. The Aligned Face dataset (AFD) was used to train and test the FRAI solution for extracting image features, and the Labeled Faces in the Wild (LFW) holdout dataset was used for external validation.
Results and discussion: Compared with previous techniques, our methodology showed much better k-Nearest Neighbors (KNN) results, yielding the maximum precision, recall, F1, and F2 scores (92.00%, 92.66%, 92.33%, and 92.52%, respectively) on the AFD dataset and 95.00% for each metric on the LFW dataset. The FRAI model may potentially be used in healthcare and criminology, as well as many other applications where face features must be quickly identified as a fingerprint for a specific identification target.
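The F1 and F2 values in the abstract above follow from precision and recall via the standard F-beta formula. A minimal sketch (not from the paper) that reproduces the reported AFD figures to two decimals:

```python
def f_beta(precision, recall, beta=1.0):
    """F-beta score: the harmonic-style mean of precision and recall,
    weighting recall beta times as much as precision."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# With the abstract's AFD values, precision = 92.00 and recall = 92.66:
# f_beta(92.00, 92.66)           -> ~92.33  (their F1)
# f_beta(92.00, 92.66, beta=2.0) -> ~92.52  (their F2)
```

F2 exceeding F1 here simply reflects that recall is slightly higher than precision, and F2 weights recall more heavily.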
Affiliation(s)
- Syed Taimoor Hussain Shah
- PolitoBIOMed Lab, Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Turin, Italy
- Syed Adil Hussain Shah
- PolitoBIOMed Lab, Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Turin, Italy
- Department of Research and Development (R&D), GPI SpA, Trento, Italy
- Shahzad Ahmad Qureshi
- Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Islamabad, Pakistan
- Marco Agostino Deriu
- PolitoBIOMed Lab, Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Turin, Italy
3
Grooby E, Sitaula C, Ahani S, Holsti L, Malhotra A, Dumont GA, Marzbanrad F. Neonatal Face and Facial Landmark Detection from Video Recordings. Annu Int Conf IEEE Eng Med Biol Soc 2023;2023:1-5. PMID: 38083549; DOI: 10.1109/EMBC40787.2023.10340960.
Abstract
This paper explores automated face and facial landmark detection of neonates, an important first step in many video-based neonatal health applications such as vital sign estimation, pain assessment, sleep-wake classification, and jaundice detection. Using three publicly available datasets of neonates in the clinical environment, 366 images (258 subjects) and 89 images (66 subjects) were annotated for training and testing, respectively. Transfer learning was applied to two YOLO-based models, with input training images augmented with random horizontal flipping, photometric colour distortion, and translation and scaling during each training epoch. Additionally, the re-orientation of input images and fusion of the trained deep learning models were explored. Our proposed model based on YOLOv7Face outperformed existing methods with a mean average precision of 84.8% for face detection and a normalised mean error of 0.072 for facial landmark detection. Overall, this will assist in the development of fully automated neonatal health assessment algorithms.
Clinical relevance: Accurate face and facial landmark detection provides an automated, non-contact option to assist in video-based neonatal health applications.
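The normalised mean error (NME) quoted above is conventionally the mean Euclidean landmark error divided by a reference length, commonly the inter-ocular distance; the paper does not restate its exact normalisation here, so this is a sketch under that standard assumption:

```python
import math

def normalised_mean_error(pred, gt, norm_dist):
    """Mean Euclidean error between predicted and ground-truth landmark
    coordinates, divided by a normalising distance (e.g. inter-ocular)."""
    errs = [math.dist(p, g) for p, g in zip(pred, gt)]
    return sum(errs) / (len(errs) * norm_dist)
```

An NME of 0.072 thus means the average landmark error is about 7.2% of the normalising distance.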
4
Cheng X, Zhu H, Mei L, Luo F, Chen X, Zhao Y, Chen S, Pan Y. Artificial Intelligence Based Pain Assessment Technology in Clinical Application of Real-World Neonatal Blood Sampling. Diagnostics (Basel) 2022;12:1831. PMID: 36010186; PMCID: PMC9406884; DOI: 10.3390/diagnostics12081831.
Abstract
Background: Accurate neonatal pain assessment (NPA) is the key to neonatal pain management, yet it is a challenging task for medical staff. This study aimed to analyze the clinical practicability of the artificial intelligence based NPA (AI-NPA) tool for real-world blood sampling. Method: We performed a prospective study to analyze the consistency of the NPA results given by a self-developed automated NPA system and nurses’ on-site NPAs (OS-NPAs) for 232 newborns during blood sampling in neonatal wards, where the neonatal infant pain scale (NIPS) was used for evaluation. Spearman correlation analysis and the degree of agreement of the pain score and pain grade derived by the NIPS were applied for statistical analysis. Results: Taking the OS-NPA results as the gold standard, the accuracies of the NIPS pain score and pain grade given by the automated NPA system were 88.79% and 95.25%, with kappa values of 0.92 and 0.90 (p < 0.001), respectively. Conclusion: The results of the automated NPA system for real-world neonatal blood sampling are highly consistent with the results of the OS-NPA. Considering the great advantages of automated NPA systems in repeatability, efficiency, and cost, it is worth popularizing the AI technique in NPA for precise and efficient neonatal pain management.
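The kappa values reported above quantify chance-corrected agreement between the automated system and the nurses' on-site scores. A minimal sketch of Cohen's kappa for two raters (an assumption; the abstract does not specify which kappa variant was used):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed - expected) agreement over (1 - expected),
    where expected agreement comes from each rater's marginal label rates."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)
```

Kappa near 0.9, as reported, is conventionally read as almost-perfect agreement, which is why the authors conclude the system tracks the on-site assessments closely.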
Affiliation(s)
- Xiaoying Cheng
- Quality Improvement Office, The Children’s Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Huaiyu Zhu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Linli Mei
- Administration Department of Nosocomial Infection, The Children’s Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Feixiang Luo
- Neonatal Intensive Care Unit, The Children’s Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Xiaofei Chen
- Gastroenterology Department, The Children’s Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Yisheng Zhao
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Shuohui Chen
- Administration Department of Nosocomial Infection, The Children’s Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Correspondence: (S.C.); (Y.P.)
- Yun Pan
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Correspondence: (S.C.); (Y.P.)
5
Salekin MS, Mouton PR, Zamzmi G, Patel R, Goldgof D, Kneusel M, Elkins SL, Murray E, Coughlin ME, Maguire D, Ho T, Sun Y. Future roles of artificial intelligence in early pain management of newborns. Paediatr Neonat Pain 2021;3:134-145. PMID: 35547946; PMCID: PMC8975206; DOI: 10.1002/pne2.12060.
Affiliation(s)
- Md Sirajus Salekin
- Computer Science and Engineering Department, University of South Florida, Tampa, FL, USA
- Ghada Zamzmi
- Computer Science and Engineering Department, University of South Florida, Tampa, FL, USA
- Raj Patel
- Muma College of Business, University of South Florida, Tampa, FL, USA
- Dmitry Goldgof
- Computer Science and Engineering Department, University of South Florida, Tampa, FL, USA
- Marcia Kneusel
- College of Medicine Pediatrics, USF Health, University of South Florida, Tampa, FL, USA
- Denise Maguire
- College of Nursing, USF Health, University of South Florida, Tampa, FL, USA
- Thao Ho
- College of Medicine Pediatrics, USF Health, University of South Florida, Tampa, FL, USA
- Yu Sun
- Computer Science and Engineering Department, University of South Florida, Tampa, FL, USA
6
Li C, Pourtaherian A, van Onzenoort L, Ten WETA, de With PHN. Infant Facial Expression Analysis: Towards a Real-Time Video Monitoring System Using R-CNN and HMM. IEEE J Biomed Health Inform 2021;25:1429-1440. PMID: 33170787; DOI: 10.1109/JBHI.2020.3037031.
Abstract
Manual monitoring of young infants suffering from diseases like reflux is important, since infants can hardly articulate their feelings. In this work, we propose a video-based infant monitoring system for analyzing infant expressions and states that approaches real-time performance. The expressions of interest are discomfort, unhappy, joy, and neutral, whereas the states include sleep, pacifier, and open mouth. Benefiting from the expression analysis, detected discomfort moments can also be correlated with a symptom-related disease measurement, such as a reflux measurement for the diagnosis of gastroesophageal reflux. The system consists of three components: infant expression and state detection, object tracking, and detection compensation. It combines expression detection using Fast R-CNN with a compensated detection that analyzes information from the previous frame via a Hidden Markov Model. The experimental results show a mean average precision of 81.9% and 84.8% for the 4 infant expressions and 3 states, evaluated with both clinical and daily datasets, while the average precision for discomfort detection reaches up to 90%.
7
Analysis of Facial Information for Healthcare Applications: A Survey on Computer Vision-Based Approaches. Information 2020. DOI: 10.3390/info11030128.
Abstract
This paper gives an overview of the cutting-edge approaches that perform facial cue analysis in the healthcare area. The document is not limited to global face analysis but also concentrates on methods related to local cues (e.g., the eyes). A research taxonomy is introduced by dividing the face into its main features: eyes, mouth, muscles, skin, and shape. For each facial feature, the paper details the computer vision tasks that analyze it and the related healthcare goals that could be pursued.