1
Carlini LP, Coutrin GDAS, Ferreira LA, Soares JDCA, Silva GVT, Heiderich TM, Balda RDCX, Barros MCDM, Guinsburg R, Thomaz CE. Human vs machine towards neonatal pain assessment: A comprehensive analysis of the facial features extracted by health professionals, parents, and convolutional neural networks. Artif Intell Med 2024; 147:102724. [PMID: 38184347] [DOI: 10.1016/j.artmed.2023.102724]
Abstract
Neonates cannot verbally communicate pain, hindering the correct identification of this phenomenon. Several clinical scales have been proposed to assess pain, mainly using the facial features of the neonate, but a better comprehension of these features is still required, since several related works have shown the subjectivity of these scales. Meanwhile, computational methods have been implemented to automate neonatal pain assessment and, although accurate, these methods still lack interpretability of their decision-making processes. To address this issue, we propose a facial feature extraction framework to gather information on and investigate human and machine neonatal pain assessment, comparing the facial features attended to by health professionals and parents of neonates with the most relevant features extracted by eXplainable Artificial Intelligence (XAI) methods applied to the VGG-Face and N-CNN deep learning architectures. Our experimental results show that the information extracted by the computational methods is clinically relevant to neonatal pain assessment but does not yet agree with the facial visual attention of health professionals and parents, suggesting that humans and machines can learn from each other to improve their decision-making processes. We believe these findings might advance our understanding of how humans and machines code and decode neonatal facial responses to pain, enabling further improvements both in the clinical scales widely used in practice and in face-based automatic pain assessment tools.
Affiliation(s)
- Lucas Pereira Carlini
- Department of Electrical Engineering, University College FEI, Av. Humberto de Alencar Castelo Branco, 3972-B, Sao Bernardo do Campo, 09850-901, Sao Paulo, Brazil
- Gabriel de Almeida Sá Coutrin
- Department of Electrical Engineering, University College FEI, Av. Humberto de Alencar Castelo Branco, 3972-B, Sao Bernardo do Campo, 09850-901, Sao Paulo, Brazil
- Leonardo Antunes Ferreira
- Department of Electrical Engineering, University College FEI, Av. Humberto de Alencar Castelo Branco, 3972-B, Sao Bernardo do Campo, 09850-901, Sao Paulo, Brazil
- Tatiany Marcondes Heiderich
- Department of Electrical Engineering, University College FEI, Av. Humberto de Alencar Castelo Branco, 3972-B, Sao Bernardo do Campo, 09850-901, Sao Paulo, Brazil; Department of Paediatrics, Federal University of Sao Paulo, R. Botucatu, 740, Sao Paulo, 04024-002, Sao Paulo, Brazil
- Rita de Cássia Xavier Balda
- Department of Paediatrics, Federal University of Sao Paulo, R. Botucatu, 740, Sao Paulo, 04024-002, Sao Paulo, Brazil
- Ruth Guinsburg
- Department of Paediatrics, Federal University of Sao Paulo, R. Botucatu, 740, Sao Paulo, 04024-002, Sao Paulo, Brazil
- Carlos Eduardo Thomaz
- Department of Electrical Engineering, University College FEI, Av. Humberto de Alencar Castelo Branco, 3972-B, Sao Bernardo do Campo, 09850-901, Sao Paulo, Brazil
2
Heiderich TM, Carlini LP, Buzuti LF, Balda RDCX, Barros MCM, Guinsburg R, Thomaz CE. Face-based automatic pain assessment: challenges and perspectives in neonatal intensive care units. J Pediatr (Rio J) 2023; 99:546-560. [PMID: 37331703] [PMCID: PMC10594024] [DOI: 10.1016/j.jped.2023.05.005]
Abstract
OBJECTIVE To describe the challenges and perspectives of the automation of pain assessment in the Neonatal Intensive Care Unit. DATA SOURCES A search for scientific articles published in the last 10 years on automated neonatal pain assessment was conducted in the main health-sciences databases and engineering journal portals, using the descriptors: Pain Measurement, Newborn, Artificial Intelligence, Computer Systems, Software, Automated Facial Recognition. SUMMARY OF FINDINGS Fifteen articles were selected and allowed a broad reflection on three points: first, the literature search did not return all of the automatic methods that exist to date, and those that do exist are not yet effective enough to replace the human eye; second, computational methods are not yet able to automatically detect pain on partially covered faces and need to be tested during the natural movement of the neonate and under different light intensities; third, for research to advance in this area, databases with more neonatal facial images need to be made available for the study of computational methods. CONCLUSION There is still a gap between the computational methods developed for automated neonatal pain assessment and a practical application that can be used at the bedside in real time and that is sensitive, specific, and accurate. The studies reviewed described limitations that could be minimized by the development of a tool that identifies pain by analyzing only uncovered facial regions, and by the creation of a feasible synthetic database of neonatal facial images that is freely available to researchers.
Affiliation(s)
- Tatiany M Heiderich
- Centro Universitário da Fundação Educacional Inaciana (FEI), São Bernardo do Campo, SP, Brazil
- Lucas P Carlini
- Centro Universitário da Fundação Educacional Inaciana (FEI), São Bernardo do Campo, SP, Brazil
- Lucas F Buzuti
- Centro Universitário da Fundação Educacional Inaciana (FEI), São Bernardo do Campo, SP, Brazil
- Ruth Guinsburg
- Universidade Federal de São Paulo (UNIFESP), São Paulo, SP, Brazil
- Carlos E Thomaz
- Centro Universitário da Fundação Educacional Inaciana (FEI), São Bernardo do Campo, SP, Brazil
3
Zhao Y, Zhu H, Chen X, Luo F, Li M, Zhou J, Chen S, Pan Y. Pose-invariant and occlusion-robust neonatal facial pain assessment. Comput Biol Med 2023; 165:107462. [PMID: 37716244] [DOI: 10.1016/j.compbiomed.2023.107462]
Abstract
Neonatal Facial Pain Assessment (NFPA) is essential to improving neonatal pain management. Pose variation and occlusion, which can significantly alter facial appearance, are two major and still understudied barriers to NFPA. We bridge this gap in terms of both method and dataset. Techniques that tackle these challenges in other tasks either rely on pose/occlusion-invariant deep learning methods or first generate a normalized version of the input image before feature extraction; combining these ideas, we argue that it is more effective to jointly perform adversarial learning and end-to-end classification for their mutual benefit. To this end, we propose a Pose-invariant Occlusion-robust Pain Assessment (POPA) framework with two novelties. First, we incorporate adversarial learning-based disturbance mitigation into end-to-end pain-level classification and propose a novel composite loss function for facial representation learning. Second, in contrast to a vanilla discriminator that determines occlusion and pose conditions implicitly, we propose a multi-scale discriminator that determines them explicitly, incorporating local discriminators to enhance the discrimination of key regions. For a comprehensive evaluation, we built the first neonatal pain dataset with disturbance annotations, involving 1091 neonates, and also applied the proposed POPA to the facial expression recognition task. Extensive qualitative and quantitative experiments demonstrate the superiority of POPA.
Affiliation(s)
- Yisheng Zhao
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Huaiyu Zhu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Xiaofei Chen
- Nursing Department, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Feixiang Luo
- Nursing Department, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Mengting Li
- Nursing Department, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Jinyan Zhou
- Nursing Department, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Shuohui Chen
- Hospital Infection-Control Department, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Yun Pan
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
4
Fontaine D, Vielzeuf V, Genestier P, Limeux P, Santucci-Sivilotto S, Mory E, Darmon N, Lanteri-Minet M, Mokhtar M, Laine M, Vistoli D. Artificial intelligence to evaluate postoperative pain based on facial expression recognition. Eur J Pain 2022; 26:1282-1291. [PMID: 35352426] [DOI: 10.1002/ejp.1948]
Abstract
BACKGROUND Pain intensity evaluation by self-report is difficult and biased in non-communicating people, which may contribute to inappropriate pain management. The use of artificial intelligence (AI) to evaluate pain intensity based on automated facial expression analysis has not been evaluated in clinical conditions. METHODS We trained and externally validated a deep learning system (a ResNet-18 convolutional neural network) to identify and classify 2810 facial expressions of 1189 patients, captured before and after surgery, according to their self-reported pain intensity on a numeric rating scale (NRS, 0-10). AI performance was evaluated by accuracy (concordance between the AI prediction and patient-reported pain intensity) and by sensitivity and specificity for diagnosing pain ≥4/10 and ≥7/10. We then compared the AI's performance with that of 33 nurses evaluating pain intensity from facial expressions in the same situation. RESULTS In the external testing set (120 face images), the deep learning system predicted the exact pain intensity among the 11 possible scores (0-10) in 53% of cases, with a mean error of 2.4 points. Its sensitivities for detecting pain ≥4/10 and ≥7/10 were 89.7% and 77.5%, respectively. Nurses estimated the correct NRS pain intensity with a mean accuracy of 14.9% and identified pain ≥4/10 and ≥7/10 with sensitivities of 44.9% and 17.0%. CONCLUSIONS Subject to further improvement of AI performance through additional training, these results suggest that AI based on facial expression analysis could be used to assist physicians in evaluating pain and detecting severe pain, especially in people unable to report their pain appropriately by themselves. SIGNIFICANCE These original findings represent a major step in the development of a fully automated, rapid, standardized and objective method based on facial expression analysis to measure pain and detect severe pain.
Affiliation(s)
- D Fontaine
- Department of Neurosurgery, Centre Hospitalier Universitaire de Nice, Nice, France; FHU INOVPAIN, Centre Hospitalier Universitaire de Nice, Nice, France; Université Cote d'Azur, UR2CA (Unité de Recherche Clinique Côte d'Azur), Nice, France
- S Santucci-Sivilotto
- FHU INOVPAIN, Centre Hospitalier Universitaire de Nice, Nice, France; Université Cote d'Azur, UR2CA (Unité de Recherche Clinique Côte d'Azur), Nice, France
- N Darmon
- Université Cote d'Azur, UR2CA (Unité de Recherche Clinique Côte d'Azur), Nice, France; Department of Psychiatry, Centre Hospitalier Universitaire de Nice, Nice, France
- M Lanteri-Minet
- FHU INOVPAIN, Centre Hospitalier Universitaire de Nice, Nice, France; Université Cote d'Azur, UR2CA (Unité de Recherche Clinique Côte d'Azur), Nice, France; Pain Clinic, Centre Hospitalier Universitaire de Nice, Nice, France; INSERM/UdA U1107, Neuro-Dol, Auvergne University, France
- M Mokhtar
- Université Cote d'Azur, Laboratoire d'Anthropologie et de Psychologie Cognitives et Sociales (LAPCOS), Nice, France
- M Laine
- Université Cote d'Azur, Laboratoire d'Anthropologie et de Psychologie Cognitives et Sociales (LAPCOS), Nice, France
- D Vistoli
- Université Cote d'Azur, Laboratoire d'Anthropologie et de Psychologie Cognitives et Sociales (LAPCOS), Nice, France
5
Prkachin KM, Hammal Z. Computer mediated automatic detection of pain-related behavior: prospect, progress, perils. Front Pain Res 2022; 2. [PMID: 35174358] [PMCID: PMC8846566] [DOI: 10.3389/fpain.2021.788606]
Abstract
Pain is often characterized as a fundamentally subjective phenomenon; however, all pain assessment reduces the experience to observables, with strengths and limitations. Most evidence about pain derives from observations of pain-related behavior. There has been considerable progress in articulating the properties of behavioral indices of pain; especially, but not exclusively those based on facial expression. An abundant literature shows that a limited subset of facial actions, with homologs in several non-human species, encode pain intensity across the lifespan. Unfortunately, acquiring such measures remains prohibitively impractical in many settings because it requires trained human observers and is laborious. The advent of the field of affective computing, which applies computer vision and machine learning (CVML) techniques to the recognition of behavior, raised the prospect that advanced technology might overcome some of the constraints limiting behavioral pain assessment in clinical and research settings. Studies have shown that it is indeed possible, through CVML, to develop systems that track facial expressions of pain. There has since been an explosion of research testing models for automated pain assessment. More recently, researchers have explored the feasibility of multimodal measurement of pain-related behaviors. Commercial products that purport to enable automatic, real-time measurement of pain expression have also appeared. Though progress has been made, this field remains in its infancy and there is risk of overpromising on what can be delivered. Insufficient adherence to conventional principles for developing valid measures and drawing appropriate generalizations to identifiable populations could lead to scientifically dubious and clinically risky claims. 
There is a particular need for the development of databases containing samples from various settings in which pain may or may not occur, meticulously annotated according to standards that would permit sharing, subject to international privacy standards. Researchers and users need to be sensitive to the limitations of the technology (e.g., the potential reification of biases that are irrelevant to the assessment of pain) and to its potentially problematic social implications.
Affiliation(s)
- Kenneth M Prkachin
- Department of Psychology, University of Northern British Columbia, Prince George, BC, Canada
- Zakia Hammal
- The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, United States
6
Campbell-Yeo M, Eriksson M, Benoit B. Assessment and management of pain in preterm infants: A practice update. Children (Basel) 2022; 9:244. [PMID: 35204964] [PMCID: PMC8869922] [DOI: 10.3390/children9020244]
Abstract
Infants born preterm are at a high risk for repeated pain exposure in early life. Despite valid tools to assess pain in non-verbal infants and effective interventions to reduce pain associated with medical procedures required as part of their care, many infants receive little to no pain-relieving interventions. Moreover, parents remain significantly underutilized in provision of pain-relieving interventions, despite the known benefit of their involvement. This narrative review provides an overview of the consequences of early exposure to untreated pain in preterm infants, recommendations for a standardized approach to pain assessment in preterm infants, effectiveness of non-pharmacologic and pharmacologic pain-relieving interventions, and suggestions for greater active engagement of parents in the pain care for their preterm infant.
Affiliation(s)
- Marsha Campbell-Yeo
- School of Nursing, Faculty of Health, Dalhousie University, Halifax, NS B3H 4R2, Canada
- Department of Pediatrics, Psychology and Neuroscience, Dalhousie University, Halifax, NS B3H 4R2, Canada
- IWK Health, Halifax, NS B3K 6R8, Canada
- Mats Eriksson
- School of Health Sciences, Faculty of Medicine and Health, Örebro University, SE-701 82 Örebro, Sweden
- Britney Benoit
- Rankin School of Nursing, St. Francis Xavier University, Antigonish, NS B2G 2N5, Canada
7
Rathee N, Pahal S, Sheoran P. Pain detection from facial expressions using domain adaptation technique. Pattern Anal Appl 2021. [DOI: 10.1007/s10044-021-01025-4]
8
Hassan T, Seus D, Wollenberg J, Weitz K, Kunz M, Lautenbacher S, Garbas JU, Schmid U. Automatic detection of pain from facial expressions: A survey. IEEE Trans Pattern Anal Mach Intell 2021; 43:1815-1831. [PMID: 31825861] [DOI: 10.1109/tpami.2019.2958341]
Abstract
Pain sensation is essential for survival, since it draws attention to physical threat to the body. Pain assessment is usually done through self-reports. However, self-assessment of pain is not available in the case of noncommunicative patients, and therefore, observer reports should be relied upon. Observer reports of pain could be prone to errors due to subjective biases of observers. Moreover, continuous monitoring by humans is impractical. Therefore, automatic pain detection technology could be deployed to assist human caregivers and complement their service, thereby improving the quality of pain management, especially for noncommunicative patients. Facial expressions are a reliable indicator of pain, and are used in all observer-based pain assessment tools. Following the advancements in automatic facial expression analysis, computer vision researchers have tried to use this technology for developing approaches for automatically detecting pain from facial expressions. This paper surveys the literature published in this field over the past decade, categorizes it, and identifies future research directions. The survey covers the pain datasets used in the reviewed literature, the learning tasks targeted by the approaches, the features extracted from images and image sequences to represent pain-related information, and finally, the machine learning methods used.
9
Rezaei S, Moturu A, Zhao S, Prkachin KM, Hadjistavropoulos T, Taati B. Unobtrusive pain monitoring in older adults with dementia using pairwise and contrastive training. IEEE J Biomed Health Inform 2021; 25:1450-1462. [PMID: 33338024] [DOI: 10.1109/jbhi.2020.3045743]
Abstract
Although pain is frequent in old age, older adults are often undertreated for pain. This is especially the case for long-term care residents with moderate to severe dementia who cannot report their pain because of cognitive impairments that accompany dementia. Nursing staff acknowledge the challenges of effectively recognizing and managing pain in long-term care facilities due to lack of human resources and, sometimes, expertise to use validated pain assessment approaches on a regular basis. Vision-based ambient monitoring will allow for frequent automated assessments so care staff could be automatically notified when signs of pain are displayed. However, existing computer vision techniques for pain detection are not validated on faces of older adults or people with dementia, and this population is not represented in existing facial expression datasets of pain. We present the first fully automated vision-based technique validated on a dementia cohort. Our contributions are threefold. First, we develop a deep learning-based computer vision system for detecting painful facial expressions on a video dataset that is collected unobtrusively from older adult participants with and without dementia. Second, we introduce a pairwise comparative inference method that calibrates to each person and is sensitive to changes in facial expression while using training data more efficiently than sequence models. Third, we introduce a fast contrastive training method that improves cross-dataset performance. Our pain estimation model outperforms baselines by a wide margin, especially when evaluated on faces of people with dementia. Pre-trained model and demo code available at https://github.com/TaatiTeam/pain_detection_demo.
10
Xin X, Lin X, Yang S, Zheng X. Pain intensity estimation based on a spatial transformation and attention CNN. PLoS One 2020; 15:e0232412. [PMID: 32822348] [PMCID: PMC7444520] [DOI: 10.1371/journal.pone.0232412]
Abstract
Models designed to detect abnormalities that reflect disease from facial structures are an emerging area of research for automated facial analysis, which has important potential value in smart healthcare applications. However, most of the proposed models directly analyze the whole face image, including the background, and rarely consider the effects of the background and of different face regions on the analysis results. In view of these effects, we propose an end-to-end attention network with spatial transformation to estimate different pain intensities. In the proposed method, the face image is first provided as input to a spatial transformer network to remove background interference; then, an attention mechanism is used to adaptively adjust the weights of different face regions of the transformed face image; finally, a convolutional neural network (CNN) with a softmax output is used to classify the pain levels. Extensive experiments and analysis are conducted on a benchmark, publicly available database, the UNBC-McMaster shoulder pain database. To verify the superiority of the proposed method, comparisons with basic CNNs and with the state of the art are performed. The experiments show that the spatial transformation and attention mechanism introduced in our method significantly improve estimation performance and outperform the state of the art.
Affiliation(s)
- Xuwu Xin
- The Second Affiliated Hospital of Shantou University Medical College, Shantou, China
- Xiaoyan Lin
- The Second Affiliated Hospital of Shantou University Medical College, Shantou, China
- Shengfu Yang
- The First Affiliated Hospital of Jinan University, Guangzhou, China
- Xin Zheng
- Shantou Chaonan Minsheng Hospital, Shantou, China
11
Brahnam S, Nanni L, McMurtrey S, Lumini A, Brattin R, Slack M, Barrier T. Neonatal pain detection in videos using the iCOPEvid dataset and an ensemble of descriptors extracted from Gaussian of Local Descriptors. Appl Comput Inform 2020. [DOI: 10.1016/j.aci.2019.05.003]
Abstract
Diagnosing pain in neonates is difficult but critical. Although approximately thirty manual pain instruments have been developed for neonatal pain diagnosis, most are complex, multifactorial, and geared toward research. The goals of this work are twofold: 1) to develop a new video dataset for automatic neonatal pain detection called iCOPEvid (infant Classification Of Pain Expressions videos), and 2) to present a classification system that sets a challenging comparison performance on this dataset. The iCOPEvid dataset contains 234 videos of 49 neonates experiencing a set of noxious stimuli, a period of rest, and an acute pain stimulus. From these videos, 20 s segments are extracted and grouped into two classes: pain (49) and nopain (185), with the nopain video segments handpicked to produce a highly challenging dataset. An ensemble of twelve global and local descriptors with a Bag-of-Features approach is utilized to improve the performance of some new descriptors based on Gaussian of Local Descriptors (GOLD). The basic classifier used in the ensembles is the Support Vector Machine, and decisions are combined by sum rule. These results are compared with standard methods, some deep learning approaches, and 185 human assessments. Our best machine learning methods are shown to outperform the human judges.
12
Cheng D, Liu D, Philpotts LL, Turner DP, Houle TT, Chen L, Zhang M, Yang J, Zhang W, Deng H. Current state of science in machine learning methods for automatic infant pain evaluation using facial expression information: study protocol of a systematic review and meta-analysis. BMJ Open 2019; 9:e030482. [PMID: 31831532] [PMCID: PMC6924806] [DOI: 10.1136/bmjopen-2019-030482]
Abstract
INTRODUCTION Infants can experience pain similar to adults, and improperly controlled pain stimuli could have a long-term adverse impact on their cognitive and neurological function development. The biggest challenge of achieving good infant pain control is obtaining objective pain assessment when direct communication is lacking. For years, computer scientists have developed many different facial expression-centred machine learning (ML) methods for automatic infant pain assessment. Many of these ML algorithms showed rather satisfactory performance and have demonstrated good potential to be further enhanced for implementation in real-world clinical settings. To date, there is no prior research that has systematically summarised and compared the performance of these ML algorithms. Our proposed meta-analysis will provide the first comprehensive evidence on this topic to guide further ML algorithm development and clinical implementation. METHODS AND ANALYSIS We will search four major public electronic medical and computer science databases including Web of Science, PubMed, Embase and IEEE Xplore Digital Library from January 2008 to present. All the articles will be imported into the Covidence platform for study eligibility screening and inclusion. Study-level extracted data will be stored in the Systematic Review Data Repository online platform. The primary outcome will be the prediction accuracy of the ML model. The secondary outcomes will be model utility measures including generalisability, interpretability and computational efficiency. All extracted outcome data will be imported into RevMan V.5.2.1 software and R V3.3.2 for analysis. Risk of bias will be summarised using the latest Prediction Model Study Risk of Bias Assessment Tool. ETHICS AND DISSEMINATION This systematic review and meta-analysis will only use study-level data from public databases, thus formal ethical approval is not required. 
The results will be disseminated in the form of an official publication in a peer-reviewed journal and/or presentation at relevant conferences. PROSPERO REGISTRATION NUMBER CRD42019118784.
Affiliation(s)
- Dan Cheng
- Department of Anesthesiology, Pain and Perioperative Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, USA
- Dianbo Liu
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Dana P Turner
- Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, USA
- Timothy T Houle
- Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, USA
- Lucy Chen
- Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, USA
- Miaomiao Zhang
- Department of Engineering, University of Virginia, Charlottesville, Virginia, USA
- Jianjun Yang
- Department of Anesthesiology, Pain and Perioperative Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Wei Zhang
- Department of Anesthesiology, Pain and Perioperative Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Hao Deng
- Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, USA
- DrPH Program, Johns Hopkins University Bloomberg School of Public Health, Baltimore, Maryland, USA
13
Abstract
Hospitalized newborn infants experience pain that can have negative short- and long-term consequences and thus should be prevented and treated. National and international guidelines state that adequate pain management requires valid pain assessment. Nociceptive signals cause a cascade of physical and behavioral reactions that, alone or in combination, can be observed and used to assess the presence and intensity of pain. Units that care for newborn infants must adopt pain assessment tools sufficient to cover the gestational ages and pain types that occur in their setting. Pain assessment should be performed on a regular basis, and any detection of pain should be acted on. Future research should focus on developing and validating pain assessment tools for specific situations.
Affiliation(s)
- Mats Eriksson
- Faculty of Medicine and Health, School of Health Sciences, Örebro University, S-701 85, Örebro, Sweden.
- Marsha Campbell-Yeo
- Faculty of Medicine and Health, School of Health Sciences, Örebro University, S-701 85, Örebro, Sweden; School of Nursing, Faculty of Health, Departments of Pediatrics, Psychology & Neuroscience, Dalhousie University, 5850/5890 University Ave, Halifax, NS, B3K 6R8, Canada; Centre for Pediatric Pain Research, IWK Health Centre, Halifax, Canada.
14
A Spatiotemporal Convolutional Neural Network for Automatic Pain Intensity Estimation from Facial Dynamics. Int J Comput Vis 2019. [DOI: 10.1007/s11263-019-01191-3]
15
Zhi R, Zamzmi GZD, Goldgof D, Ashmeade T, Sun Y. Automatic Infants' Pain Assessment by Dynamic Facial Representation: Effects of Profile View, Gestational Age, Gender, and Race. J Clin Med 2018; 7:E173. [PMID: 29997313] [PMCID: PMC6069472] [DOI: 10.3390/jcm7070173]
Abstract
Infants' early exposure to painful procedures can have negative short- and long-term effects on cognitive, neurological, and brain development. However, infants cannot express their subjective pain experience, as they do not yet communicate verbally. Facial expression is the most specific pain indicator and has been effectively employed for automatic pain recognition. In this paper, a dynamic pain facial expression representation and fusion scheme for automatic pain assessment in infants is proposed, combining temporal appearance facial features with temporal geometric facial features. We investigate the effects of various factors that influence pain reactivity in infants, such as the individual variables of gestational age, gender, and race. Different automatic infant pain assessment models are constructed, depending on these influencing factors as well as on the facial view (frontal or profile), which affect the model's ability to recognize pain. It can be concluded that profile-based infant pain assessment is feasible, as its performance is almost as good as that of the whole face. Moreover, gestational age is the most influential factor in pain assessment, and it is necessary to construct specific models depending on it, mainly because infants with low gestational age lack behavioral communication ability due to limited neurological development. To the best of our knowledge, this is the first study of infant pain recognition to consider profile facial views together with these individual variables.
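The fusion scheme described in this abstract can be sketched in outline: per-frame appearance features and per-frame geometric features are each pooled over time, then concatenated into a single clip-level descriptor for a classifier. A minimal sketch (the feature dimensions and mean/std pooling here are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

def temporal_pool(per_frame):
    """Pool a (frames x dims) feature sequence into a fixed-length vector
    by taking the per-dimension mean and standard deviation over time."""
    f = np.asarray(per_frame, dtype=float)
    return np.concatenate([f.mean(axis=0), f.std(axis=0)])

def fuse_features(appearance_seq, geometric_seq):
    """Late fusion: concatenate the pooled appearance descriptor (e.g.,
    per-frame texture histograms) with the pooled geometric descriptor
    (e.g., per-frame landmark distances) into one clip-level vector."""
    return np.concatenate([temporal_pool(appearance_seq),
                           temporal_pool(geometric_seq)])

# Illustrative shapes: 30 frames, 59-dim appearance, 6-dim geometric features.
rng = np.random.default_rng(0)
fused = fuse_features(rng.random((30, 59)), rng.random((30, 6)))
```

The fused vector has dimension 2·59 + 2·6 = 130; any standard classifier can then be trained on such descriptors.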
Affiliation(s)
- Ruicong Zhi
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China.
- Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing 100083, China.
- Dmitry Goldgof
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA.
- Terri Ashmeade
- College of Medicine Pediatrics, University of South Florida, Tampa, FL 33620, USA.
- Yu Sun
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA.
16
Dawes TR, Eden-Green B, Rosten C, Giles J, Governo R, Marcelline F, Nduka C. Objectively measuring pain using facial expression: is the technology finally ready? Pain Manag 2018; 8:105-113. [PMID: 29468939] [DOI: 10.2217/pmt-2017-0049]
Abstract
Currently, clinicians observe pain-related behaviors and use patient self-report measures in order to determine pain severity. This paper reviews the evidence when facial expression is used as a measure of pain. We review the literature reporting the relevance of facial expression as a diagnostic measure, which facial movements are indicative of pain, and whether such movements can be reliably used to measure pain. We conclude that although the technology for objective pain measurement is not yet ready for use in clinical settings, the potential benefits to patients in improved pain management, combined with the advances being made in sensor technology and artificial intelligence, provide opportunities for research and innovation.
Affiliation(s)
- Thomas Richard Dawes
- Department of Anaesthesia, Queen Victoria Hospital, East Grinstead, West Sussex RH19 3DZ, UK
- Ben Eden-Green
- Department of Anaesthesia, Queen Victoria Hospital, East Grinstead, West Sussex RH19 3DZ, UK
- Claire Rosten
- School of Health Sciences, University of Brighton, Falmer BN1 6PP, UK
- Julian Giles
- Department of Anaesthesia, Queen Victoria Hospital, East Grinstead, West Sussex RH19 3DZ, UK
- Ricardo Governo
- Brighton & Sussex Medical School, University of Sussex, Brighton BN1 9PX, UK
- Francesca Marcelline
- Brighton & Sussex Library & Knowledge Service, Royal Sussex County Hospital, Brighton BN2 5BE, UK
- Charles Nduka
- Department of Plastic & Reconstructive Surgery, Queen Victoria Hospital, East Grinstead, West Sussex RH19 3DZ, UK
17
Zamzmi G, Kasturi R, Goldgof D, Zhi R, Ashmeade T, Sun Y. A Review of Automated Pain Assessment in Infants: Features, Classification Tasks, and Databases. IEEE Rev Biomed Eng 2017; 11:77-96. [PMID: 29989992] [DOI: 10.1109/rbme.2017.2777907]
Abstract
Bedside caregivers assess infants' pain at fixed intervals by observing specific behavioral and physiological signs of pain. This standard has two main limitations. First, the assessment is intermittent, which might lead to missing pain when the infants are left unattended. Second, it is inconsistent, since it depends on the observer's subjective judgment and differs between observers. Intermittent and inconsistent assessment can lead to inadequate treatment and, therefore, cause serious long-term consequences. To mitigate these limitations, the current standard can be augmented by an automated system that monitors infants continuously and provides quantitative and consistent assessment of pain. Several automated methods have been introduced to assess infants' pain automatically based on analysis of behavioral or physiological pain indicators. This paper comprehensively reviews the automated approaches (i.e., approaches to feature extraction) for analyzing infants' pain and the current efforts in automatic pain recognition. In addition, it reviews the databases available to the research community and discusses the current limitations of automated pain assessment.
18
Thevenot J, Lopez MB, Hadid A. A Survey on Computer Vision for Assistive Medical Diagnosis From Faces. IEEE J Biomed Health Inform 2017; 22:1497-1511. [PMID: 28991753] [DOI: 10.1109/jbhi.2017.2754861]
Abstract
Automatic medical diagnosis is an emerging center of interest in computer vision, as it provides unobtrusive, objective information on a patient's condition. The face, as a mirror of health status, can reveal symptomatic indications of specific diseases. Thus, the detection of facial abnormalities or atypical features is of utmost importance when it comes to medical diagnostics. This survey aims to give an overview of the recent developments in medical diagnostics from facial images based on computer vision methods. Various approaches have been considered to assess facial symptoms and to eventually provide further help to the practitioners. However, the developed tools are still seldom used in clinical practice, since their reliability is still a concern due to the lack of clinical validation of the methodologies and their inadequate applicability. Nonetheless, efforts are being made to provide robust solutions suitable for healthcare environments, by dealing with practical issues such as real-time assessment or patient positioning. This survey provides an updated collection of the most relevant and innovative solutions in facial image analysis. The findings show that, with the help of computer vision methods, over 30 medical conditions can be preliminarily diagnosed from the automatic detection of some of their symptoms. Furthermore, future perspectives, such as the need for interdisciplinary collaboration and for collecting publicly available databases, are highlighted.
19
Zamzmi G, Pai CY, Goldgof D, Kasturi R, Sun Y, Ashmeade T. Automated Pain Assessment in Neonates. Image Analysis 2017. [DOI: 10.1007/978-3-319-59129-2_30]
20
Hasan MK, Ahsan GMT, Ahamed SI, Love R, Salim R. Pain Level Detection From Facial Image Captured by Smartphone. 2016. [DOI: 10.2197/ipsjjip.24.598]
21
Interpretation of appearance: the effect of facial features on first impressions and personality. PLoS One 2014; 9:e107721. [PMID: 25233221] [PMCID: PMC4169442] [DOI: 10.1371/journal.pone.0107721]
Abstract
Appearance is known to influence social interactions, which in turn could potentially influence personality development. In this study we focus on discovering the relationship between self-reported personality traits, first impressions and facial characteristics. The results reveal that several personality traits can be read above chance from a face, and that facial features influence first impressions. Despite the former, our prediction model fails to reliably infer personality traits from either facial features or first impressions. First impressions, however, could be inferred more reliably from facial features. We have generated artificial, extreme faces visualising the characteristics having an effect on first impressions for several traits. Conclusively, we find a relationship between first impressions, some personality traits and facial features and consolidate that people on average assess a given face in a highly similar manner.
22
Arneric SP, Laird JM, Chappell AS, Kennedy JD. Tailoring chronic pain treatments for the elderly: are we prepared for the challenge? Drug Discov Today 2014; 19:8-17. [DOI: 10.1016/j.drudis.2013.08.017]
23

24
Shih FY, Cheng S, Chuang CF, Wang PSP. Extracting faces and facial features from color images. Int J Pattern Recogn 2011. [DOI: 10.1142/s0218001408006296]
Abstract
In this paper, we present image processing and pattern recognition techniques to extract human faces and facial features from color images. First, we segment a color image into skin and non-skin regions using a Gaussian skin-color model. Then, we apply mathematical morphology and region-filling techniques for noise removal and hole filling. We determine whether a skin region is a face candidate by its size and shape. Principal component analysis (PCA) is used to verify face candidates. We create an ellipse model to roughly locate the eye and mouth areas, and apply a support vector machine (SVM) to classify them. Finally, we develop knowledge rules to verify the eyes. Experimental results show that our algorithm achieves an accuracy rate of 96.7% in face detection and 90.0% in facial feature extraction.
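The first stage of this pipeline, a Gaussian skin-color model, can be sketched directly. The sketch below works in normalized r-g chromaticity space, a common choice that discounts brightness; the exact color space and threshold used by the paper are assumptions here, and the sample pixels are purely illustrative:

```python
import numpy as np

def fit_skin_model(skin_pixels):
    """Fit a single Gaussian to skin samples in normalized r-g space."""
    rgb = np.asarray(skin_pixels, dtype=float)
    s = np.clip(rgb.sum(axis=1, keepdims=True), 1e-9, None)
    rg = rgb[:, :2] / s                         # chromaticity (r, g)
    mean = rg.mean(axis=0)
    cov = np.cov(rg.T) + 1e-6 * np.eye(2)       # regularize for stability
    return mean, np.linalg.inv(cov)

def skin_likelihood(pixel, mean, cov_inv):
    """Gaussian (Mahalanobis-based) score of how skin-like a pixel is."""
    rgb = np.asarray(pixel, dtype=float)
    rg = rgb[:2] / max(rgb.sum(), 1e-9)
    d = rg - mean
    return float(np.exp(-0.5 * d @ cov_inv @ d))

# Hypothetical skin samples; in practice these come from labeled training data.
mean, cov_inv = fit_skin_model(
    [(200, 140, 110), (190, 130, 100), (210, 150, 120),
     (180, 120, 95), (205, 135, 105)])
```

Thresholding this likelihood yields the binary skin mask that the morphology and region-filling steps then clean up.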
Affiliation(s)
- Frank Y. Shih
- College of Computing Sciences, New Jersey Institute of Technology, Newark, NJ 07102, USA
- Shouxian Cheng
- College of Computing Sciences, New Jersey Institute of Technology, Newark, NJ 07102, USA
- Chao-Fa Chuang
- College of Computing Sciences, New Jersey Institute of Technology, Newark, NJ 07102, USA
- Patrick S. P. Wang
- College of Computer and Information Science, Northeastern University, Boston, MA 02115, USA
25
Nanni L, Lumini A, Brahnam S. Local binary patterns variants as texture descriptors for medical image analysis. Artif Intell Med 2010; 49:117-25. [PMID: 20338737] [DOI: 10.1016/j.artmed.2010.02.006]
Abstract
OBJECTIVE This paper focuses on the use of image-based machine learning techniques in medical image analysis. In particular, we present some variants of local binary patterns (LBP), which are widely considered the state of the art among texture descriptors. After a detailed review of the literature on existing LBP variants, discussing the most salient approaches along with their pros and cons, we report new experiments using several LBP-based descriptors and propose a set of novel texture descriptors for the representation of biomedical images. The standard LBP operator is defined as a gray-scale invariant texture measure, derived from a general definition of texture in a local neighborhood. Our variants are obtained by considering different shapes for the neighborhood calculation and different encodings for the evaluation of the local gray-scale difference. These sets of features are then used to train a machine-learning classifier (a stand-alone support vector machine). METHODS AND MATERIALS Extensive experiments are conducted on three datasets. RESULTS AND CONCLUSION Our results show that the novel variant named elongated quinary patterns (EQP) is the best-performing of the methods proposed in this work for extracting information from a texture, across all the tested datasets. EQP is based on an elliptic neighborhood and a 5-level scale for encoding the local gray-scale difference. Particularly interesting are the results on the widely studied 2D-HeLa dataset, where, to the best of our knowledge, the proposed descriptor obtains the highest performance among all the texture descriptors tested in the literature.
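The standard LBP operator referenced above is compact enough to state directly: each pixel's 8 neighbors are thresholded against the center value and read off as an 8-bit code, and the histogram of codes over the image is the texture descriptor. The variants in the paper change the neighborhood shape and replace the binary threshold with, e.g., a 5-level quinary encoding; the sketch below covers only the standard operator, with a bit ordering chosen for illustration:

```python
def lbp_code(patch):
    """LBP code for a 3x3 patch: each neighbor contributes a 1-bit if it is
    >= the center value, read clockwise from the top-left (MSB first)."""
    c = patch[1][1]
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for i, j in ring:
        code = (code << 1) | int(patch[i][j] >= c)
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels -- the
    texture descriptor fed to a classifier (an SVM in the paper)."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for col in range(1, len(img[0]) - 1):
            patch = [row[col - 1:col + 2] for row in img[r - 1:r + 2]]
            hist[lbp_code(patch)] += 1
    return hist
```

For example, `lbp_code([[1, 2, 3], [4, 5, 6], [7, 8, 9]])` yields 30 (bits 00011110) under this ordering.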
Affiliation(s)
- Loris Nanni
- Department of Electronic, Informatics and Systems, Università di Bologna, Via Venezia 52, 47023 Cesena, Italy.
26
Gholami B, Haddad WM, Tannenbaum AR. Relevance vector machine learning for neonate pain intensity assessment using digital imaging. IEEE Trans Biomed Eng 2010; 57:1457-66. [PMID: 20172803] [DOI: 10.1109/tbme.2009.2039214]
Abstract
Pain assessment in patients who are unable to verbally communicate is a challenging problem. The fundamental limitations of pain assessment in neonates stem from subjective assessment criteria rather than quantifiable and measurable data. This often results in poor-quality and inconsistent pain management. Recent advancements in pattern recognition using relevance vector machine (RVM) learning can assist medical staff in assessing pain by constantly monitoring the patient and providing the clinician with quantifiable data for pain management. The RVM classification technique is a Bayesian extension of the support vector machine (SVM) algorithm, which achieves comparable performance to SVM while providing posterior probabilities for class memberships and a sparser model. If classes represent "pure" facial expressions (i.e., extreme expressions that an observer can identify with a high degree of confidence), then the posterior probability that some intermediate facial expression belongs to a class can provide an estimate of the intensity of such an expression. In this paper, we use the RVM classification technique to distinguish pain from non-pain in neonates as well as to assess their pain intensity levels. We also correlate our results with the pain intensity assessed by expert and non-expert human examiners.
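The key idea here, reading a probabilistic classifier's posterior for the "pain" class as a graded intensity estimate, does not depend on the RVM specifically. As a deliberately simplified stand-in (a one-feature logistic model trained by gradient descent, with a hypothetical facial-action score as the input feature), the mechanism looks like this:

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit p(pain | x) = sigmoid(w*x + b) on a 1-D feature by gradient
    descent. Stand-in for the RVM: any probabilistic classifier exposes
    the same interface -- a posterior readable as an intensity score."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

def pain_intensity(x, w, b):
    """Posterior probability of 'pain', used as a 0-1 intensity score."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Hypothetical data: facial-action scores for non-pain (0) and pain (1) faces.
w, b = train_logistic([0.1, 0.2, 0.3, 0.7, 0.8, 0.9], [0, 0, 0, 1, 1, 1])
```

An intermediate expression then receives an intermediate posterior, which is exactly how the paper turns classification into intensity estimation.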
Affiliation(s)
- Behnood Gholami
- School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0150, USA.
27
Wu J, Wang J, Liu L. Feature extraction via KPCA for classification of gait patterns. Hum Mov Sci 2007; 26:393-411. [PMID: 17509708 DOI: 10.1016/j.humov.2007.01.015]
Abstract
Automated recognition of gait pattern change is important in medical diagnostics as well as in the early identification of at-risk gait in the elderly. We evaluated the use of kernel-based principal component analysis (KPCA) to extract more gait features (i.e., to capture more of the information contained in human movement) and thus to improve the classification of gait patterns. 3D gait data of 24 young and 24 elderly participants were acquired using an OPTOTRAK 3020 motion analysis system during normal walking, and a total of 36 spatio-temporal and kinematic gait variables were extracted from the recorded data. KPCA was used first for nonlinear feature extraction, and its effect on a subsequent classification was then evaluated in combination with learning algorithms such as support vector machines (SVMs). Cross-validation test results indicated that the proposed technique could spread the information about the gait's kinematic structure across more nonlinear principal components, thus providing additional discriminatory information for improving gait classification performance. KPCA's feature extraction ability was only slightly affected by the choice of kernel function (polynomial or radial basis function). The combination of KPCA and SVM could identify young-elderly gait patterns with 91% accuracy, a markedly improved performance compared to the combination of PCA and SVM. These results suggest that nonlinear feature extraction by KPCA improves the classification of young-elderly gait patterns, and holds considerable potential for future applications in direct dimensionality reduction and interpretation of multiple gait signals.
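The KPCA step described in this abstract, mapping gait variables through a kernel before extracting principal components, can be sketched with an RBF kernel. The kernel choice and γ value below are illustrative (the paper also evaluates a polynomial kernel), and the two-group toy data stands in for the young/elderly gait variables that an SVM would then classify:

```python
import numpy as np

def kpca_fit_transform(X, gamma=0.1, n_components=2):
    """Kernel PCA with an RBF kernel: build the kernel matrix, double-center
    it in feature space, and project the training samples onto the leading
    kernel principal axes."""
    X = np.asarray(X, dtype=float)
    sq = (X ** 2).sum(axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = len(X)
    J = np.ones((n, n)) / n
    Kc = K - J @ K - K @ J + J @ K @ J          # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)             # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    vals = np.clip(vals[idx], 0.0, None)
    return vecs[:, idx] * np.sqrt(vals)         # nonlinear component scores

# Toy stand-in for two gait groups: five samples near 0, five near 10.
rng = np.random.default_rng(1)
Z = kpca_fit_transform(np.vstack([rng.normal(0.0, 0.05, size=(5, 2)),
                                  rng.normal(10.0, 0.05, size=(5, 2))]))
```

On such well-separated groups the first kernel component alone separates the two classes, which is the discriminatory information the downstream SVM exploits.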
Affiliation(s)
- Jianning Wu
- Key Laboratory of Biomedical Information Engineering of Education Ministry, Xi'an Jiaotong University, Xi'an 710049, China.
28
Introduction to Neonatal Facial Pain Detection Using Common and Advanced Face Classification Techniques. Advanced Computational Intelligence Paradigms in Healthcare – 1, 2007. [DOI: 10.1007/978-3-540-47527-9_9]