1
Li H, Bu Q, Shi X, Xu X, Li J. Non-invasive medical imaging technology for the diagnosis of burn depth. Int Wound J 2024; 21:e14681. [PMID: 38272799] [PMCID: PMC10805628] [DOI: 10.1111/iwj.14681]
Abstract
Currently, the clinical diagnosis of burn depth primarily relies on physicians' judgements based on patients' symptoms and physical signs, particularly the morphological characteristics of the wound. This method depends heavily on the individual doctor's clinical experience, proving challenging for less experienced or primary care physicians, and results often vary from one practitioner to another. Therefore, scholars have been exploring objective, quantitative auxiliary examination techniques to enhance the accuracy and consistency of burn depth diagnosis. Non-invasive medical imaging technology, with its significant advantages in examining tissue surface morphology, blood flow in deep tissue, and changes in tissue structure and composition, has become a focus of burn diagnostic technology research in recent years. This paper reviews various non-invasive medical imaging technologies that have shown potential in burn depth diagnosis. These technologies are summarized in terms of imaging principles, current research status, advantages and limitations, aiming to provide burn specialists with a reference for clinical application and research.
Affiliation(s)
- Hang Li
- Department of Burns and Plastic Surgery, Second Affiliated Hospital of Air Force Medical University, Xi'an, P.R. China
- Qilong Bu
- Bioinspired Engineering and Biomechanics Center, Xi'an Jiaotong University, Xi'an, P.R. China
- Xufeng Shi
- Department of Burns and Plastic Surgery, Second Affiliated Hospital of Air Force Medical University, Xi'an, P.R. China
- Xiayu Xu
- Bioinspired Engineering and Biomechanics Center, Xi'an Jiaotong University, Xi'an, P.R. China
- Jing Li
- Department of Burns and Plastic Surgery, Second Affiliated Hospital of Air Force Medical University, Xi'an, P.R. China
2
Taib BG, Karwath A, Wensley K, Minku L, Gkoutos GV, Moiemen N. Artificial intelligence in the management and treatment of burns: A systematic review and meta-analyses. J Plast Reconstr Aesthet Surg 2023; 77:133-161. [PMID: 36571960] [DOI: 10.1016/j.bjps.2022.11.049]
Abstract
INTRODUCTION AND AIM Artificial Intelligence (AI) is already being successfully employed to aid the interpretation of multiple facets of burns care. In light of the growing influence of AI, this systematic review and diagnostic test accuracy meta-analysis aims to appraise and summarise the current direction of research in this field. METHOD A systematic literature review was conducted of relevant studies published between 1990 and 2021, yielding 35 studies. Twelve studies were suitable for diagnostic test meta-analysis. RESULTS The studies generally focussed on burn depth (accuracy 68.9%-95.4%, sensitivity 90.8%, specificity 84.4%), burn segmentation (accuracy 76.0%-99.4%, sensitivity 97.9%, specificity 97.6%) and burn-related mortality (accuracy >90%-97.5%, sensitivity 92.9%, specificity 93.4%). Neural networks were the most common machine learning (ML) algorithm, used in 69% of the studies. The QUADAS-2 tool identified significant heterogeneity between studies. DISCUSSION The potential application of AI in the management of burns patients is promising, especially given its propitious results across a spectrum of dimensions, including burn depth, size, mortality, related sepsis and acute kidney injury. The accuracy of the results analysed within this study is comparable to current practice in burns care. CONCLUSION The application of AI in the treatment and management of burns patients, as a series of point-of-care diagnostic adjuncts, is promising. Whilst AI is a potentially valuable tool, a full evaluation of its current utility and potential is limited by significant variations in research methodology and reporting.
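The pooled figures quoted above (accuracy, sensitivity, specificity) reduce to standard confusion-matrix arithmetic. A minimal Python sketch with purely hypothetical counts, not data from the review, illustrates how these quantities are derived:

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Diagnostic test accuracy metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),               # true positive rate
        "specificity": tn / (tn + fp),               # true negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for a burn-depth classifier scored against ground truth
print(diagnostic_metrics(tp=89, fp=14, tn=76, fn=9))
```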
Affiliation(s)
- Bilal Gani Taib
- Burns and Plastic Surgery Department, Queen Elizabeth Hospital, Mindelsohn Way, Birmingham B15 2TH, United Kingdom
- A Karwath
- Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham, United Kingdom; Health Data Research UK Midlands Site, Birmingham, United Kingdom; University Hospitals Birmingham NHS Foundation Trust, Edgbaston, Birmingham, United Kingdom
- K Wensley
- Burns and Plastic Surgery Department, Queen Elizabeth Hospital, Mindelsohn Way, Birmingham B15 2TH, United Kingdom
- L Minku
- School of Computer Science, University of Birmingham, Birmingham, United Kingdom
- G V Gkoutos
- Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham, United Kingdom; Health Data Research UK Midlands Site, Birmingham, United Kingdom; University Hospitals Birmingham NHS Foundation Trust, Edgbaston, Birmingham, United Kingdom; NIHR Surgical Reconstruction and Microbiology Research Centre, Birmingham, United Kingdom
- N Moiemen
- College of Medical and Dental Sciences, University of Birmingham, United Kingdom; Centre for Conflict Wound Research, Scar Free Foundation, Birmingham, United Kingdom; NIHR Surgical Reconstruction and Microbiology Research Centre, Birmingham, United Kingdom
3
Boissin C, Laflamme L, Fransén J, Lundin M, Huss F, Wallis L, Allorto N, Lundin J. Development and evaluation of deep learning algorithms for assessment of acute burns and the need for surgery. Sci Rep 2023; 13:1794. [PMID: 36720894] [PMCID: PMC9889389] [DOI: 10.1038/s41598-023-28164-4]
Abstract
Assessment of burn extent and depth is critical and requires specialised expertise. Automated image-based algorithms could assist in wound detection and classification. We aimed to develop two deep learning algorithms: one that identifies burns and one that classifies whether they require surgery. An additional aim was to assess performance across different Fitzpatrick skin types. Annotated burn (n = 1105) and background (n = 536) images were collected. Using a commercially available platform for deep learning algorithms, two models were trained and validated on 70% of the images and tested on the remaining 30%. Accuracy was measured for each image using the percentage of wound area correctly identified and F1 scores for the wound identifier, and using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity for the wound classifier. The wound identifier algorithm detected an average of 87.2% of the wound areas accurately in the test set. For the wound classifier algorithm, the AUC was 0.885. The wound identifier algorithm was more accurate in patients with darker skin types, whereas the wound classifier was more accurate in patients with lighter skin types. To conclude, image-based algorithms can support the assessment of acute burns with relatively good accuracy, although larger and more diverse datasets are needed.
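The two evaluation measures named above can be reproduced in a few lines. This is a generic sketch, not the authors' platform or models: the masks and classifier scores below are hypothetical stand-ins.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def wound_area_identified(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Percentage of the annotated wound area that the model also marked as wound."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    return 100.0 * np.logical_and(pred, true).sum() / true.sum()

rng = np.random.default_rng(0)
true_mask = rng.random((128, 128)) > 0.6                 # hypothetical annotation
pred_mask = true_mask & (rng.random((128, 128)) > 0.1)   # imperfect prediction
print(f"wound area identified: {wound_area_identified(pred_mask, true_mask):.1f}%")

# AUC for the surgery / no-surgery classifier from per-image probabilities
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_prob = np.array([0.10, 0.35, 0.80, 0.72, 0.91, 0.28, 0.64, 0.22])
print("AUC:", roc_auc_score(y_true, y_prob))
```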
Affiliation(s)
- Constance Boissin
- Department of Global Public Health, Karolinska Institutet, Stockholm, Sweden; Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Lucie Laflamme
- Department of Global Public Health, Karolinska Institutet, Stockholm, Sweden; Institute for Social and Health Sciences, University of South Africa, Johannesburg, South Africa
- Jian Fransén
- Department of Plastic and Maxillofacial Surgery, Burn Center, Uppsala University Hospital, Uppsala, Sweden; Department of Surgical Sciences, Plastic Surgery, Uppsala University, Uppsala, Sweden
- Mikael Lundin
- Institute for Molecular Medicine Finland FIMM, Helsinki Institute for Life Science HiLIFE, University of Helsinki, Helsinki, Finland
- Fredrik Huss
- Department of Plastic and Maxillofacial Surgery, Burn Center, Uppsala University Hospital, Uppsala, Sweden; Department of Surgical Sciences, Plastic Surgery, Uppsala University, Uppsala, Sweden
- Lee Wallis
- Division of Emergency Medicine, Faculty of Medicine and Health Sciences, Stellenbosch University, Bellville, South Africa; Division of Emergency Medicine, University of Cape Town, Cape Town, South Africa
- Nikki Allorto
- Pietermaritzburg Burn Service, Department of General Surgery, University of Kwa-Zulu Natal, Pietermaritzburg, South Africa
- Johan Lundin
- Department of Global Public Health, Karolinska Institutet, Stockholm, Sweden; Institute for Molecular Medicine Finland FIMM, Helsinki Institute for Life Science HiLIFE, University of Helsinki, Helsinki, Finland
4
Anisuzzaman DM, Wang C, Rostami B, Gopalakrishnan S, Niezgoda J, Yu Z. Image-Based Artificial Intelligence in Wound Assessment: A Systematic Review. Adv Wound Care (New Rochelle) 2022; 11:687-709. [PMID: 34544270] [DOI: 10.1089/wound.2021.0091]
Abstract
Significance: Accurately predicting wound healing trajectories is difficult for wound care clinicians due to the complex and dynamic processes involved in wound healing. Wound care teams capture images of wounds during clinical visits, generating large datasets over time. Developing novel artificial intelligence (AI) systems can help clinicians diagnose wounds, assess the effectiveness of therapy, and predict healing outcomes. Recent Advances: Rapid developments in computer processing have enabled AI-based systems that improve diagnosis and the assessment of therapy in various clinical specialties. In the past decade, AI has transformed medical imaging modalities such as X-ray, ultrasound, computed tomography and magnetic resonance imaging, but AI-based systems for high-quality wound care that lead to better patient outcomes remain to be developed, both clinically and computationally. Critical Issues: In the current standard of care, collecting wound images at every clinical visit and interpreting and archiving the data are cumbersome and time consuming. Commercial platforms have been developed to capture images, perform wound measurements, and provide clinicians with a workflow for diagnosis, but AI-based systems are still in their infancy. This systematic review summarizes the breadth and depth of the most recent and relevant work on intelligent image-based data analysis and system development for wound assessment. Future Directions: With the increasing availability of massive data (wound images, wound-specific electronic health records, etc.) as well as powerful computing resources, AI-based digital platforms will play a significant role in delivering data-driven care to people suffering from debilitating chronic wounds.
Affiliation(s)
- D M Anisuzzaman
- Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin, USA
- Chuanbo Wang
- Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin, USA
- Behrouz Rostami
- Department of Electrical Engineering, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin, USA
- Zeyun Yu
- Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin, USA
5
Liang J, Li R, Wang C, Zhang R, Yue K, Li W, Li Y. A Spiking Neural Network Based on Retinal Ganglion Cells for Automatic Burn Image Segmentation. Entropy (Basel) 2022; 24:1526. [PMID: 36359618] [PMCID: PMC9689035] [DOI: 10.3390/e24111526]
Abstract
Burn is a common traumatic disease. After severe burn injury, catabolism increases and burn wounds lead to substantial body fluid loss, with a high mortality rate. In the early treatment of burn patients, it is therefore essential to calculate the patient's fluid requirement based on the percentage of the burn wound area in the total body surface area (TBSA%). However, burn wounds are so complex that there is inter-observer variability among clinicians, making it challenging to locate the burn wounds accurately. An objective, accurate method for locating burn wounds is therefore necessary and meaningful. Convolutional neural networks (CNNs) provide feasible means for this requirement; however, although CNNs continue to improve accuracy in semantic segmentation tasks, they are often limited by the computing resources of edge hardware. For this purpose, a lightweight burn wound segmentation model is required. In our work, we constructed a burn image dataset and proposed a U-type spiking neural network (SNN) based on retinal ganglion cells (RGC) for segmenting burn and non-burn areas. Moreover, a module with a cross-layer skip concatenation structure was introduced. Experimental results showed that the pixel accuracy of the proposed model reached 92.89%, and the network parameters required only 16.6 MB. The results show that our model achieved remarkable accuracy while remaining suited to edge hardware.
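Two of the figures reported above, pixel accuracy and the 16.6 MB parameter footprint, are easy to make concrete. The sketch below is illustrative only; the RGC-based spiking network itself is not reproduced, and the mask shapes and 32-bit parameter assumption are mine.

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, target: np.ndarray) -> float:
    """Fraction of pixels whose predicted class (burn / non-burn) matches the label."""
    return float((pred == target).mean())

def param_megabytes(n_params: int, bytes_per_param: int = 4) -> float:
    """Memory footprint of the weights, assuming 32-bit (4-byte) parameters."""
    return n_params * bytes_per_param / 1e6

pred = np.random.randint(0, 2, size=(256, 256))    # hypothetical binary segmentation
target = np.random.randint(0, 2, size=(256, 256))
print(f"pixel accuracy: {pixel_accuracy(pred, target):.4f}")
print(f"~{param_megabytes(4_150_000):.1f} MB for ~4.15 M float32 parameters")
```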
Affiliation(s)
- Ruixue Li
- Zhejiang Integrated Circuits and Intelligent Hardware Collaborative Innovation Center, Hangzhou Dianzi University, Hangzhou 310018, China
- Wenjun Li
- Zhejiang Integrated Circuits and Intelligent Hardware Collaborative Innovation Center, Hangzhou Dianzi University, Hangzhou 310018, China
6
Serrano C, Lazo M, Serrano A, Toledo-Pastrana T, Barros-Tornay R, Acha B. Clinically Inspired Skin Lesion Classification through the Detection of Dermoscopic Criteria for Basal Cell Carcinoma. J Imaging 2022; 8:197. [PMID: 35877641] [PMCID: PMC9319034] [DOI: 10.3390/jimaging8070197]
Abstract
Background and Objective. Skin cancer is the most common cancer worldwide. One of the most common non-melanoma tumors is basal cell carcinoma (BCC), which accounts for 75% of all skin cancers. Many benign lesions can be confused with these cancers, leading to unnecessary biopsies. In this paper, a new method to identify the different BCC dermoscopic patterns present in a skin lesion is presented. In addition, this information is applied to classify skin lesions into BCC and non-BCC. Methods. The proposed method combines the information provided by the original dermoscopic image, introduced in a convolutional neural network (CNN), with deep and handcrafted features extracted from color and texture analysis of the image. This color analysis is performed by transforming the image into a uniform color space and into a color appearance model. To demonstrate the validity of the method, a comparison between the classification obtained employing exclusively a CNN with the original image as input and the classification with additional color and texture features is presented. Furthermore, an exhaustive comparison of classification employing different color and texture measures derived from different color spaces is presented. Results. Results show that the classifier with additional color and texture features outperforms a CNN whose input is only the original image. Another important achievement is that a new color co-occurrence matrix, proposed in this paper, improves the results obtained with other texture measures. Finally, a sensitivity of 0.99, specificity of 0.94 and accuracy of 0.97 are achieved when lesions are classified into BCC or non-BCC. Conclusions. To the best of our knowledge, this is the first time that a methodology to detect all the possible patterns that can be present in a BCC lesion has been proposed. This detection leads to a clinically explainable classification into BCC and non-BCC lesions. In this sense, the classification of the proposed tool is based on the detection of the dermoscopic features that dermatologists employ for their diagnosis.
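As a rough illustration of fusing handcrafted texture descriptors with deep features, the sketch below computes per-channel grey-level co-occurrence statistics and concatenates them with a placeholder CNN feature vector. It uses the standard GLCM from scikit-image (version 0.19 or later), not the new colour co-occurrence matrix proposed in the paper, and the CNN features are simulated.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def channel_texture_features(channel: np.ndarray) -> np.ndarray:
    """Contrast, correlation, energy and homogeneity from a grey-level co-occurrence matrix."""
    glcm = graycomatrix(channel, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

rgb = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in lesion image
texture = np.concatenate([channel_texture_features(rgb[..., c]) for c in range(3)])
cnn_features = np.random.rand(128)               # placeholder for deep features from a CNN
fused = np.concatenate([cnn_features, texture])  # combined vector fed to the classifier
print(fused.shape)                               # (140,)
```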
Affiliation(s)
- Carmen Serrano
- Dpto. Teoría de la Señal y Comunicaciones, Universidad de Sevilla, Camino de los Descubrimientos s/n, 41092 Seville, Spain
- Manuel Lazo
- Dpto. Teoría de la Señal y Comunicaciones, Universidad de Sevilla, Camino de los Descubrimientos s/n, 41092 Seville, Spain
- Amalia Serrano
- Hospital Universitario Virgen Macarena, Calle Dr. Fedriani, 3, 41009 Seville, Spain
- Tomás Toledo-Pastrana
- Hospitales Quironsalud Infanta Luisa y Sagrado Corazón, Calle San Jacinto, 87, 41010 Seville, Spain
- Begoña Acha
- Dpto. Teoría de la Señal y Comunicaciones, Universidad de Sevilla, Camino de los Descubrimientos s/n, 41092 Seville, Spain
7
Dagli MM, Rajesh A, Asaad M, Butler CE. The Use of Artificial Intelligence and Machine Learning in Surgery: A Comprehensive Literature Review. Am Surg 2021:31348211065101. [PMID: 34958252] [DOI: 10.1177/00031348211065101]
Abstract
Interest in the use of artificial intelligence (AI) and machine learning (ML) in medicine has grown exponentially over the last few years. With its ability to enhance speed, precision, and efficiency, AI has immense potential, especially in the field of surgery. This article aims to provide a comprehensive literature review of artificial intelligence as it applies to surgery and discuss practical examples, current applications, and challenges to the adoption of this technology. Furthermore, we elaborate on the utility of natural language processing and computer vision in improving surgical outcomes, research, and patient care.
Affiliation(s)
- Aashish Rajesh
- Department of Surgery, University of Texas Health Science Center, San Antonio, TX, USA
- Malke Asaad
- Department of Plastic & Reconstructive Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Charles E Butler
- Department of Plastic & Reconstructive Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
8
Chang CW, Lai F, Christian M, Chen YC, Hsu C, Chen YS, Chang DH, Roan TL, Yu YC. Deep Learning-Assisted Burn Wound Diagnosis: Diagnostic Model Development Study. JMIR Med Inform 2021; 9:e22798. [PMID: 34860674] [PMCID: PMC8686480] [DOI: 10.2196/22798]
Abstract
BACKGROUND Accurate assessment of the percentage total body surface area (%TBSA) of burn wounds is crucial in the management of burn patients. The resuscitation fluid and nutritional needs of burn patients, their need for intensive care, and their probability of mortality are all directly related to %TBSA. It is difficult to estimate a burn area of irregular shape by inspection, and many articles have reported discrepancies between doctors' estimates of %TBSA. OBJECTIVE We propose a deep learning-based method for burn wound detection, segmentation, and calculation of %TBSA on a pixel-by-pixel basis. METHODS A 2-step procedure was used to convert burn wound diagnosis into %TBSA. In the first step, images of burn wounds were collected from medical records and labeled by burn surgeons, and the data set was then input into 2 deep learning architectures, U-Net and Mask R-CNN, each configured with 2 different backbones, to segment the burn wounds. In the second step, we collected and labeled images of hands to create another data set, which was also input into U-Net and Mask R-CNN to segment the hands. The %TBSA of burn wounds was then calculated by comparing the pixels of mask areas on images of the burn wound and hand of the same patient according to the rule of hand, which states that one's hand accounts for 0.8% of TBSA. RESULTS A total of 2591 images of burn wounds were collected and labeled to form the burn wound data set. The data set was randomly split into training, validation, and testing sets in a ratio of 8:1:1. Four hundred images of volar hands were collected and labeled to form the hand data set, which was split into 3 sets using the same method. For the images of burn wounds, Mask R-CNN with ResNet101 had the best segmentation result, with a Dice coefficient (DC) of 0.9496, while U-Net with ResNet101 had a DC of 0.8545. For the hand images, U-Net and Mask R-CNN had similar performance, with DC values of 0.9920 and 0.9910, respectively. Lastly, we conducted a test diagnosis in a burn patient: Mask R-CNN with ResNet101 had on average less deviation (0.115% TBSA) from the ground truth than the burn surgeons. CONCLUSIONS This is one of the first studies to diagnose all depths of burn wounds and convert the segmentation results into %TBSA using different deep learning models. We aimed to assist medical staff in estimating burn size more accurately, thereby helping to provide precise care to burn victims.
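The conversion from segmentation masks to %TBSA described above is simple pixel arithmetic once both masks are taken at a comparable scale. A minimal sketch of the rule-of-hand step follows; the mask arrays are hypothetical stand-ins for the U-Net / Mask R-CNN outputs, which are not reproduced here.

```python
import numpy as np

def percent_tbsa(burn_mask: np.ndarray, hand_mask: np.ndarray,
                 hand_fraction: float = 0.8) -> float:
    """Rule of hand: the patient's hand covers ~0.8% of total body surface area,
    so %TBSA = (burn pixels / hand pixels) * 0.8, assuming comparable image scale."""
    burn_pixels = np.count_nonzero(burn_mask)
    hand_pixels = np.count_nonzero(hand_mask)
    return burn_pixels / hand_pixels * hand_fraction

# Hypothetical masks standing in for the segmentation outputs
burn_mask = np.zeros((512, 512), dtype=bool); burn_mask[:100, :230] = True
hand_mask = np.zeros((512, 512), dtype=bool); hand_mask[:80, :60] = True
print(f"estimated burn size: {percent_tbsa(burn_mask, hand_mask):.2f} %TBSA")
```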
Affiliation(s)
- Che Wei Chang
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan; Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Feipei Lai
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan
- Mesakh Christian
- Department of Computer Science & Information Engineering, National Taiwan University, Taipei, Taiwan
- Yu Chun Chen
- Department of Computer Science & Information Engineering, National Taiwan University, Taipei, Taiwan
- Ching Hsu
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan
- Yo Shen Chen
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Dun Hao Chang
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan; Department of Information Management, Yuan Ze University, Chung-Li, Taiwan
- Tyng Luen Roan
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Yen Che Yu
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
9
Zhang B, Zhou J. Multi-feature representation for burn depth classification via burn images. Artif Intell Med 2021; 118:102128. [PMID: 34412845] [DOI: 10.1016/j.artmed.2021.102128]
Abstract
Burns are a common and severe public health problem. Early and timely classification of burn depth allows patients to receive targeted treatment, which can save their lives. However, identifying burn depth from burn images requires substantial medical experience, and the speed and precision of diagnosing burn depth from images are not guaranteed, given the high workload and cost for clinicians. Smart burn depth classification methods are therefore desirable. In this paper, we propose a computerized method to automatically evaluate burn depth using multiple features extracted from burn images. Specifically, color features, texture features and latent features are extracted from burn images, concatenated together, and fed to several classifiers, such as a random forest, to generate the burn level. A standard burn image dataset was evaluated with our proposed method, obtaining accuracies of 85.86% and 76.87% when classifying the burn images into two and three classes, respectively, outperforming conventional methods in burn depth identification. The results indicate that our approach is effective and has the potential to aid medical experts in identifying different burn depths.
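A minimal sketch of the feature-fusion idea: colour, texture and latent features are concatenated per image and passed to a random forest. Everything here is simulated; the feature extractors, dimensions and labels are placeholders, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_images = 200
color = rng.random((n_images, 48))       # e.g. colour-histogram features (placeholder)
texture = rng.random((n_images, 24))     # e.g. GLCM/LBP texture features (placeholder)
latent = rng.random((n_images, 64))      # e.g. CNN/autoencoder latent features (placeholder)
X = np.hstack([color, texture, latent])  # concatenated multi-feature representation
y = rng.integers(0, 3, size=n_images)    # synthetic burn depth labels (3 classes)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy on synthetic data:", clf.score(X_te, y_te))
```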
Affiliation(s)
- Bob Zhang
- PAMI Research Group, Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macau
- Jianhang Zhou
- PAMI Research Group, Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macau
10
E Moura FS, Amin K, Ekwobi C. Artificial intelligence in the management and treatment of burns: a systematic review. Burns Trauma 2021; 9:tkab022. [PMID: 34423054] [PMCID: PMC8375569] [DOI: 10.1093/burnst/tkab022]
Abstract
BACKGROUND Artificial intelligence (AI) is an innovative field with potential for improving burn care. This article provides an updated review of machine learning in burn care and discusses future challenges and the role of healthcare professionals in the successful implementation of AI technologies. METHODS A systematic search was carried out on the MEDLINE, Embase and PubMed databases for English-language articles studying machine learning in burns. Articles were reviewed quantitatively and qualitatively for clinical applications, key features, algorithms, outcomes and validation methods. RESULTS A total of 46 observational studies were included for review. Assessment of burn depth (n = 26), support vector machines (n = 19) and 10-fold cross-validation (n = 11) were the most common application, algorithm and validation tool used, respectively. CONCLUSION AI should be incorporated into clinical practice as an adjunct to the experienced burns provider once direct comparative analyses against current gold standards, outlining its benefits and risks, have been performed. Future considerations must include the development of a burn-specific common framework. Authors should use common validation tools to allow for effective comparisons, and level I/II evidence is required to produce robust proof of clinical and economic impact.
Affiliation(s)
- Kavit Amin
- Department of Plastic Surgery, Manchester University NHS Foundation Trust, UK
- Department of Plastic Surgery, Lancashire Teaching Hospitals NHS Foundation Trust, Royal Preston Hospital, Preston, UK
- Chidi Ekwobi
- Department of Plastic Surgery, Lancashire Teaching Hospitals NHS Foundation Trust, Royal Preston Hospital, Preston, UK
11
A systematic review of machine learning and automation in burn wound evaluation: A promising but developing frontier. Burns 2021; 47:1691-1704. [PMID: 34419331] [DOI: 10.1016/j.burns.2021.07.007]
Abstract
BACKGROUND Visual evaluation is the most common method of evaluating burn wounds. Its subjective nature can lead to inaccurate diagnoses and inappropriate burn center referrals. Machine learning may provide an objective solution. The objective of this study is to summarize the literature on machine learning in burn wound evaluation. METHODS A systematic review of articles published between January 2000 and January 2021 was performed using PubMed and MEDLINE (OVID). Articles reporting on machine learning or automation to evaluate burn wounds were included. Keywords included burns, machine/deep learning, artificial intelligence, burn classification technology, and mobile applications. Data were extracted on study design, method of data acquisition, machine learning techniques, and machine learning accuracy. RESULTS Thirty articles were included. Nine studies used machine learning and automation to estimate percent total body surface area (%TBSA) burned, 4 calculated fluid estimates, 19 estimated burn depth, 5 estimated the need for surgery, and 2 evaluated scarring. Models calculating %TBSA burned demonstrated accuracies comparable to or better than paper methods. Burn depth classification models achieved accuracies of >83%. CONCLUSION Machine learning provides an objective adjunct that may improve diagnostic accuracy in evaluating burn wound severity. Existing models remain in the early stages, and future studies are needed to assess their clinical feasibility.
12
Cao X, Chen C, Tian L. Supervised Multidimensional Scaling and its Application in MRI-Based Individual Age Predictions. Neuroinformatics 2021; 19:219-231. [PMID: 32676970] [DOI: 10.1007/s12021-020-09476-6]
Abstract
It has been a popular trend to decode individuals' demographic and cognitive variables based on MRI. Features extracted from MRI data are usually of high dimensionality, and dimensionality reduction (DR) is an effective way to deal with these high-dimensional features. Despite many supervised DR techniques for classification purposes, there is still a lack of supervised DR techniques for regression purposes. In this study, we advanced a novel supervised DR technique for regression purposes, namely supervised multidimensional scaling (SMDS). The implementation of SMDS includes two steps: (1) evaluating pairwise distances among entities based on their labels and constructing a new space through a distance-preserving projection; (2) establishing an explicit linear relationship between the feature space and the new space. Based on this linear relationship, DR for test entities can be performed. We evaluated the performance of SMDS first on a synthetic dataset, and the results indicate that (1) SMDS is relatively robust to Gaussian noise in the features and labels; (2) the dimensionality of the new space has a negligible influence on SMDS; and (3) when the sample size is small, the performance of SMDS deteriorates with increasing feature dimension. When applied to features extracted from resting state fMRI data for individual age predictions, SMDS was observed to outperform classic DR techniques, including principal component analysis, locally linear embedding and multidimensional scaling (MDS). Hopefully, SMDS can be widely used in studies on MRI-based predictions. Furthermore, novel supervised DR techniques for regression purposes can easily be developed by replacing MDS with other nonlinear DR techniques.
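The two SMDS steps described above can be sketched with scikit-learn: build a label-derived distance matrix, embed it with metric MDS, then fit a linear map from features to the embedding so unseen samples can be projected. This is a simplified reading of the abstract, not the authors' code; all data and dimensions are synthetic.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.linear_model import LinearRegression
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
X_train = rng.random((100, 500))            # high-dimensional features (synthetic)
y_train = rng.uniform(20, 80, size=100)     # continuous labels, e.g. age (synthetic)

# Step 1: pairwise distances between entities based on their labels, then a
# distance-preserving projection into a low-dimensional space.
label_dist = squareform(pdist(y_train.reshape(-1, 1)))
embedding = MDS(n_components=5, dissimilarity="precomputed",
                random_state=0).fit_transform(label_dist)

# Step 2: an explicit linear relationship between the feature space and the
# new space, which allows dimensionality reduction for unseen test samples.
linear_map = LinearRegression().fit(X_train, embedding)
X_test = rng.random((10, 500))
X_test_reduced = linear_map.predict(X_test)
print(X_test_reduced.shape)                 # (10, 5)
```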
Affiliation(s)
- Xuyu Cao
- Beijing Key Laboratory of Traffic Data Analysis and Mining, Beijing Jiaotong University, Beijing, 100044, China; School of Computer and Information Technology, Beijing Jiaotong University, Beijing, 100044, China
- Chen Chen
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing, 100044, China
- Lixia Tian
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing, 100044, China
13
Mantelakis A, Assael Y, Sorooshian P, Khajuria A. Machine Learning Demonstrates High Accuracy for Disease Diagnosis and Prognosis in Plastic Surgery. Plast Reconstr Surg Glob Open 2021; 9:e3638. [PMID: 34235035] [PMCID: PMC8225366] [DOI: 10.1097/gox.0000000000003638]
Abstract
INTRODUCTION Machine learning (ML) is a set of models and methods that can detect patterns in vast amounts of data and use this information to perform various kinds of decision-making under uncertain conditions. This review explores the current role of this technology in plastic surgery by outlining the applications in clinical practice, diagnostic and prognostic accuracies, and proposed future directions for clinical applications and research. METHODS EMBASE, MEDLINE, CENTRAL and ClinicalTrials.gov were searched from 1990 to 2020. Any clinical studies (including case reports) presenting the diagnostic and prognostic accuracies of machine learning models in the clinical setting of plastic surgery were included. Data collected were clinical indication, model utilised, reported accuracies, and comparison with clinical evaluation. RESULTS The database search identified 1181 articles, of which 51 were included in this review. The clinical utility of these algorithms was to assist clinicians in diagnosis prediction (n=22), outcome prediction (n=21) and pre-operative planning (n=8), with mean accuracies of 88.80%, 86.11% and 80.28%, respectively. The most commonly used models were neural networks (n=31), support vector machines (n=13), decision trees/random forests (n=10) and logistic regression (n=9). CONCLUSIONS ML has demonstrated high accuracies in diagnosis and prognostication of burn patients, congenital or acquired facial deformities, and in cosmetic surgery. There are no studies comparing ML to clinicians' performance. Future research can be enhanced by using larger datasets or data augmentation, employing novel deep learning models, and applying these to other subspecialties of plastic surgery.
Affiliation(s)
- Ankur Khajuria
- Kellogg College, University of Oxford
- Department of Surgery and Cancer, Imperial College London, UK
14
Liu H, Yue K, Cheng S, Li W, Fu Z. A Framework for Automatic Burn Image Segmentation and Burn Depth Diagnosis Using Deep Learning. Comput Math Methods Med 2021; 2021:5514224. [PMID: 33880130] [PMCID: PMC8046560] [DOI: 10.1155/2021/5514224]
Abstract
Burn is a common traumatic disease with high morbidity and mortality. The treatment of burns requires accurate and reliable diagnosis of burn wounds and burn depth, which can save lives in some cases. However, due to the complexity of burn wounds, early burn diagnosis lacks accuracy and consistency. We therefore use deep learning technology to automate and standardize burn diagnosis, reducing human error. First, a burn dataset with detailed burn area segmentation and burn depth labelling was created. Then, an end-to-end deep learning framework for burn area segmentation and burn depth diagnosis is proposed. The framework is first used to segment the burn area in burn images; on this basis, the percentage of the burn area in the total body surface area (TBSA) can be calculated by extending the network output structure and the labels of the burn dataset. The framework is then used to segment multiple burn depth areas. The network achieves its best result, an IoU of 0.8467, for the segmentation of burn versus non-burn area; for segmentation of multiple burn depth areas, the best average IoU is 0.5144.
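The IoU (intersection over union) figures reported above are computed per class from the predicted and labelled masks. A minimal, framework-agnostic sketch with synthetic binary masks, not the authors' network output:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union for one class, given binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:                       # class absent in both masks
        return 1.0
    return np.logical_and(pred, target).sum() / union

pred = np.zeros((256, 256), dtype=bool); pred[40:180, 40:180] = True
target = np.zeros((256, 256), dtype=bool); target[60:200, 60:200] = True
print(f"IoU: {iou(pred, target):.4f}")
```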
Affiliation(s)
- Hao Liu
- Key Laboratory of RF Circuits and Systems, Ministry of Education, Hangzhou Dianzi University, Zhejiang, China
- Keqiang Yue
- Key Laboratory of RF Circuits and Systems, Ministry of Education, Hangzhou Dianzi University, Zhejiang, China
- Siyi Cheng
- Key Laboratory of RF Circuits and Systems, Ministry of Education, Hangzhou Dianzi University, Zhejiang, China
- Wenjun Li
- Key Laboratory of RF Circuits and Systems, Ministry of Education, Hangzhou Dianzi University, Zhejiang, China
- Zhihui Fu
- The People's Hospital of Jianggan District, Hangzhou, Zhejiang, China
15
Cirillo MD, Mirdell R, Sjöberg F, Pham TD. Improving burn depth assessment for pediatric scalds by AI based on semantic segmentation of polarized light photography images. Burns 2021; 47:1586-1593. [PMID: 33947595] [DOI: 10.1016/j.burns.2021.01.011]
Abstract
This paper illustrates the efficacy of an artificial intelligence (AI) model (a convolutional neural network based on the U-Net) for burn depth assessment using semantic segmentation of polarized high-performance light camera images of burn wounds. The proposed method is evaluated on paediatric scald injuries to differentiate four burn wound depths, defined by observed healing time: superficial partial-thickness (healing in 0-7 days), superficial to intermediate partial-thickness (healing in 8-13 days), intermediate to deep partial-thickness (healing in 14-20 days), and deep partial-thickness or full-thickness burns (healing after 21 days). In total, 100 burn images were acquired. Seventeen images contained all 4 burn depths and were used to train the network. Leave-one-out cross-validation was performed, yielding an average accuracy and Dice coefficient of almost 97%. The remaining 83 burn wound images were then evaluated using the networks obtained during cross-validation, achieving an accuracy and Dice coefficient both averaging 92%. This technique offers an interesting new automated alternative for clinical decision support to assess and localize burn depths in 2D digital images. Further training and improvement of the underlying algorithm, e.g. with more images, seems feasible and thus promising for the future.
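The Dice coefficient used above is closely related to IoU and is computed per burn-depth class from the semantic segmentation output. A small sketch on a synthetic multi-class label map; the U-Net itself and the leave-one-out protocol are not reproduced here.

```python
import numpy as np

def dice_per_class(pred: np.ndarray, target: np.ndarray, n_classes: int) -> list:
    """Dice = 2 * |A and B| / (|A| + |B|) for each class of a semantic segmentation."""
    scores = []
    for c in range(n_classes):
        p, t = pred == c, target == c
        denom = p.sum() + t.sum()
        scores.append(1.0 if denom == 0 else 2.0 * np.logical_and(p, t).sum() / denom)
    return scores

rng = np.random.default_rng(1)
pred = rng.integers(0, 4, size=(128, 128))     # 4 healing-time classes (synthetic)
target = rng.integers(0, 4, size=(128, 128))
print([round(s, 3) for s in dice_per_class(pred, target, n_classes=4)])
```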
Affiliation(s)
- Marco Domenico Cirillo
- Department of Biomedical Engineering, Linköping University, Linköping, Sweden; Centre for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Robin Mirdell
- The Burn Centre, Linköping University Hospital, Linköping, Sweden; Department of Plastic Surgery, Hand Surgery, and Burns, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Folke Sjöberg
- The Burn Centre, Linköping University Hospital, Linköping, Sweden; Department of Plastic Surgery, Hand Surgery, and Burns, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Tuan D Pham
- Center for Artificial Intelligence, Prince Mohammad Bin Fahd University, Khobar, Saudi Arabia
16
Pabitha C, Vanathi B. Densemask RCNN: A Hybrid Model for Skin Burn Image Classification and Severity Grading. Neural Process Lett 2020. [DOI: 10.1007/s11063-020-10387-5]
17
BPBSAM: Body part-specific burn severity assessment model. Burns 2020; 46:1407-1423. [PMID: 32376068] [DOI: 10.1016/j.burns.2020.03.007]
Abstract
BACKGROUND AND OBJECTIVE Burns are a serious health problem leading to several thousand deaths annually, and despite advances in science and technology, automated burn diagnosis remains a major challenge. Researchers have been exploring automated, visual image-based approaches to burn diagnosis. Noting that the impact of a burn on a particular body part is related to skin thickness, we propose a deep convolutional neural network-based body part-specific burn severity assessment model (BPBSAM). METHOD Considering skin anatomy, BPBSAM estimates burn severity using body part-specific support vector machines trained with CNN features extracted from images of burnt body parts. BPBSAM first identifies the body part in the burn image using a convolutional neural network; the challenge of the limited availability of burnt body part images is addressed by training on larger datasets of non-burn images of the body parts considered (face, hand, back, and inner forearm). We prepared rich labelled burn image datasets (BI and UBI) and trained several deep learning models, using existing pretrained models as pipelines for body part classification and for feature extraction for severity estimation. RESULTS The proposed BPBSAM classified burn severity from color images of burn injuries with an overall average F1 score of 77.8% and accuracy of 84.85% on the test BI dataset, and 87.2% and 91.53%, respectively, on the UBI dataset. For body part classification of burn images, an average accuracy of around 93% was achieved, and for burn severity assessment, BPBSAM outperformed the generic method in overall average accuracy by 10.61%, 4.55%, and 3.03% with the ResNet50, VGG16, and VGG19 pipelines, respectively. CONCLUSIONS The main contribution of this work, along with the creation of labelled burn image datasets, is that the proposed body part-specific burn severity assessment model can significantly improve performance despite a small burn image dataset. This customized body part-specific approach could also be used for the burn region segmentation problem. Moreover, fine-tuning a network pre-trained on non-burn body part images has proven to be robust and reliable.
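The core BPBSAM idea, one severity classifier per body part operating on CNN features, can be sketched as follows. The feature extractor is a placeholder and all data are synthetic; the paper's actual pipelines (ResNet50/VGG16/VGG19 backbones and the BI/UBI datasets) are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
BODY_PARTS = ["face", "hand", "back", "forearm"]

def cnn_features(image) -> np.ndarray:
    """Placeholder for features from a CNN pretrained on body-part images."""
    return rng.random(256)

# Train one SVM per body part on CNN features of burn images from that part
part_svms = {}
for part in BODY_PARTS:
    X = np.stack([cnn_features(None) for _ in range(60)])
    y = rng.integers(0, 3, size=60)                  # synthetic severity grades
    part_svms[part] = SVC(kernel="rbf").fit(X, y)

def predict_severity(image, body_part: str) -> int:
    """Route the image to the SVM trained for its (predicted) body part."""
    return int(part_svms[body_part].predict(cnn_features(image).reshape(1, -1))[0])

print(predict_severity(None, "hand"))
```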
18
Cirillo MD, Mirdell R, Sjöberg F, Pham TD. Time-Independent Prediction of Burn Depth Using Deep Convolutional Neural Networks. J Burn Care Res 2019; 40:857-863. [DOI: 10.1093/jbcr/irz103]
Abstract
We present in this paper the application of deep convolutional neural networks (CNNs), a state-of-the-art artificial intelligence (AI) approach in machine learning, for automated time-independent prediction of burn depth. Color images of four types of burn depth injured in the first few days, together with normal skin and background, acquired by a TiVi camera, were used to train and test four pretrained deep CNNs: VGG-16, GoogleNet, ResNet-50, and ResNet-101. The best 10-fold cross-validation results were obtained with ResNet-101, with average, minimum, and maximum accuracies of 81.66%, 72.06%, and 88.06%, respectively; the average accuracy, sensitivity, and specificity for the four burn depth types were 90.54%, 74.35%, and 94.25%, respectively. The accuracy was compared with the clinical diagnosis obtained after the wound had healed. The application of AI is therefore very promising for the prediction of burn depth and can be a useful tool to help guide clinical decisions and the initial treatment of burn wounds.
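The 10-fold cross-validation reporting above (average, minimum, maximum accuracy) can be sketched generically. Here a simple classifier stands in for the fine-tuned ResNet-101 and the features and labels are synthetic; only the evaluation protocol is illustrated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((300, 512))                # stand-in for per-image deep features
y = rng.integers(0, 4, size=300)          # four burn-depth classes (synthetic)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(f"mean {scores.mean():.3f}, min {scores.min():.3f}, max {scores.max():.3f}")
```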
Affiliation(s)
- Marco Domenico Cirillo
- Department of Biomedical Engineering, Linköping University, Sweden
- Center for Medical Image Science and Visualization, Linköping University, Sweden
- Robin Mirdell
- The Burn Centre, Linköping University Hospital, Sweden
- Department of Plastic Surgery, Hand Surgery, and Burns, Linköping University, Sweden
- Department of Clinical and Experimental Medicine, Linköping University, Sweden
- Folke Sjöberg
- The Burn Centre, Linköping University Hospital, Sweden
- Department of Plastic Surgery, Hand Surgery, and Burns, Linköping University, Sweden
- Department of Clinical and Experimental Medicine, Linköping University, Sweden
- Tuan D Pham
- Department of Biomedical Engineering, Linköping University, Sweden
- Center for Medical Image Science and Visualization, Linköping University, Sweden
19
Cirillo MD, Mirdell R, Sjöberg F, Pham TD. Tensor Decomposition for Colour Image Segmentation of Burn Wounds. Sci Rep 2019; 9:3291. [PMID: 30824754] [PMCID: PMC6397199] [DOI: 10.1038/s41598-019-39782-2]
Abstract
Research in burns has been in continuing demand over the past few decades, and important advances are still needed to facilitate more effective patient stabilization and reduce the mortality rate. Burn wound assessment, an important task in surgical management, largely depends on the accuracy of burn area and burn depth estimates. Automated quantification of these burn parameters plays an essential role in reducing the estimation errors conventionally made by clinicians. The task of automated burn area calculation is known as image segmentation. In this paper, a new segmentation method for burn wound images is proposed. The proposed method utilizes tensor decomposition of colour images, from which effective texture features can be extracted for classification. Experimental results showed that the proposed method outperforms other methods not only in segmentation accuracy but also in computational speed.
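One way to read the tensor-decomposition step is as a higher-order SVD of an RGB patch: unfold the H x W x 3 tensor along each mode and keep the leading singular values as texture descriptors. The sketch below is a generic HOSVD-style illustration on a synthetic patch, not the specific decomposition or classifier used in the paper.

```python
import numpy as np

def unfold(tensor: np.ndarray, mode: int) -> np.ndarray:
    """Mode-n unfolding: move `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def tensor_texture_features(patch: np.ndarray, k: int = 3) -> np.ndarray:
    """Leading singular values of each mode unfolding, used as texture descriptors."""
    feats = []
    for mode in range(patch.ndim):
        s = np.linalg.svd(unfold(patch, mode), compute_uv=False)
        feats.extend(s[:k])
    return np.array(feats)

patch = np.random.rand(16, 16, 3)         # synthetic colour patch (H x W x channels)
print(tensor_texture_features(patch))     # 3 modes x k values per patch
```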
Affiliation(s)
- Marco D Cirillo
- Department of Biomedical Engineering, Linköping University, Linköping, Sweden
- Robin Mirdell
- The Burn Centre, Department of Plastic Surgery, Hand Surgery, and Burns, Linköping University, Linköping, Sweden
- Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Folke Sjöberg
- The Burn Centre, Department of Plastic Surgery, Hand Surgery, and Burns, Linköping University, Linköping, Sweden
- Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Tuan D Pham
- Department of Biomedical Engineering, Linköping University, Linköping, Sweden
20
Yu Z, Chen H, Liu J, You J, Leung H, Han G. Hybrid k-Nearest Neighbor Classifier. IEEE Trans Cybern 2016; 46:1263-1275. [PMID: 26126291] [DOI: 10.1109/tcyb.2015.2443857]
Abstract
Conventional k-nearest neighbor (KNN) classification approaches have several limitations when dealing with problems caused by special datasets, such as sparsity, class imbalance, and noise. In this paper, we first perform a brief survey of recent progress in KNN classification approaches. Then, the hybrid KNN (HBKNN) classification approach, which takes into account both the local and the global information of the query sample, is designed to address the problems raised by such datasets. Next, a random subspace ensemble framework based on the HBKNN classifier (RS-HBKNN) is proposed to perform classification on datasets with noisy attributes in high-dimensional space. Finally, nonparametric tests are adopted to compare the proposed method with other classification approaches over multiple datasets. Experiments on real-world datasets from the Knowledge Extraction based on Evolutionary Learning (KEEL) dataset repository demonstrate that RS-HBKNN works well on real datasets and outperforms most of the state-of-the-art classification approaches.
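As a rough illustration of combining local and global information for a query sample, the sketch below blends a standard KNN vote (local) with distances to class centroids (global). This is a generic toy combination meant to convey the idea, not the HBKNN or RS-HBKNN algorithms themselves.

```python
import numpy as np
from collections import Counter

def hybrid_knn_predict(X_train, y_train, x_query, k=5, alpha=0.5):
    """Blend a local KNN vote with a global nearest-class-centroid score."""
    classes = np.unique(y_train)
    # Local: fraction of the k nearest neighbours belonging to each class
    nn_idx = np.argsort(np.linalg.norm(X_train - x_query, axis=1))[:k]
    votes = Counter(y_train[nn_idx])
    local = np.array([votes.get(c, 0) / k for c in classes])
    # Global: inverse distance to each class centroid, normalised to sum to 1
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    inv_dist = 1.0 / (np.linalg.norm(centroids - x_query, axis=1) + 1e-9)
    global_score = inv_dist / inv_dist.sum()
    return classes[np.argmax(alpha * local + (1 - alpha) * global_score)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(hybrid_knn_predict(X, y, np.array([2.5, 2.5])))
```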
21
Serrano C, Boloix-Tortosa R, Gómez-Cía T, Acha B. Features identification for automatic burn classification. Burns 2015; 41:1883-1890. [DOI: 10.1016/j.burns.2015.05.011]
22
Liu NT, Salinas J. Machine learning in burn care and research: A systematic review of the literature. Burns 2015; 41:1636-1641. [DOI: 10.1016/j.burns.2015.07.001]