1
Wilson RH, Rowland R, Kennedy GT, Campbell C, Joe VC, Chin TL, Burmeister DM, Christy RJ, Durkin AJ. Review of machine learning for optical imaging of burn wound severity assessment. J Biomed Opt 2024; 29:020901. [PMID: 38361506] [PMCID: PMC10869118] [DOI: 10.1117/1.jbo.29.2.020901]
Abstract
Significance Over the past decade, machine learning (ML) algorithms have rapidly become much more widespread for numerous biomedical applications, including the diagnosis and categorization of disease and injury. Aim Here, we seek to characterize the recent growth of ML techniques that use imaging data to classify burn wound severity and report on the accuracies of different approaches. Approach To this end, we present a comprehensive literature review of preclinical and clinical studies using ML techniques to classify the severity of burn wounds. Results The majority of these reports used digital color photographs as input data to the classification algorithms, but recently there has been an increasing prevalence of the use of ML approaches using input data from more advanced optical imaging modalities (e.g., multispectral and hyperspectral imaging, optical coherence tomography), in addition to multimodal techniques. The classification accuracy of the different methods is reported; it typically ranges from ∼70% to 90% relative to the current gold standard of clinical judgment. Conclusions The field would benefit from systematic analysis of the effects of different input data modalities, training/testing sets, and ML classifiers on the reported accuracy. Despite this current limitation, ML-based algorithms show significant promise for assisting in objectively classifying burn wound severity.
Affiliation(s)
- Robert H. Wilson
- University of California, Irvine, Beckman Laser Institute and Medical Clinic, Irvine, California, United States
- University of California, Irvine, Department of Medicine, Orange, California, United States
- University of California, Irvine, Health Policy Research Institute, Irvine, California, United States
- Rebecca Rowland
- University of California, Irvine, Beckman Laser Institute and Medical Clinic, Irvine, California, United States
- Gordon T. Kennedy
- University of California, Irvine, Beckman Laser Institute and Medical Clinic, Irvine, California, United States
- Chris Campbell
- University of California, Irvine, Beckman Laser Institute and Medical Clinic, Irvine, California, United States
- Victor C. Joe
- UC Irvine Health Regional Burn Center, Orange, California, United States
- David M. Burmeister
- Uniformed Services University of the Health Sciences, School of Medicine, Bethesda, Maryland, United States
- Robert J. Christy
- UT Health San Antonio, Military Health Institute, San Antonio, Texas, United States
- Anthony J. Durkin
- University of California, Irvine, Beckman Laser Institute and Medical Clinic, Irvine, California, United States
- University of California, Irvine, Department of Biomedical Engineering, Irvine, California, United States
2
Jacobson MJ, Masry ME, Arrubla DC, Tricas MR, Gnyawali SC, Zhang X, Gordillo G, Xue Y, Sen CK, Wachs J. Autonomous Multi-modality Burn Wound Characterization using Artificial Intelligence. Mil Med 2023; 188:674-681. [PMID: 37948279] [DOI: 10.1093/milmed/usad301]
Abstract
INTRODUCTION Between 5% and 20% of all combat-related casualties are attributed to burn wounds. A decrease in the mortality rate of burns by about 36% can be achieved with early treatment, but this is contingent upon accurate characterization of the burn. Precise burn injury classification is recognized as a crucial aspect of the medical artificial intelligence (AI) field. An autonomous AI system designed to analyze multiple characteristics of burns using modalities including ultrasound and RGB images is described. MATERIALS AND METHODS A two-part dataset is created for the training and validation of the AI: in vivo B-mode ultrasound scans collected from porcine subjects (10,085 frames), and RGB images manually collected from web sources (338 images). The framework in use leverages an explanation system to corroborate and integrate burn expert's knowledge, suggesting new features and ensuring the validity of the model. Through the utilization of this framework, it is discovered that B-mode ultrasound classifiers can be enhanced by supplying textural features. More specifically, it is confirmed that statistical texture features extracted from ultrasound frames can increase the accuracy of the burn depth classifier. RESULTS The system, with all included features selected using explainable AI, is capable of classifying burn depth with accuracy and F1 average above 80%. Additionally, the segmentation module has been found capable of segmenting with a mean global accuracy greater than 84%, and a mean intersection-over-union score over 0.74. CONCLUSIONS This work demonstrates the feasibility of accurate and automated burn characterization for AI and indicates that these systems can be improved with additional features when a human expert is combined with explainable AI. This is demonstrated on real data (human for segmentation and porcine for depth classification) and establishes the groundwork for further deep-learning thrusts in the area of burn analysis.
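As the abstract above notes, statistical texture features extracted from B-mode ultrasound frames can raise the accuracy of a burn depth classifier. A minimal sketch of one common family of such features, gray-level co-occurrence (GLCM) statistics, is shown below; the 4-level toy patch and the particular Haralick statistics chosen here are illustrative assumptions, not the paper's exact feature set.

```python
def glcm(image, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset (dx, dy)."""
    counts = [[0.0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                counts[image[y][x]][image[ny][nx]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def texture_features(p):
    """Haralick-style summary statistics of a normalized GLCM."""
    n = len(p)
    pairs = [(i, j) for i in range(n) for j in range(n)]
    return {
        "contrast": sum(p[i][j] * (i - j) ** 2 for i, j in pairs),
        "homogeneity": sum(p[i][j] / (1 + abs(i - j)) for i, j in pairs),
        "energy": sum(p[i][j] ** 2 for i, j in pairs),
    }

# A perfectly uniform patch has zero contrast and maximal homogeneity/energy.
features = texture_features(glcm([[1, 1], [1, 1]], levels=4))
```

In practice such statistics would be computed per region of interest on intensity-quantized ultrasound frames and appended to the classifier's input vector.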
Affiliation(s)
- Maxwell J Jacobson
- Department of Computer Science, Purdue University, West Lafayette, IN 47907, USA
- Mohamed El Masry
- School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Maria Romeo Tricas
- Department of Computer Science, Purdue University, West Lafayette, IN 47907, USA
- Surya C Gnyawali
- School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Xinwei Zhang
- Department of Computer Science, Purdue University, West Lafayette, IN 47907, USA
- Gayle Gordillo
- School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Yexiang Xue
- Department of Computer Science, Purdue University, West Lafayette, IN 47907, USA
- Chandan K Sen
- School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Juan Wachs
- School of Industrial Engineering, Purdue University, West Lafayette, IN 47907, USA
3
Craven KA, Luckey-Smith K, Rudy S. Ultrasonography for Skin and Soft Tissue Infections, Noninfectious Cysts, Foreign Bodies, and Burns in the Critical Care Setting. AACN Adv Crit Care 2023; 34:228-239. [PMID: 37644635] [DOI: 10.4037/aacnacc2023182]
Abstract
There are multiple opportunities for the use of ultrasonography in the diagnosis of skin and soft tissue differentials. Ultrasonography is inexpensive, easily reproducible, and able to provide real-time data in situations where condition changes and progression are common. Not only does bedside ultrasonography provide the clinician with an in-depth look beyond epidermal structures into body cavities, it remains a safe, nonionizing, effective, cost-efficient, reliable, and accessible tool for the emergency management of life- and limb-threatening integumentary infections. Unnecessary invasive procedures are minimized, improving patient outcomes. Integumentary abnormalities secondary to trauma, surgery, and hospitalization are common among critical care patients. This article provides a brief overview and evidence-based recommendations for the use of ultrasonography in the critical care setting for integumentary system conditions, including common skin and soft tissue differentials, foreign bodies, and burn depth assessment.
Affiliation(s)
- Kelli A Craven
- Kelli A. Craven is Critical Care Nurse Practitioner Trauma and General Surgery, My Michigan Medical Center Midland, 4000 Wellness Dr, Midland, MI 48670
- Kyle Luckey-Smith
- Kyle Luckey-Smith is Flight Nurse, Vanderbilt University Medical Center LifeFlight, Nashville, Tennessee
- Susanna Rudy
- Susanna Rudy is Instructor, Vanderbilt University School of Nursing, Emergency Nurse Practitioner, and Critical Care Nurse Practitioner, Nashville, Tennessee
4
Yadav DP, Aljrees T, Kumar D, Kumar A, Singh KU, Singh T. Spatial attention-based residual network for human burn identification and classification. Sci Rep 2023; 13:12516. [PMID: 37532880] [PMCID: PMC10397300] [DOI: 10.1038/s41598-023-39618-0]
Abstract
Diagnosing burns in humans has become critical, as early identification can save lives. The manual process of burn diagnosis is time-consuming and complex, even for experienced doctors. Machine learning (ML) and deep convolutional neural network (CNN) models have emerged as the standard for medical image diagnosis. The ML-based approach typically requires handcrafted features for training, which may result in suboptimal performance. Conversely, deep learning (DL)-based methods automatically extract features, but designing a robust model is challenging. Additionally, shallow DL methods lack long-range feature dependency, decreasing efficiency in various applications. We implemented several deep CNN models, ResNeXt, VGG16, and AlexNet, for human burn diagnosis. The results obtained from these models were found to be less reliable, since shallow deep CNN models need improved attention modules to preserve the feature dependencies. Therefore, in the proposed study, the feature map is divided into several categories, and the channel dependencies between any two channel mappings within a given class are highlighted. A spatial attention map is built by considering the links between features and their locations. The kernel and convolutional layers of our attention-based model, BuRnGANeXt50, are also optimized for human burn diagnosis. An earlier study classified burns by depth into graft and non-graft cases; we first classify the burn by degree and subsequently into graft and non-graft. Furthermore, the proposed model's performance is evaluated on the Burns_BIP_US_database. The sensitivity of BuRnGANeXt50 is 97.22% and 99.14%, respectively, for classifying burns based on degree and depth. This model may be used for quick screening of burn patients and can be executed in the cloud or on a local machine. The code of the proposed method can be accessed at https://github.com/dhirujis02/Journal.git for the sake of reproducibility.
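The spatial attention idea described above, weighting features according to their locations, can be illustrated with a deliberately tiny sketch. The real BuRnGANeXt50 model operates on convolutional feature maps; the mean-squeeze plus softmax formulation below is an illustrative assumption, not the paper's exact module.

```python
import math

def spatial_attention(feature_map):
    """Toy spatial attention: average over channels at each position,
    softmax over positions, then reweight every channel by the map.
    feature_map: list of channels, each a list of spatial positions."""
    n_pos = len(feature_map[0])
    squeezed = [sum(ch[p] for ch in feature_map) / len(feature_map)
                for p in range(n_pos)]
    m = max(squeezed)                    # subtract max for numerical stability
    exps = [math.exp(s - m) for s in squeezed]
    z = sum(exps)
    attn = [e / z for e in exps]         # attention weights sum to 1
    return [[ch[p] * attn[p] for p in range(n_pos)] for ch in feature_map]

# Two channels, two spatial positions; equal positions get equal weight.
reweighted = spatial_attention([[1.0, 1.0], [3.0, 3.0]])
```

A learned attention module would replace the fixed channel average with trainable convolutions, but the reweighting step is the same.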
Affiliation(s)
- D P Yadav
- Department of Computer Engineering and Applications, GLA University, Mathura, India
- Turki Aljrees
- College of Computer Science and Engineering, University of Hafr Al-Batin, Hafar Al-Batin, 39524, Saudi Arabia
- Deepak Kumar
- Department of Computer Science, NIT Meghalaya, Shillong, India
- Ankit Kumar
- Department of Computer Engineering and Applications, GLA University, Mathura, India
- Kamred Udham Singh
- School of Computing, Graphic Era Hill University, Dehradun, 248002, India
- Teekam Singh
- Department of Computer Science and Engineering, Graphic Era Deemed to be University, Dehradun, 248002, India
5
Li Z, Huang J, Tong X, Zhang C, Lu J, Zhang W, Song A, Ji S. GL-FusionNet: Fusing global and local features to classify deep and superficial partial thickness burn. Math Biosci Eng 2023; 20:10153-10173. [PMID: 37322927] [DOI: 10.3934/mbe.2023445]
Abstract
Burns constitute one of the most common injuries in the world, and they can be very painful for the patient. Inexperienced clinicians are easily confused, especially when judging between superficial partial thickness and deep partial thickness burns. Therefore, in order to make burn depth classification automated as well as accurate, we have introduced a deep learning method. This methodology uses a U-Net to segment burn wounds. On this basis, a new thickness burn classification model that fuses global and local features (GL-FusionNet) is proposed. For the thickness burn classification model, we use a ResNet50 to extract local features, use a ResNet101 to extract global features, and finally implement the add method to perform feature fusion and obtain the deep partial or superficial partial thickness burn classification results. Burn images were collected clinically and were segmented and labeled by professional physicians. Among the segmentation methods, the U-Net used achieved a Dice score of 85.352 and an IoU score of 83.916, the best results among all of the comparative experiments. For the classification model, experiments were conducted with several existing classification networks and with adjusted fusion strategies and feature extraction methods; the proposed fusion network model achieved the best results. Our method yielded the following: accuracy of 93.523, recall of 93.67, precision of 93.51, and F1-score of 93.513. In addition, the proposed method can quickly complete the auxiliary diagnosis of the wound in the clinic, which can greatly improve the efficiency of the initial diagnosis of burns and the nursing care of clinical medical staff.
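The Dice and IoU scores reported for the segmentation step above are standard overlap metrics between a predicted mask and the physician-labeled ground truth. A minimal sketch of how they are computed from binary masks (the toy masks are illustrative, not the paper's data):

```python
def dice_iou(pred, truth):
    """Dice and IoU overlap between two flat binary masks (0/1 lists)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

# Toy masks: 2 overlapping pixels, 4 pixels in the union.
dice, iou = dice_iou([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])  # dice ≈ 0.667, iou = 0.5
```

Dice is always at least as large as IoU for the same masks, which is why papers typically report both.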
Affiliation(s)
- Zhiwei Li
- School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China
- Jie Huang
- Department of Burn Surgery, the First Affiliated Hospital of Naval Medical University, Shanghai 200444, China
- Xirui Tong
- Department of Burn Surgery, the First Affiliated Hospital of Naval Medical University, Shanghai 200444, China
- Chenbei Zhang
- School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China
- Jianyu Lu
- Department of Burn Surgery, the First Affiliated Hospital of Naval Medical University, Shanghai 200444, China
- Wei Zhang
- Department of Burn Surgery, the First Affiliated Hospital of Naval Medical University, Shanghai 200444, China
- Anping Song
- School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China
- Shizhao Ji
- Department of Burn Surgery, the First Affiliated Hospital of Naval Medical University, Shanghai 200444, China
6
Wu Y, Barrere V, Han A, Chang EY, Andre MP, Shah SB. Repeatability, Reproducibility and Sources of Variability in the Assessment of Backscatter Coefficient and Texture Parameters from High-Frequency Ultrasound Acquisitions in Human Median Nerve. Ultrasound Med Biol 2023; 49:122-135. [DOI: 10.1016/j.ultrasmedbio.2022.08.007]
Abstract
Ultrasound (US) is an increasingly prevalent and effective diagnostic modality for neuromuscular imaging. Gray-scale B-mode imaging has been the dominant US approach to evaluating nerves qualitatively or making morphometric measurements of nerves, providing important insights into pathological changes for conditions such as carpal tunnel syndrome. Among more recent ultrasound strategies, high-frequency ultrasound (often defined as >15 MHz for clinical applications), quantitative ultrasound and image textural analysis offer promising enhancements for improved and more objective approaches to nerve imaging. In this study, we evaluated the repeatability and reproducibility of backscatter coefficient (BSC) and imaging texture features extracted by gray-level co-occurrence matrices (GLCMs) in homogeneous tissue-mimicking reference phantoms and in median nerves in the wrists of healthy participants. We also investigated several practical sources of variability in the assessment of quantitative parameters, including influences of operators, and participant-to-participant variability. Overall, BSC- and GLCM-based outcomes are highly repeatable and reproducible after operator training, based on measurement of descriptive statistics, repeatability coefficient (RC) and reproducibility coefficient recommended by Quantitative Imaging Biomarker Alliance (QIBA RDC). GLCM parameters appear more reproducible and repeatable than BSC-based parameters in healthy participants in vivo. However, such variability noted here must be compared with the value ranges and variability of the results in pathological nerves, including median nerves afflicted by trauma, overuse syndromes such as carpal tunnel syndrome and after surgical repair.
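The repeatability coefficient mentioned above follows the Bland-Altman/QIBA convention RC = 1.96·√2·wSD, where wSD is the within-subject standard deviation estimated from test-retest pairs. A minimal sketch; the duplicate-measurement estimator used here is the standard one for paired repeats, assumed rather than taken from the paper:

```python
import math

def repeatability_coefficient(repeats):
    """RC = 1.96 * sqrt(2) * wSD (Bland-Altman / QIBA convention).
    repeats: (first, second) test-retest measurements per subject;
    within-subject variance is estimated as mean(d^2) / 2."""
    n = len(repeats)
    wvar = sum((a - b) ** 2 for a, b in repeats) / (2 * n)
    return 1.96 * math.sqrt(2) * math.sqrt(wvar)

# Identical repeats give RC = 0; disagreement inflates it.
rc = repeatability_coefficient([(10, 12), (8, 8), (5, 7)])  # ≈ 3.20
```

Interpreted in the units of the measurement: 95% of repeat measurements on an unchanged subject are expected to differ by less than RC.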
Affiliation(s)
- Yuanshan Wu
- Department of Bioengineering, University of California, San Diego, California, USA; Research Service, VA San Diego Healthcare System, San Diego, California, USA
- Victor Barrere
- Research Service, VA San Diego Healthcare System, San Diego, California, USA; Department of Orthopaedic Surgery, University of California, San Diego, California, USA
- Aiguo Han
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Eric Y Chang
- Research Service, VA San Diego Healthcare System, San Diego, California, USA; Department of Radiology, University of California, San Diego, California, USA
- Michael P Andre
- Research Service, VA San Diego Healthcare System, San Diego, California, USA; Department of Radiology, University of California, San Diego, California, USA
- Sameer B Shah
- Department of Bioengineering, University of California, San Diego, California, USA; Research Service, VA San Diego Healthcare System, San Diego, California, USA; Department of Orthopaedic Surgery, University of California, San Diego, California, USA
7
Czajkowska J, Borak M. Computer-Aided Diagnosis Methods for High-Frequency Ultrasound Data Analysis: A Review. Sensors (Basel) 2022; 22:8326. [PMID: 36366024] [PMCID: PMC9653964] [DOI: 10.3390/s22218326]
Abstract
Over the last few decades, computer-aided diagnosis systems have become a part of clinical practice. They have the potential to assist clinicians in daily diagnostic tasks. The image processing techniques are fast, repeatable, and robust, which helps physicians to detect, classify, segment, and measure various structures. The recent rapid development of computer methods for high-frequency ultrasound image analysis opens up new diagnostic paths in dermatology, allergology, cosmetology, and aesthetic medicine. This paper, the first in this area, presents a research overview of high-frequency ultrasound image processing techniques that have the potential to be a part of computer-aided diagnosis systems. The reviewed methods are categorized according to application, the ultrasound device used, and the type of image data processing. We present the bridge between diagnostic needs and already developed solutions and discuss their limitations and future directions in high-frequency ultrasound image analysis. A search was conducted of the technical literature from 2005 to September 2022; in total, 31 studies describing image processing methods were reviewed. The quantitative and qualitative analysis covered 39 algorithms selected as the most effective in this field. These were complemented by 20 medical papers that define the needs and opportunities for high-frequency ultrasound applications and CAD development.
8
Mortada H, Al Mazrou F, Alghareeb A, AlEnezi M, Alalawi S, Neel OF. Overview of the role of ultrasound imaging applications in plastic and reconstructive surgery: is ultrasound imaging the stethoscope of a plastic surgeon? A narrative review of the literature. Eur J Plast Surg 2022. [DOI: 10.1007/s00238-022-01981-y]
9
Yang R, Wang Z, Li J, Pi X, Wang X, Xu Y, Shi Y, Zhou S. Identification and Verification of Five Potential Biomarkers Related to Skin and Thermal Injury Using Weighted Gene Co-Expression Network Analysis. Front Genet 2022; 12:781589. [PMID: 35047008] [PMCID: PMC8762241] [DOI: 10.3389/fgene.2021.781589]
Abstract
Background: Burn injury is a life-threatening disease that does not have ideal biomarkers. Therefore, this study first applied weighted gene co-expression network analysis (WGCNA) and differentially expressed gene (DEG) screening methods to identify pivotal genes and diagnostic biomarkers associated with the skin burn process. Methods: After obtaining transcriptomic datasets of burn patient skin and normal skin from Gene Expression Omnibus (GEO) and performing differential analysis and functional enrichment, WGCNA was used to identify hub gene modules associated with burn skin processes in the burn patient peripheral blood sample dataset and determine the correlation between modules and clinical features. Enrichment analysis was performed to identify the functions and pathways of key module genes. Differential analysis, WGCNA, protein-protein interaction analysis, and enrichment analysis were utilized to screen for hub genes. Hub genes were validated in two other GEO datasets, tested by immunohistochemistry for hub gene expression in burn patients, and receiver operating characteristic curve analysis was performed. Finally, we constructed the specific drug activity, transcription factors, and microRNA regulatory network of the five hub genes. Results: A total of 1,373 DEGs in GSE8056 were obtained, and the top 5 upregulated genes were S100A12, CXCL8, CXCL5, MMP3, and MMP1, whereas the top 5 downregulated genes were SCGB1D2, SCGB2A2, DCD, TSPAN8, and KRT25. DEGs were significantly enriched in the immunity, epidermal development, and skin development processes. In WGCNA, the yellow module was identified as the most closely associated module with tissue damage during the burn process, and the five hub genes (ANXA3, MCEMP1, MMP9, S100A12, and TCN1) were identified as the key genes for burn injury status, which consistently showed high expression in burn patient blood samples in the GSE37069 and GSE13902 datasets. Furthermore, we verified using immunohistochemistry that these five novel hub genes were also significantly elevated in burn patient skin. In addition, MCEMP1, MMP9, and S100A12 showed perfect diagnostic performance in the receiver operating characteristic analysis. Conclusion: We analyzed the changes in genetic processes in the skin during burns and used them to identify five potential novel diagnostic markers in blood samples from burn patients, which are important for burn patient diagnosis. In particular, MCEMP1, MMP9, and S100A12 are three key blood biomarkers that can be used to identify skin damage in burn patients.
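The receiver operating characteristic analysis mentioned above summarizes a candidate biomarker's diagnostic performance as the area under the ROC curve (AUC), where 1.0 means burn and control samples are perfectly separated. A minimal rank-based sketch, with toy scores rather than the study's expression data:

```python
def roc_auc(scores, labels):
    """Rank-based AUC: the probability that a randomly chosen positive
    scores above a randomly chosen negative (ties count half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A marker that ranks every burn sample above every control has AUC = 1.0.
auc = roc_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])
```

This rank formulation is equivalent to integrating the ROC curve and is why "perfect diagnostic performance" corresponds to AUC = 1.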
Affiliation(s)
- Ronghua Yang
- Department of Burn Surgery and Skin Regeneration, The First People's Hospital of Foshan, Foshan, China
- Zhengguang Wang
- Department of Orthopedics, The First Affiliated Hospital of China Medical University, Shenyang, China
- Jiehua Li
- Department of Dermatology, The First People's Hospital of Foshan, Foshan, China
- Xiaobing Pi
- Department of Dermatology, The First People's Hospital of Foshan, Foshan, China
- Xiaoxiang Wang
- Department of Burn Surgery and Skin Regeneration, The First People's Hospital of Foshan, Foshan, China
- Yang Xu
- Department of Molecular Pharmacology, School of Medicine, Nankai University, Tianjin, China
- Yan Shi
- Department of Wound Repair and Institute of Wound Repair, The Second Clinical Medical College, Jinan University (Shenzhen People's Hospital), Shenzhen, China; The First Affiliated Hospital, Jinan University, Guangzhou, China
- Sitong Zhou
- Department of Dermatology, The First People's Hospital of Foshan, Foshan, China
10
Iddins CJ, DiCarlo AL, Ervin MD, Herrera-Reyes E, Goans RE. Cutaneous and local radiation injuries. J Radiol Prot 2022; 42. [PMID: 34488201] [PMCID: PMC8785213] [DOI: 10.1088/1361-6498/ac241a]
Abstract
The threat of a large-scale radiological or nuclear (R/N) incident looms in the present-day climate, as noted most recently in an editorial in Scientific American (March 2021). These large-scale incidents are infrequent but affect large numbers of people. Smaller-scale R/N incidents occur more often, affecting smaller numbers of people. There is more awareness of acute radiation syndrome (ARS) in the medical community; however, ionising radiation-induced injuries to the skin are much less understood. This article will provide an overview of radiation-induced injuries to the skin, deeper tissues, and organs. The history and nomenclature; types and causes of injuries; pathophysiology; evaluation and diagnosis; current medical management; and current research of the evaluation and management are presented. Cutaneous radiation injuries (CRI) or local radiation injuries (LRI) may lead to cutaneous radiation syndrome, a sub-syndrome of ARS. These injuries may occur from exposure to radioactive particles suspended in the environment (air, soil, water) after a nuclear detonation or an improvised nuclear detonation (IND), a nuclear power plant incident, or an encounter with a radioactive dispersal or exposure device. These incidents may also result in a radiation-combined injury: a chemical, thermal, or traumatic injury combined with radiation exposure. Skin injuries from medical diagnostic and therapeutic imaging, medical misadministration of nuclear medicine or radiotherapy, and occupational exposures (including research) to radioactive sources are more common but are not the focus of this manuscript. Diagnosis and evaluation of injuries are based on the scenario, clinical picture, and dosimetry, and may be assisted through advanced imaging techniques. Research-based multidisciplinary therapies, both in the laboratory and clinical trial environments, hold promise for future medical management. Great progress is being made in recognising the extent of injuries, understanding their pathophysiology, as well as diagnosis and management; however, research gaps still exist.
Affiliation(s)
- Carol J Iddins
- Radiation Emergency Assistance Center/Training Site (REAC/TS), Oak Ridge Institute for Science and Education (ORISE), Oak Ridge, TN, United States of America
- Andrea L DiCarlo
- Radiation and Nuclear Countermeasures Program (RNCP), National Institute of Allergy and Infectious Diseases (NIAID), National Institutes of Health (NIH), Bethesda, MD, United States of America
- Mark D Ervin
- Radiation Emergency Assistance Center/Training Site (REAC/TS), Oak Ridge Institute for Science and Education (ORISE), Oak Ridge, TN, United States of America
- Ronald E Goans
- Radiation Emergency Assistance Center/Training Site (REAC/TS), Oak Ridge Institute for Science and Education (ORISE), Oak Ridge, TN, United States of America
- MJW Corporation, Buffalo, NY, United States of America
11
Lee S, Rahul, Lukan J, Boyko T, Zelenova K, Makled B, Parsey C, Norfleet J, De S. A deep learning model for burn depth classification using ultrasound imaging. J Mech Behav Biomed Mater 2021; 125:104930. [PMID: 34781225] [DOI: 10.1016/j.jmbbm.2021.104930]
Abstract
Identification of burn depth with sufficient accuracy is a challenging problem. This paper presents a deep convolutional neural network to classify burn depth based on altered tissue morphology of burned skin manifested as texture patterns in the ultrasound images. The network first learns a low-dimensional manifold of the unburned skin images using an encoder-decoder architecture that reconstructs it from ultrasound images of burned skin. The encoder is then re-trained to classify burn depths. The encoder-decoder network is trained using a dataset comprised of B-mode ultrasound images of unburned and burned ex vivo porcine skin samples. The classifier is developed using B-mode images of burned in situ skin samples obtained from freshly euthanized postmortem pigs. The performance metrics obtained from 20-fold cross-validation show that the model can identify deep-partial thickness burns, which is the most difficult to diagnose clinically, with 99% accuracy, 98% sensitivity, and 100% specificity. The diagnostic accuracy of the classifier is further illustrated by the high area under the curve values of 0.99 and 0.95, respectively, for the receiver operating characteristic and precision-recall curves. A post hoc explanation indicates that the classifier activates the discriminative textural features in the B-mode images for burn classification. The proposed model has the potential for clinical utility in assisting the clinical assessment of burn depths using a widely available clinical imaging device.
Affiliation(s)
- Sangrock Lee
- Center for Modeling, Simulation and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Rahul
- Center for Modeling, Simulation and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- James Lukan
- Department of Surgery, University at Buffalo-State University of New York, Buffalo, NY, 14215, USA
- Tatiana Boyko
- Department of Surgery, University at Buffalo-State University of New York, Buffalo, NY, 14215, USA
- Kateryna Zelenova
- Department of Surgery, University at Buffalo-State University of New York, Buffalo, NY, 14215, USA
- Basiel Makled
- U.S. Army Futures Command, Combat Capabilities Development Command Soldier Center STTC, Orlando, FL, 32826, USA
- Conner Parsey
- U.S. Army Futures Command, Combat Capabilities Development Command Soldier Center STTC, Orlando, FL, 32826, USA
- Jack Norfleet
- U.S. Army Futures Command, Combat Capabilities Development Command Soldier Center STTC, Orlando, FL, 32826, USA
- Suvranu De
- Center for Modeling, Simulation and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
12
Irfan R, Almazroi AA, Rauf HT, Damaševičius R, Nasr EA, Abdelgawad AE. Dilated Semantic Segmentation for Breast Ultrasonic Lesion Detection Using Parallel Feature Fusion. Diagnostics (Basel) 2021; 11:1212. [PMID: 34359295] [PMCID: PMC8304124] [DOI: 10.3390/diagnostics11071212]
Abstract
Breast cancer is becoming more dangerous by the day. The death rate in developing countries is rapidly increasing. As a result, early detection of breast cancer is critical, leading to a lower death rate. Several researchers have worked on breast cancer segmentation and classification using various imaging modalities. The ultrasonic imaging modality is one of the most cost-effective imaging techniques, with a higher sensitivity for diagnosis. The proposed study segments ultrasonic breast lesion images using a Dilated Semantic Segmentation Network (Di-CNN) combined with a morphological erosion operation. For feature extraction, we used the deep neural network DenseNet201 with transfer learning. We propose a 24-layer CNN that uses transfer learning-based feature extraction to further validate and ensure the enriched features with target intensity. To classify the nodules, the feature vectors obtained from DenseNet201 and the 24-layer CNN were fused using parallel fusion. The proposed methods were evaluated using a 10-fold cross-validation on various vector combinations. The accuracy of CNN-activated feature vectors and DenseNet201-activated feature vectors combined with the Support Vector Machine (SVM) classifier was 90.11% and 98.45%, respectively. With 98.9% accuracy, the fused version of the feature vector with SVM outperformed other algorithms. When compared to recent algorithms, the proposed algorithm achieves a better breast cancer diagnosis rate.
Affiliation(s)
- Rizwana Irfan
- Department of Information Technology, College of Computing and Information Technology at Khulais, University of Jeddah, Jeddah 21959, Saudi Arabia
- Abdulwahab Ali Almazroi
- Department of Information Technology, College of Computing and Information Technology at Khulais, University of Jeddah, Jeddah 21959, Saudi Arabia
- Hafiz Tayyab Rauf
- Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
- Robertas Damaševičius
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
- Emad Abouel Nasr
- Industrial Engineering Department, College of Engineering, King Saud University, Riyadh 11421, Saudi Arabia
- Abdelatty E. Abdelgawad
- Industrial Engineering Department, College of Engineering, King Saud University, Riyadh 11421, Saudi Arabia