1. Wongveerasin P, Tongdee T, Saiviroonporn P. Deep learning for tubes and lines detection in critical illness: Generalizability and comparison with residents. Eur J Radiol Open 2024; 13:100593. PMID: 39175597; PMCID: PMC11338948; DOI: 10.1016/j.ejro.2024.100593.
Abstract
Background Artificial intelligence (AI) has been proven useful for the assessment of tubes and lines on chest radiographs of general patients. However, validation on intensive care unit (ICU) patients remains imperative. Methods This retrospective case-control study evaluated the performance of deep learning (DL) models for tubes and lines classification on both an external public dataset and a local dataset comprising 303 films randomly sampled from the ICU database. Endotracheal tubes (ETTs), central venous catheters (CVCs), and nasogastric tubes (NGTs) were classified into "Normal," "Abnormal," or "Borderline" positions by DL models with and without rule-based modification. Their performance was evaluated using an experienced radiologist as the reference standard. Results The algorithm showed decreased performance on the local ICU dataset compared with the external dataset, with the area under the receiver operating characteristic curve (AUC) falling from 0.967 (95% CI 0.965-0.973) to 0.70 (95% CI 0.68-0.77). Significant improvement in the ETT classification task was observed after the model was modified to use the spatial relationship between line tips and reference anatomy, with the AUC increasing from 0.71 (95% CI 0.70-0.75) to 0.86 (95% CI 0.83-0.94). Conclusions The externally trained model exhibited limited generalizability on the local ICU dataset. Therefore, evaluating the performance of externally trained AI before integrating it into the critical care routine is crucial. A rule-based algorithm may be used in combination with DL to improve results.
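The rule-based modification described above hinges on the spatial relationship between a tube tip and reference anatomy. As an illustration only, a distance-to-carina rule for ETT classification might look like the following sketch; the function name, the 3-7 cm range, and the 0.5 cm borderline margin are assumptions, not taken from the paper:

```python
def classify_ett_position(tip_y_cm, carina_y_cm,
                          normal_range=(3.0, 7.0), margin=0.5):
    """Classify an ETT tip as Normal/Borderline/Abnormal from its
    height above the carina (cm). Thresholds are illustrative only."""
    distance = carina_y_cm - tip_y_cm  # tip should sit above the carina
    lo, hi = normal_range
    if lo <= distance <= hi:
        return "Normal"
    if lo - margin <= distance <= hi + margin:
        return "Borderline"
    return "Abnormal"
```

With these illustrative thresholds, a tip 5 cm above the carina is "Normal", 7.3 cm is "Borderline", and 9 cm is "Abnormal".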
Affiliation(s)
- Pootipong Wongveerasin: Department of Radiology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Trongtum Tongdee: Department of Radiology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Pairash Saiviroonporn: Department of Radiology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
2. Kim DJ, Nam IC, Kim DR, Kim JJ, Hwang IK, Lee JS, Park SE, Kim H. Detection and position evaluation of chest percutaneous drainage catheter on chest radiographs using deep learning. PLoS One 2024; 19:e0305859. PMID: 39133733; PMCID: PMC11318879; DOI: 10.1371/journal.pone.0305859.
Abstract
PURPOSE This study aimed to develop an algorithm for automatically detecting chest percutaneous catheter drainage (PCD) and evaluating catheter position on chest radiographs using deep learning. METHODS This retrospective study included 1,217 chest radiographs (properly positioned: 937; malpositioned: 280) from 960 patients who underwent chest PCD from October 2017 to February 2023. The tip location of the chest PCD catheter was annotated using bounding boxes and classified as properly positioned or malpositioned. The radiographs were randomly allocated into training and validation sets (total: 1,094 radiographs; properly positioned: 853; malpositioned: 241) and a test set (total: 123 radiographs; properly positioned: 84; malpositioned: 39). The selected AI model was used to detect the catheter tip of the chest PCD and evaluate the catheter's position on the test dataset, distinguishing between properly positioned and malpositioned cases. Its performance in detecting the catheter and assessing its position on chest radiographs was evaluated per radiograph and per instance. The association between the position and function of the catheter during chest PCD was also evaluated. RESULTS Per radiograph, the selected model's accuracy was 0.88; sensitivity and specificity were 0.86 and 0.92, respectively. Per instance, the selected model's mean average precision at an intersection-over-union threshold of 0.50 (mAP50) was 0.86; precision and recall were 0.90 and 0.79, respectively. Regarding the association between the position and function of the catheter during chest PCD, sensitivity and specificity were 0.93 and 0.95, respectively.
CONCLUSION The artificial intelligence model for the automatic detection and evaluation of catheter position during chest PCD on chest radiographs demonstrated acceptable diagnostic performance and could assist radiologists and clinicians in the early detection of catheter malposition and malfunction during chest percutaneous catheter drainage.
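The per-radiograph figures reported above (accuracy 0.88, sensitivity 0.86, specificity 0.92) all derive from a 2x2 confusion table. A minimal helper for computing them; the counts in the usage line are hypothetical, not the study's data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from 2x2 confusion counts, where
    'positive' means a malpositioned catheter."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),  # recall for malposition
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),    # positive predictive value
    }

# Hypothetical counts, for illustration only:
print(diagnostic_metrics(tp=9, fp=1, tn=9, fn=1))
```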
Affiliation(s)
- Duk Ju Kim: Department of Radiology, Jeju National University School of Medicine, Jeju National University Hospital, Jeju, Republic of Korea
- In Chul Nam: Department of Radiology, Jeju National University School of Medicine, Jeju National University Hospital, Jeju, Republic of Korea
- Doo Ri Kim: Department of Radiology, Jeju National University School of Medicine, Jeju National University Hospital, Jeju, Republic of Korea
- Jeong Jae Kim: Department of Radiology, Jeju National University School of Medicine, Jeju National University Hospital, Jeju, Republic of Korea
- Im-kyung Hwang: Department of Radiology, Jeju National University School of Medicine, Jeju National University Hospital, Jeju, Republic of Korea
- Jeong Sub Lee: Department of Radiology, Jeju National University School of Medicine, Jeju National University Hospital, Jeju, Republic of Korea
- Sung Eun Park: Department of Radiology, Gyeongsang National University School of Medicine and Gyeongsang National University Changwon Hospital, Changwon, Republic of Korea
- Hyeonwoo Kim: Upstage AI, Yongin-si, Gyeonggi-do, Republic of Korea
3. De Rosa S, Bignami E, Bellini V, Battaglini D. The Future of Artificial Intelligence Using Images and Clinical Assessment for Difficult Airway Management. Anesth Analg 2024:00000539-990000000-00808. PMID: 38557728; DOI: 10.1213/ane.0000000000006969.
Abstract
Artificial intelligence (AI) algorithms, particularly deep learning, are automatic and sophisticated methods that recognize complex patterns in imaging data, providing high-quality assessments. Several machine-learning and deep-learning models using imaging techniques have recently been developed and validated to predict difficult airways. Despite these advances in AI modeling, clinical adoption remains limited. In this review article, we describe the advantages of using AI models and explore how these methods could impact clinical practice. Finally, we discuss predictive modeling for difficult laryngoscopy using machine learning and the future approach with intelligent intubation devices.
Affiliation(s)
- Silvia De Rosa: Centre for Medical Sciences - CISMed, University of Trento, Trento, Italy; Anesthesia and Intensive Care, Santa Chiara Regional Hospital, APSS Trento, Trento, Italy
- Elena Bignami: Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Parma, Italy
- Valentina Bellini: Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Parma, Italy
- Denise Battaglini: Anesthesia and Intensive Care, IRCCS Ospedale Policlinico San Martino, Genova, Italy
4. Wang CH, Hwang T, Huang YS, Tay J, Wu CY, Wu MC, Roth HR, Yang D, Zhao C, Wang W, Huang CH. Deep Learning-Based Localization and Detection of Malpositioned Endotracheal Tube on Portable Supine Chest Radiographs in Intensive and Emergency Medicine: A Multicenter Retrospective Study. Crit Care Med 2024; 52:237-247. PMID: 38095506; PMCID: PMC10793783; DOI: 10.1097/ccm.0000000000006046.
Abstract
OBJECTIVES We aimed to develop a computer-aided detection (CAD) system to localize and detect the malposition of endotracheal tubes (ETTs) on portable supine chest radiographs (CXRs). DESIGN This was a retrospective diagnostic study. DeepLabv3+ with ResNeSt50 backbone and DenseNet121 served as the model architecture for segmentation and classification tasks, respectively. SETTING Multicenter study. PATIENTS For the training dataset, images meeting the following inclusion criteria were included: 1) patient age greater than or equal to 20 years; 2) portable supine CXR; 3) examination in emergency departments or ICUs; and 4) examination between 2015 and 2019 at National Taiwan University Hospital (NTUH) (NTUH-1519 dataset: 5,767 images). The derived CAD system was tested on images from chronologically (examination during 2020 at NTUH, NTUH-20 dataset: 955 images) or geographically (examination between 2015 and 2020 at NTUH Yunlin Branch [YB], NTUH-YB dataset: 656 images) different datasets. All CXRs were annotated with pixel-level labels of ETT and with image-level labels of ETT presence and malposition. INTERVENTIONS None. MEASUREMENTS AND MAIN RESULTS For the segmentation model, the Dice coefficients indicated that ETT would be delineated accurately (NTUH-20: 0.854; 95% CI, 0.824-0.881 and NTUH-YB: 0.839; 95% CI, 0.820-0.857). For the classification model, the presence of ETT could be accurately detected with high accuracy (area under the receiver operating characteristic curve [AUC]: NTUH-20, 1.000; 95% CI, 0.999-1.000 and NTUH-YB: 0.994; 95% CI, 0.984-1.000). Furthermore, among those images with ETT, ETT malposition could be detected with high accuracy (AUC: NTUH-20, 0.847; 95% CI, 0.671-0.980 and NTUH-YB, 0.734; 95% CI, 0.630-0.833), especially for endobronchial intubation (AUC: NTUH-20, 0.991; 95% CI, 0.969-1.000 and NTUH-YB, 0.966; 95% CI, 0.933-0.991). 
CONCLUSIONS The derived CAD system could localize ETT and detect ETT malposition with excellent performance, especially for endobronchial intubation, and with favorable potential for external generalizability.
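The segmentation results above are summarized with the Dice coefficient. A minimal pure-Python sketch of the metric over flattened binary masks (the convention of returning 1.0 when both masks are empty is an assumption):

```python
def dice_coefficient(pred, truth):
    """Dice = 2|A intersect B| / (|A| + |B|) over flat binary masks."""
    assert len(pred) == len(truth)
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    denom = sum(pred) + sum(truth)
    return 2.0 * inter / denom if denom else 1.0  # both empty: perfect
```

For example, a prediction of [1, 1, 0, 0] against a ground truth of [1, 0, 0, 0] overlaps in one pixel out of a combined three, giving a Dice of 2/3.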
Affiliation(s)
- Chih-Hung Wang: Department of Emergency Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan; Department of Emergency Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Tianyu Hwang: Mathematics Division, National Center for Theoretical Sciences, National Taiwan University, Taipei, Taiwan
- Yu-Sen Huang: Department of Medical Imaging, National Taiwan University Hospital, Taipei, Taiwan
- Joyce Tay: Department of Emergency Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Cheng-Yi Wu: Department of Emergency Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Meng-Che Wu: Department of Emergency Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Can Zhao: NVIDIA Corporation, Bethesda, MD
- Weichung Wang: Institute of Applied Mathematical Sciences, National Taiwan University, Taipei, Taiwan
- Chien-Hua Huang: Department of Emergency Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan; Department of Emergency Medicine, National Taiwan University Hospital, Taipei, Taiwan
5. Grenier PA, Brun AL, Mellot F. [The contribution of artificial intelligence (AI) subsequent to the processing of thoracic imaging]. Rev Mal Respir 2024; 41:110-126. PMID: 38129269; DOI: 10.1016/j.rmr.2023.12.001.
Abstract
The contribution of artificial intelligence (AI) to medical imaging is currently the subject of widespread experimentation. The development of deep learning (DL) methods, particularly convolutional neural networks (CNNs), has led to performance gains often superior to those achieved by conventional methods such as machine learning. Radiomics is an approach aimed at extracting from images quantitative data, not accessible to the human eye, that express a disease. These data subsequently feed machine learning models and produce diagnostic or prognostic probabilities. The multiple applications of AI methods in thoracic imaging are currently undergoing evaluation. Chest radiography is a practically ideal field for the development of DL algorithms able to automatically interpret X-rays. Current algorithms can detect up to 14 different abnormalities, present either in isolation or in combination. Chest CT is another area offering numerous AI applications. Various algorithms have been specifically trained and validated for the detection and characterization of pulmonary nodules and pulmonary embolism, as well as segmentation and quantitative analysis of the extent of diffuse lung diseases (emphysema, infectious pneumonias, interstitial lung disease). In addition, the analysis of medical images can be combined with clinical, biological, and functional data (multi-omics analysis), with the objective of constructing predictive approaches to disease prognosis and response to treatment.
Affiliation(s)
- P A Grenier: Délégation à la recherche clinique et l'innovation, hôpital Foch, Suresnes, France
- A L Brun: Service de radiologie, hôpital Foch, Suresnes, France
- F Mellot: Service de radiologie, hôpital Foch, Suresnes, France
6. Sorace L, Raju N, O'Shaughnessy J, Kachel S, Jansz K, Yang N, Lim RP. Assessment of inspiration and technical quality in anteroposterior thoracic radiographs using machine learning. Radiography (Lond) 2024; 30:107-115. PMID: 37918335; DOI: 10.1016/j.radi.2023.10.014.
Abstract
INTRODUCTION Chest radiographs are the most frequently performed radiographic procedure, but suboptimal technical factors can impact clinical interpretation. A deep learning model was developed to assess the technical and inspiratory adequacy of anteroposterior chest radiographs. METHODS Adult anteroposterior chest radiographs (n = 2375) were assessed for technical adequacy and, if otherwise technically adequate, for adequacy of inspiration. Images were labelled by an experienced radiologist with one of three ground truth labels: inadequate technique (n = 605, 25.5%), adequate inspiration (n = 900, 37.9%), and inadequate inspiration (n = 870, 36.6%). A convolutional neural network was then iteratively trained to predict these labels and evaluated using recall, precision, per-class F1 and micro-F1, and Gradient-weighted Class Activation Mapping analysis on a hold-out test set. The impact of kyphosis on model accuracy was assessed. RESULTS The model performed best for radiographs with adequate technique and worst for images with inadequate technique. Recall was highest (89%) for radiographs with both adequate technique and inspiration, with recall of 81% for images with adequate technique and inadequate inspiration, and 60% for images with inadequate technique, although precision was highest (85%) for this category. Per-class F1 was 80%, 81%, and 70% for adequate inspiration, inadequate inspiration, and inadequate technique, respectively. Weighted F1 and micro-F1 scores were both 78%. The presence or absence of kyphosis had no significant impact on model accuracy in images with adequate technique. CONCLUSION This study demonstrates the promising performance of a machine learning algorithm for assessment of inspiratory adequacy and overall technical adequacy of anteroposterior chest radiograph acquisition. IMPLICATIONS FOR PRACTICE With further refinement, machine learning can contribute to education and quality improvement in radiology departments.
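The per-class F1 and micro-F1 values reported above can be computed directly from a multiclass confusion matrix. A minimal sketch; the three-class matrix in the test usage is hypothetical, not the study's data:

```python
def per_class_f1(cm):
    """Per-class precision/recall/F1 from a confusion matrix
    (rows = true class, cols = predicted class), plus micro-F1."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    stats = []
    for k in range(n):
        tp = cm[k][k]
        col = sum(cm[i][k] for i in range(n))  # all predicted as class k
        row = sum(cm[k])                       # all truly class k
        precision = tp / col if col else 0.0
        recall = tp / row if row else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        stats.append({"precision": precision, "recall": recall, "f1": f1})
    # In single-label multiclass classification, micro-F1 equals accuracy.
    micro_f1 = sum(cm[k][k] for k in range(n)) / total
    return stats, micro_f1
```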
Affiliation(s)
- L Sorace: Department of Radiology, Austin Hospital, Heidelberg, Australia
- N Raju: Department of Radiology, Austin Hospital, Heidelberg, Australia
- J O'Shaughnessy: Department of Radiology, Austin Hospital, Heidelberg, Australia
- S Kachel: Department of Radiology, Austin Hospital, Heidelberg, Australia; The University of Melbourne, Parkville, Australia; Columbia University, New York, NY, USA
- K Jansz: Department of Radiology, Austin Hospital, Heidelberg, Australia
- N Yang: Department of Radiology, Austin Hospital, Heidelberg, Australia; The University of Melbourne, Parkville, Australia
- R P Lim: Department of Radiology, Austin Hospital, Heidelberg, Australia; The University of Melbourne, Parkville, Australia
7. Liubaskas R, Eisenberg RL, Chakrala NL, Liubauske A, Liberman Y, Oren-Grinberg A, Tridente DM, Litmanovich DE. New Imaging Protocol to Assess Endotracheal Tube Placement: A Case-control Study. J Thorac Imaging 2024; 39:W13-W18. PMID: 37884356; DOI: 10.1097/rti.0000000000000754.
Abstract
PURPOSE After intubation, a frontal chest radiograph (CXR) is obtained to assess the endotracheal tube (ETT) position by measuring the ETT tip-to-carina distance. ETT tip location changes with neck position, which can be determined by assessing the position of the mandible. As the mandible is typically not visualized on standard CXRs, we developed a new protocol in which the mandible is seen on the CXR, hypothesizing that this would improve the accuracy of ETT position assessment. PATIENTS AND METHODS Two groups of intubated patients were studied (February 9, 2021 to May 4, 2021), with CXRs taken using either the standard protocol or the new protocol (visible mandible required). Two observers independently assessed the images for neck position (neutral, flexed, or extended) based on the mandible position relative to the vertebral bodies. With the mandible absent (ie, neck position unknown), we established the terms "gray zone" (difficult to assess the ETT position adequately) and "clear zone" (confident recommendation to retract, advance, or maintain ETT position). We compared the rate of confident assessment of the ETT in the standard versus the new protocol. RESULTS Of 308 patients, 155 had standard CXRs and 153 had the new protocol. Interrater agreements for the distance between the ETT and the carina and for mandible height based on vertebral bodies were 0.986 (P < 0.001) and 0.955 (P < 0.001), respectively. The mandible was visualized significantly more often (P < 0.001) with the new protocol (92%; 141/153) than with the standard protocol (21%; 32/155). By visualizing the mandible or the presence of the ETT within the clear zone, a reader could confidently assess the ETT position more often using the new protocol (96.7% vs 51.6%, P < 0.001). CONCLUSIONS Mandible visibility on postintubation CXR is helpful for assessing the ETT position. The new protocol resulted in a significant increase in both visualizing the mandible and accurately determining the ETT position on postintubation CXR.
Affiliation(s)
- Achikam Oren-Grinberg: Department of Anaesthesia, Harvard Medical School, Beth Israel Deaconess Medical Center, Boston, MA
8. Lonsdale H, Gray GM, Ahumada LM, Matava CT. Machine Vision and Image Analysis in Anesthesia: Narrative Review and Future Prospects. Anesth Analg 2023; 137:830-840. PMID: 37712476; DOI: 10.1213/ane.0000000000006679.
Abstract
Machine vision describes the use of artificial intelligence to interpret, analyze, and derive predictions from image or video data. Machine vision-based techniques are already in clinical use in radiology, ophthalmology, and dermatology, where some applications currently equal or exceed the performance of specialty physicians in areas of image interpretation. While machine vision in anesthesia has many potential applications, its development remains in its infancy in our specialty. Early research for machine vision in anesthesia has focused on automated recognition of anatomical structures during ultrasound-guided regional anesthesia or line insertion; recognition of the glottic opening and vocal cords during video laryngoscopy; prediction of the difficult airway using facial images; and clinical alerts for endobronchial intubation detected on chest radiograph. Current machine vision applications measuring the distance between endotracheal tube tip and carina have demonstrated noninferior performance compared to board-certified physicians. The performance and potential uses of machine vision for anesthesia will only grow with the advancement of underlying machine vision algorithm technical performance developed outside of medicine, such as convolutional neural networks and transfer learning. This article summarizes recently published works of interest, provides a brief overview of techniques used to create machine vision applications, explains frequently used terms, and discusses challenges the specialty will encounter as we embrace the advantages that this technology may bring to future clinical practice and patient care. As machine vision emerges onto the clinical stage, it is critically important that anesthesiologists are prepared to confidently assess which of these devices are safe, appropriate, and bring added value to patient care.
Affiliation(s)
- Hannah Lonsdale: Division of Pediatric Anesthesiology, Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, Tennessee
- Geoffrey M Gray: Center for Pediatric Data Science and Analytics Methodology, Johns Hopkins All Children's Hospital, St Petersburg, Florida
- Luis M Ahumada: Center for Pediatric Data Science and Analytics Methodology, Johns Hopkins All Children's Hospital, St Petersburg, Florida
- Clyde T Matava: Department of Anesthesia and Pain Medicine, The Hospital for Sick Children, Toronto, Ontario, Canada; Department of Anesthesiology and Pain Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
9. Tang CHM, Seah JCY, Ahmad HK, Milne MR, Wardman JB, Buchlak QD, Esmaili N, Lambert JF, Jones CM. Analysis of Line and Tube Detection Performance of a Chest X-ray Deep Learning Model to Evaluate Hidden Stratification. Diagnostics (Basel) 2023; 13:2317. PMID: 37510062; PMCID: PMC10378683; DOI: 10.3390/diagnostics13142317.
Abstract
This retrospective case-control study evaluated the diagnostic performance of a commercially available chest radiography deep convolutional neural network (DCNN) in identifying the presence and position of central venous catheters, enteric tubes, and endotracheal tubes, in addition to a subgroup analysis of different types of lines/tubes. A held-out test dataset of 2568 studies was sourced from community radiology clinics and hospitals in Australia and the USA, and was then ground-truth labelled for the presence, position, and type of line or tube from the consensus of a thoracic specialist radiologist and an intensive care clinician. DCNN model performance for identifying and assessing the positioning of central venous catheters, enteric tubes, and endotracheal tubes over the entire dataset, as well as within each subgroup, was evaluated. The area under the receiver operating characteristic curve (AUC) was assessed. The DCNN algorithm displayed high performance in detecting the presence of lines and tubes in the test dataset with AUCs > 0.99, and good position classification performance over a subpopulation of ground truth positive cases with AUCs of 0.86-0.91. The subgroup analysis showed that model performance was robust across the various subtypes of lines or tubes, although position classification performance of peripherally inserted central catheters was relatively lower. Our findings indicated that the DCNN algorithm performed well in the detection and position classification of lines and tubes, supporting its use as an assistant for clinicians. Further work is required to evaluate performance in rarer scenarios, as well as in less common subgroups.
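The subgroup analysis above probes hidden stratification by recomputing performance within each line/tube subtype rather than only over the pooled test set. A minimal sketch using a rank-based AUC; all labels, scores, and group names below are hypothetical:

```python
def auc_score(labels, scores):
    """AUC via the Mann-Whitney U statistic (pure Python)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return None  # AUC undefined for a single-class subgroup
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def subgroup_aucs(labels, scores, groups):
    """Recompute AUC within each subgroup; a strong overall AUC can
    mask a weak subgroup (hidden stratification)."""
    out = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        out[g] = auc_score([labels[i] for i in idx],
                           [scores[i] for i in idx])
    return out
```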
Affiliation(s)
- Cyril H M Tang: Annalise.ai, Sydney, NSW 2000, Australia; Intensive Care Unit, Gosford Hospital, Sydney, NSW 2250, Australia
- Jarrel C Y Seah: Annalise.ai, Sydney, NSW 2000, Australia; Department of Radiology, Alfred Health, Melbourne, VIC 3004, Australia
- Quinlan D Buchlak: Annalise.ai, Sydney, NSW 2000, Australia; School of Medicine, The University of Notre Dame Australia, Sydney, NSW 2007, Australia; Department of Neurosurgery, Monash Health, Melbourne, VIC 3168, Australia
- Nazanin Esmaili: School of Medicine, The University of Notre Dame Australia, Sydney, NSW 2007, Australia; Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia
- Catherine M Jones: Annalise.ai, Sydney, NSW 2000, Australia; I-MED Radiology Network, Brisbane, QLD 4006, Australia; School of Public and Preventive Health, Monash University, Clayton, VIC 3800, Australia; Department of Clinical Imaging Science, University of Sydney, Sydney, NSW 2006, Australia
10. Hsu CC, Ameri R, Lin CW, He JS, Biyari M, Yarahmadi A, Band SS, Lin TK, Fan WL. A robust approach for endotracheal tube localization in chest radiographs. Front Artif Intell 2023; 6:1181812. PMID: 37251274; PMCID: PMC10219610; DOI: 10.3389/frai.2023.1181812.
Abstract
Precise detection and localization of the endotracheal tube (ETT) on chest radiographs is essential for intubated patients. A robust deep learning model based on the U-Net++ architecture is presented for accurate segmentation and localization of the ETT. Distribution-based and region-based loss functions are evaluated in this paper, and various combinations of the two (compound loss functions) are then applied to obtain the best intersection over union (IOU) for ETT segmentation. The main purpose of the study is to maximize the IOU for ETT segmentation and to minimize the error that must be considered when calculating the distance between the real and predicted ETT, by finding the best combination of distribution-based and region-based loss functions for training the U-Net++ model. We analyzed the performance of our model using chest radiographs from the Dalin Tzu Chi Hospital in Taiwan. Applying combined distribution-based and region-based loss functions to the Dalin Tzu Chi Hospital dataset showed enhanced segmentation performance compared with single loss functions. Moreover, the combination of the Matthews correlation coefficient (MCC) and Tversky loss functions, a hybrid loss, showed the best ETT segmentation performance against the ground truth, with an IOU of 0.8683.
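The IOU metric being maximized, and a representative region-based loss such as the Tversky loss, can be sketched in a few lines. This illustrates the general formulas only, not the paper's implementation (which also mixes in an MCC term with a weighting the abstract does not specify):

```python
def iou(pred, truth):
    """Intersection over union for flat binary masks."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    return inter / union if union else 1.0

def tversky_loss(p, g, alpha=0.5, beta=0.5, eps=1e-7):
    """Region-based Tversky loss on soft predictions p against binary
    ground truth g; alpha/beta weight FPs/FNs (0.5/0.5 = soft Dice)."""
    tp = sum(pi * gi for pi, gi in zip(p, g))
    fp = sum(pi * (1 - gi) for pi, gi in zip(p, g))
    fn = sum((1 - pi) * gi for pi, gi in zip(p, g))
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)
```

A compound loss is then typically a weighted sum of a distribution-based term (e.g. cross-entropy) and a region-based term like the one above.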
Affiliation(s)
- Chung-Chian Hsu: Department of Information Management, International Graduate School of Artificial Intelligence, National Yunlin University of Science and Technology, Douliu, Taiwan
- Rasoul Ameri: Department of Information Management, National Yunlin University of Science and Technology, Douliu, Taiwan
- Chih-Wen Lin: Buddhist Dalin Tzu Chi Hospital, Chiayi, Taiwan; School of Medicine, Tzu Chi University, Hualien, Taiwan
- Meghdad Biyari: Department of Information Management, National Yunlin University of Science and Technology, Douliu, Taiwan
- Atefeh Yarahmadi: Department of Information Management, National Yunlin University of Science and Technology, Douliu, Taiwan
- Shahab S. Band: Future Technology Research Center, National Yunlin University of Science and Technology, Douliu, Taiwan; International Graduate School of Artificial Intelligence, National Yunlin University of Science and Technology, Douliu, Taiwan
- Tin-Kwang Lin: Buddhist Dalin Tzu Chi Hospital, Chiayi, Taiwan; School of Medicine, Tzu Chi University, Hualien, Taiwan
- Wen-Lin Fan: Buddhist Dalin Tzu Chi Hospital, Chiayi, Taiwan; School of Medicine, Tzu Chi University, Hualien, Taiwan
11. Brown MS, Wong KP, Shrestha L, Wahi-Anwar M, Daly M, Foster G, Abtin F, Ruchalski KL, Goldin JG, Enzmann D. Automated Endotracheal Tube Placement Check Using Semantically Embedded Deep Neural Networks. Acad Radiol 2023; 30:412-420. PMID: 35644754; DOI: 10.1016/j.acra.2022.04.022.
Abstract
RATIONALE AND OBJECTIVES To develop an artificial intelligence (AI) system that assists in checking endotracheal tube (ETT) placement on chest X-rays (CXRs) and to evaluate whether it can move into clinical validation as a quality improvement tool. MATERIALS AND METHODS A retrospective dataset of 2000 de-identified images from intensive care unit patients was split into 1488 for training and 512 for testing. The AI was developed to automatically identify the ETT, trachea, and carina using semantically embedded neural networks that combine a declarative knowledge base with deep neural networks. To check ETT tip placement, a "safe zone" was computed as the region inside the trachea and 3-7 cm above the carina. Two AI outputs were evaluated: (1) an ETT overlay and (2) ETT misplacement alert messages. Clinically relevant performance metrics were compared against prespecified thresholds of >85% overlay accuracy, positive predictive value (PPV) > 30%, and negative predictive value (NPV) > 95% for alerts to move into clinical validation. RESULTS An ETT was present in 285 of 512 test cases. The AI detected 95% (271/285) of ETTs, 233 (86%) of these with accurate tip localization. The system correctly generated no ETT overlay in 221/227 CXRs where the tube was absent, for an overall overlay accuracy of 89% (454/512). The alert messages indicating that the ETT was either misplaced or not detected had a PPV of 83% (265/320) and an NPV of 98% (188/192). CONCLUSION The chest X-ray AI met the prespecified performance thresholds to move into clinical validation.
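The "safe zone" check described above (tip inside the trachea, 3-7 cm above the carina) reduces to a simple geometric rule once the anatomy has been localized. A simplified sketch; the function name, pixel-spacing handling, and alert strings are assumptions, and the real system sits on top of segmentation networks and a knowledge base:

```python
def ett_alert(tip, carina, trachea_mask, px_per_cm, zone=(3.0, 7.0)):
    """Return an alert string if the ETT tip (x, y) falls outside the
    safe zone (inside the trachea, 3-7 cm above the carina), else None."""
    (tx, ty), (_, cy) = tip, carina
    if not trachea_mask[ty][tx]:        # mask indexed [row][col]
        return "ETT tip outside trachea"
    height_cm = (cy - ty) / px_per_cm   # image rows increase downward
    lo, hi = zone
    if height_cm < lo:
        return f"ETT tip too low (<{lo:g} cm above carina)"
    if height_cm > hi:
        return f"ETT tip too high (>{hi:g} cm above carina)"
    return None                         # placement within safe zone
```

For example, with 10 px/cm, a tip 50 rows above the carina (5 cm) raises no alert, while a tip 5 rows above it (0.5 cm) triggers the "too low" message.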
Affiliation(s)
- Matthew S Brown, Koon-Pong Wong, Liza Shrestha, Muhammad Wahi-Anwar, Morgan Daly, George Foster, Fereidoun Abtin, Kathleen L Ruchalski, Jonathan G Goldin, Dieter Enzmann
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024

12
van Huijgevoort NCM, Hoogenboom SAM, Lekkerkerker SJ, Busch OR, Del Chiaro M, Fockens P, Somers I, Verheij J, Voermans RP, Besselink MG, van Hooft JE. Diagnostic accuracy of the AGA, IAP, and European guidelines for detecting advanced neoplasia in intraductal papillary mucinous neoplasm/neoplasia. Pancreatology 2023; 23:251-257. [PMID: 36805049] [DOI: 10.1016/j.pan.2023.01.011]
Abstract
BACKGROUND Follow-up in patients with intraductal papillary mucinous neoplasm (IPMN) aims to detect advanced neoplasia (high-grade dysplasia/cancer) at an early stage. The 2015 American Gastroenterological Association (AGA), 2017 International Association of Pancreatology (IAP), and 2018 European Study Group on Cystic Tumours of the Pancreas (European) guidelines differ in their recommendations on indications for surgery, and it remains unclear which guideline is most accurate in predicting advanced neoplasia in IPMN. METHODS Patients who underwent surgery were extracted from a prospective database (January 2006-January 2021). In patients with IPMN, final pathology was compared with the indication for surgery according to each guideline, and ROC curves were calculated to determine diagnostic accuracy. RESULTS Overall, 247 patients underwent surgery for cystic lesions. Of 145 patients with IPMN, 52 had advanced neoplasia, in whom the AGA guideline would have advised surgery in 14 (27%), and the IAP and European guidelines in 49 (94%) and 50 (96%), respectively. In the 93 patients without advanced neoplasia, the AGA, IAP, and European guidelines would incorrectly have advised surgery in 8 (8.6%), 77 (83%), and 71 (76%). CONCLUSION The European and IAP guidelines are clearly superior to the AGA guideline in detecting advanced neoplasia in IPMN, albeit at the cost of a higher rate of unnecessary surgery. To harmonize care and avoid confusion caused by conflicting statements, a global evidence-based guideline for PCN, developed in collaboration with the various guideline groups, is required when the current guidelines are next updated.
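Treating "surgery advised" as a positive test for advanced neoplasia, the per-guideline sensitivity and specificity follow directly from the counts above. A short sketch (counts from the abstract; the function name is illustrative):

```python
def sens_spec(advised_with_an, total_an, advised_without_an, total_no_an):
    """Sensitivity/specificity when 'surgery advised' is the positive test."""
    sensitivity = advised_with_an / total_an
    specificity = (total_no_an - advised_without_an) / total_no_an
    return sensitivity, specificity

# 52 patients with advanced neoplasia, 93 without.
for name, hits, false_alarms in [("AGA", 14, 8), ("IAP", 49, 77), ("European", 50, 71)]:
    sens, spec = sens_spec(hits, 52, false_alarms, 93)
    # AGA: sensitivity 27%, specificity 91%; IAP: 94%, 17%; European: 96%, 24%
    print(f"{name}: sensitivity {sens:.0%}, specificity {spec:.0%}")
```

This makes the trade-off in the conclusion explicit: the IAP and European guidelines buy high sensitivity with very low specificity.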
Affiliation(s)
- Nadine C M van Huijgevoort, Sanne A M Hoogenboom, Selma J Lekkerkerker, Paul Fockens, Rogier P Voermans
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology and Metabolism, Amsterdam UMC, University of Amsterdam, the Netherlands
- Olivier R Busch, Marc G Besselink
- Department of Surgery, Cancer Center Amsterdam, Amsterdam UMC, University of Amsterdam, the Netherlands
- Marco Del Chiaro
- Department of Surgical Oncology, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- Inne Somers
- Department of Radiology, Amsterdam UMC, University of Amsterdam, the Netherlands; Department of Radiology, Meander Medical Center, Amersfoort, the Netherlands
- Joanne Verheij
- Department of Pathology, Cancer Center Amsterdam, Amsterdam UMC, University of Amsterdam, the Netherlands
- Jeanin E van Hooft
- Department of Gastroenterology and Hepatology, Leiden University Medical Center, Leiden, the Netherlands

13
Elaanba A, Ridouani M, Hassouni L. A Stacked Generalization Chest-X-Ray-Based Framework for Mispositioned Medical Tubes and Catheters Detection. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104111]
14
Choe J, Lee SM, Hwang HJ, Lee SM, Yun J, Kim N, Seo JB. Artificial Intelligence in Lung Imaging. Semin Respir Crit Care Med 2022; 43:946-960. [PMID: 36174647] [DOI: 10.1055/s-0042-1755571]
Abstract
Interest in and advances of artificial intelligence (AI), including deep learning for medical images, have recently surged. As imaging plays a major role in the assessment of pulmonary diseases, various AI algorithms have been developed for chest imaging; some have been approved by regulators and are now commercially available. In chest radiology, several tasks and purposes are suitable for AI: initial evaluation/triage of certain diseases, detection and diagnosis, quantitative assessment of disease severity and monitoring, and prediction for decision support. While AI is a powerful technology that can be applied to medical imaging and is expected to improve current clinical practice, some obstacles must be addressed for its successful implementation in workflows. Understanding the current status, potential clinical applications, and remaining challenges of AI in chest imaging is essential for radiologists and clinicians in the era of AI. This review introduces potential clinical applications of AI in chest imaging and discusses the challenges of implementing AI in daily clinical practice, as well as future directions.
Affiliation(s)
- Jooae Choe, Sang Min Lee, Hye Jeon Hwang, Sang Min Lee, Jihye Yun, Joon Beom Seo
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Namkug Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea; Department of Convergence Medicine, Biomedical Engineering Research Center, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea

15
Mallon DH, McNamara CD, Rahmani GS, O'Regan DP, Amiras DG. Automated detection of enteric tubes misplaced in the respiratory tract on chest radiographs using deep learning with two centre validation. Clin Radiol 2022; 77:e758-e764. [PMID: 35850868] [DOI: 10.1016/j.crad.2022.06.011]
Abstract
AIM To develop and test a convolutional neural network model that can accurately identify enteric tube position on chest radiography. MATERIALS AND METHODS Chest radiographs of adult patients were classified by radiologists based on enteric tube position as either critically misplaced (within the respiratory tract) or not critically misplaced (misplaced within the oesophagus or safely positioned below the diaphragm). A deep-learning model based on the 121-layer DenseNet architecture was developed using a training and validation set of 4,693 chest radiographs. The model was evaluated on an external test data set from a separate institution consisting of 1,514 consecutive radiographs with a real-world incidence of critically misplaced enteric tubes. RESULTS The receiver operating characteristic area under the curve was 0.90 and 0.92 for the internal validation and external test data sets, respectively. On the external data set, with a 4.4% prevalence of critically misplaced enteric tubes, the model achieved high accuracy (92%), sensitivity (80%), and specificity (92%) for identifying a critically misplaced enteric tube. The negative predictive value (99%) was higher than the positive predictive value (32%). CONCLUSION The present study describes the development and external testing of a model that accurately identifies an enteric tube misplaced within the respiratory tract. This model could help reduce the risk of the catastrophic consequences of feeding through a misplaced enteric tube.
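The gap between the 32% PPV and the 99% NPV reported above is a direct consequence of the 4.4% prevalence: at fixed sensitivity and specificity, predictive values follow from Bayes' rule. A minimal sketch using the abstract's figures (the function name is illustrative):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV/NPV of a test at a given prevalence (Bayes' rule on expected rates)."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    tn = specificity * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(sensitivity=0.80, specificity=0.92, prevalence=0.044)
print(round(ppv, 2), round(npv, 2))  # 0.32 and 0.99, matching the reported values
```

The same model evaluated on a higher-prevalence population would show a substantially higher PPV, which is why testing at a real-world incidence matters.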
Affiliation(s)
- D H Mallon, D P O'Regan
- Imperial College Healthcare NHS Trust, London, UK; MRC London Institute of Medical Sciences, Imperial College London, London, UK
- C D McNamara, D G Amiras
- Imperial College Healthcare NHS Trust, London, UK
- G S Rahmani
- Galway University Hospitals, Galway, Ireland

16
Moukheiber D, Mahindre S, Moukheiber L, Moukheiber M, Wang S, Ma C, Shih G, Peng Y, Gao M. Few-Shot Learning Geometric Ensemble for Multi-label Classification of Chest X-Rays. In: Data Augmentation, Labelling, and Imperfections: Second MICCAI Workshop, DALI 2022, Held in Conjunction with MICCAI 2022, Singapore, September 22, 2022, Proceedings. 2022; 13567:112-122. [PMID: 36383493] [PMCID: PMC9652771] [DOI: 10.1007/978-3-031-17027-0_12]
Abstract
This paper aims to identify uncommon cardiothoracic diseases and patterns on chest X-ray images. Training a machine learning model to classify rare diseases with multi-label indications is challenging without sufficient labeled training samples. Our model leverages information from common diseases and adapts to perform on less common mentions. We propose multi-label few-shot learning (FSL) schemes including a neighborhood component analysis loss, generation of additional samples using distribution calibration, and fine-tuning based on a multi-label classification loss. We exploit the fact that widely adopted nearest-neighbor FSL schemes such as ProtoNet induce Voronoi diagrams in feature space; the Voronoi diagrams generated by the multi-label schemes are combined into our geometric DeepVoro Multi-label ensemble. Our experiments demonstrate the improved multi-label few-shot classification performance of this ensemble (the code is publicly available at https://github.com/Saurabh7/Few-shot-learning-multilabel-cxray).
Affiliation(s)
- Saurabh Mahindre, Chunwei Ma, Mingchen Gao
- University at Buffalo, The State University of New York, Buffalo, NY, USA
- Song Wang
- The University of Texas at Austin, Austin, TX, USA
- Yifan Peng
- Weill Cornell Medicine, New York, NY, USA

17
Position Classification of the Endotracheal Tube with Automatic Segmentation of the Trachea and the Tube on Plain Chest Radiography Using Deep Convolutional Neural Network. J Pers Med 2022; 12:1363. [PMID: 36143148] [PMCID: PMC9503144] [DOI: 10.3390/jpm12091363]
Abstract
Background: This study aimed to develop an algorithm for multilabel classification of endotracheal tube (ETT) position according to the distance from the carina to the ETT tip (absence; shallow, > 70 mm; proper, 30-70 mm; deep, < 30 mm), with automatic segmentation of the trachea and the ETT on chest radiographs using a deep convolutional neural network (CNN). Methods: In this retrospective study of plain chest radiographs, we segmented the trachea and the ETT on the images and labeled each ETT position class. We propose models for ETT position classification using EfficientNet B0, with automatic segmentation using Mask R-CNN and ResNet50. Primary outcomes were segmentation and four-label classification performance, assessed by five-fold validation on segmented images and a test on non-segmented images. Results: Of 1985 images, 596 were manually segmented, comprising 298 absence, 97 shallow, 100 proper, and 101 deep images according to ETT position. In five-fold validation on segmented images, Dice coefficients [mean (SD)] between segmented and predicted masks were 0.841 (0.063) for the trachea and 0.893 (0.078) for the ETT, and accuracy for four-label classification was 0.945 (0.017). In the test on 1389 non-segmented images, overall values were 0.922 for accuracy, 0.843 for precision, 0.843 for sensitivity, 0.922 for specificity, and 0.843 for F1-score. Conclusions: Automatic segmentation of the ETT and trachea and classification of ETT position using a deep CNN on plain chest radiographs achieved good performance and could help physicians decide the appropriateness of ETT depth.
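Once the segmentation stage has yielded a tip-to-carina distance, the four-label rule stated in the abstract is a simple threshold cascade. A sketch of that rule (assuming the distance in mm is already measured upstream; the function name is illustrative):

```python
from typing import Optional

def classify_ett(distance_mm: Optional[float]) -> str:
    """Label ETT tip position by distance above the carina, per the study's cut-offs."""
    if distance_mm is None:   # no tube detected on the film
        return "absence"
    if distance_mm > 70:      # shallow: > 70 mm above the carina
        return "shallow"
    if distance_mm >= 30:     # proper: 30 mm <= distance <= 70 mm
        return "proper"
    return "deep"             # deep: < 30 mm

print([classify_ett(d) for d in (None, 85.0, 50.0, 12.0)])
# ['absence', 'shallow', 'proper', 'deep']
```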
18
Detecting Endotracheal Tube and Carina on Portable Supine Chest Radiographs Using One-Stage Detector with a Coarse-to-Fine Attention. Diagnostics (Basel) 2022; 12:1913. [PMID: 36010263] [PMCID: PMC9406505] [DOI: 10.3390/diagnostics12081913]
Abstract
In intensive care units (ICUs), the position of the endotracheal tube (ETT) should be checked after intubation to avoid complications. Malposition can be detected from the distance between the ETT tip and the carina (the ETT–Carina distance). Automatic detection, however, faces two major difficulties: occlusion by external devices, and variability in patient posture and in the equipment used to take chest radiographs. Previous studies that addressed these problems still required manual intervention. The purpose of this paper is therefore to locate the ETT tip and the carina more accurately and to detect malposition without manual intervention. The proposed architecture combines FCOS (Fully Convolutional One-Stage Object Detection), an attention mechanism named Coarse-to-Fine Attention (CTFA), and a segmentation branch; a post-processing algorithm then selects the final locations of the ETT tip and the carina. Three metrics were used to evaluate performance. On the dataset provided by National Cheng Kung University Hospital, the proposed method detects malposition with an accuracy of 88.82%, and the ETT–Carina distance errors are less than 5.333 ± 6.240 mm.
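Given detector outputs for the ETT tip and carina in pixel coordinates, the ETT–Carina distance used above is just a Euclidean distance scaled by the pixel spacing. A minimal sketch (the function name and the coordinate/spacing values are illustrative, not from the paper):

```python
import math

def ett_carina_distance_mm(tip_xy, carina_xy, mm_per_pixel):
    """Euclidean ETT-tip-to-carina distance, converted from pixels to millimetres."""
    dx = tip_xy[0] - carina_xy[0]
    dy = tip_xy[1] - carina_xy[1]
    return math.hypot(dx, dy) * mm_per_pixel

# Example: detector outputs in pixel coordinates, 0.14 mm/pixel spacing.
d = ett_carina_distance_mm((512, 600), (540, 880), 0.14)
print(round(d, 1))  # 39.4
```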
19
Fang Z, Ye B, Yuan B, Wang T, Zhong S, Li S, Zheng J. Angle prediction model when the imaging plane is tilted about z-axis. J Supercomput 2022; 78:18598-18615. [PMID: 35692867] [PMCID: PMC9175174] [DOI: 10.1007/s11227-022-04595-0]
Abstract
Computed tomography (CT) is a complicated imaging system that requires highly accurate geometric positioning. We found a distinctive artifact caused by the detection plane being tilted around the z-axis. In short-scan cone-beam reconstruction, this geometric deviation results in half-circle-shaped blurring around highlighted particles in the reconstructed slices. The artifact is pronounced near the slice periphery but largely absent around the slice center. We built mathematical models and an InceptionV3-R deep network to learn the artifact features and estimate the detector z-axis tilt angle. Test results were a mean absolute error of 0.08819 degrees, a root mean square error of 0.15221 degrees, and an R-squared of 0.99944. We also derived a geometric deviation recovery formula that eliminates this artifact efficiently. This research enlarges the CT artifact knowledge base and verifies the capability of machine learning in recovering from CT geometric deviation artifacts.
Affiliation(s)
- Zheng Fang, Bichao Ye, Bingan Yuan, Tingjun Wang, Shuo Zhong, Jianyi Zheng
- School of Aerospace Engineering, Xiamen University, Xiamen, 361102, China
- Shunren Li
- ASR Technology (Xiamen) Co., Ltd, Xiamen, China

20
Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: a literature review. BMC Med Imaging 2022; 22:69. [PMID: 35418051] [PMCID: PMC9007400] [DOI: 10.1186/s12880-022-00793-7]
Abstract
BACKGROUND Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging knowledge of similar tasks learned in advance. It has made a major contribution to medical image analysis, as it overcomes the data scarcity problem and saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review attempts to provide guidance for selecting a model and TL approach for medical image classification tasks. METHODS 425 peer-reviewed articles published in English up until December 31, 2020 were retrieved from two databases, PubMed and Web of Science. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for paper selection, and 121 studies were deemed eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches, including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch. RESULTS The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow models (n = 24). Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The rest applied only a single approach, of which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored. Only a few studies applied feature extractor hybrid (n = 7) or fine-tuning (n = 3) with pretrained models. CONCLUSION The investigated studies demonstrated the efficacy of transfer learning despite data scarcity. We encourage data scientists and practitioners to use deep models (e.g. ResNet or Inception) as feature extractors, which can save computational costs and time without degrading predictive power.
Affiliation(s)
- Hee E Kim, Alejandro Cosa-Linan, Nandhini Santhanam, Mahboubeh Jannesari, Mate E Maros
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Thomas Ganslandt
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany; Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 15, 91058, Erlangen, Germany

21
Overview of Deep Learning Models in Biomedical Domain with the Help of R Statistical Software. Serbian Journal of Experimental and Clinical Research 2022. [DOI: 10.2478/sjecr-2018-0063]
Abstract
With the increasing volume of data and the presence of both structured and unstructured data in the biomedical field, there is a need for models that can handle complex, non-linear relations in the data and predict and classify outcomes with high accuracy. Deep learning models can handle such data and have been used increasingly in the biomedical field in recent years. Deep learning evolved from artificial neural networks that process input data through multiple hidden layers at higher levels of abstraction. Deep learning networks are used in various fields such as image processing, speech recognition, fraud detection, classification, and prediction. The objective of this paper is to provide an overview of deep learning models and their application in the biomedical domain using the R statistical software. Deep learning concepts are illustrated using the R statistical package, with X-ray images from NIH datasets used to examine the predictive accuracy of the models, which classified images into normal and disease with 91% accuracy.
22
AI MSK clinical applications: orthopedic implants. Skeletal Radiol 2022; 51:305-313. [PMID: 34350476] [DOI: 10.1007/s00256-021-03879-5]
Abstract
Artificial intelligence (AI) and deep learning have multiple potential uses in aiding the musculoskeletal radiologist in the radiological evaluation of orthopedic implants. These include identification of implants, characterization of implants according to anatomic type, identification of specific implant models, and evaluation of implants for positioning and complications. In addition, natural language processing (NLP) can aid in the acquisition of clinical information from the medical record that can help with tasks like prepopulating radiology reports. Several proof-of-concept works have been published in the literature describing the application of deep learning toward these various tasks, with performance comparable to that of expert musculoskeletal radiologists. Although much work remains to bring these proof-of-concept algorithms into clinical deployment, AI has tremendous potential toward automating these tasks, thereby augmenting the musculoskeletal radiologist.
23
Current and emerging artificial intelligence applications in chest imaging: a pediatric perspective. Pediatr Radiol 2022; 52:2120-2130. [PMID: 34471961] [PMCID: PMC8409695] [DOI: 10.1007/s00247-021-05146-0]
Abstract
Artificial intelligence (AI) applications for chest radiography and chest CT are among the most developed applications in radiology. More than 40 certified AI products are available for chest radiography or chest CT. These AI products cover a wide range of abnormalities, including pneumonia, pneumothorax and lung cancer. Most applications are aimed at detecting disease, complemented by products that characterize or quantify tissue. At present, none of the thoracic AI products is specifically designed for the pediatric population. However, some products developed to detect tuberculosis in adults are also applicable to children. Software is under development to detect early changes of cystic fibrosis on chest CT, which could be an interesting application for pediatric radiology. In this review, we give an overview of current AI products in thoracic radiology and cover recent literature about AI in chest radiography, with a focus on pediatric radiology. We also discuss possible pediatric applications.
24
Zeng K, Hua Y, Xu J, Zhang T, Wang Z, Jiang Y, Han J, Yang M, Shen J, Cai Z. Multicentre Study Using Machine Learning Methods in Clinical Diagnosis of Knee Osteoarthritis. J Healthc Eng 2021; 2021:1765404. [PMID: 34900177] [PMCID: PMC8664510] [DOI: 10.1155/2021/1765404]
Abstract
Knee osteoarthritis (OA) is one of the most common musculoskeletal disorders. OA is currently diagnosed by assessing symptoms and evaluating plain radiographs, a process subject to the subjectivity of doctors. In this study, we retrospectively compared five commonly used machine learning methods, in particular the CNN, for predicting the Kellgren-Lawrence (K-L) grade of knee OA from real-world X-ray images collected at two different hospitals, to help doctors choose appropriate auxiliary tools. Furthermore, we present CNN attention maps that highlight the radiological features driving the network's decision. Such information makes the decision process transparent for practitioners, builds trust in automatic methods, and reduces the workload of clinicians, especially in remote areas without enough medical staff.
Collapse
Affiliation(s)
- Ke Zeng
- Department of Orthopedics, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai Bone Tumor Institution, Shanghai 200080, China
- Department of Orthopedics, Wuxi No. 2 People's Hospital, Nanjing Medical University, Wuxi, Jiangsu 214000, China
- Yingqi Hua
- Department of Orthopedics, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai Bone Tumor Institution, Shanghai 200080, China
- Jing Xu
- Department of Orthopedics, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai Bone Tumor Institution, Shanghai 200080, China
- Tao Zhang
- Department of Orthopedics, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai Bone Tumor Institution, Shanghai 200080, China
- Zhuoying Wang
- Department of Orthopedics, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai Bone Tumor Institution, Shanghai 200080, China
- Yafei Jiang
- Department of Orthopedics, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai Bone Tumor Institution, Shanghai 200080, China
- Jing Han
- Department of Orthopedics, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai Bone Tumor Institution, Shanghai 200080, China
- Mengkai Yang
- Department of Orthopedics, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai Bone Tumor Institution, Shanghai 200080, China
- Jiakang Shen
- Department of Orthopedics, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai Bone Tumor Institution, Shanghai 200080, China
- Zhengdong Cai
- Department of Orthopedics, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai Bone Tumor Institution, Shanghai 200080, China
25
Vieira PA, Magalhães DMV, Carvalho-Filho AO, Veras RMS, Rabêlo RAL, Silva RRV. Classification of COVID-19 in X-ray images with Genetic Fine-tuning. COMPUTERS & ELECTRICAL ENGINEERING : AN INTERNATIONAL JOURNAL 2021; 96:107467. [PMID: 34584299 PMCID: PMC8461268 DOI: 10.1016/j.compeleceng.2021.107467] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/26/2020] [Revised: 08/27/2021] [Accepted: 09/16/2021] [Indexed: 05/06/2023]
Abstract
New, more transmissible SARS-CoV-2 variants have aggravated the pandemic. Lung X-ray images stand out as an alternative to support case screening. The latest computer-aided diagnosis systems use Deep Learning (DL) to detect pulmonary diseases. In this context, our work investigates the detection of different types of pneumonia, including COVID-19, based on X-ray image processing and DL techniques. Our methodology comprises a pre-processing step including data augmentation, contrast enhancement, and a resizing method to overcome the challenge of heterogeneous and scarce public datasets. Additionally, we propose a new Genetic Fine-Tuning method to automatically define an optimal set of hyper-parameters for the ResNet50 and VGG16 architectures. Our results are encouraging; we achieve an accuracy of 97% over three classes: COVID-19, other pneumonia, and healthy. Thus, our methodology could assist in classifying COVID-19 pneumonia, reducing costs by making the process faster and more efficient.
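The Genetic Fine-Tuning idea, evolving a population of hyper-parameter settings through selection, crossover, and mutation, can be sketched as below. The search space, the toy fitness function, and all names are illustrative stand-ins; in the paper the fitness would be the validation accuracy of a fine-tuned ResNet50 or VGG16:

```python
import random

# Hypothetical search space; values are illustrative, not the paper's grid.
SPACE = {"lr": [1e-4, 1e-3, 1e-2], "batch": [16, 32, 64], "frozen_layers": [0, 5, 10]}

def fitness(cfg):
    # Stand-in for validation accuracy of a fine-tuned network;
    # deterministic toy score so the sketch runs without any training.
    return (-abs(cfg["lr"] - 1e-3) * 100
            - abs(cfg["batch"] - 32) / 32
            - cfg["frozen_layers"] / 10)

def random_cfg():
    return {k: random.choice(v) for k, v in SPACE.items()}

def crossover(a, b):
    # Child inherits each hyper-parameter from one parent at random
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(cfg, rate=0.2):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in cfg.items()}

def genetic_search(pop_size=8, generations=10, seed=0):
    random.seed(seed)
    pop = [random_cfg() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_search()
```

Each "individual" is one hyper-parameter configuration; replacing the toy `fitness` with a short fine-tuning run and validation pass yields the full method.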
Affiliation(s)
- Pablo A Vieira
- Electrical Engineering, Federal University of Piauí, Teresina, Brazil
- Deborah M V Magalhães
- Electrical Engineering, Federal University of Piauí, Teresina, Brazil
- Information Systems, Federal University of Piauí, Picos, Brazil
- Antonio O Carvalho-Filho
- Electrical Engineering, Federal University of Piauí, Teresina, Brazil
- Information Systems, Federal University of Piauí, Picos, Brazil
- Ricardo A L Rabêlo
- Electrical Engineering, Federal University of Piauí, Teresina, Brazil
- Computer Science, Federal University of Piauí, Picos, Brazil
- Romuere R V Silva
- Electrical Engineering, Federal University of Piauí, Teresina, Brazil
- Information Systems, Federal University of Piauí, Picos, Brazil
- Computer Science, Federal University of Piauí, Picos, Brazil
26
Vieira P, Sousa O, Magalhães D, Rabêlo R, Silva R. Detecting pulmonary diseases using deep features in X-ray images. PATTERN RECOGNITION 2021; 119:108081. [PMID: 34149099 PMCID: PMC8193974 DOI: 10.1016/j.patcog.2021.108081] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/19/2021] [Revised: 04/29/2021] [Accepted: 05/27/2021] [Indexed: 05/09/2023]
Abstract
COVID-19 leads to radiological evidence of lower respiratory tract lesions, which supports analysis to screen this disease using chest X-ray. In this scenario, deep learning techniques are applied to detect COVID-19 pneumonia in X-ray images, aiding a fast and precise diagnosis. Here, we investigate seven deep learning architectures associated with data augmentation and transfer learning techniques to detect different pneumonia types. We also propose an image resizing method with a maximum window function that preserves the anatomical structures of the chest. The results are promising, reaching an accuracy of 99.8% considering COVID-19, normal, and viral and bacterial pneumonia classes. The differentiation between viral pneumonia and COVID-19 achieved an accuracy of 99.8%, and 99.9% between COVID-19 and bacterial pneumonia. We also evaluated the impact of the proposed image resizing method on classification performance compared with bilinear interpolation; this pre-processing increased the classification rate regardless of the deep learning architecture used. We compared our results with ten related works in the state of the art across eight sets of experiments, and the proposed method outperformed them in most cases. Therefore, we demonstrate that deep learning models trained with pre-processed X-ray images could precisely assist the specialist in COVID-19 detection.
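Aspect-ratio-preserving resizing of the kind described, padding the image out to its maximum dimension (the "maximum window") before downscaling so anatomy is not stretched, might look like the sketch below; the paper's exact windowing rule may differ:

```python
import numpy as np

def pad_to_square(img, fill=0):
    """Pad the shorter axis so the image becomes square before resizing.
    A sketch of aspect-ratio-preserving pre-processing; the paper's exact
    'maximum window' rule may differ from simple centered zero-padding."""
    h, w = img.shape[:2]
    side = max(h, w)                  # the maximum window dimension
    out = np.full((side, side) + img.shape[2:], fill, dtype=img.dtype)
    top = (side - h) // 2             # center the original content
    left = (side - w) // 2
    out[top:top + h, left:left + w] = img
    return out

x = np.ones((3, 5), dtype=np.uint8)   # a wide 3x5 "radiograph"
sq = pad_to_square(x)                 # 5x5, original pixels preserved
```

Downscaling the padded square (e.g. to the CNN input size) then shrinks both axes by the same factor, unlike direct bilinear resizing of a non-square image.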
Affiliation(s)
- Pablo Vieira
- Electrical Engineering Department, Federal University of Piauí, Picos, Brazil
- Development and Research, Maida.Health, Piauí, Teresina, Brazil
- Orrana Sousa
- Electrical Engineering Department, Federal University of Piauí, Picos, Brazil
- Deborah Magalhães
- Information Systems Department, Federal University of Piauí, Picos, Brazil
- Ricardo Rabêlo
- Computer Science Department, Federal University of Piauí, Teresina, Brazil
- Romuere Silva
- Electrical Engineering Department, Federal University of Piauí, Picos, Brazil
- Information Systems Department, Federal University of Piauí, Picos, Brazil
- Computer Science Department, Federal University of Piauí, Teresina, Brazil
27
Lee S, Summers RM. Clinical Artificial Intelligence Applications in Radiology: Chest and Abdomen. Radiol Clin North Am 2021; 59:987-1002. [PMID: 34689882 DOI: 10.1016/j.rcl.2021.07.001] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Organ segmentation, chest radiograph classification, and lung and liver nodule detection are some of the most popular artificial intelligence (AI) tasks in chest and abdominal radiology due to the wide availability of public datasets. AI algorithms have achieved performance comparable to humans in less time for several organ segmentation tasks and some lesion detection and classification tasks. This article reviews currently published work on AI applied to chest and abdominal radiology, including organ segmentation, lesion detection, classification, and prognosis prediction.
Affiliation(s)
- Sungwon Lee
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10, Room 1C224D, 10 Center Drive, Bethesda, MD 20892-1182, USA
- Ronald M Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10, Room 1C224D, 10 Center Drive, Bethesda, MD 20892-1182, USA
28
Yuan KC, Tsai LW, Lai KS, Teng ST, Lo YS, Peng SJ. Using Transfer Learning Method to Develop an Artificial Intelligence Assisted Triaging for Endotracheal Tube Position on Chest X-ray. Diagnostics (Basel) 2021; 11:diagnostics11101844. [PMID: 34679542 PMCID: PMC8534985 DOI: 10.3390/diagnostics11101844] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2021] [Revised: 09/21/2021] [Accepted: 09/28/2021] [Indexed: 11/16/2022] Open
Abstract
Endotracheal tubes (ETTs) provide a vital connection between the ventilator and patient; however, improper placement can hinder ventilation efficiency or injure the patient. Chest X-ray (CXR) is the most common approach to confirming ETT placement; however, technicians require considerable expertise in the interpretation of CXRs, and formal reports are often delayed. In this study, we developed an artificial intelligence-based triage system to enable the automated assessment of ETT placement in CXRs. Three intensivists performed a review of 4293 CXRs obtained from 2568 ICU patients. The CXRs were labeled "CORRECT" or "INCORRECT" in accordance with ETT placement. A region of interest (ROI) was also cropped out, including the bilateral head of the clavicle, the carina, and the tip of the ETT. Transfer learning was used to train four pre-trained models (VGG16, INCEPTION_V3, RESNET, and DENSENET169) and two models developed in the current study (VGG16_Tensor Projection Layer and CNN_Tensor Projection Layer) with the aim of differentiating the placement of ETTs. Only VGG16 based on ROI images presented acceptable performance (AUROC = 92%, F1 score = 0.87). The results obtained in this study demonstrate the feasibility of using the transfer learning method in the development of AI models by which to assess the placement of ETTs in CXRs.
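The ROI-cropping step, boxing the bilateral clavicle heads, carina, and ETT tip before classification, can be sketched as a landmark-driven crop. The coordinates, margin, and helper name below are illustrative; in the study the ROI was defined by the annotating intensivists:

```python
def roi_from_landmarks(points, margin, img_w, img_h):
    """Bounding box enclosing anatomical landmarks (e.g. clavicle heads,
    carina, ETT tip), expanded by a margin and clipped to the image.
    Landmark coordinates would come from annotation or a detector."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0 = max(min(xs) - margin, 0)
    y0 = max(min(ys) - margin, 0)
    x1 = min(max(xs) + margin, img_w)
    y1 = min(max(ys) + margin, img_h)
    return x0, y0, x1, y1

# clavicle heads, carina, ETT tip (pixel coordinates, illustrative only)
pts = [(100, 80), (300, 85), (200, 260), (205, 240)]
box = roi_from_landmarks(pts, margin=20, img_w=512, img_h=512)  # (80, 60, 320, 280)
```

Cropping to this box before classification focuses the network on the region where tube position is judged, which is consistent with the ROI-based VGG16 outperforming whole-image inputs.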
Affiliation(s)
- Kuo-Ching Yuan
- Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei 10675, Taiwan
- Department of Surgery, DA CHIEN General Hospital, Miaoli 36052, Taiwan
- Lung-Wen Tsai
- Department of Medicine Education, Taipei Medical University Hospital, Taipei 110301, Taiwan
- Kevin S. Lai
- Division of Critical Care Medicine, Department of Emergency and Critical Care Medicine, Taipei Medical University Hospital, Taipei 110301, Taiwan
- Sing-Teck Teng
- Division of Critical Care Medicine, Department of Emergency and Critical Care Medicine, Taipei Medical University Hospital, Taipei 110301, Taiwan
- Yu-Sheng Lo
- Institute of Biomedical Informatics, Taipei Medical University, Taipei 110301, Taiwan
- Correspondence: (Y.-S.L.); (S.-J.P.); Tel.: +886-2-66382736 (Y.-S.L. & S.-J.P.); Fax: +886-2-87320395 (Y.-S.L.); +886-2-27321956 (S.-J.P.)
- Syu-Jyun Peng
- Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei 10675, Taiwan
- Correspondence: (Y.-S.L.); (S.-J.P.); Tel.: +886-2-66382736 (Y.-S.L. & S.-J.P.); Fax: +886-2-87320395 (Y.-S.L.); +886-2-27321956 (S.-J.P.)
29
Shim JG, Ryu KH, Lee SH, Cho EA, Lee S, Ahn JH. Machine learning model for predicting the optimal depth of tracheal tube insertion in pediatric patients: A retrospective cohort study. PLoS One 2021; 16:e0257069. [PMID: 34473775 PMCID: PMC8412312 DOI: 10.1371/journal.pone.0257069] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2021] [Accepted: 07/17/2021] [Indexed: 12/01/2022] Open
Abstract
Objective To construct a prediction model for optimal tracheal tube depth in pediatric patients using machine learning. Methods Pediatric patients aged <7 years who received post-operative ventilation after undergoing surgery between January 2015 and December 2018 were investigated in this retrospective study. The optimal location of the tracheal tube was defined as the median of the distance between the upper margin of the first thoracic (T1) vertebral body and the lower margin of the third thoracic (T3) vertebral body. We applied four machine learning models: random forest, elastic net, support vector machine, and artificial neural network, and compared their prediction accuracy to three formula-based methods based on age, height, and tracheal tube internal diameter (ID). Results For each method, the percentage of optimal tracheal tube depth predictions in the test set was as follows: 79.0 (95% confidence interval [CI], 73.5 to 83.6) for random forest, 77.4 (95% CI, 71.8 to 82.2; P = 0.719) for elastic net, 77.0 (95% CI, 71.4 to 81.8; P = 0.486) for support vector machine, 76.6 (95% CI, 71.0 to 81.5; P = 1.0) for artificial neural network, 66.9 (95% CI, 60.9 to 72.5; P < 0.001) for the age-based formula, 58.5 (95% CI, 52.3 to 64.4; P < 0.001) for the tube ID-based formula, and 44.4 (95% CI, 38.3 to 50.6; P < 0.001) for the height-based formula. Conclusions In this study, the machine learning models predicted the optimal tracheal tube tip location for pediatric patients more accurately than the formula-based methods. Machine learning models using biometric variables may help clinicians make decisions regarding optimal tracheal tube depth in pediatric patients.
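For reference, the three formula-based baselines are usually quoted in the textbook forms below (the study's exact formulas may differ slightly; these are commonly cited pediatric rules, not taken from the paper itself):

```python
def depth_age(age_years):
    # Commonly cited oral-intubation rule: depth (cm) = age/2 + 12
    return age_years / 2 + 12

def depth_tube_id(internal_diameter_mm):
    # Tube-ID rule: depth (cm) = 3 x internal diameter (mm)
    return 3 * internal_diameter_mm

def depth_height(height_cm):
    # Height rule: depth (cm) = height/10 + 5
    return height_cm / 10 + 5

# A 4-year-old with a 4.5 mm ID tube and height 100 cm:
depths = (depth_age(4), depth_tube_id(4.5), depth_height(100))  # (14.0, 13.5, 15.0)
```

The machine learning models replace these single-variable rules with multivariate predictors over the same biometric inputs, which is where the reported accuracy gain comes from.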
Affiliation(s)
- Jae-Geum Shim
- Department of Anesthesiology and Pain Medicine, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Korea
- Kyoung-Ho Ryu
- Department of Anesthesiology and Pain Medicine, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Korea
- Sung Hyun Lee
- Department of Anesthesiology and Pain Medicine, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Korea
- Eun-Ah Cho
- Department of Anesthesiology and Pain Medicine, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Korea
- Sungho Lee
- Department of Anesthesiology and Pain Medicine, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Korea
- Jin Hee Ahn
- Department of Anesthesiology and Pain Medicine, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Korea
30
Liu TYA, Wei J, Zhu H, Subramanian PS, Myung D, Yi PH, Hui FK, Unberath M, Ting DSW, Miller NR. Detection of Optic Disc Abnormalities in Color Fundus Photographs Using Deep Learning. J Neuroophthalmol 2021; 41:368-374. [PMID: 34415271 PMCID: PMC10637344 DOI: 10.1097/wno.0000000000001358] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
BACKGROUND To date, deep learning-based detection of optic disc abnormalities in color fundus photographs has mostly been limited to the field of glaucoma. However, many life-threatening systemic and neurological conditions can manifest as optic disc abnormalities. In this study, we aimed to extend the application of deep learning (DL) in optic disc analyses to detect a spectrum of nonglaucomatous optic neuropathies. METHODS Using transfer learning, we trained a ResNet-152 deep convolutional neural network (DCNN) to distinguish between normal and abnormal optic discs in color fundus photographs (CFPs). Our training data set included 944 deidentified CFPs (abnormal 364; normal 580). Our testing data set included 151 deidentified CFPs (abnormal 71; normal 80). Both the training and testing data sets contained a wide range of optic disc abnormalities, including but not limited to ischemic optic neuropathy, atrophy, compressive optic neuropathy, hereditary optic neuropathy, hypoplasia, papilledema, and toxic optic neuropathy. The standard measures of performance (sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC)) were used for evaluation. RESULTS During the 10-fold cross-validation test, our DCNN for distinguishing between normal and abnormal optic discs achieved the following mean performance: AUC-ROC 0.99 (95% CI: 0.98-0.99), sensitivity 94% (95% CI: 91%-97%), and specificity 96% (95% CI: 93%-99%). When evaluated against the external testing data set, our model achieved the following mean performance: AUC-ROC 0.87, sensitivity 90%, and specificity 69%. CONCLUSION In summary, we have developed a deep learning algorithm that is capable of detecting a spectrum of optic disc abnormalities in color fundus photographs, with a focus on neuro-ophthalmological etiologies. As the next step, we plan to validate our algorithm prospectively as a focused screening tool in the emergency department, which if successful could be beneficial because current practice patterns and training predict a shortage of neuro-ophthalmologists, and of ophthalmologists in general, in the near future.
Affiliation(s)
- T Y Alvin Liu
- Department of Ophthalmology (TYAL, NRM), Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland; Department of Biomedical Engineering (JW), Johns Hopkins University, Baltimore, Maryland; Malone Center for Engineering in Healthcare (HZ, MU), Johns Hopkins University, Baltimore, Maryland; Department of Radiology (PHY, FKH), Johns Hopkins University, Baltimore, Maryland; Singapore Eye Research Institute (DSWT), Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore ; Department of Ophthalmology (PSS), University of Colorado School of Medicine, Aurora, Colorado; and Department of Ophthalmology (DM), Byers Eye Institute, Stanford University, Palo Alto, California
31
Hybrid Transfer Learning for Classification of Uterine Cervix Images for Cervical Cancer Screening. J Digit Imaging 2021; 33:619-631. [PMID: 31848896 DOI: 10.1007/s10278-019-00269-1] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022] Open
Abstract
Transfer learning using deep pre-trained convolutional neural networks is increasingly used to solve a large number of problems in the medical field. Despite being trained on images from an entirely different domain, these networks can adapt to solve problems in new domains. Transfer learning involves fine-tuning a pre-trained network with optimal values of hyperparameters such as learning rate, batch size, and number of training epochs. The training process identifies the features relevant to solving a specific problem, and adapting the pre-trained network to a different problem requires fine-tuning until such features are obtained. This is facilitated by the large number of filters in the convolutional layers of the pre-trained network. Only a few of these filters are useful for solving a problem in a different domain, while the rest are irrelevant and may even reduce the efficacy of the network. By minimizing the number of filters required, the efficiency of training the network can therefore be improved. In this study, we identify relevant filters using the pre-trained networks AlexNet and VGG-16 to detect cervical cancer from cervix images. This paper presents a novel hybrid transfer learning technique, in which a CNN is built and trained from scratch, with initial weights taken from only those filters identified as relevant in AlexNet and VGG-16. This study used 2198 cervix images, 1090 belonging to the negative class and 1108 to the positive class. Our experiment using hybrid transfer learning achieved an accuracy of 91.46%.
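The core idea, keeping only filters judged relevant and reusing their weights to initialize a smaller network, can be sketched with a simple relevance ranking. The L1-norm criterion below is a stand-in assumption; the paper identifies relevant filters from the pre-trained networks' behaviour on the target cervix images:

```python
import numpy as np

def select_relevant_filters(weights, k):
    """Rank convolutional filters by L1 norm and return indices of the
    top-k. A stand-in relevance criterion for illustration; the paper
    derives relevance from activations on the target-domain images."""
    norms = np.abs(weights).sum(axis=(1, 2, 3))   # one score per filter
    return np.argsort(norms)[::-1][:k]

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))                 # 8 filters, 3x3 kernels, 3 channels
top = select_relevant_filters(w, k=4)             # indices of the 4 strongest filters
new_first_layer = w[top]                          # initialize the smaller CNN with these
```

The selected weights seed the first layer of the from-scratch CNN, so training starts from transferred features while discarding the irrelevant filters.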
32
Kara S, Akers JY, Chang PD. Identification and Localization of Endotracheal Tube on Chest Radiographs Using a Cascaded Convolutional Neural Network Approach. J Digit Imaging 2021; 34:898-904. [PMID: 34027589 PMCID: PMC8455772 DOI: 10.1007/s10278-021-00463-0] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2020] [Revised: 03/02/2021] [Accepted: 05/06/2021] [Indexed: 11/27/2022] Open
Abstract
Rapid and accurate assessment of endotracheal tube (ETT) location is essential in the intensive care unit (ICU) setting, where timely identification of a mispositioned support device may prevent significant patient morbidity and mortality. This study proposes a series of deep learning-based algorithms which together iteratively identify and localize the position of an ETT relative to the carina on chest radiographs. Using the open-source MIMIC Chest X-Ray (MIMIC-CXR) dataset, a total of 16,000 patients were identified (8000 patients with an ETT and 8000 patients without an ETT). Three different convolutional neural network (CNN) algorithms were created. First, a regression loss function CNN was trained to estimate the coordinate location of the carina, which was then used to crop the original radiograph to the distal trachea and proximal bronchi. Second, a classifier CNN was trained using the cropped inputs to determine the presence or absence of an ETT. Finally, for radiographs containing an ETT, a third regression CNN was trained to both refine the coordinate location of the carina and identify the location of the distal ETT tip. Model accuracy was assessed by comparing the absolute distance of prediction and ground-truth coordinates as well as CNN predictions relative to measurements documented in original radiologic reports. Upon five-fold cross validation, binary classification for the presence or absence of ETT demonstrated an accuracy, sensitivity, specificity, PPV, NPV, and AUC of 97.14%, 97.37%, 96.89%, 97.12%, 97.15%, and 99.58% respectively. CNN predicted coordinate location of the carina, and distal ETT tip was estimated within a median error of 0.46 cm and 0.60 cm from ground-truth annotations respectively. Overall final CNN assessment of distance between the carina and distal ETT tip was predicted within a median error of 0.60 cm from manual ground-truth annotations, and a median error of 0.66 cm from measurements documented in the original radiology reports. A serial cascaded CNN approach demonstrates high accuracy for both identification and localization of ETT tip and carina on chest radiographs. High performance of the proposed multi-step strategy is in part related to iterative refinement of coordinate localization as well as explicit image cropping which focuses algorithm attention to key anatomic regions of interest.
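The cascade logic, coarse carina regression, ROI cropping, ETT classification, then refined localization, can be sketched with stub functions standing in for the three CNNs (all predictions below are hard-coded for illustration only):

```python
# Cascade sketch: stub callables stand in for the three CNNs described:
# (1) coarse carina regression -> crop, (2) ETT presence classifier,
# (3) refined carina + ETT-tip regression on the cropped region.

def crop(image, center, size):
    """Crop a size x size window around a (x, y) center, clipped at edges."""
    cx, cy = center
    half = size // 2
    return [row[max(cx - half, 0):cx + half]
            for row in image[max(cy - half, 0):cy + half]]

def cascade(image, coarse_carina_cnn, ett_classifier, refine_cnn, crop_size=4):
    carina = coarse_carina_cnn(image)          # stage 1: coarse (x, y) estimate
    roi = crop(image, carina, crop_size)       # focus on distal trachea / bronchi
    if not ett_classifier(roi):                # stage 2: is an ETT present?
        return {"ett": False}
    carina, tip = refine_cnn(roi)              # stage 3: refined coordinates
    return {"ett": True, "carina": carina, "tip": tip}

# Stub "models" returning fixed predictions
img = [[0] * 10 for _ in range(10)]
out = cascade(img,
              coarse_carina_cnn=lambda im: (5, 5),
              ett_classifier=lambda roi: True,
              refine_cnn=lambda roi: ((5, 6), (5, 3)))
```

The iterative crop-then-refine structure is what the abstract credits for the low localization error: each stage sees a progressively more focused view.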
Affiliation(s)
- Su Kara
- Capistrano Valley High School, Mission Viejo, CA, 92692, USA.
- Jake Y Akers
- University of California, Irvine, CA, 92697, USA
33
Harris RJ, Baginski SG, Bronstein Y, Kim S, Lohr J, Towey S, Velichkovich Z, Kabachenko T, Driscoll I, Baker B. Measurement of Endotracheal Tube Positioning on Chest X-Ray Using Object Detection. J Digit Imaging 2021; 34:846-852. [PMID: 34322753 DOI: 10.1007/s10278-021-00495-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2020] [Revised: 06/24/2021] [Accepted: 07/05/2021] [Indexed: 11/30/2022] Open
Abstract
Patients who are intubated with endotracheal tubes often receive chest x-ray (CXR) imaging to determine whether the tube is correctly positioned. When these CXRs are interpreted by a radiologist, they evaluate whether the tube needs to be repositioned and typically provide a measurement in centimeters between the endotracheal tube tip and carina. In this project, a large dataset of endotracheal tube and carina bounding boxes was annotated on CXRs, and a machine-learning model was trained to generate these boxes on new CXRs and to calculate a distance measurement between the tube and carina. This model was applied to a gold standard annotated dataset, as well as to all prospective data passing through our radiology system for two weeks. Inter-radiologist variability was also measured on a test dataset. The distance measurements for both the gold standard dataset (mean error = 0.70 cm) and prospective dataset (mean error = 0.68 cm) were noninferior to inter-radiologist variability (mean error = 0.70 cm) within an equivalence bound of 0.1 cm. This suggests that this model performs at an accuracy similar to human measurements, and these distance calculations can be used for clinical report auto-population and/or worklist prioritization of severely malpositioned tubes.
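Once the two bounding boxes are predicted, the reported centimeter measurement reduces to a scaled point-to-point distance. A sketch, assuming the tip is taken as the bottom-center of the ETT box and the carina as its box center (the model's actual reference points may differ), with a typical CXR pixel spacing:

```python
import math

def tip_carina_distance_cm(ett_box, carina_box, pixel_spacing_mm):
    """Distance between the ETT tip (assumed bottom-center of its box,
    with the tube entering from the top of the film) and the carina box
    center, scaled by detector pixel spacing. Boxes are (x0, y0, x1, y1)."""
    ex0, ey0, ex1, ey1 = ett_box
    cx0, cy0, cx1, cy1 = carina_box
    tip = ((ex0 + ex1) / 2, ey1)
    carina = ((cx0 + cx1) / 2, (cy0 + cy1) / 2)
    d_px = math.dist(tip, carina)            # Euclidean distance in pixels
    return d_px * pixel_spacing_mm / 10.0    # mm -> cm

# Illustrative boxes and a 0.14 mm/pixel detector spacing
d = tip_carina_distance_cm((240, 100, 260, 300), (230, 330, 290, 370), 0.14)
```

Such a derived distance is what can be auto-populated into the report or used to flag severely malpositioned tubes for worklist prioritization.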
Affiliation(s)
- Shwan Kim
- Virtual Radiologic, Eden Prairie, MN, USA
- Jerry Lohr
- Virtual Radiologic, Eden Prairie, MN, USA
34
Deep Learning and Transfer Learning for Optic Disc Laterality Detection: Implications for Machine Learning in Neuro-Ophthalmology. J Neuroophthalmol 2021; 40:178-184. [PMID: 31453913 DOI: 10.1097/wno.0000000000000827] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
BACKGROUND Deep learning (DL) has demonstrated human expert levels of performance for medical image classification in a wide array of medical fields, including ophthalmology. In this article, we present the results of our DL system designed to determine optic disc laterality, right eye vs left eye, in the presence of both normal and abnormal optic discs. METHODS Using transfer learning, we modified the ResNet-152 deep convolutional neural network (DCNN), pretrained on ImageNet, to determine the optic disc laterality. After a 5-fold cross-validation, we generated receiver operating characteristic curves and corresponding area under the curve (AUC) values to evaluate performance. The data set consisted of 576 color fundus photographs (51% right and 49% left). Both 30° photographs centered on the optic disc (63%) and photographs with varying degree of optic disc centration and/or wider field of view (37%) were included. Both normal (27%) and abnormal (73%) optic discs were included. Various neuro-ophthalmological diseases were represented, such as, but not limited to, atrophy, anterior ischemic optic neuropathy, hypoplasia, and papilledema. RESULTS Using 5-fold cross-validation (70% training; 10% validation; 20% testing), our DCNN for classifying right vs left optic disc achieved an average AUC of 0.999 (±0.002) with optimal threshold values, yielding an average accuracy of 98.78% (±1.52%), sensitivity of 98.60% (±1.72%), and specificity of 98.97% (±1.38%). When tested against a separate data set for external validation, our 5-fold cross-validation model achieved the following average performance: AUC 0.996 (±0.005), accuracy 97.2% (±2.0%), sensitivity 96.4% (±4.3%), and specificity 98.0% (±2.2%). CONCLUSIONS Small data sets can be used to develop high-performing DL systems for semantic labeling of neuro-ophthalmology images, specifically in distinguishing between right and left optic discs, even in the presence of neuro-ophthalmological pathologies. Although this may seem like an elementary task, this study demonstrates the power of transfer learning and provides an example of a DCNN that can help curate large medical image databases for machine-learning purposes and facilitate ophthalmologist workflow by automatically labeling images according to laterality.
35
Henderson RDE, Yi X, Adams SJ, Babyn P. Automatic Detection and Classification of Multiple Catheters in Neonatal Radiographs with Deep Learning. J Digit Imaging 2021; 34:888-897. [PMID: 34173089 DOI: 10.1007/s10278-021-00473-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2020] [Revised: 06/01/2021] [Accepted: 06/09/2021] [Indexed: 12/18/2022] Open
Abstract
We develop and evaluate a deep learning algorithm to classify multiple catheters on neonatal chest and abdominal radiographs. A convolutional neural network (CNN) was trained using a dataset of 777 neonatal chest and abdominal radiographs, with a split of 81%-9%-10% for training-validation-testing, respectively. We employed ResNet-50 (a CNN), pre-trained on ImageNet. Ground truth labelling was limited to tagging each image to indicate the presence or absence of endotracheal tubes (ETTs), nasogastric tubes (NGTs), and umbilical arterial and venous catheters (UACs, UVCs). The dataset included 561 images containing two or more catheters, 167 images with only one, and 49 with none. Performance was measured with average precision (AP), calculated from the area under the precision-recall curve. On our test data, the algorithm achieved an overall AP (95% confidence interval) of 0.977 (0.679-0.999) for NGTs, 0.989 (0.751-1.000) for ETTs, 0.979 (0.873-0.997) for UACs, and 0.937 (0.785-0.984) for UVCs. Performance was similar for the set of 58 test images consisting of two or more catheters, with an AP of 0.975 (0.255-1.000) for NGTs, 0.997 (0.009-1.000) for ETTs, 0.981 (0.797-0.998) for UACs, and 0.937 (0.689-0.990) for UVCs. Our network thus achieves strong performance in the simultaneous detection of these four catheter types. Radiologists may use such an algorithm as a time-saving mechanism to automate reporting of catheters on radiographs.
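Average precision, the metric used above as the area under the precision-recall curve, can be computed from ranked predictions as the mean precision at each true positive. This is the non-interpolated variant (scikit-learn's `average_precision_score` behaves similarly); a self-contained sketch:

```python
def average_precision(scores, labels):
    """AP as the mean of precision evaluated at each true positive when
    predictions are ranked by confidence (non-interpolated area under
    the precision-recall curve)."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    hits, precisions = 0, []
    for i, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            hits += 1
            precisions.append(hits / i)     # precision at this recall point
    return sum(precisions) / hits if hits else 0.0

# Ranked by score: hit, miss, hit, miss -> precisions 1/1 and 2/3
ap = average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 0])  # 5/6
```

In the multi-catheter setting, one AP is computed per catheter type (ETT, NGT, UAC, UVC) over the per-image presence scores, which is how the paper reports its figures.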
Affiliation(s)
- Robert D E Henderson
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Room 1566, Saskatoon, SK, S7N 0W8, Canada.
- Xin Yi
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Room 1566, Saskatoon, SK, S7N 0W8, Canada
- Scott J Adams
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Room 1566, Saskatoon, SK, S7N 0W8, Canada
- Paul Babyn
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Room 1566, Saskatoon, SK, S7N 0W8, Canada
36
Çallı E, Sogancioglu E, van Ginneken B, van Leeuwen KG, Murphy K. Deep learning for chest X-ray analysis: A survey. Med Image Anal 2021; 72:102125. [PMID: 34171622 DOI: 10.1016/j.media.2021.102125] [Citation(s) in RCA: 106] [Impact Index Per Article: 35.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 05/17/2021] [Accepted: 05/27/2021] [Indexed: 12/14/2022]
Abstract
Recent advances in deep learning have led to a promising performance in many medical image analysis tasks. As the most commonly performed radiological exam, chest radiographs are a particularly important modality for which a variety of applications have been researched. The release of multiple, large, publicly available chest X-ray datasets in recent years has encouraged research interest and boosted the number of publications. In this paper, we review all studies using deep learning on chest radiographs published before March 2021, categorizing works by task: image-level prediction (classification and regression), segmentation, localization, image generation and domain adaptation. Detailed descriptions of all publicly available datasets are included and commercial systems in the field are described. A comprehensive discussion of the current state of the art is provided, including caveats on the use of public datasets, the requirements of clinically useful systems and gaps in the current literature.
Affiliation(s)
- Erdi Çallı
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands.
- Ecem Sogancioglu
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Bram van Ginneken
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Kicky G van Leeuwen
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Keelin Murphy
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
|
37
Hayasaka T, Kawano K, Kurihara K, Suzuki H, Nakane M, Kawamae K. Creation of an artificial intelligence model for intubation difficulty classification by deep learning (convolutional neural network) using face images: an observational study. J Intensive Care 2021;9:38. PMID: 33952341. PMCID: PMC8101256. DOI: 10.1186/s40560-021-00551-x.
Abstract
Background Tracheal intubation is the gold standard for securing the airway, and it is not uncommon to encounter intubation difficulties in intensive care units and emergency rooms. Currently, there is a need for an objective measure of intubation difficulty in emergency situations for physicians, residents, and paramedics who are unfamiliar with tracheal intubation. Artificial intelligence (AI) is currently used in medical imaging owing to its advanced performance. We aimed to create an AI model that classifies intubation difficulty from the patient's facial image using a convolutional neural network (CNN), linking the facial image with the actual difficulty of intubation. Methods Patients scheduled for surgery at Yamagata University Hospital between April and August 2020 were enrolled. Patients who underwent surgery with altered facial appearance, surgery with altered range of motion in the neck, or intubation performed by a physician with less than 3 years of anesthesia experience were excluded. Sixteen different facial images were obtained from each patient from the day after surgery. All images were judged as "Easy" or "Difficult" by an anesthesiologist, and an AI classification model was created using deep learning by linking the patient's facial image with the intubation difficulty. Receiver operating characteristic curves of actual intubation difficulty and the AI model were developed, and sensitivity, specificity, and area under the curve (AUC) were calculated; the median AUC was used as the result. Class activation heat maps were used to visualize how the AI model classifies intubation difficulty. Results The best AI model for classifying intubation difficulty from the 16 image types was generated in the supine-side-closed mouth-base position. The accuracy was 80.5%; sensitivity, 81.8%; specificity, 83.3%; and AUC, 0.864 (95% CI 0.731-0.969). The class activation heat map was concentrated around the neck regardless of the background, indicating that the AI model recognized facial contours and identified intubation difficulty. Conclusion This is the first study to apply deep learning (CNN) to classify intubation difficulty using an AI model. We created an AI model with an AUC of 0.864. Our AI model may be useful for tracheal intubation performed by inexperienced medical staff in emergency situations or under general anesthesia.
Affiliation(s)
- Tatsuya Hayasaka
- Department of Anesthesiology, Yamagata University Hospital, Yamagata City, Japan.
- Kazuharu Kawano
- Department of Medicine, Yamagata University School of Medicine, Yamagata City, Japan
- Kazuki Kurihara
- Department of Anesthesiology, Yamagata University Hospital, Yamagata City, Japan
- Hiroto Suzuki
- Critical Care Center, Yamagata University Hospital, Yamagata City, Japan
- Masaki Nakane
- Department of Emergency and Critical Care Medicine, Yamagata University Hospital, Yamagata City, Japan
- Kaneyuki Kawamae
- Department of Anesthesiology, Yamagata University Hospital, Yamagata City, Japan
38
Laroia AT, Donnelly EF, Henry TS, Berry MF, Boiselle PM, Colletti PM, Kuzniewski CT, Maldonado F, Olsen KM, Raptis CA, Shim K, Wu CC, Kanne JP. ACR Appropriateness Criteria® Intensive Care Unit Patients. J Am Coll Radiol 2021;18:S62-S72. PMID: 33958119. DOI: 10.1016/j.jacr.2021.01.017.
Abstract
Chest radiography is the most frequent and primary imaging modality in the intensive care unit (ICU), given its portability, rapid image acquisition, and immediate availability of information at the bedside. Due to the severity of underlying disease and the frequent need for placement of monitoring devices, ICU patients are very likely to develop complications related to the underlying disease process and interventions. Portable chest radiography in the ICU is an essential tool to monitor the disease process and the complications from interventions; however, it is subject to overuse, especially in stable patients. Restricting the use of chest radiographs in the ICU to only when indicated has not been shown to cause harm. The emerging role of bedside point-of-care lung ultrasound performed by clinicians is noted in the recent literature. Bedside lung ultrasound appears promising but needs cautious evaluation to determine its role in ICU patients. The American College of Radiology Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed annually by a multidisciplinary expert panel. The guideline development and revision include an extensive analysis of current medical literature from peer-reviewed journals and the application of well-established methodologies (RAND/UCLA Appropriateness Method and Grading of Recommendations Assessment, Development, and Evaluation, or GRADE) to rate the appropriateness of imaging and treatment procedures for specific clinical scenarios. In those instances where evidence is lacking or equivocal, expert opinion may supplement the available evidence to recommend imaging or treatment.
Affiliation(s)
- Edwin F Donnelly
- Panel Chair, Vanderbilt University Medical Center, Nashville, Tennessee; Chief, Division of Thoracic Radiology, Department of Radiology, Ohio State University Wexner Medical Center
- Travis S Henry
- Panel Vice-Chair, University of California San Francisco, San Francisco, California
- Mark F Berry
- Stanford University Medical Center, Stanford, California, The Society of Thoracic Surgeons
- Phillip M Boiselle
- Schmidt College of Medicine, Florida Atlantic University, Boca Raton, Florida
- Fabien Maldonado
- Vanderbilt University Medical Center, Nashville, Tennessee, American College of Chest Physicians
- Kyungran Shim
- John H. Stroger, Jr. Hospital of Cook County, Chicago, Illinois, American College of Physicians
- Carol C Wu
- University of Texas MD Anderson Cancer Center, Houston, Texas, Chair of Thoracic Use Case Panel of ACR DSI, Deputy Chair ad interim, Department of Thoracic Imaging, University of Texas MD Anderson Cancer Center
- Jeffrey P Kanne
- Specialty Chair, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
39
Kim TK, Yi PH, Wei J, Shin JW, Hager G, Hui FK, Sair HI, Lin CT. Deep Learning Method for Automated Classification of Anteroposterior and Posteroanterior Chest Radiographs. J Digit Imaging 2021;32:925-930. PMID: 30972585. DOI: 10.1007/s10278-019-00208-0.
Abstract
Ensuring correct radiograph view labeling is important for machine learning algorithm development and quality control of studies obtained from multiple facilities. The purpose of this study was to develop and test the performance of a deep convolutional neural network (DCNN) for the automated classification of frontal chest radiographs (CXRs) into anteroposterior (AP) or posteroanterior (PA) views. We obtained 112,120 CXRs from the NIH ChestX-ray14 database, a publicly available CXR database performed in adult (106,179 (95%)) and pediatric (5941 (5%)) patients consisting of 44,810 (40%) AP and 67,310 (60%) PA views. CXRs were used to train, validate, and test the ResNet-18 DCNN for classification of radiographs into anteroposterior and posteroanterior views. A second DCNN was developed in the same manner using only the pediatric CXRs (2885 (49%) AP and 3056 (51%) PA). Receiver operating characteristic (ROC) curves with area under the curve (AUC) and standard diagnostic measures were used to evaluate the DCNN's performance on the test dataset. The DCNNs trained on the entire CXR dataset and pediatric CXR dataset had AUCs of 1.0 and 0.997, respectively, and accuracy of 99.6% and 98%, respectively, for distinguishing between AP and PA CXR. Sensitivity and specificity were 99.6% and 99.5%, respectively, for the DCNN trained on the entire dataset and 98% for both sensitivity and specificity for the DCNN trained on the pediatric dataset. The observed difference in performance between the two algorithms was not statistically significant (p = 0.17). Our DCNNs have high accuracy for classifying AP/PA orientation of frontal CXRs, with only slight reduction in performance when the training dataset was reduced by 95%. Rapid classification of CXRs by the DCNN can facilitate annotation of large image datasets for machine learning and quality assurance purposes.
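The AUC values reported in studies like this one can be reproduced from raw prediction scores via the rank-based (Mann-Whitney) identity, without any ML framework. A minimal sketch; the scores, labels, and function name are illustrative, not taken from the paper:

```python
def auc_score(labels, scores):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative
    case (ties count half)."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up classifier scores for 4 AP (label 1) and 4 PA (label 0) radiographs
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.90, 0.80, 0.70, 0.30, 0.60, 0.20, 0.10, 0.05]
auc = auc_score(labels, scores)  # 0.9375 for these made-up scores
```

A perfectly separating classifier, like the AUC 1.0 reported above, scores every positive above every negative.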
Affiliation(s)
- Tae Kyung Kim
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Paul H Yi
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Jinchi Wei
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Ji Won Shin
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Gregory Hager
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Ferdinand K Hui
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Haris I Sair
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Cheng Ting Lin
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
40
Chudow JJ, Jones D, Weinreich M, Zaremski L, Lee S, Weinreich B, Krumerman A, Fisher JD, Ferrick KJ. A Head-to-Head Comparison of Machine Learning Algorithms for Identification of Implanted Cardiac Devices. Am J Cardiol 2021;144:77-82. PMID: 33383004. DOI: 10.1016/j.amjcard.2020.12.067.
Abstract
Application of artificial intelligence techniques in medicine has rapidly expanded in recent years. Two algorithms for identification of cardiac implantable electronic devices using chest radiography were recently developed: the PacemakerID algorithm, available as a mobile phone application (PIDa) and a web platform (PIDw), and the Pacemaker Identification with Neural Networks (PPMnn), available via a web platform. In this study, we assessed the relative accuracy of these algorithms. The machine learning algorithms (PIDa, PIDw, PPMnn) were used to predict device manufacturer from chest X-rays of patients with implanted devices. Each prediction was considered correct if predicted certainty was >75%. For comparative purposes, the accuracy of each prediction was compared with the result of the CARDIA-X algorithm. 500 X-rays were included from a convenience sample. Raw accuracy was 89% for PIDa, 73% for PIDw, 71% for PPMnn, and 85% for CARDIA-X. In conclusion, machine learning algorithms for identification of cardiac devices are accurate at determining device manufacturer, have capacity for improved accuracy with additional training sets, and can utilize simple user interfaces. These algorithms have clinical utility in limiting potential infectious exposures and facilitating rapid identification of devices as needed for device reprogramming.
Affiliation(s)
- Jay J Chudow
- Division of Cardiology, Department of Medicine, Montefiore Medical Center, Bronx, New York
- Davis Jones
- Division of Cardiology, Department of Medicine, Montefiore Medical Center, Bronx, New York
- Michael Weinreich
- Division of Cardiology, Department of Medicine, Montefiore Medical Center, Bronx, New York
- Lynn Zaremski
- Division of Cardiology, Department of Medicine, Montefiore Medical Center, Bronx, New York
- Suegene Lee
- Division of Cardiology, Department of Medicine, Montefiore Medical Center, Bronx, New York
- Brian Weinreich
- Division of Cardiology, Department of Medicine, Montefiore Medical Center, Bronx, New York
- Andrew Krumerman
- Division of Cardiology, Department of Medicine, Montefiore Medical Center, Bronx, New York
- John D Fisher
- Division of Cardiology, Department of Medicine, Montefiore Medical Center, Bronx, New York
- Kevin J Ferrick
- Division of Cardiology, Department of Medicine, Montefiore Medical Center, Bronx, New York
41
Toba S, Mitani Y, Yodoya N, Ohashi H, Sawada H, Hayakawa H, Hirayama M, Futsuki A, Yamamoto N, Ito H, Konuma T, Shimpo H, Takao M. Prediction of Pulmonary to Systemic Flow Ratio in Patients With Congenital Heart Disease Using Deep Learning-Based Analysis of Chest Radiographs. JAMA Cardiol 2021;5:449-457. PMID: 31968049. DOI: 10.1001/jamacardio.2019.5620.
Abstract
Importance Chest radiography is a useful noninvasive modality to evaluate pulmonary blood flow status in patients with congenital heart disease. However, the predictive value of chest radiography is limited by the subjective and qualitative nature of the interpretation. Recently, deep learning has been used to analyze various images, but it has not been applied to analyzing chest radiographs in such patients. Objective To develop and validate a quantitative method to predict the pulmonary to systemic flow ratio from chest radiographs using deep learning. Design, Setting, and Participants This retrospective observational study included 1031 cardiac catheterizations performed for 657 patients from January 1, 2005, to April 30, 2019, at a tertiary center. Catheterizations without the Fick-derived pulmonary to systemic flow ratio or chest radiography performed within 1 month before catheterization were excluded. Seventy-eight patients (100 catheterizations) were randomly assigned for evaluation. A deep learning model that predicts the pulmonary to systemic flow ratio from chest radiographs was developed using the method of transfer learning. Main Outcomes and Measures Whether the model can predict the pulmonary to systemic flow ratio from chest radiographs was evaluated using the intraclass correlation coefficient and Bland-Altman analysis. The diagnostic concordance rate was compared with 3 certified pediatric cardiologists. The diagnostic performance for a high pulmonary to systemic flow ratio of 2.0 or more was evaluated using cross tabulation and a receiver operating characteristic curve. Results The study included 1031 catheterizations in 657 patients (522 males [51%]; median age, 3.4 years [interquartile range, 1.2-8.6 years]), in whom the mean (SD) Fick-derived pulmonary to systemic flow ratio was 1.43 (0.95). Diagnosis included congenital heart disease in 1008 catheterizations (98%).
The intraclass correlation coefficient for the Fick-derived and deep learning-derived pulmonary to systemic flow ratio was 0.68, the log-transformed bias was 0.02, and the log-transformed precision was 0.12. The diagnostic concordance rate of the deep learning model was significantly higher than that of the experts (correctly classified 64 of 100 vs 49 of 100 chest radiographs; P = .02 [McNemar test]). For detecting a high pulmonary to systemic flow ratio, the sensitivity of the deep learning model was 0.47, the specificity was 0.95, and the area under the receiver operating curve was 0.88. Conclusions and Relevance The present investigation demonstrated that deep learning-based analysis of chest radiographs predicted the pulmonary to systemic flow ratio in patients with congenital heart disease. These findings suggest that the deep learning-based approach may confer an objective and quantitative evaluation of chest radiographs in the congenital heart disease clinic.
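The log-transformed bias and precision quoted above come from Bland-Altman analysis of paired Fick-derived and model-derived ratios. One common convention takes bias as the mean and precision as the standard deviation of the paired log differences; the sketch below assumes that convention, and the ratios are made up for illustration, not the study's data:

```python
import math

def bland_altman_log(reference, predicted):
    """Bland-Altman agreement on log10-transformed ratios: bias is the
    mean of the paired log differences, precision their sample standard
    deviation (one common convention; not necessarily the paper's exact
    definition)."""
    diffs = [math.log10(p) - math.log10(r) for r, p in zip(reference, predicted)]
    n = len(diffs)
    bias = sum(diffs) / n
    precision = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, precision

# Made-up Fick-derived vs. model-predicted pulmonary-to-systemic flow ratios
fick = [1.0, 1.5, 2.2, 0.8, 3.0]
model = [1.1, 1.4, 2.5, 0.9, 2.6]
bias, precision = bland_altman_log(fick, model)
```

A bias near zero with small precision, as reported (0.02 and 0.12), indicates the model neither systematically over- nor under-predicts the ratio.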
Affiliation(s)
- Shuhei Toba
- Department of Thoracic and Cardiovascular Surgery, Mie University Graduate School of Medicine, Tsu, Mie, Japan
- Yoshihide Mitani
- Department of Pediatrics, Mie University Graduate School of Medicine, Tsu, Mie, Japan
- Noriko Yodoya
- Department of Pediatrics, Mie University Graduate School of Medicine, Tsu, Mie, Japan
- Hiroyuki Ohashi
- Department of Pediatrics, Mie University Graduate School of Medicine, Tsu, Mie, Japan
- Hirofumi Sawada
- Department of Pediatrics, Mie University Graduate School of Medicine, Tsu, Mie, Japan
- Hidetoshi Hayakawa
- Department of Pediatrics, Mie University Graduate School of Medicine, Tsu, Mie, Japan
- Masahiro Hirayama
- Department of Pediatrics, Mie University Graduate School of Medicine, Tsu, Mie, Japan
- Ayano Futsuki
- Department of Thoracic and Cardiovascular Surgery, Mie University Graduate School of Medicine, Tsu, Mie, Japan
- Naoki Yamamoto
- Department of Thoracic and Cardiovascular Surgery, Mie University Graduate School of Medicine, Tsu, Mie, Japan
- Hisato Ito
- Department of Thoracic and Cardiovascular Surgery, Mie University Graduate School of Medicine, Tsu, Mie, Japan
- Takeshi Konuma
- Department of Thoracic and Cardiovascular Surgery, Mie University Graduate School of Medicine, Tsu, Mie, Japan
- Hideto Shimpo
- Department of Thoracic and Cardiovascular Surgery, Mie University Graduate School of Medicine, Tsu, Mie, Japan; Mie Prefectural General Medical Center, Yokkaichi, Mie, Japan
- Motoshi Takao
- Department of Thoracic and Cardiovascular Surgery, Mie University Graduate School of Medicine, Tsu, Mie, Japan
42
Lakhani P, Flanders A, Gorniak R. Endotracheal Tube Position Assessment on Chest Radiographs Using Deep Learning. Radiol Artif Intell 2021;3:e200026. PMID: 33937852. PMCID: PMC8082365. DOI: 10.1148/ryai.2020200026.
Abstract
PURPOSE To determine the efficacy of deep learning in assessing endotracheal tube (ETT) position on radiographs. MATERIALS AND METHODS In this retrospective study, 22 960 de-identified frontal chest radiographs from 11 153 patients (average age, 60.2 years ± 19.9 [standard deviation], 55.6% men) between 2010 and 2018 containing an ETT were placed into 12 categories, including bronchial insertion and distance from the carina at 1.0-cm intervals (0.0-0.9 cm, 1.0-1.9 cm, etc), and greater than 10 cm. Images were split into training (80%, 18 368 images), validation (10%, 2296 images), and internal test (10%, 2296 images), derived from the same institution as the training data. One hundred external test radiographs were also obtained from a different hospital. The Inception V3 deep neural network was used to predict ETT-carina distance. ETT-carina distances and intraclass correlation coefficients (ICCs) for the radiologists and artificial intelligence (AI) system were calculated on a subset of 100 random internal and 100 external test images. Sensitivity and specificity were calculated for low and high ETT position thresholds. RESULTS On the internal and external test images, respectively, the ICCs of the AI system were 0.84 (95% CI: 0.78, 0.92) and 0.89 (95% CI: 0.77, 0.94); the ICCs of the radiologists were 0.93 (95% CI: 0.90, 0.95) and 0.84 (95% CI: 0.71, 0.90). The AI model was 93.9% sensitive (95% CI: 90.0, 96.7) and 97.7% specific (95% CI: 96.9, 98.3) for detecting ETT-carina distance less than 1 cm. CONCLUSION Deep learning predicted ETT-carina distance within 1 cm in most cases and showed excellent interrater agreement compared with radiologists. The model was sensitive and specific in detecting low ETT positions. © RSNA, 2020.
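The sensitivity and specificity reported for the low-position threshold are plain confusion-matrix ratios. A generic sketch; the counts below are invented to land near the reported operating point and are not the study's actual 2×2 table:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for flagging an ETT tip < 1 cm above the carina
sens, spec = sens_spec(tp=46, fn=3, tn=210, fp=5)
```

For a safety-critical finding such as a low ETT, a high sensitivity is usually prioritized so that few dangerous positions are missed.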
43
Yi PH, Lin A, Wei J, Yu AC, Sair HI, Hui FK, Hager GD, Harvey SC. Deep-Learning-Based Semantic Labeling for 2D Mammography and Comparison of Complexity for Machine Learning Tasks. J Digit Imaging 2020;32:565-570. PMID: 31197559. PMCID: PMC6646449. DOI: 10.1007/s10278-019-00244-w.
Abstract
Machine learning has several potential uses in medical imaging for semantic labeling of images to improve radiologist workflow and to triage studies for review. The purpose of this study was to (1) develop deep convolutional neural networks (DCNNs) for automated classification of 2D mammography views, determination of breast laterality, and assessment of breast tissue density; and (2) compare the performance of DCNNs on these tasks of varying complexity with each other. We obtained 3034 2D-mammographic images from the Digital Database for Screening Mammography, annotated with mammographic view, image laterality, and breast tissue density. These images were used to train a DCNN to classify images for these three tasks. The DCNN trained to classify mammographic view achieved receiver-operating-characteristic (ROC) area under the curve (AUC) of 1.0. The DCNN trained to classify breast image laterality initially misclassified right and left breasts (AUC 0.75); however, after discontinuing horizontal flips during data augmentation, AUC improved to 0.93 (p < 0.0001). Breast density classification proved more difficult, with the DCNN achieving 68% accuracy. Automated semantic labeling of 2D mammography is feasible using DCNNs and can be performed with small datasets. However, automated classification of differences in breast density is more difficult, likely requiring larger datasets.
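The laterality result above illustrates a general augmentation pitfall: a horizontal flip mirrors left and right, so any label that encodes sidedness must be flipped with the image (or the flip dropped, as the authors did). A toy sketch, with the "image" and laterality rule invented purely for illustration:

```python
def hflip(img):
    """Horizontal flip: reverse each row, mirroring left and right."""
    return [row[::-1] for row in img]

def laterality(img):
    """'L' if the marker pixel (1) sits in the left half of the image,
    else 'R'. A toy stand-in for a breast-laterality label."""
    w = len(img[0])
    for row in img:
        for x, v in enumerate(row):
            if v:
                return "L" if x < w // 2 else "R"

img = [[0, 0, 0, 0],
       [1, 0, 0, 0]]  # marker on the left -> label "L"
assert laterality(img) == "L"
# A horizontal flip silently turns the image into an "R" example:
assert laterality(hflip(img)) == "R"
# Augmenting with h-flips while keeping the original label therefore
# injects label noise into a laterality classifier -- the failure mode
# the abstract describes (AUC 0.75 -> 0.93 once flips were removed).
```

The same caution applies to any label tied to orientation, such as AP/PA view or left/right limb markers.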
Affiliation(s)
- Paul H Yi
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA; Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Abigail Lin
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
- Jinchi Wei
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Alice C Yu
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
- Haris I Sair
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA; Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Ferdinand K Hui
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA; Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Gregory D Hager
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Susan C Harvey
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
44
Singh V, Danda V, Gorniak R, Flanders A, Lakhani P. Assessment of Critical Feeding Tube Malpositions on Radiographs Using Deep Learning. J Digit Imaging 2020;32:651-655. PMID: 31073816. PMCID: PMC6646608. DOI: 10.1007/s10278-019-00229-9.
Abstract
We assessed the efficacy of deep convolutional neural networks (DCNNs) in detecting critical enteric feeding tube malpositions on radiographs. 5475 de-identified, HIPAA-compliant frontal-view chest and abdominal radiographs were obtained, consisting of 174 x-rays of bronchial insertions and 5301 non-critical radiographs, including normal course, normal chest, and normal abdominal x-rays. The ground-truth classification for enteric feeding tube placement was performed by two board-certified radiologists. Untrained and pretrained models of Inception V3, ResNet50, and DenseNet121 were each employed, implemented in the TensorFlow framework. Images were split into training (4745), validation (630), and test (100) sets. Both real-time and preprocessing image augmentation strategies were performed. Receiver operating characteristic (ROC) curves and area under the curve (AUC) on the test data were used to assess the models, and statistical differences among the AUCs were computed; p < 0.05 was considered statistically significant. The pretrained Inception V3, with an AUC of 0.87 (95% CI 0.80-0.94), performed significantly better (p < .001) than the untrained Inception V3, with an AUC of 0.60 (95% CI 0.52-0.68). The pretrained Inception V3 also had the highest AUC overall, compared with ResNet50 and DenseNet121, whose AUC values ranged from 0.82 to 0.85. Each pretrained network outperformed its untrained counterpart (p < 0.05). Deep learning demonstrates promise in differentiating critical vs. non-critical placement with an AUC of 0.87. Pretrained networks outperformed untrained ones in all cases. DCNNs may allow for more rapid identification and communication of critical feeding tube malpositions.
Affiliation(s)
- Varun Singh
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA, 19107, USA
- Varun Danda
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA, 19107, USA
- Richard Gorniak
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA, 19107, USA
- Adam Flanders
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA, 19107, USA
- Paras Lakhani
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA, 19107, USA
45
Yi PH, Kim TK, Wei J, Li X, Hager GD, Sair HI, Fritz J. Automated detection and classification of shoulder arthroplasty models using deep learning. Skeletal Radiol 2020;49:1623-1632. PMID: 32415371. DOI: 10.1007/s00256-020-03463-3.
Abstract
OBJECTIVE To develop and evaluate the performance of deep convolutional neural networks (DCNN) to detect and identify specific total shoulder arthroplasty (TSA) models. MATERIALS AND METHODS We included 482 radiography studies obtained from publicly available image repositories with native shoulders, reverse TSA (RTSA) implants, and five different TSA models. We trained separate ResNet DCNN-based binary classifiers to (1) detect the presence of shoulder arthroplasty implants, (2) differentiate between TSA and RTSA, and (3) differentiate between the five TSA models, using an individual binary classifier for each model. Datasets were divided into training, validation, and test datasets. Training and validation datasets were 20-fold augmented. Test performances were assessed with area under the receiver-operating characteristic curve (AUC-ROC) analyses. Class activation mapping was used to identify distinguishing imaging features used for DCNN classification decisions. RESULTS The DCNN for the detection of the presence of shoulder arthroplasty implants achieved an AUC-ROC of 1.0, whereas the AUC-ROC for differentiation between TSA and RTSA was 0.97. Class activation map analysis demonstrated the emphasis on the characteristic arthroplasty components in decision-making. DCNNs trained to distinguish between the five TSA models achieved AUC-ROCs ranging from 0.86 for Stryker Solar to 1.0 for Zimmer Bigliani-Flatow, with class activation map analysis demonstrating an emphasis on unique implant design features. CONCLUSION DCNNs can accurately detect the presence of shoulder arthroplasty implants, distinguish between TSA and RTSA, and classify five specific TSA models with high accuracy. The proof of concept of these DCNNs may set the foundation for an automated arthroplasty atlas for rapid and comprehensive model identification.
Affiliation(s)
- Paul H Yi
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA.,Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Tae Kyung Kim
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA.,Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Jinchi Wei
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Xinning Li
- Department of Orthopaedic Surgery, Boston University School of Medicine, Boston, MA, USA
| | - Gregory D Hager
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Haris I Sair
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA.,Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Jan Fritz
- Department of Radiology, Division of Musculoskeletal Radiology, New York University Grossman School of Medicine, 660 1st Ave, 3rd Floor, Rm #313, New York, NY, 10016, USA.
| |
Collapse
46
Kim TK, Yi PH, Hager GD, Lin CT. Refining dataset curation methods for deep learning-based automated tuberculosis screening. J Thorac Dis 2020; 12:5078-5085. [PMID: 33145084 PMCID: PMC7578485 DOI: 10.21037/jtd.2019.08.34]
Abstract
Background The study objective was to determine whether unlabeled datasets can be used to further train and improve the accuracy of a deep learning system (DLS) for the detection of tuberculosis (TB) on chest radiographs (CXRs) using a two-stage semi-supervised approach. Methods A total of 111,622 CXRs from the National Institutes of Health ChestX-ray14 database were collected. A cardiothoracic radiologist reviewed a subset of 11,000 CXRs and dichotomously labeled each for the presence or absence of potential TB findings; these interpretations were used to train a deep convolutional neural network (DCNN) to identify CXRs with possible TB (Phase I). The best-performing algorithm was then used to label the remaining database of 100,622 radiographs, and these newly labeled images were used to train a second DCNN (Phase II). The best-performing algorithm from Phase II (TBNet) was then tested against CXRs obtained from three separate sites (two from the USA, one from China) with clinically confirmed cases of TB. Receiver operating characteristic (ROC) curves were generated and the area under the curve (AUC) calculated. Results The Phase I algorithm, trained using 11,000 expert-labeled radiographs, achieved an AUC of 0.88. The Phase II algorithm, trained on images labeled by the Phase I algorithm, achieved an AUC of 0.91 when tested against TB datasets obtained from Shenzhen, China, and Montgomery County, USA. The algorithm generalized well to radiographs obtained from a tertiary care hospital, achieving an AUC of 0.87; TBNet's sensitivity, specificity, positive predictive value, and negative predictive value were 85%, 76%, 0.64, and 0.90, respectively. When TBNet was used to arbitrate discrepancies between two radiologists, the overall sensitivity reached 94% and the negative predictive value reached 0.96, demonstrating a synergistic effect between the algorithm's output and the radiologists' interpretations.
Conclusions Using semi-supervised learning, we trained a deep learning algorithm that detected TB with high accuracy and demonstrated value as a CAD tool by identifying relevant CXR findings, especially in cases misinterpreted by radiologists. When dataset labels are noisy or absent, the described methods can significantly reduce the amount of curated data required to build clinically relevant deep learning models, which will play an important role in the era of precision medicine.
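The two-stage semi-supervised procedure above (train on the small expert-labeled subset, pseudo-label the large unlabeled pool, then retrain on the pseudo-labels) can be sketched with a toy stand-in model. The one-dimensional threshold classifier below is hypothetical and only illustrates the data flow, not the paper's DCNN:

```python
import numpy as np

def train(X, y):
    """Stand-in for a training phase: fits a 1-D threshold classifier
    (a hypothetical toy model standing in for the paper's DCNN)."""
    thr = (X[y == 0].mean() + X[y == 1].mean()) / 2
    return lambda X: (X > thr).astype(int)

rng = np.random.default_rng(0)

# Phase I: fit on the small expert-labeled subset.
X_lab = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
y_lab = np.array([0] * 100 + [1] * 100)
phase1 = train(X_lab, y_lab)

# The Phase I model pseudo-labels the large unlabeled pool.
X_unlab = np.concatenate([rng.normal(0, 1, 1000), rng.normal(3, 1, 1000)])
pseudo = phase1(X_unlab)

# Phase II: retrain on the pseudo-labeled pool.
phase2 = train(X_unlab, pseudo)
```

The pattern pays off when, as in the study, the unlabeled pool is an order of magnitude larger than what an expert can feasibly label by hand.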
Affiliation(s)
- Tae Kyung Kim: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Hospital, Baltimore, MD, USA; Radiology Artificial Intelligence Lab (RAIL), Johns Hopkins Malone Center for Engineering in Healthcare, Baltimore, MD, USA
- Paul H Yi: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Hospital, Baltimore, MD, USA; Radiology Artificial Intelligence Lab (RAIL), Johns Hopkins Malone Center for Engineering in Healthcare, Baltimore, MD, USA
- Gregory D Hager: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Hospital, Baltimore, MD, USA; Radiology Artificial Intelligence Lab (RAIL), Johns Hopkins Malone Center for Engineering in Healthcare, Baltimore, MD, USA
- Cheng Ting Lin: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Hospital, Baltimore, MD, USA; Radiology Artificial Intelligence Lab (RAIL), Johns Hopkins Malone Center for Engineering in Healthcare, Baltimore, MD, USA
47
Hwang EJ, Park CM. Clinical Implementation of Deep Learning in Thoracic Radiology: Potential Applications and Challenges. Korean J Radiol 2020; 21:511-525. [PMID: 32323497 PMCID: PMC7183830 DOI: 10.3348/kjr.2019.0821]
Abstract
Chest X-ray radiography and computed tomography, the two mainstay modalities in thoracic radiology, are under active investigation with deep learning technology, which has shown promising performance in various tasks, including detection, classification, segmentation, and image synthesis, often outperforming conventional methods. However, the implementation of deep learning in daily clinical practice is in its infancy and faces several challenges, such as the limited ability to explain output results, uncertain benefits regarding patient outcomes, and incomplete integration into daily workflow. In this review article, we introduce the potential clinical applications of deep learning technology in thoracic radiology and discuss several challenges for its implementation in daily clinical practice.
Affiliation(s)
- Eui Jin Hwang: Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Chang Min Park: Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
48
Li D, Deng L, Cai Z. Research on image classification method based on convolutional neural network. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-04930-7]
49
Yi PH, Wei J, Kim TK, Sair HI, Hui FK, Hager GD, Fritz J, Oni JK. Automated detection and classification of knee arthroplasty using deep learning. Knee 2020; 27:535-542. [PMID: 31883760 DOI: 10.1016/j.knee.2019.11.020]
Abstract
BACKGROUND Preoperative identification of knee arthroplasty is important for planning revision surgery; however, up to 10% of implants are not identified prior to surgery. The purposes of this study were to develop and test the performance of a deep learning system (DLS) for the automated radiographic (1) identification of the presence or absence of a total knee arthroplasty (TKA); (2) classification of TKA vs. unicompartmental knee arthroplasty (UKA); and (3) differentiation between two different primary TKA models. METHODS We collected 237 anteroposterior (AP) knee radiographs with equal proportions of native knees, TKA, and UKA, and 274 AP knee radiographs with equal proportions of two TKA models. Data augmentation was used to increase the number of images for deep convolutional neural network (DCNN) training, and a DLS based on DCNNs was trained on these images. Receiver operating characteristic (ROC) curves with area under the curve (AUC) were generated. Heatmaps were created using class activation mapping (CAM) to identify the image features most important for DCNN decision-making. RESULTS DCNNs trained to detect TKA and to distinguish between TKA and UKA both achieved an AUC of 1.0, and heatmaps demonstrated appropriate emphasis on arthroplasty components in decision-making. The DCNN trained to distinguish between the two TKA models also achieved an AUC of 1.0, with heatmaps showing emphasis on specific unique features of the TKA model designs, such as the shape of the femoral component's anterior flange. CONCLUSIONS DCNNs can accurately identify the presence of TKA and distinguish between specific arthroplasty designs. This proof of concept could be applied towards identifying other prosthesis models and prosthesis-related complications.
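Class activation mapping, used in several of these studies to visualize which implant features drove a classification, weights each final-convolution feature map by the fully-connected weight linking its global-average-pooled value to the target class, then sums. A minimal sketch with synthetic activations (the array shapes are illustrative assumptions, not the study's architecture):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM: weighted sum of final-conv feature maps, using the
    classifier weights of the target class.
    feature_maps: (C, H, W); fc_weights: (num_classes, C)."""
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()  # normalize to [0, 1] for overlay as a heatmap
    return cam

# Toy example: 3 channels of 4x4 activations, 2 output classes.
rng = np.random.default_rng(1)
fmaps = rng.random((3, 4, 4))
w = rng.random((2, 3))
heat = class_activation_map(fmaps, w, class_idx=0)
print(heat.shape)  # → (4, 4)
```

In practice the resulting map is upsampled to the input-image resolution and overlaid on the radiograph, which is how the heatmaps highlighting the femoral component features were produced.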
Affiliation(s)
- Paul H Yi: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N Caroline St, Room 4223, Baltimore, MD 21287, United States of America; Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, 3400 N Charles St, Baltimore, MD 21218, United States of America
- Jinchi Wei: Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, 3400 N Charles St, Baltimore, MD 21218, United States of America
- Tae Kyung Kim: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N Caroline St, Room 4223, Baltimore, MD 21287, United States of America; Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, 3400 N Charles St, Baltimore, MD 21218, United States of America
- Haris I Sair: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N Caroline St, Room 4223, Baltimore, MD 21287, United States of America; Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, 3400 N Charles St, Baltimore, MD 21218, United States of America
- Ferdinand K Hui: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N Caroline St, Room 4223, Baltimore, MD 21287, United States of America; Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, 3400 N Charles St, Baltimore, MD 21218, United States of America
- Gregory D Hager: Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, 3400 N Charles St, Baltimore, MD 21218, United States of America
- Jan Fritz: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N Caroline St, Room 4223, Baltimore, MD 21287, United States of America; Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, 3400 N Charles St, Baltimore, MD 21218, United States of America
- Julius K Oni: Department of Orthopaedic Surgery, Johns Hopkins University School of Medicine, 4940 Eastern Avenue, Building A, 6th Floor, Baltimore, MD 21224, United States of America
50
Yi X, Adams SJ, Henderson RDE, Babyn P. Computer-aided Assessment of Catheters and Tubes on Radiographs: How Good Is Artificial Intelligence for Assessment? Radiol Artif Intell 2020; 2:e190082. [PMID: 33937813 DOI: 10.1148/ryai.2020190082]
Abstract
Catheters are the second most common abnormal finding on radiographs. The position of catheters must be assessed on all radiographs because serious complications can arise if they are malpositioned. However, owing to the large number of radiographs obtained each day, there can be substantial delays between the time a radiograph is obtained and when it is interpreted by a radiologist. Computer-aided approaches hold the potential to assist in prioritizing radiographs with potentially malpositioned catheters for interpretation and to automatically insert text describing catheter placement into radiology reports, thereby improving radiologists' efficiency. Yet, after 50 years of research in computer-aided diagnosis, there remains a paucity of studies in this area. With the development of deep learning approaches, the problem of catheter assessment has become far more tractable. This review provides an overview of current algorithms and identifies key challenges in building a reliable computer-aided diagnosis system for the assessment of catheters on radiographs, and may serve to further the development of machine learning approaches for this important use case. Supplemental material is available for this article. © RSNA, 2020.
Affiliation(s)
- Xin Yi: Department of Medical Imaging (X.Y., S.J.A., P.B.) and College of Medicine (R.D.E.H.), University of Saskatchewan, 103 Hospital Drive, Saskatoon, SK, Canada S7N 0W8
- Scott J Adams: Department of Medical Imaging (X.Y., S.J.A., P.B.) and College of Medicine (R.D.E.H.), University of Saskatchewan, 103 Hospital Drive, Saskatoon, SK, Canada S7N 0W8
- Robert D E Henderson: Department of Medical Imaging (X.Y., S.J.A., P.B.) and College of Medicine (R.D.E.H.), University of Saskatchewan, 103 Hospital Drive, Saskatoon, SK, Canada S7N 0W8
- Paul Babyn: Department of Medical Imaging (X.Y., S.J.A., P.B.) and College of Medicine (R.D.E.H.), University of Saskatchewan, 103 Hospital Drive, Saskatoon, SK, Canada S7N 0W8