1. Zhao T, Meng X, Wang Z, Hu Y, Fan H, Han J, Zhu N, Niu F. Diagnostic evaluation of blunt chest trauma by imaging-based application of artificial intelligence: A review. Am J Emerg Med 2024; 85:35-43. [PMID: 39213808] [DOI: 10.1016/j.ajem.2024.08.019]
Abstract
Artificial intelligence (AI) is becoming increasingly integral to clinical practice, including imaging tasks associated with the diagnosis and evaluation of blunt chest trauma (BCT). Owing to significant advances in imaging-based deep learning, recent studies have demonstrated the efficacy of AI in the diagnosis of BCT, with a focus on rib fractures, pulmonary contusion, hemopneumothorax, and other injuries. However, the complicated nature of BCT presents challenges for comprehensive diagnosis and prognostic evaluation, and current deep learning research concentrates on specific clinical contexts, limiting its utility in addressing the intricacies of BCT. Here, we review the available evidence on the potential utility of AI in BCT and identify the challenges impeding its development. This review offers insights on how to optimize the role of AI in the diagnostic evaluation of BCT, which can ultimately enhance patient care and outcomes in this critical clinical domain.
Affiliation(s)
- Tingting Zhao
- The Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China; Graduate School, Tianjin University, Tianjin, China
- Xianghong Meng
- The Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China; Graduate School, Tianjin University, Tianjin, China
- Zhi Wang
- The Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China; Graduate School, Tianjin University, Tianjin, China
- Yongcheng Hu
- The Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China
- Hongxing Fan
- The Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China; Graduate School, Tianjin Medical University, Tianjin, China
- Jun Han
- The Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China; Graduate School, Tianjin University, Tianjin, China
- Nana Zhu
- The Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China; Graduate School, Tianjin Medical University, Tianjin, China
- Feige Niu
- The Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China; Graduate School, Tianjin Medical University, Tianjin, China
2. Chutia U, Tewari AS, Singh JP, Raj VK. Classification of Lung Diseases Using an Attention-Based Modified DenseNet Model. J Imaging Inform Med 2024; 37:1625-1641. [PMID: 38467955] [DOI: 10.1007/s10278-024-01005-0]
Abstract
Lung diseases represent a significant global health threat, impacting both well-being and mortality rates. Diagnostic procedures such as computed tomography (CT) scans and X-ray imaging play a pivotal role in identifying these conditions. X-rays, being easily accessible and affordable, serve as a convenient and cost-effective option for diagnosing lung diseases. Our proposed method applies the Contrast-Limited Adaptive Histogram Equalization (CLAHE) enhancement technique to X-ray images to highlight the key feature maps related to lung diseases using DenseNet201. We augment the existing DenseNet201 model with a hybrid pooling and channel attention mechanism. The experimental results demonstrate the superiority of our model over well-known pre-trained models, such as VGG16, VGG19, InceptionV3, Xception, ResNet50, ResNet152, ResNet50V2, ResNet152V2, MobileNetV2, DenseNet121, DenseNet169, and DenseNet201. Our model achieves impressive accuracy, precision, recall, and F1-scores of 95.34%, 97%, 96%, and 96%, respectively. We also provide visual insights into our model's decision-making process using Gradient-weighted Class Activation Mapping (Grad-CAM) to identify normal, pneumothorax, and atelectasis cases. The resulting heatmaps may help radiologists improve their diagnostic abilities and labelling processes.
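Histogram equalization, which CLAHE extends with per-tile equalization and a clip limit, can be sketched in a few lines of NumPy; the synthetic low-contrast image below is purely illustrative, not the paper's data or settings.

```python
import numpy as np

def hist_equalize(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization of an 8-bit image. CLAHE refines
    this idea by equalizing per tile and clipping the histogram to
    limit noise amplification."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                              # first occupied bin
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    lut = lut.clip(0, 255).astype(np.uint8)                # lookup table
    return lut[img]

rng = np.random.default_rng(0)
xray = (rng.random((64, 64)) * 60 + 90).astype(np.uint8)   # low-contrast input
out = hist_equalize(xray)                                  # stretched to 0-255
```

In practice, CLAHE itself is readily available, e.g. as `cv2.createCLAHE` in OpenCV.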
Affiliation(s)
- Upasana Chutia
- Department of Computer Science and Engineering, National Institute of Technology Patna, Patna, 800005, Bihar, India
- Anand Shanker Tewari
- Department of Computer Science and Engineering, National Institute of Technology Patna, Patna, 800005, Bihar, India
- Jyoti Prakash Singh
- Department of Computer Science and Engineering, National Institute of Technology Patna, Patna, 800005, Bihar, India
- Vikash Kumar Raj
- National Institute of Technology Patna, Patna, 800005, Bihar, India
3. Yi PH, Garner HW, Hirschmann A, Jacobson JA, Omoumi P, Oh K, Zech JR, Lee YH. Clinical Applications, Challenges, and Recommendations for Artificial Intelligence in Musculoskeletal and Soft-Tissue Ultrasound: AJR Expert Panel Narrative Review. AJR Am J Roentgenol 2024; 222:e2329530. [PMID: 37436032] [DOI: 10.2214/ajr.23.29530]
Abstract
Artificial intelligence (AI) is increasingly used in clinical practice for musculoskeletal imaging tasks, such as disease diagnosis and image reconstruction. AI applications in musculoskeletal imaging have focused primarily on radiography, CT, and MRI. Although musculoskeletal ultrasound stands to benefit from AI in similar ways, such applications have been relatively underdeveloped. In comparison with other modalities, ultrasound has unique advantages and disadvantages that must be considered in AI algorithm development and clinical translation. Challenges in developing AI for musculoskeletal ultrasound involve both clinical aspects of image acquisition and practical limitations in image processing and annotation. Solutions from other radiology subspecialties (e.g., crowdsourced annotations coordinated by professional societies), along with use cases (most commonly rotator cuff tendon tears and palpable soft-tissue masses), can be applied to musculoskeletal ultrasound to help develop AI. To facilitate creation of high-quality imaging datasets for AI model development, technologists and radiologists should focus on increasing uniformity in musculoskeletal ultrasound performance and increasing annotations of images for specific anatomic regions. This Expert Panel Narrative Review summarizes available evidence regarding AI's potential utility in musculoskeletal ultrasound and challenges facing its development. Recommendations for future AI advancement and clinical translation in musculoskeletal ultrasound are discussed.
Affiliation(s)
- Paul H Yi
- University of Maryland Medical Intelligent Imaging Center, University of Maryland School of Medicine, Baltimore, MD
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD
- Anna Hirschmann
- Imamed Radiology Nordwest, Basel, Switzerland
- Department of Radiology, University of Basel, Basel, Switzerland
- Jon A Jacobson
- Lenox Hill Radiology, New York, NY
- Department of Radiology, University of California, San Diego Medical Center, San Diego, CA
- Patrick Omoumi
- Department of Radiology, Lausanne University Hospital, Lausanne, Switzerland
- Department of Radiology, University of Lausanne, Lausanne, Switzerland
- Kangrok Oh
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul 03722, South Korea
- John R Zech
- Department of Radiology, Columbia University Irving Medical Center, New York-Presbyterian Hospital, New York, NY
- Young Han Lee
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul 03722, South Korea
4. Wang CH, Lin T, Chen G, Lee MR, Tay J, Wu CY, Wu MC, Roth HR, Yang D, Zhao C, Wang W, Huang CH. Deep Learning-based Diagnosis and Localization of Pneumothorax on Portable Supine Chest X-ray in Intensive and Emergency Medicine: A Retrospective Study. J Med Syst 2023; 48:1. [PMID: 38048012] [PMCID: PMC10695857] [DOI: 10.1007/s10916-023-02023-1]
Abstract
PURPOSE To develop two deep learning-based systems for diagnosing and localizing pneumothorax on portable supine chest X-rays (SCXRs). METHODS For this retrospective study, images meeting the following inclusion criteria were included: (1) patient age ≥ 20 years; (2) portable SCXR; (3) imaging obtained in the emergency department or intensive care unit. Included images were temporally split into training (1571 images, between January 2015 and December 2019) and testing (1071 images, between January 2020 and December 2020) datasets. All images were annotated using pixel-level labels. Object detection and image segmentation were adopted to develop separate systems. For the detection-based system, EfficientNet-B2, DenseNet-121, and Inception-v3 were the architectures for the classification model; Deformable DETR, TOOD, and VFNet were the architectures for the localization model. Both the classification and localization models of the segmentation-based system shared the U-Net architecture. RESULTS In diagnosing pneumothorax, performance was excellent for both the detection-based (area under the receiver operating characteristic curve [AUC]: 0.940, 95% confidence interval [CI]: 0.907-0.967) and segmentation-based (AUC: 0.979, 95% CI: 0.963-0.991) systems. For images with both predicted and ground-truth pneumothorax, lesion localization was highly accurate (detection-based Dice coefficient: 0.758, 95% CI: 0.707-0.806; segmentation-based Dice coefficient: 0.681, 95% CI: 0.642-0.721). The performance of the two deep learning-based systems declined as pneumothorax size diminished. Nonetheless, both systems were similar or superior to human readers in diagnosis and localization performance across all sizes of pneumothorax. CONCLUSIONS Both deep learning-based systems excelled when tested on a temporally different dataset with differing patient and image characteristics, showing favourable potential for external generalizability.
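The Dice coefficients reported above quantify the overlap between predicted and ground-truth pneumothorax masks; a minimal NumPy sketch of the metric on toy binary masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks: 2|A n B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Two overlapping 4x4 toy "lesions": 16 pixels each, 9 shared.
pred = np.zeros((8, 8), dtype=bool)
truth = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:6] = True
truth[3:7, 3:7] = True
score = dice_coefficient(pred, truth)  # 2*9 / (16 + 16) = 0.5625
```

A Dice of 1.0 means perfect overlap; 0.0 means no overlap, which is why it is the standard report for lesion localization quality.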
Affiliation(s)
- Chih-Hung Wang
- Department of Emergency Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan
- Department of Emergency Medicine, National Taiwan University Hospital, No. 7, Zhongshan S. Rd., Zhongzheng Dist., Taipei 100, Taiwan
- Tzuching Lin
- Institute of Applied Mathematical Sciences, National Taiwan University, No. 1, Sec. 4, Roosevelt Rd., Taipei 106, Taiwan
- Guanru Chen
- Institute of Applied Mathematical Sciences, National Taiwan University, No. 1, Sec. 4, Roosevelt Rd., Taipei 106, Taiwan
- Meng-Rui Lee
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Joyce Tay
- Department of Emergency Medicine, National Taiwan University Hospital, No. 7, Zhongshan S. Rd., Zhongzheng Dist., Taipei 100, Taiwan
- Cheng-Yi Wu
- Department of Emergency Medicine, National Taiwan University Hospital, No. 7, Zhongshan S. Rd., Zhongzheng Dist., Taipei 100, Taiwan
- Meng-Che Wu
- Department of Emergency Medicine, National Taiwan University Hospital, No. 7, Zhongshan S. Rd., Zhongzheng Dist., Taipei 100, Taiwan
- Can Zhao
- NVIDIA Corporation, Bethesda, USA
- Weichung Wang
- Institute of Applied Mathematical Sciences, National Taiwan University, No. 1, Sec. 4, Roosevelt Rd., Taipei 106, Taiwan
- Chien-Hua Huang
- Department of Emergency Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan
- Department of Emergency Medicine, National Taiwan University Hospital, No. 7, Zhongshan S. Rd., Zhongzheng Dist., Taipei 100, Taiwan
5. Liu Z, Lv Q, Yang Z, Li Y, Lee CH, Shen L. Recent progress in transformer-based medical image analysis. Comput Biol Med 2023; 164:107268. [PMID: 37494821] [DOI: 10.1016/j.compbiomed.2023.107268]
Abstract
The transformer was developed primarily in the field of natural language processing. Recently, it has been adopted in the computer vision (CV) field, where it shows great promise. Medical image analysis (MIA), as a critical branch of CV, also greatly benefits from this state-of-the-art technique. In this review, we first recap the core component of the transformer, the attention mechanism, and the detailed structure of the transformer. After that, we survey the recent progress of the transformer in the field of MIA, organizing applications by task: classification, segmentation, captioning, registration, detection, enhancement, localization, and synthesis. The mainstream classification and segmentation tasks are further divided into eleven medical image modalities. The large number of experiments studied in this review illustrates that transformer-based methods outperform existing methods across multiple evaluation metrics. Finally, we discuss the open challenges and future opportunities in this field. This task-modality review, with the latest content, detailed information, and comprehensive comparison, may greatly benefit the broad MIA community.
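The attention mechanism the review recaps reduces, at its core, to scaled dot-product attention; a self-contained NumPy sketch with illustrative toy dimensions:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V: each query attends to all keys
    and returns a weighted average of the corresponding values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query tokens (e.g., image patches), dim 8
K = rng.normal(size=(6, 8))   # 6 key tokens
V = rng.normal(size=(6, 8))   # one value vector per key
out, attn = scaled_dot_product_attention(Q, K, V)  # out: (4, 8), attn: (4, 6)
```

In vision transformers, the tokens are flattened image patches, and multiple such attention "heads" run in parallel.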
Affiliation(s)
- Zhaoshan Liu
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
- Qiujie Lv
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore; School of Intelligent Systems Engineering, Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, 518107, China
- Ziduo Yang
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore; School of Intelligent Systems Engineering, Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, 518107, China
- Yifan Li
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
- Chau Hung Lee
- Department of Radiology, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, Singapore, 308433, Singapore
- Lei Shen
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
6. Kumar VD, Rajesh P, Geman O, Craciun MD, Arif M, Filip R. “Quo Vadis Diagnosis”: Application of Informatics in Early Detection of Pneumothorax. Diagnostics (Basel) 2023; 13:1305. [PMID: 37046523] [PMCID: PMC10093601] [DOI: 10.3390/diagnostics13071305]
Abstract
A pneumothorax occurs when air enters the pleural space (the area between the lung and chest wall), causing the lung to collapse and making it difficult to breathe. This can happen spontaneously or as a result of an injury. The symptoms may include chest pain, shortness of breath, and rapid breathing. Although chest X-rays are commonly used to detect a pneumothorax, locating the affected area visually in X-ray images can be time-consuming and prone to errors. Existing computer technology for detecting this disease from X-rays is limited by three major issues: class disparity, which causes overfitting; difficulty in detecting dark portions of the images; and vanishing gradients. To address these issues, we propose an ensemble deep learning model called PneumoNet, which uses synthetic images from data augmentation to address the class-disparity issue and a segmentation system to identify dark areas. Finally, the vanishing-gradient issue, in which gradients become very small during backpropagation, can be addressed by hyperparameter optimization techniques that prevent the model from converging slowly and performing poorly. Our model achieved an accuracy of 98.41% on the Society for Imaging Informatics in Medicine pneumothorax dataset, outperforming other deep learning models and reducing the computational complexity of detecting the disease.
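The class-disparity fix described above, generating synthetic minority-class images by augmentation, can be illustrated with a simple flip-and-shift oversampler; the transforms and counts here are illustrative assumptions, not PneumoNet's actual pipeline.

```python
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Create a synthetic variant via random horizontal flip and shift."""
    out = img[:, ::-1] if rng.random() < 0.5 else img
    shift = rng.integers(-3, 4)
    return np.roll(out, shift, axis=1)

def balance_minority(images, labels, minority_label, rng):
    """Oversample the minority class with augmented copies until balanced."""
    images, labels = list(images), list(labels)
    minority = [im for im, lb in zip(images, labels) if lb == minority_label]
    deficit = (len(labels) - len(minority)) - len(minority)
    for _ in range(max(deficit, 0)):
        images.append(augment(minority[rng.integers(len(minority))], rng))
        labels.append(minority_label)
    return images, labels

rng = np.random.default_rng(1)
imgs = [np.zeros((16, 16))] * 9 + [np.ones((16, 16))]      # 9 negative, 1 positive
labs = [0] * 9 + [1]
bal_imgs, bal_labs = balance_minority(imgs, labs, 1, rng)  # now 9 vs 9
```

Richer augmentations (rotation, elastic deformation, intensity jitter) follow the same pattern of appending transformed minority samples.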
Affiliation(s)
- V. Dhilip Kumar
- School of Computing, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai 600062, India
- P. Rajesh
- School of Computing, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai 600062, India
- Oana Geman
- Department of Computers, Electronics and Automation, Faculty of Electrical Engineering and Computer Science, Stefan cel Mare University of Suceava, 720229 Suceava, Romania
- Maria Daniela Craciun
- Interdisciplinary Research Centre in Motricity Sciences and Human Health, Ştefan cel Mare University of Suceava, 720229 Suceava, Romania
- Muhammad Arif
- Department of Computer Science, Superior University, Lahore 54000, Pakistan
- Roxana Filip
- Faculty of Medicine and Biological Sciences, Stefan cel Mare University of Suceava, 720229 Suceava, Romania
- Suceava Emergency County Hospital, 720224 Suceava, Romania
7. Lakhani P, Mongan J, Singhal C, Zhou Q, Andriole KP, Auffermann WF, Prasanna PM, Pham TX, Peterson M, Bergquist PJ, Cook TS, Ferraciolli SF, Corradi GCA, Takahashi MS, Workman CS, Parekh M, Kamel SI, Galant J, Mas-Sanchez A, Benítez EC, Sánchez-Valverde M, Jaques L, Panadero M, Vidal M, Culiañez-Casas M, Angulo-Gonzalez D, Langer SG, de la Iglesia-Vayá M, Shih G. The 2021 SIIM-FISABIO-RSNA Machine Learning COVID-19 Challenge: Annotation and Standard Exam Classification of COVID-19 Chest Radiographs. J Digit Imaging 2023; 36:365-372. [PMID: 36171520] [PMCID: PMC9518934] [DOI: 10.1007/s10278-022-00706-8]
Abstract
We describe the curation, annotation methodology, and characteristics of the dataset used in an artificial intelligence challenge for detection and localization of COVID-19 on chest radiographs. The chest radiographs were annotated by an international group of radiologists into four mutually exclusive categories, including "typical," "indeterminate," and "atypical appearance" for COVID-19, or "negative for pneumonia," adapted from previously published guidelines, and bounding boxes were placed on airspace opacities. This dataset and respective annotations are available to researchers for academic and noncommercial use.
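A study-level annotation under this scheme pairs one of the four mutually exclusive categories with zero or more opacity bounding boxes; the record layout below is a hypothetical sketch for illustration, not the challenge's actual file format.

```python
# Illustrative record; field names are assumptions, not the challenge schema.
COVID_CATEGORIES = {"typical", "indeterminate", "atypical", "negative for pneumonia"}

def make_annotation(study_id: str, category: str, boxes: list) -> dict:
    """Each study gets exactly one category; airspace opacities get
    bounding boxes as (x_min, y_min, x_max, y_max) pixel coordinates."""
    if category not in COVID_CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    return {"study_id": study_id, "category": category, "boxes": boxes}

ann = make_annotation("study_0001", "typical", [(120, 340, 410, 620)])
```

Enforcing mutual exclusivity at record-creation time keeps the dataset consistent with the adjudicated labeling guidelines.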
Affiliation(s)
- Paras Lakhani
- Department of Radiology, Thomas Jefferson University, Sidney Kimmel Jefferson Medical College, 111 S 11th St, Philadelphia, PA, 19107, USA
- J Mongan
- University of California San Francisco, San Francisco, CA, USA
- K P Andriole
- Mass General Brigham and Harvard Medical School, Boston, MA, USA
- P M Prasanna
- University of Utah Health, Salt Lake City, UT, USA
- T X Pham
- University of Utah Health, Salt Lake City, UT, USA
- P J Bergquist
- Medstar Georgetown University Hospital, Washington DC, USA
- T S Cook
- University of Pennsylvania, Philadelphia, PA, USA
- C S Workman
- Vanderbilt University Medical Center, Nashville, TN, USA
- M Parekh
- Department of Radiology, Thomas Jefferson University, Sidney Kimmel Jefferson Medical College, 111 S 11th St, Philadelphia, PA, 19107, USA
- S I Kamel
- Department of Radiology, Thomas Jefferson University, Sidney Kimmel Jefferson Medical College, 111 S 11th St, Philadelphia, PA, 19107, USA
- J Galant
- Hospital Universitario San Juan de Alicante, San Juan de Alicante, Alicante, Spain
- A Mas-Sanchez
- Hospital Universitario San Juan de Alicante, San Juan de Alicante, Alicante, Spain
- E C Benítez
- Hospital Universitario San Juan de Alicante, San Juan de Alicante, Alicante, Spain
- M Sánchez-Valverde
- Hospital Universitario San Juan de Alicante, San Juan de Alicante, Alicante, Spain
- L Jaques
- Hospital Universitario San Juan de Alicante, San Juan de Alicante, Alicante, Spain
- M Panadero
- Hospital Universitario San Juan de Alicante, San Juan de Alicante, Alicante, Spain
- M Vidal
- Hospital Universitario San Juan de Alicante, San Juan de Alicante, Alicante, Spain
- M Culiañez-Casas
- Hospital Universitario San Juan de Alicante, San Juan de Alicante, Alicante, Spain
- María de la Iglesia-Vayá
- The Foundation for the Promotion of Health and Biomedical Research of Valencia Region, Valencia, Spain
- G Shih
- Weill Cornell Medicine, New York, NY, USA
8. Nguyen T, Vo TM, Nguyen TV, Pham HH, Nguyen HQ. Learning to diagnose common thorax diseases on chest radiographs from radiology reports in Vietnamese. PLoS One 2022; 17:e0276545. [PMID: 36315483] [PMCID: PMC9621405] [DOI: 10.1371/journal.pone.0276545]
Abstract
Deep learning has recently made remarkable strides, delivering impressive performance on many tasks, including medical image processing. One of the contributing factors to these advancements is the emergence of large medical image datasets. However, constructing a large and trustworthy medical dataset is exceedingly expensive and time-consuming; hence, multiple research efforts have leveraged medical reports to automatically extract labels for data. The majority of this work, however, has been performed in English. In this work, we propose a data collection and annotation pipeline that extracts information from Vietnamese radiology reports to provide accurate labels for chest X-ray (CXR) images. This can benefit Vietnamese radiologists and clinicians by annotating data that closely match their endemic diagnosis categories, which may vary from country to country. To assess the efficacy of the proposed labeling technique, we built a CXR dataset containing 9,752 studies and evaluated our pipeline on a subset of this dataset. With an F1-score of at least 0.9923, the evaluation demonstrates that our labeling tool performs precisely and consistently across all classes. After building the dataset, we train deep learning models that leverage knowledge transferred from large public CXR datasets. We employ a variety of loss functions to overcome the curse of imbalanced multi-label datasets and conduct experiments with various model architectures to select the one that delivers the best performance. Our best model (CheXpert-pretrained EfficientNet-B2) yields an F1-score of 0.6989 (95% CI 0.6740, 0.7240), AUC of 0.7912, sensitivity of 0.7064, and specificity of 0.8760 for the abnormal diagnosis in general. Finally, we demonstrate that our coarse classification (based on five specific locations of abnormalities) yields comparable results to fine classification (twelve pathologies) on the benchmark CheXpert dataset for general anomaly detection while delivering better average performance across all classes.
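Report-based label extraction of the kind described above is often rule-based; the sketch below uses English keywords and a crude negation window purely for illustration (the paper's pipeline parses Vietnamese reports, and its actual rules are not reproduced here).

```python
import re

# Illustrative rule-based labeler; patterns and window size are assumptions.
LABEL_PATTERNS = {
    "pneumothorax": re.compile(r"\bpneumothorax\b", re.I),
    "pleural effusion": re.compile(r"\bpleural effusion\b", re.I),
    "cardiomegaly": re.compile(r"\bcardiomegaly\b", re.I),
}
NEGATION = re.compile(r"\bno (?:evidence of )?", re.I)

def extract_labels(report: str) -> set:
    """Assign a label when its pattern appears without a nearby negation
    in the 25 characters preceding the match."""
    labels = set()
    for name, pattern in LABEL_PATTERNS.items():
        m = pattern.search(report)
        if m and not NEGATION.search(report[max(0, m.start() - 25):m.start()]):
            labels.add(name)
    return labels

labels = extract_labels("Mild cardiomegaly. No evidence of pneumothorax.")
```

Real pipelines add per-language tokenization, uncertainty labels, and many more negation and finding patterns, which is what pushes F1 toward the reported 0.99 range.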
Affiliation(s)
- Thao Nguyen
- Smart Health Center, VinBigData JSC, Hanoi, Vietnam
- Tam M. Vo
- Smart Health Center, VinBigData JSC, Hanoi, Vietnam
- Hieu H. Pham
- Smart Health Center, VinBigData JSC, Hanoi, Vietnam
- College of Engineering and Computer Science, VinUniversity, Hanoi, Vietnam
- VinUni-Illinois Smart Health Center, Hanoi, Vietnam
- Ha Q. Nguyen
- Smart Health Center, VinBigData JSC, Hanoi, Vietnam
- College of Engineering and Computer Science, VinUniversity, Hanoi, Vietnam
9. Gu H, Wang H, Qin P, Wang J. Chest L-Transformer: Local Features With Position Attention for Weakly Supervised Chest Radiograph Segmentation and Classification. Front Med (Lausanne) 2022; 9:923456. [PMID: 35721071] [PMCID: PMC9201450] [DOI: 10.3389/fmed.2022.923456]
Abstract
We consider the problem of weakly supervised segmentation on chest radiographs, the most common means of screening and diagnosing thoracic diseases. Weakly supervised deep learning models have gained increasing popularity in medical image segmentation. However, these models are not suited to two critical characteristics of chest radiographs: the global symmetry of the image and the dependency between lesions and their positions. Existing models extract global features from the whole image to make the image-level decision, so the global symmetry can lead them to misclassify lesions at symmetrical positions. Moreover, thoracic diseases often have disease-prone areas in chest radiographs, so there is a relationship between lesions and their positions. In this study, we propose a weakly supervised model, called Chest L-Transformer, that takes these characteristics into account. Chest L-Transformer classifies an image based on local features to avoid the misclassification caused by the global symmetry. Moreover, through the Transformer attention mechanism, Chest L-Transformer models the dependencies between lesions and their positions and pays more attention to the disease-prone areas. Chest L-Transformer is trained only with image-level annotations for lesion segmentation. Thus, Log-Sum-Exp voting and a variant of it are proposed to unify the pixel-level prediction with the image-level prediction. We demonstrate a significant segmentation performance improvement over the current state-of-the-art while achieving competitive classification performance.
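The Log-Sum-Exp voting used to bridge pixel-level and image-level predictions builds on standard LSE pooling, which interpolates smoothly between mean-pooling and max-pooling; below is a NumPy sketch of the standard form (the paper's exact voting variant is not reproduced here).

```python
import numpy as np

def lse_pool(pixel_scores: np.ndarray, r: float = 5.0) -> float:
    """Log-Sum-Exp pooling of pixel scores into one image-level score.
    As r -> 0 it approaches the mean; as r -> infinity, the max."""
    s = pixel_scores.ravel()
    m = s.max()  # subtract the max to stabilize the exponent
    return float(m + np.log(np.mean(np.exp(r * (s - m)))) / r)

scores = np.linspace(0.0, 1.0, 101)    # toy pixel-level lesion scores
image_score = lse_pool(scores, r=5.0)  # lies between the mean and the max
```

The sharpness parameter r controls how strongly the few most lesion-like pixels dominate the image-level vote.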
Affiliation(s)
- Hong Gu
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Hongyu Wang
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Pan Qin
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Jia Wang
- Department of Surgery, The Second Hospital of Dalian Medical University, Dalian, China
10. The internet of medical things and artificial intelligence: trends, challenges, and opportunities. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.05.008]
11. Alyasseri ZAA, Al-Betar MA, Doush IA, Awadallah MA, Abasi AK, Makhadmeh SN, Alomari OA, Abdulkareem KH, Adam A, Damasevicius R, Mohammed MA, Zitar RA. Review on COVID-19 diagnosis models based on machine learning and deep learning approaches. Expert Syst 2022; 39:e12759. [PMID: 34511689] [PMCID: PMC8420483] [DOI: 10.1111/exsy.12759]
Abstract
COVID-19 is the disease caused by a new breed of coronavirus called severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Recently, COVID-19 has become a pandemic, infecting more than 152 million people in over 216 countries and territories. The exponential increase in the number of infections has rendered traditional diagnosis techniques inefficient. Therefore, many researchers have developed several intelligent techniques, such as deep learning (DL) and machine learning (ML), which can assist the healthcare sector in providing quick and precise COVID-19 diagnosis. This paper provides a comprehensive review of the most recent DL and ML techniques for COVID-19 diagnosis, covering studies published from December 2019 until April 2021. In total, it includes more than 200 studies carefully selected from several publishers, such as IEEE, Springer, and Elsevier. We classify the research tracks into two categories, DL and ML, and present COVID-19 public datasets established and extracted from different countries. The measures used to evaluate diagnosis methods are comparatively analysed and discussed. In conclusion, for COVID-19 diagnosis and outbreak prediction, SVM is the most widely used machine learning mechanism and CNN the most widely used deep learning mechanism, while accuracy, sensitivity, and specificity are the most widely used evaluation measures in previous studies. Finally, this review will guide the research community on the upcoming development of ML and DL for COVID-19 and inspire future work.
Affiliation(s)
- Zaid Abdi Alkareem Alyasseri
- Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi, Malaysia
- ECE Department, Faculty of Engineering, University of Kufa, Najaf, Iraq
- Mohammed Azmi Al‐Betar
- Artificial Intelligence Research Center (AIRC), Ajman University, Ajman, United Arab Emirates
- Department of Information Technology, Al‐Huson University College, Al‐Balqa Applied University, Irbid, Jordan
- Iyad Abu Doush
- Computing Department, College of Engineering and Applied Sciences, American University of Kuwait, Salmiya, Kuwait
- Computer Science Department, Yarmouk University, Irbid, Jordan
- Mohammed A. Awadallah
- Artificial Intelligence Research Center (AIRC), Ajman University, Ajman, United Arab Emirates
- Department of Computer Science, Al‐Aqsa University, Gaza, Palestine
- Ammar Kamal Abasi
- Artificial Intelligence Research Center (AIRC), Ajman University, Ajman, United Arab Emirates
- School of Computer Sciences, Universiti Sains Malaysia, Penang, Malaysia
- Sharif Naser Makhadmeh
- Artificial Intelligence Research Center (AIRC), Ajman University, Ajman, United Arab Emirates
- Faculty of Information Technology, Middle East University, Amman, Jordan
- Afzan Adam
- Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi, Malaysia
- Mazin Abed Mohammed
- College of Computer Science and Information Technology, University of Anbar, Anbar, Iraq
- Raed Abu Zitar
- Sorbonne Center of Artificial Intelligence, Sorbonne University‐Abu Dhabi, Abu Dhabi, United Arab Emirates
12
Wang H, Gu H, Qin P, Wang J. U-shaped GAN for Semi-Supervised Learning and Unsupervised Domain Adaptation in High Resolution Chest Radiograph Segmentation. Front Med (Lausanne) 2022; 8:782664. [PMID: 35096877 PMCID: PMC8792862 DOI: 10.3389/fmed.2021.782664] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 09/24/2021] [Accepted: 12/14/2021] [Indexed: 01/03/2023] Open
Abstract
Deep learning has achieved considerable success in medical image segmentation. However, applying deep learning in clinical environments often involves two problems: (1) scarcity of annotated data, as data annotation is time-consuming, and (2) varying attributes across datasets due to domain shift. To address these problems, we propose an improved generative adversarial network (GAN) segmentation model, called U-shaped GAN, for chest radiograph datasets with limited annotations. The semi-supervised learning approach and the unsupervised domain adaptation (UDA) approach are modeled in a unified framework for effective segmentation. We improve the GAN by replacing the traditional discriminator with a U-shaped net that predicts a label for each pixel. The proposed U-shaped net is designed for high resolution radiographs (1,024 × 1,024) to enable effective segmentation while keeping the computational burden in check. Pointwise convolution is applied in U-shaped GAN for dimensionality reduction, decreasing the number of feature maps while retaining their salient features. Moreover, we design the U-shaped net with a pretrained ResNet-50 as the encoder to avoid the computational cost of training an encoder from scratch. A semi-supervised learning approach is proposed to learn from limited annotated data while exploiting additional unannotated data with a pixel-level loss. U-shaped GAN is extended to UDA by treating the source and target domain data as the annotated and unannotated data, respectively, in the semi-supervised learning approach. Compared with previous models that deal with these problems separately, U-shaped GAN accommodates the varying data distributions of multiple medical centers, with efficient training and optimized performance. U-shaped GAN can be generalized to chest radiograph segmentation for clinical deployment. We evaluate U-shaped GAN on two chest radiograph datasets and show that it significantly outperforms state-of-the-art models.
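The pointwise convolution mentioned in the abstract is a 1 × 1 convolution: it mixes channels at every spatial location independently, reducing the number of feature maps without touching the spatial resolution. The sketch below is our own NumPy illustration of that operation under assumed shapes, not code from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's code): a pointwise (1x1) convolution
# mixes channels at each pixel independently, reducing C_in feature maps to
# C_out while leaving spatial resolution unchanged.
def pointwise_conv(x, w):
    """x: (C_in, H, W) feature maps; w: (C_out, C_in) channel-mixing weights."""
    return np.einsum('oc,chw->ohw', w, x)

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 8, 8))   # 64 feature maps on an 8x8 grid
w = rng.standard_normal((16, 64))     # reduce 64 channels to 16
y = pointwise_conv(x, w)
assert y.shape == (16, 8, 8)          # spatial size preserved, channels reduced
```

Because each pixel is treated independently, this costs only C_in × C_out multiplications per pixel, which is why it is a cheap way to shrink feature maps before heavier layers.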
Affiliation(s)
- Hongyu Wang
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Hong Gu
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Pan Qin
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Jia Wang
- Department of Surgery, The Second Hospital of Dalian Medical University, Dalian, China
13
Feng S, Azzollini D, Kim JS, Jin CK, Gordon SP, Yeoh J, Kim E, Han M, Lee A, Patel A, Wu J, Urschler M, Fong A, Simmers C, Tarr GP, Barnard S, Wilson B. Curation of the CANDID-PTX Dataset with Free-Text Reports. Radiol Artif Intell 2021; 3:e210136. [PMID: 34870223 DOI: 10.1148/ryai.2021210136] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 05/26/2021] [Revised: 09/20/2021] [Accepted: 09/27/2021] [Indexed: 12/22/2022]
Abstract
Supplemental material is available for this article. Keywords: Conventional Radiography, Thorax, Trauma, Ribs, Catheters, Segmentation, Diagnosis, Classification, Supervised Learning, Machine Learning © RSNA, 2021.
Affiliation(s)
- Sijing Feng
- Department of Radiology, Dunedin Hospital, 201 Great King St, Dunedin Central, Dunedin, Otago 9016, New Zealand (S.F., A.F., C.S., B.W.); Eastern Health, Melbourne, Victoria, Australia (D.A.); Auckland District Health Board, Auckland, New Zealand (J.S.K., J.Y., E.K., M.H.); Waitemata District Health Board, Auckland, New Zealand (C.K.J.); Waikato District Health Board, Hamilton, New Zealand (S.P.G.); The University of Auckland Faculty of Medical and Health Sciences, Auckland, New Zealand (A.L.); University of Otago Medical School, Dunedin, Otago, New Zealand (A.P.); IBM Almaden Research Center, San Jose, Calif (J.W.); School of Computer Science, University of Auckland, Auckland, New Zealand (M.U.); Department of Radiology, Auckland City Hospital, Auckland, New Zealand (G.P.T.); and Department of Radiology, Middlemore Hospital, Auckland, New Zealand (S.B.)
- Damian Azzollini
- Ji Soo Kim
- Cheng-Kai Jin
- Simon P Gordon
- Jason Yeoh
- Eve Kim
- Mina Han
- Andrew Lee
- Aakash Patel
- Joy Wu
- Martin Urschler
- Amy Fong
- Cameron Simmers
- Gregory P Tarr
- Stuart Barnard
- Ben Wilson
14
Detection of the location of pneumothorax in chest X-rays using small artificial neural networks and a simple training process. Sci Rep 2021; 11:13054. [PMID: 34158562 PMCID: PMC8219779 DOI: 10.1038/s41598-021-92523-2] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Received: 03/15/2021] [Accepted: 06/11/2021] [Indexed: 02/06/2023] Open
Abstract
The purpose of this study was to evaluate the diagnostic performance achieved by using fully-connected small artificial neural networks (ANNs) and a simple training process, the Kim-Monte Carlo algorithm, to detect the location of pneumothorax in chest X-rays. A total of 1,000 chest X-ray images with pneumothorax were taken randomly from the NIH (National Institutes of Health) public image database and used as the training and test sets. Each X-ray image with pneumothorax was divided into 49 boxes for pneumothorax localization. For the boxes in the chest X-ray images of the test set, the area under the receiver operating characteristic (ROC) curve (AUC) was 0.882, and the sensitivity and specificity were 80.6% and 83.0%, respectively. In addition, a commonly used deep-learning method for image recognition, the convolutional neural network (CNN), was applied to the same dataset for comparison. The fully-connected small ANN performed better than the CNN. Among CNNs with different activation functions, the CNN with a sigmoid activation function for the fully-connected hidden nodes performed better than the CNN with the rectified linear unit (ReLU) activation function. This study showed that our approach can accurately detect the location of pneumothorax in chest X-rays, significantly reduce the time delay incurred when diagnosing urgent diseases such as pneumothorax, and increase the effectiveness of clinical practice and patient care.
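Box-level sensitivity and specificity of the kind reported above follow from treating each of the 49 boxes per image as an independent binary classification and thresholding its score. The sketch below is our own minimal illustration under assumed data shapes, not the study's pipeline.

```python
# Minimal sketch of per-box evaluation (assumed data shapes, not the study's
# code): every box in a chest X-ray receives a pneumothorax score, and
# sensitivity/specificity come from a threshold on that score.
def box_sensitivity_specificity(y_true, y_score, threshold=0.5):
    """y_true: 0/1 box labels; y_score: model scores in [0, 1]."""
    tp = sum(1 for t, s in zip(y_true, y_score) if t == 1 and s >= threshold)
    fn = sum(1 for t, s in zip(y_true, y_score) if t == 1 and s < threshold)
    tn = sum(1 for t, s in zip(y_true, y_score) if t == 0 and s < threshold)
    fp = sum(1 for t, s in zip(y_true, y_score) if t == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

labels = [1, 1, 0, 0, 0]            # toy box labels from one image
scores = [0.9, 0.4, 0.2, 0.8, 0.1]  # toy model scores
sens, spec = box_sensitivity_specificity(labels, scores)
assert sens == 0.5                  # one of two positive boxes detected
assert abs(spec - 2 / 3) < 1e-9     # two of three negative boxes rejected
```

Sweeping the threshold and plotting sensitivity against (1 − specificity) yields the ROC curve whose area the study reports as 0.882.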
15
Law M, Seah J, Shih G. Artificial intelligence and medical imaging: applications, challenges and solutions. Med J Aust 2021; 214:450-452.e1. [PMID: 33987848 DOI: 10.5694/mja2.51077] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Indexed: 11/17/2022]
Affiliation(s)
- Meng Law
- Alfred Health, Melbourne, VIC; Monash University, Melbourne, VIC
16
Tolkachev A, Sirazitdinov I, Kholiavchenko M, Mustafaev T, Ibragimov B. Deep Learning for Diagnosis and Segmentation of Pneumothorax: The Results on the Kaggle Competition and Validation Against Radiologists. IEEE J Biomed Health Inform 2021; 25:1660-1672. [PMID: 32956067 DOI: 10.1109/jbhi.2020.3023476] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Indexed: 11/10/2022]
Abstract
Pneumothorax is a potentially life-threatening condition that requires urgent diagnosis and treatment. The chest X-ray is the diagnostic modality of choice when pneumothorax is suspected. Computer-aided diagnosis of pneumothorax has received a dramatic boost in the last few years due to deep learning advances and the first public pneumothorax diagnosis competition, with 15,257 chest X-rays manually annotated by a team of 19 radiologists. This paper describes one of the top frameworks that participated in the competition. The framework investigates the benefits of combining the Unet convolutional neural network with various backbones, namely ResNet34, SE-ResNext50, SE-ResNext101, and DenseNet121. The paper presents step-by-step instructions for applying the framework, including data augmentation and different pre- and post-processing steps. The framework achieved a Dice coefficient of 0.8574. The second contribution of the paper is the comparison of the deep learning framework against three experienced radiologists on pneumothorax detection and segmentation in challenging X-rays. We also evaluated how the diagnostic confidence of radiologists affects the accuracy of diagnosis and observed that the deep learning framework and the radiologists find the same X-rays easy or difficult to analyze (p < 1e-4). Finally, the methodology of all top-performing teams on the competition leaderboard was analyzed to find consistent methodological patterns of accurate pneumothorax detection and segmentation.
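The Dice coefficient used as the competition metric compares a predicted binary mask with the ground truth: Dice = 2|P ∩ G| / (|P| + |G|). The sketch below applies the standard definition; it is our own illustration, not code from the competition framework.

```python
import numpy as np

# Dice coefficient between two binary segmentation masks (standard
# definition, not the competition framework's code). The eps term avoids
# division by zero when both masks are empty.
def dice(pred, target, eps=1e-7):
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

mask = np.zeros((4, 4), int)
mask[1:3, 1:3] = 1               # 2x2 ground-truth region
shifted = np.roll(mask, 1, 1)    # prediction shifted one pixel right
assert round(dice(mask, mask), 6) == 1.0     # perfect overlap
assert round(dice(shifted, mask), 6) == 0.5  # half the pixels overlap
```

Because Dice weights overlap against total mask size, a small pneumothorax missed entirely costs as much as a large one segmented sloppily, which is why it is a stricter leaderboard metric than pixel accuracy.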
17
Rueckel J, Huemmer C, Fieselmann A, Ghesu FC, Mansoor A, Schachtner B, Wesp P, Trappmann L, Munawwar B, Ricke J, Ingrisch M, Sabel BO. Pneumothorax detection in chest radiographs: optimizing artificial intelligence system for accuracy and confounding bias reduction using in-image annotations in algorithm training. Eur Radiol 2021; 31:7888-7900. [PMID: 33774722 PMCID: PMC8452588 DOI: 10.1007/s00330-021-07833-w] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Received: 09/05/2020] [Revised: 01/06/2021] [Accepted: 02/24/2021] [Indexed: 12/15/2022]
Abstract
OBJECTIVES The diagnostic accuracy of artificial intelligence (AI) pneumothorax (PTX) detection in chest radiographs (CXR) is limited by the noisy annotation quality of public training data and by confounding thoracic tubes (TT). We hypothesize that in-image annotation of the dehiscent visceral pleura during algorithm training boosts the algorithm's performance and suppresses confounders. METHODS Our single-center evaluation cohort of 3062 supine CXRs includes 760 PTX-positive cases with radiological annotations of PTX size and inserted TTs. Three step-by-step improved algorithms (differing in architecture, training data from public datasets/clinical sites, and whether in-image annotations were included in training) were characterized by area under the receiver operating characteristic curve (AUROC) in detailed subgroup analyses and referenced to the well-established "CheXNet" algorithm. RESULTS The performance of established algorithms trained exclusively on publicly available data without in-image annotations is limited to AUROCs of 0.778 and strongly biased by TTs, which can completely eliminate the algorithm's discriminative power in individual subgroups. In contrast, our final "algorithm 2", trained on fewer images but additionally with in-image annotations of the dehiscent pleura, achieved an overall AUROC of 0.877 for unilateral PTX detection with significantly reduced TT-related confounding bias. CONCLUSIONS We demonstrated strong limitations of an established PTX-detecting AI algorithm that can be significantly reduced by designing an AI system capable of learning to both classify and localize PTX. Our results draw attention to the necessity of high-quality in-image localization in training data to reduce the risk of unintentionally biasing the training process of pathology-detecting AI algorithms.
KEY POINTS • Established pneumothorax-detecting artificial intelligence algorithms trained on public training data are strongly limited and biased by confounding thoracic tubes. • We used high-quality in-image annotated training data to effectively boost algorithm performance and suppress the impact of confounding thoracic tubes. • Based on our results, we hypothesize that even hidden confounders might be effectively addressed by in-image annotations of pathology-related image features.
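The AUROC values compared throughout these subgroup analyses can be computed without plotting a curve, via the Mann-Whitney U formulation: AUROC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative case. The sketch below applies that standard definition; it is not the study's evaluation code.

```python
# AUROC via the rank-sum (Mann-Whitney U) formulation: the fraction of
# positive/negative pairs where the positive case scores higher, counting
# ties as half. Standard definition, not the study's code.
def auroc(y_true, y_score):
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.3, 0.7, 0.2, 0.1]
# 0.9 and 0.8 outrank all three negatives; 0.3 outranks two of them: 8/9 pairs
assert abs(auroc(labels, scores) - 8 / 9) < 1e-9
```

This pairwise reading also explains how a confounder like a thoracic tube can "eliminate discriminative power" in a subgroup: if scores track the tube rather than the pneumothorax, positive/negative pairs within that subgroup are ordered no better than chance and the subgroup AUROC collapses toward 0.5.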
Affiliation(s)
- Johannes Rueckel
- Department of Radiology, University Hospital, LMU Munich, Marchioninistr. 15, 81377, Munich, Germany
- Awais Mansoor
- Digital Technology and Innovation, Siemens Healthineers, Princeton, NJ, USA
- Balthasar Schachtner
- Department of Radiology, University Hospital, LMU Munich, Marchioninistr. 15, 81377, Munich, Germany
- Comprehensive Pneumology Center (CPC-M), Member of the German Center for Lung Research (DZL), Munich, Germany
- Philipp Wesp
- Department of Radiology, University Hospital, LMU Munich, Marchioninistr. 15, 81377, Munich, Germany
- Lena Trappmann
- Department of Radiology, University Hospital, LMU Munich, Marchioninistr. 15, 81377, Munich, Germany
- Basel Munawwar
- Department of Radiology, University Hospital, LMU Munich, Marchioninistr. 15, 81377, Munich, Germany
- Jens Ricke
- Department of Radiology, University Hospital, LMU Munich, Marchioninistr. 15, 81377, Munich, Germany
- Michael Ingrisch
- Department of Radiology, University Hospital, LMU Munich, Marchioninistr. 15, 81377, Munich, Germany
- Bastian O Sabel
- Department of Radiology, University Hospital, LMU Munich, Marchioninistr. 15, 81377, Munich, Germany
18
Chang K, Beers AL, Brink L, Patel JB, Singh P, Arun NT, Hoebel KV, Gaw N, Shah M, Pisano ED, Tilkin M, Coombs LP, Dreyer KJ, Allen B, Agarwal S, Kalpathy-Cramer J. Multi-Institutional Assessment and Crowdsourcing Evaluation of Deep Learning for Automated Classification of Breast Density. J Am Coll Radiol 2020; 17:1653-1662. [PMID: 32592660 PMCID: PMC10757768 DOI: 10.1016/j.jacr.2020.05.015] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Received: 11/25/2019] [Revised: 05/05/2020] [Accepted: 05/07/2020] [Indexed: 12/11/2022]
Abstract
OBJECTIVE We developed deep learning algorithms to automatically assess BI-RADS breast density. METHODS Using a large multi-institution cohort of 108,230 digital screening mammograms from the Digital Mammographic Imaging Screening Trial, we investigated the effect of data, model, and training parameters on overall model performance and obtained a crowdsourced evaluation from attendees of the ACR 2019 Annual Meeting. RESULTS Our best-performing algorithm achieved good agreement with radiologists who were qualified interpreters of mammograms, with a four-class κ of 0.667. When training was performed with randomly sampled images from the data set rather than with an equal number of images from each density category, the model predictions were biased away from low-prevalence categories such as extremely dense breasts. The net result was an increase in sensitivity and a decrease in specificity for predicting dense breasts under equal-class sampling compared with random sampling. We also found that model performance degrades when evaluated on digital mammography data formats that differ from the one trained on, emphasizing the importance of multi-institutional training sets. Lastly, we showed that crowdsourced annotations, including those from attendees who routinely read mammograms, had higher agreement with our algorithm than with the original interpreting radiologists. CONCLUSION We demonstrated the parameters that can influence model performance and how crowdsourcing can be used for evaluation. This study was performed in tandem with the development of the ACR AI-LAB, a platform for democratizing artificial intelligence.
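The four-class κ reported above is Cohen's kappa over the four BI-RADS density categories: observed agreement corrected for the agreement expected by chance from each rater's marginal label frequencies. The sketch below implements the unweighted version on toy data; whether the study used a weighted variant is not stated in this listing.

```python
from collections import Counter

# Cohen's kappa (unweighted): observed agreement corrected for chance
# agreement derived from each rater's label marginals. Toy data below is
# illustrative, not from the study.
def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

radiologist = ['A', 'B', 'B', 'C', 'D', 'D']   # four BI-RADS density classes
model       = ['A', 'B', 'C', 'C', 'D', 'B']
kappa = cohens_kappa(radiologist, model)
assert 0 < kappa < 1   # partial agreement beyond chance
```

The chance correction is what makes κ a fairer score than raw accuracy here: with four classes of very unequal prevalence, a model that always predicted the majority density category could look accurate while having κ near zero.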
Affiliation(s)
- Ken Chang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
- Andrew L Beers
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
- Laura Brink
- American College of Radiology, Reston, Virginia
- Jay B Patel
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
- Praveer Singh
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
- Nishanth T Arun
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
- Katharina V Hoebel
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
- Nathan Gaw
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
- Meesam Shah
- American College of Radiology, Reston, Virginia
- Etta D Pisano
- Chief Research Officer (ACR), Reston, Virginia; Professor in Residence, Beth Israel Lahey/Harvard Medical School, Boston, Massachusetts
- Mike Tilkin
- Chief Information Officer and EVP for Technology (ACR), Reston, Virginia
- Keith J Dreyer
- Chief Data Science Officer, Chief Imaging Information Officer, Massachusetts General Hospital and Brigham and Women's Hospital (MGH & BWH); Chief Executive, MGH & BWH Center for Clinical Data Science; Vice Chairman of Radiology - Informatics, MGH & BWH, Boston, Massachusetts; Associate Professor of Radiology, Harvard Medical School, Boston, Massachusetts; Chief Science Officer, ACR Data Science Institute, Reston, Virginia
- Bibb Allen
- Chief Medical Officer, ACR Data Science Institute, Reston, Virginia; Secretary General, International Society of Radiology, Reston, Virginia; Partner, Grandview Medical Center, Birmingham, Alabama
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts; Scientific Director (CCDS); Director (QTIM Lab and the Center for Machine Learning); Associate Professor of Radiology, MGH/Harvard Medical School, Boston, Massachusetts