1.
Dai T, Zhang R, Hong F, Yao J, Zhang Y, Wang Y. UniChest: Conquer-and-Divide Pre-Training for Multi-Source Chest X-Ray Classification. IEEE Trans Med Imaging 2024; 43:2901-2912. PMID: 38526891. DOI: 10.1109/tmi.2024.3381123.
Abstract
Vision-Language Pre-training (VLP), which uses multi-modal information to improve training efficiency and effectiveness, has achieved great success in visual recognition of natural domains and shown promise in medical imaging diagnosis for chest X-rays (CXRs). However, current work mainly explores single CXR datasets, which limits the potential of this powerful paradigm on larger hybrids of multi-source CXR datasets. We find that although blending samples from diverse sources improves model generalization, it is still challenging to maintain consistent superiority on the task of each source because of the heterogeneity among sources. To handle this dilemma, we design a Conquer-and-Divide pre-training framework, termed UniChest, which aims to make full use of the collaborative benefit of multiple CXR sources while reducing the negative influence of source heterogeneity. Specifically, the "Conquer" stage in UniChest encourages the model to sufficiently capture multi-source common patterns, and the "Divide" stage squeezes personalized patterns into different small experts (query networks). We conduct thorough experiments on many benchmarks, e.g., ChestX-ray14, CheXpert, VinDr-CXR, Shenzhen, Open-I and SIIM-ACR Pneumothorax, verifying the effectiveness of UniChest over a range of baselines, and release our code and pre-trained models at https://github.com/Elfenreigen/UniChest.
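The "Conquer" (shared trunk) and "Divide" (per-source expert heads) idea above can be sketched schematically. This is a toy illustration with random stand-in weights and hypothetical source names taken from the benchmarks listed, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared trunk stands in for the "Conquer" stage (common patterns across
# sources); one small expert head per source stands in for the "Divide"
# stage. All weights here are random placeholders.
W_shared = rng.normal(size=(16, 8))
experts = {src: rng.normal(size=(8, 4)) for src in ("chestxray14", "chexpert", "vindr")}

def predict(features, source):
    """Route a feature vector through the shared trunk, then through the
    source-specific expert head; sigmoid gives multi-label probabilities."""
    h = np.tanh(features @ W_shared)
    logits = h @ experts[source]
    return 1.0 / (1.0 + np.exp(-logits))

probs = predict(rng.normal(size=16), "chexpert")
print(probs.shape)  # one probability per disease label
```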
2.
Lee M, Lee H, Lee D, Cho H, Choi J, Cha BK, Kim K. Framework for dual-energy-like chest radiography image synthesis from single-energy computed tomography based on cycle-consistent generative adversarial network. Med Phys 2024; 51:1509-1530. PMID: 36846955. DOI: 10.1002/mp.16329.
Abstract
BACKGROUND Dual-energy (DE) chest radiography (CXR) enables selective imaging of two relevant materials, namely soft tissue and bone structures, to better characterize various chest pathologies (e.g., lung nodules and bony lesions) and potentially improve CXR-based diagnosis. Recently, deep-learning-based image synthesis techniques have attracted considerable attention as alternatives to existing DE methods (i.e., dual-exposure-based and sandwich-detector-based methods) because software-generated bone-only and bone-suppression CXR images could be useful. PURPOSE The objective of this study was to develop a new framework for DE-like CXR image synthesis from single-energy computed tomography (CT) based on a cycle-consistent generative adversarial network. METHODS The core techniques of the proposed framework fall into three categories: (1) data configuration through the generation of pseudo CXR from single-energy CT, (2) training of the developed network architecture using pseudo CXR and pseudo-DE images derived from single-energy CT, and (3) inference of the trained network on real single-energy CXR. We performed a visual inspection and comparative evaluation using various metrics and introduced a figure of image quality (FIQ) to capture the effects of our framework on spatial resolution and noise in a single index across various test cases. RESULTS Our results indicate that the proposed framework is effective and shows potential for synthesizing images of the two relevant materials: soft tissue and bone structures. Its effectiveness was validated, and its ability to overcome the limitations of existing DE imaging techniques (e.g., increased exposure dose owing to the need for two acquisitions) via an artificial intelligence technique was demonstrated.
CONCLUSIONS The developed framework addresses X-ray dose issues in radiation imaging and enables pseudo-DE imaging with a single exposure.
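The cycle-consistency constraint at the heart of such frameworks can be shown in a few lines. The toy lambda "generators" below are stand-ins for the paper's CNNs, used only to make the loss definition concrete:

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """Mean absolute error ||F(G(x)) - x||_1: the core constraint of a
    cycle-consistent GAN. G maps one domain to the other, F maps back."""
    return float(np.abs(F(G(x)) - x).mean())

G = lambda img: 2.0 * img   # stand-in "single-energy CXR -> DE-like" mapping
F = lambda img: img / 2.0   # stand-in inverse mapping
x = np.ones((4, 4))
loss = cycle_consistency_loss(x, G, F)
print(loss)  # 0.0, because F exactly inverts G here
```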
Affiliation(s)
- Minjae Lee: Department of Radiation Convergence Engineering, Yonsei University, Wonju, Republic of Korea
- Hunwoo Lee: Department of Radiation Convergence Engineering, Yonsei University, Wonju, Republic of Korea
- Dongyeon Lee: Department of Radiation Convergence Engineering, Yonsei University, Wonju, Republic of Korea
- Hyosung Cho: Department of Radiation Convergence Engineering, Yonsei University, Wonju, Republic of Korea
- Jaegu Choi: Electro-Medical Device Research Center, Korea Electrotechnology Research Institute (KERI), Hanggaul-ro, Sangnok-gu, Ansan-si, Gyeonggi-do, Republic of Korea
- Bo Kyung Cha: Electro-Medical Device Research Center, Korea Electrotechnology Research Institute (KERI), Hanggaul-ro, Sangnok-gu, Ansan-si, Gyeonggi-do, Republic of Korea
- Kyuseok Kim: Department of Integrative Medicine, Major in Digital Healthcare, Yonsei University College of Medicine, Unju-ro, Gangnam-gu, Republic of Korea
3.
Gómez Ó, Mesejo P, Ibáñez Ó, Valsecchi A, Bermejo E, Cerezo A, Pérez J, Alemán I, Kahana T, Damas S, Cordón Ó. Evaluating artificial intelligence for comparative radiography. Int J Legal Med 2024; 138:307-327. PMID: 37801115. DOI: 10.1007/s00414-023-03080-4.
Abstract
INTRODUCTION Comparative radiography is a forensic identification and shortlisting technique based on the comparison of skeletal structures in ante-mortem and post-mortem images. The images (e.g., 2D radiographs or 3D computed tomographies) are manually superimposed and visually compared by a forensic practitioner. It requires a significant amount of time per comparison, limiting its utility in large comparison scenarios. METHODS We propose and validate a novel framework for automating the shortlisting of candidates using artificial intelligence. It is composed of (1) a segmentation method to delimit skeletal structures' silhouettes in radiographs, (2) a superposition method to generate the best simulated "radiographs" from 3D images according to the segmented radiographs, and (3) a decision-making method for shortlisting all candidates ranked according to a similarity metric. MATERIAL The dataset is composed of 180 computed tomographies and 180 radiographs where the frontal sinuses are visible. Frontal sinuses are the skeletal structure analyzed due to their high individualization capability. RESULTS Firstly, we validate two deep learning-based techniques for segmenting the frontal sinuses in radiographs, obtaining high-quality results. Secondly, we study the framework's shortlisting capability using both automatic segmentations and superimpositions. The obtained superimpositions, based only on the superimposition metric, allowed us to filter out 40% of the possible candidates in a completely automatic manner. Thirdly, we perform a reliability study by comparing 180 radiographs against 180 computed tomographies using manual segmentations. The results allowed us to filter out 73% of the possible candidates. Furthermore, the results are robust to inter- and intra-expert-related errors.
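The decision-making step above, shortlisting candidates ranked by a similarity metric, can be sketched as follows. The candidate names, scores, and keep fraction are illustrative only (the abstract reports filtering out 40% of candidates automatically, i.e., keeping the top 60%):

```python
def shortlist(similarities, keep_fraction):
    """Rank candidates by superimposition similarity (higher = better
    match) and keep the top fraction, filtering out the rest."""
    ranked = sorted(similarities, key=similarities.get, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

similarities = {"cand_a": 0.91, "cand_b": 0.40, "cand_c": 0.75, "cand_d": 0.10}
# Filtering out 40% of the candidates means keeping the top 60%.
kept = shortlist(similarities, 0.6)
print(kept)  # ['cand_a', 'cand_c']
```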
Affiliation(s)
- Óscar Gómez: Andalusian Research Institute DaSCI, University of Granada, Granada, Spain
- Pablo Mesejo: Andalusian Research Institute DaSCI, University of Granada, Granada, Spain; Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain; Panacea Cooperative Research S. Coop., Ponferrada, Spain
- Óscar Ibáñez: Andalusian Research Institute DaSCI, University of Granada, Granada, Spain; Panacea Cooperative Research S. Coop., Ponferrada, Spain; Faculty of Computer Science, CITIC, University of A Coruña, A Coruña, Spain
- Andrea Valsecchi: Andalusian Research Institute DaSCI, University of Granada, Granada, Spain; Panacea Cooperative Research S. Coop., Ponferrada, Spain
- Enrique Bermejo: Andalusian Research Institute DaSCI, University of Granada, Granada, Spain; Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain; Panacea Cooperative Research S. Coop., Ponferrada, Spain
- Andrea Cerezo: Department of Legal Medicine, Toxicology and Physical Anthropology, University of Granada, Granada, Spain
- José Pérez: Department of Legal Medicine, Toxicology and Physical Anthropology, University of Granada, Granada, Spain
- Inmaculada Alemán: Department of Legal Medicine, Toxicology and Physical Anthropology, University of Granada, Granada, Spain
- Tzipi Kahana: Faculty of Criminology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Sergio Damas: Andalusian Research Institute DaSCI, University of Granada, Granada, Spain; Department of Software Engineering, University of Granada, Granada, Spain
- Óscar Cordón: Andalusian Research Institute DaSCI, University of Granada, Granada, Spain; Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain
4.
Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. Wiley Interdiscip Rev Data Min Knowl Discov 2023; 13:e1510. PMID: 38249785. PMCID: PMC10796150. DOI: 10.1002/widm.1510.
Abstract
Over the last decade, deep learning (DL) has driven a paradigm shift in computer vision and image recognition, creating widespread opportunities for using artificial intelligence in research as well as industrial applications. DL has been extensively studied in medical imaging applications, including those related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world population. Pulmonary imaging has been widely investigated to improve our understanding of disease etiologies, early diagnosis, and assessment of disease progression and clinical outcomes. DL has been broadly applied to various pulmonary image processing challenges, including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases, the roles of imaging in translational and clinical pulmonary research, and applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Affiliation(s)
- Punam K Saha: Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242
5.
Gopatoti A, Vijayalakshmi P. MTMC-AUR2CNet: Multi-textural multi-class attention recurrent residual convolutional neural network for COVID-19 classification using chest X-ray images. Biomed Signal Process Control 2023; 85:104857. PMID: 36968651. PMCID: PMC10027978. DOI: 10.1016/j.bspc.2023.104857.
Abstract
Coronavirus disease (COVID-19) had caused over 603 million confirmed cases as of September 2022, and its rapid spread has raised concerns worldwide. More than 6.4 million fatalities among confirmed patients have been reported. According to reports, the COVID-19 virus causes lung damage and mutates rapidly before the patient receives a diagnosis-specific medicine. Daily increases in COVID-19 cases and the limited number of diagnostic test kits encourage the use of deep learning (DL) models to assist health care practitioners using chest X-ray (CXR) images. The CXR is a low-radiation radiography tool available in hospitals for diagnosing COVID-19 and combating its spread. We propose a Multi-Textural Multi-Class (MTMC) UNet-based Recurrent Residual Convolutional Neural Network (MTMC-UR2CNet) and its variant with an attention mechanism (MTMC-AUR2CNet) for multi-class lung lobe segmentation of CXR images. The lung lobe segmentation output of each network is mapped onto its input CXR to generate the region of interest (ROI). Multi-textural features are extracted from the ROI of each proposed MTMC network, fused, and used to train a Whale Optimization Algorithm (WOA)-based DeepCNN classifier that categorizes the CXR images into normal (healthy), COVID-19, viral pneumonia, and lung opacity. The experimental results show that MTMC-AUR2CNet has superior performance in multi-class lung lobe segmentation of CXR images, with an accuracy of 99.47%, followed by MTMC-UR2CNet with an accuracy of 98.39%. MTMC-AUR2CNet also improves the multi-textural multi-class classification accuracy of the WOA-based DeepCNN classifier to 97.60%, compared with MTMC-UR2CNet.
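The step of mapping a segmentation output back onto its input CXR to obtain the ROI amounts to masking. A minimal sketch with a toy image and mask (not the paper's data or network output):

```python
import numpy as np

def extract_roi(cxr, lung_mask):
    """Map a binary lung-lobe segmentation back onto its input CXR:
    pixels outside the mask are zeroed, leaving the region of interest
    from which texture features would then be extracted."""
    return cxr * lung_mask

cxr = np.arange(16, dtype=float).reshape(4, 4)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0        # toy "lung lobe" region
roi = extract_roi(cxr, mask)
print(roi.sum())            # 5 + 6 + 9 + 10 = 30.0
```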
Affiliation(s)
- Anandbabu Gopatoti: Department of Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India; Centre for Research, Anna University, Chennai, Tamil Nadu, India
- P Vijayalakshmi: Department of Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India
6.
Dai D, Dong C, Li Z, Xu S. MS-Net: Learning to assess the malignant status of a lung nodule by a radiologist and her peers. J Appl Clin Med Phys 2023:e13964. PMID: 36929569. DOI: 10.1002/acm2.13964.
Abstract
BACKGROUND Automatically assessing the malignant status of lung nodules based on CT scan images can help reduce the workload of radiologists while improving their diagnostic accuracy. PURPOSE Despite remarkable progress in the automatic diagnosis of pulmonary nodules with deep learning technologies, two significant problems remain. First, end-to-end deep learning solutions tend to neglect the empirical (semantic) features accumulated by radiologists and rely only on automatic features discovered by neural networks to provide the final diagnostic results, leading to questionable reliability and interpretability. Second, inconsistent diagnoses between radiologists, a widely acknowledged phenomenon in clinical settings, are rarely examined and quantitatively explored by existing machine learning approaches. This paper addresses both problems. METHODS We propose a novel deep neural network called MS-Net, which comprises two sequential modules: a feature derivation and initial diagnosis (FDID) module, followed by a diagnosis refinement (DR) module. Specifically, to take advantage of both accumulated empirical features and discovered automatic features, the FDID module of MS-Net first derives a range of perceptible features and provides two initial diagnoses for lung nodules; these results are then fed to the subsequent DR module to refine the diagnoses further. In addition, to fully consider individual and panel diagnostic opinions, we propose a new loss function called collaborative loss, which can collaboratively optimize an individual's and her peers' opinions to provide a more accurate diagnosis. RESULTS We evaluate the performance of the proposed MS-Net on the Lung Image Database Consortium image collection (LIDC-IDRI). It achieves 92.4% accuracy, 92.9% sensitivity, and 92.0% specificity when panel labels are the ground truth, which is superior to other state-of-the-art diagnosis models. As a byproduct, MS-Net can automatically derive a range of semantic features of lung nodules, increasing the interpretability of the final diagnoses. CONCLUSIONS The proposed MS-Net provides automatic and accurate diagnosis of lung nodules, meeting the need for a reliable computer-aided diagnosis system in clinical practice.
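One plausible reading of the collaborative loss described above is a weighted combination of losses against the individual radiologist's label and the panel label. The sketch below assumes that reading; the weight `alpha` and the binary-cross-entropy form are hypothetical, not the paper's exact formulation:

```python
import math

def bce(p, y):
    """Binary cross-entropy for a single prediction p against label y."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def collaborative_loss(p, y_individual, y_panel, alpha=0.5):
    """Convex combination of the loss against an individual radiologist's
    label and the loss against the panel label (alpha is a hypothetical
    weight, not the paper's setting)."""
    return alpha * bce(p, y_individual) + (1 - alpha) * bce(p, y_panel)

# Radiologist and panel agree the nodule is malignant (label 1):
loss = collaborative_loss(0.8, 1, 1)
print(round(loss, 4))  # -ln(0.8) ~= 0.2231
```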
Affiliation(s)
- Duwei Dai: Institute of Medical Artificial Intelligence, The Second Affiliated Hospital, Xi'an Jiaotong University, Xi'an, Shaanxi, China
- Caixia Dong: Institute of Medical Artificial Intelligence, The Second Affiliated Hospital, Xi'an Jiaotong University, Xi'an, Shaanxi, China
- Zongfang Li: Institute of Medical Artificial Intelligence, The Second Affiliated Hospital, Xi'an Jiaotong University, Xi'an, Shaanxi, China
- Songhua Xu: Institute of Medical Artificial Intelligence, The Second Affiliated Hospital, Xi'an Jiaotong University, Xi'an, Shaanxi, China
7.
Zhang D, Wang H, Deng J, Wang T, Shen C, Feng J. CAMS-Net: An attention-guided feature selection network for rib segmentation in chest X-rays. Comput Biol Med 2023. DOI: 10.1016/j.compbiomed.2023.106702.
8.
Kang M, An TJ, Han D, Seo W, Cho K, Kim S, Myong JP, Han SW. Development of a multipotent diagnostic tool for chest X-rays by multi-object detection method. Sci Rep 2022; 12:19130. PMID: 36352008. PMCID: PMC9646869. DOI: 10.1038/s41598-022-21841-w.
Abstract
Computer-aided diagnosis (CAD) for chest X-rays was first developed more than 50 years ago, yet there are still unmet needs for its versatile use in medical practice. We planned this study to develop a multipotent CAD model suitable for general use, including in primary care, by using computed tomography (CT) scans with their one-to-one matched chest X-ray dataset. The data were extracted and preprocessed by pulmonology experts, who used bounding boxes to locate lesions of interest. For detecting multiple lesions, multi-object detection with Faster R-CNN and with RetinaNet was adopted and compared. A total of twelve diagnostic labels were defined: pleural effusion, atelectasis, pulmonary nodule, cardiomegaly, consolidation, emphysema, pneumothorax, chemo-port, bronchial wall thickening, reticular opacity, pleural thickening, and bronchiectasis. The Faster R-CNN model showed higher overall sensitivity than RetinaNet, whereas the specificity values showed the opposite trend. Some labels, such as cardiomegaly and chemo-port, showed excellent sensitivity (100.0% for both). Other labels unique to this study, such as bronchial wall thickening, reticular opacity, and pleural thickening, could be described within the chest area. To the best of our knowledge, this is the first study to develop an object detection model for chest X-rays based on a chest area defined by one-to-one matched CT scans, preprocessed and conducted by a group of pulmonology experts. Our model is a potential tool for screening the whole chest area for multiple diagnoses from a simple X-ray that is routinely taken in most clinics and hospitals on a daily basis.
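Multi-object detectors such as Faster R-CNN and RetinaNet are typically evaluated by matching predicted boxes to expert-drawn boxes via intersection-over-union (IoU). A minimal sketch of that matching criterion (the 0.5 threshold is a common convention, not necessarily this paper's setting):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes; a predicted
    lesion box is usually counted as a hit when its IoU with a
    ground-truth box exceeds a threshold such as 0.5."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

ground_truth = (10, 10, 50, 50)   # expert-drawn lesion box
prediction = (20, 20, 60, 60)     # detector output
overlap = iou(ground_truth, prediction)
print(round(overlap, 3))  # 900 / 2300 ~= 0.391
```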
Affiliation(s)
- Minji Kang: School of Industrial and Management Engineering, Korea University, Anam-ro 145, Seongbuk-gu, Seoul 02841, Korea
- Tai Joon An: Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, Yeouido St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Wan Seo: Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, Yeouido St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Kangwon Cho: Division of Pulmonary, Allergy, and Critical Care Medicine, Department of Internal Medicine, Changwon Fatima Hospital, Changwon, Korea
- Shinbum Kim: Division of Pulmonary, Allergy, and Critical Care Medicine, Department of Internal Medicine, Andong Sungso Hospital, Andong, Korea
- Jun-Pyo Myong: Department of Occupational and Environmental Medicine, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Banpodae-ro 222, Seocho-gu, Seoul 06591, Korea
- Sung Won Han: School of Industrial and Management Engineering, Korea University, Anam-ro 145, Seongbuk-gu, Seoul 02841, Korea
9.
Mendes HR, Silva JC, Marcondes M, Tomal A. Optimization of image quality and dose in adult and pediatric chest radiography via Monte Carlo simulation and experimental methods. Radiat Phys Chem 2022. DOI: 10.1016/j.radphyschem.2022.110396.
10.
CheXGAT: A disease correlation-aware network for thorax disease diagnosis from chest X-ray images. Artif Intell Med 2022; 132:102382. DOI: 10.1016/j.artmed.2022.102382.
11.
Yang Y, Hu Y, Zhang X, Wang S. Two-Stage Selective Ensemble of CNN via Deep Tree Training for Medical Image Classification. IEEE Trans Cybern 2022; 52:9194-9207. PMID: 33705343. DOI: 10.1109/tcyb.2021.3061147.
Abstract
Medical image classification is an important task in computer-aided diagnosis systems. Its performance is critically determined by the descriptiveness and discriminative power of the features extracted from images. With the rapid development of deep learning, deep convolutional neural networks (CNNs) have been widely used to learn optimal high-level features from the raw pixels of images for a given classification task. However, owing to the limited amount of labeled medical images, often with quality distortions, such techniques suffer from training difficulties, including overfitting, local optima, and vanishing gradients. To solve these problems, in this article, we propose a two-stage selective ensemble of CNN branches via a novel training strategy called deep tree training (DTT). In our approach, DTT jointly trains a series of networks constructed from the hidden layers of a CNN in a hierarchical manner, so that vanishing gradients are mitigated by supplementing gradients for the hidden layers, and base classifiers on middle-level features are obtained intrinsically, with minimal computational burden, for an ensemble solution. Moreover, the CNN branches serving as base learners are combined into the optimal classifier via the proposed two-stage selective ensemble approach based on both accuracy and diversity criteria. Extensive experiments on the CIFAR-10 benchmark and two specific medical image datasets illustrate that our approach achieves better performance in terms of accuracy, sensitivity, specificity, and F1 score.
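Selecting base learners by accuracy and diversity can be sketched greedily. This is a toy stand-in for the selection idea, not the paper's exact two-stage procedure; the branch names, predictions, and disagreement measure are illustrative:

```python
def disagreement(p1, p2):
    """Pairwise diversity: fraction of samples on which two classifiers differ."""
    return sum(a != b for a, b in zip(p1, p2)) / len(p1)

def select_branches(preds, labels, k=2):
    """Greedy sketch of accuracy-plus-diversity selection: seed with the
    most accurate CNN branch, then add the branch that disagrees with it
    the most."""
    accuracy = {name: sum(p == y for p, y in zip(pr, labels)) / len(labels)
                for name, pr in preds.items()}
    chosen = [max(accuracy, key=accuracy.get)]
    while len(chosen) < k:
        remaining = [n for n in preds if n not in chosen]
        chosen.append(max(remaining,
                          key=lambda n: disagreement(preds[n], preds[chosen[0]])))
    return chosen

labels = [1, 0, 1, 1]
preds = {"branch_1": [1, 0, 1, 1],   # most accurate
         "branch_2": [1, 0, 1, 0],
         "branch_3": [0, 1, 0, 0]}   # least accurate but most diverse
selected = select_branches(preds, labels)
print(selected)  # ['branch_1', 'branch_3']
```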
12.
Meedeniya D, Kumarasinghe H, Kolonne S, Fernando C, Díez IDLT, Marques G. Chest X-ray analysis empowered with deep learning: A systematic review. Appl Soft Comput 2022; 126:109319. PMID: 36034154. PMCID: PMC9393235. DOI: 10.1016/j.asoc.2022.109319.
Abstract
Chest radiographs are widely used in the medical domain; at present, chest X-rays play a particularly important role in diagnosing conditions such as pneumonia and COVID-19. Recent developments in deep learning techniques have led to promising performance in medical image classification and prediction tasks. With the availability of chest X-ray datasets and emerging trends in data engineering techniques, related publications have grown rapidly. Only a few survey papers have addressed chest X-ray classification using deep learning techniques, and they lack an analysis of the trends in recent studies. This systematic review explores and provides a comprehensive analysis of studies that have used deep learning techniques to analyze chest X-ray images. We present state-of-the-art deep-learning-based pneumonia and COVID-19 detection solutions, trends in recent studies, publicly available datasets, guidance for following a deep learning process, challenges, and potential future research directions in this domain. The discoveries and conclusions of the reviewed work are organized so that researchers and developers working in the same domain can use this work to support their research decisions.
13.
Malhotra P, Gupta S, Koundal D, Zaguia A, Enbeyle W. Deep Neural Networks for Medical Image Segmentation. J Healthc Eng 2022; 2022:9580991. PMID: 35310182. PMCID: PMC8930223. DOI: 10.1155/2022/9580991.
Abstract
Image segmentation is a branch of digital image processing with numerous applications in image analysis, augmented reality, machine vision, and more. The field of medical image analysis is growing, and segmentation of organs, diseases, or abnormalities in medical images has become demanding. Segmentation of medical images helps in monitoring the growth of diseases such as tumours and in controlling medication dosage and radiation exposure. Medical image segmentation is a challenging task due to the various artefacts present in the images. Recently, deep neural models have been applied to various image segmentation tasks; this significant growth is due to the achievements and high performance of deep learning strategies. This work reviews the literature on medical image segmentation using deep convolutional neural networks. The paper examines the widely used medical image datasets, the metrics used for evaluating segmentation tasks, and the performance of different CNN-based networks. In comparison to existing review and survey papers, this work also discusses the various challenges in medical image segmentation and the state-of-the-art solutions available in the literature.
Affiliation(s)
- Priyanka Malhotra: Chitkara University Institute of Engineering and Technology, Chitkara University, Chandigarh, Punjab, India
- Sheifali Gupta: Chitkara University Institute of Engineering and Technology, Chitkara University, Chandigarh, Punjab, India
- Deepika Koundal: Department of Systemics, School of Computer Science, University of Petroleum and Energy Studies, Dehradun, India
- Atef Zaguia: Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
14.
Agrawal T, Choudhary P. Segmentation and classification on chest radiography: a systematic survey. Vis Comput 2022; 39:875-913. PMID: 35035008. PMCID: PMC8741572. DOI: 10.1007/s00371-021-02352-7.
Abstract
Chest radiography (X-ray) is the most common diagnostic method for pulmonary disorders. A trained radiologist is required to interpret the radiographs, but sometimes even experienced radiologists misinterpret the findings, creating the need for computer-aided detection and diagnosis. For decades, researchers detected pulmonary disorders automatically using traditional computer vision (CV) methods. Now the availability of large annotated datasets and computing hardware has made it possible for deep learning to dominate the area; it is now the modus operandi for feature extraction, segmentation, detection, and classification tasks in medical imaging analysis. This paper focuses on research that uses chest X-rays for lung segmentation and the detection/classification of pulmonary disorders on publicly available datasets. Studies using Generative Adversarial Network (GAN) models for segmentation and classification on chest X-rays are also included, as GANs have gained the interest of the CV community for their ability to help with medical data scarcity. We also include research conducted before the rise of deep learning models to give a clear picture of the field. Many surveys have been published, but none is dedicated to chest X-rays. This study will help readers learn about the existing techniques and approaches and their significance.
Affiliation(s)
- Tarun Agrawal: Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005, India
- Prakash Choudhary: Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005, India
15.
Yang Q, Li L, Zha Y, Yan Y, Xing D, Liu H, Yang L, Peng L, Zhang Y. Microvascular Permeability and Texture Analysis of the Skeletal Muscle of Diabetic Rabbits With Critical Limb Ischaemia Based on DCE-MRI. Front Endocrinol (Lausanne) 2022; 13:783163. PMID: 35250854. PMCID: PMC8894257. DOI: 10.3389/fendo.2022.783163.
Abstract
BACKGROUND We evaluated skeletal muscle vascular permeability in diabetic rabbits with critical limb ischaemia using quantitative dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) and explored the feasibility of using DCE-MRI Ktrans-based texture analysis to assess early, slight ischaemia-related structural changes in skeletal muscle. METHODS Twenty-four male New Zealand white rabbits (2.7 ± 0.3 kg; n = 12 each in sham-operated and experimental groups) underwent axial MRI of the vastus lateralis muscle at 1, 2, and 3 weeks after alloxan injection. Between-group and intra-group postoperative permeability and texture parameters were compared. Texture features of the experimental group in the third week were modelled by receiver operating characteristic (ROC) curve analysis. Correlations of permeability and statistical texture parameters with peripheral blood endothelial progenitor cells (EPCs) and microvascular density (MVD) were analysed. RESULTS In the experimental group, the transfer constant (Ktrans) differed significantly across time-points (F = 5.800, P = 0.009). Vastus lateralis muscle Ktrans was significantly lower in the third week than in the first week (P = 0.018) and correlated positively with peripheral blood EPCs in the experimental group [r = 0.598 (95% CI: 0.256, 0.807)]. The rate constant was negatively associated with vastus lateralis muscle MVD [r = -0.410 (95% CI: -0.698, -0.008)]. The area under the ROC curve for texture parameters based on Ktrans in ischaemic limbs was 0.882. CONCLUSIONS Quantitative DCE-MRI parameters can evaluate the microvascular permeability of ischaemic limb skeletal muscle, and texture analysis based on DCE-MRI Ktrans allows evaluation of early, slight structural changes in skeletal muscle.
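The area under the ROC curve reported above (0.882 for the Ktrans-based texture model) can be computed without any library via the rank-sum identity. The toy scores and labels below are illustrative, not the study's data:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity: the
    probability that a randomly chosen positive case scores above a
    randomly chosen negative case (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy texture-feature scores for ischaemic (1) vs sham (0) limbs:
scores = [0.9, 0.8, 0.6, 0.3, 0.2]
labels = [1, 1, 0, 0, 0]
auc = roc_auc(scores, labels)
print(auc)  # 1.0: every ischaemic limb outranks every sham limb
```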
Collapse
Affiliation(s)
- Qi Yang
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan, China
| | - Liang Li
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan, China
| | - Yunfei Zha
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan, China
- Correspondence: Yunfei Zha
| | - Yuchen Yan
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan, China
| | - Dong Xing
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan, China
| | - Huan Liu
- Advanced Application Team, GE Healthcare, Shanghai, China
| | - Liu Yang
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan, China
| | - Lin Peng
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan, China
| | - Yubiao Zhang
- Department of Orthopedics, Renmin Hospital of Wuhan University, Wuhan, China
| |
Collapse
|
16
|
Gopatoti A, Vijayalakshmi P. Optimized chest X-ray image semantic segmentation networks for COVID-19 early detection. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2022; 30:491-512. [PMID: 35213339 DOI: 10.3233/xst-211113] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
BACKGROUND Although detecting COVID-19 from chest X-ray radiography (CXR) images is faster than PCR sputum testing, existing deep learning models still lack accuracy in detecting COVID-19 from CXR images. OBJECTIVE This study aims to classify COVID-19 and normal patients from CXR images using semantic segmentation networks that detect and label COVID-19-infected lung lobes. METHODS Three structurally different deep learning (DL) networks, namely SegNet, U-Net, and a hybrid CNN combining SegNet and U-Net, are proposed and investigated for semantically segmenting infected lung lobes in CXR images for COVID-19 early detection. Further, optimized variants of these networks, GWO SegNet, GWO U-Net, and GWO hybrid CNN, are developed with the grey wolf optimization (GWO) algorithm. The proposed DL networks are trained, tested, and validated, with and without optimization, on an openly available dataset of 2,572 COVID-19 CXR images comprising 2,174 training images and 398 testing images. The DL networks and their GWO-optimized variants are also compared with other state-of-the-art models used to detect COVID-19 in CXR images. RESULTS All optimized semantic segmentation networks developed in this study achieved detection accuracy higher than 92%. The results show the superiority of the optimized SegNet, which segments COVID-19-infected lung lobes and classifies images with an accuracy of 98.08%, over the optimized U-Net and hybrid CNN. CONCLUSION The optimized DL networks have the potential to identify COVID-19 more objectively and accurately through semantic segmentation of lung CXR images.
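The grey wolf optimization used above to tune the segmentation networks is a general-purpose metaheuristic. A minimal, dependency-free sketch of its core update rule follows; the toy objective, population size, and iteration count are illustrative assumptions, not the authors' setup.

```python
import random

def gwo_minimize(f, dim, bounds, n_wolves=15, n_iter=80, seed=1):
    """Minimal grey wolf optimizer: every wolf moves toward the three
    best solutions found so far (alpha, beta, delta)."""
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    best = min(wolves, key=f)[:]
    for t in range(n_iter):
        ranked = sorted(wolves, key=f)
        alpha, beta, delta = (w[:] for w in ranked[:3])
        if f(alpha) < f(best):
            best = alpha[:]
        a = 2.0 * (1 - t / n_iter)  # exploration coefficient decays 2 -> 0
        for w in wolves:
            for d in range(dim):
                pos = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    pos += leader[d] - A * abs(C * leader[d] - w[d])
                w[d] = min(hi, max(lo, pos / 3.0))  # average pull, clipped

    return best

# Toy usage: minimize a 2-D sphere function over [-5, 5]^2.
best = gwo_minimize(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
```

In a hyperparameter-tuning setting like the one described above, `f` would instead score a segmentation network trained with the hyperparameters encoded in each wolf's position.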
Collapse
Affiliation(s)
- Anandbabu Gopatoti
- Department of Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India
- Anna University, Chennai, Tamil Nadu, India
| | - P Vijayalakshmi
- Department of Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India
| |
Collapse
|
17
|
Blais MA, Akhloufi MA. Deep Learning and Binary Relevance Classification of Multiple Diseases using Chest X-Ray Images. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:2794-2797. [PMID: 34891829 DOI: 10.1109/embc46164.2021.9629846] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Disease detection using chest X-ray (CXR) images is one of the most popular radiology methods for diagnosing diseases through visual inspection of abnormal findings in the lung region. A wide variety of diseases, such as pneumonia, heart failure and lung cancer, can be detected using CXRs. Although CXRs can show the symptoms of a variety of diseases, detecting and manually classifying those diseases can be difficult and time-consuming, adding to clinicians' work burden. Research shows that nearly 90% of mistakes made in lung cancer diagnosis involved chest radiography. A variety of algorithms and computer-assisted diagnosis (CAD) tools have been proposed to assist radiologists in interpreting medical images and to reduce diagnosis errors. In this work, we propose a deep learning approach to screen multiple diseases using more than 220,000 images from the CheXpert dataset. The proposed binary relevance approach using Deep Convolutional Neural Networks (CNNs) achieves high performance and outperforms previously published work in this area. Clinical relevance: this application can be used to support physicians and speed up diagnosis. The proposed CAD can increase confidence in a diagnosis or suggest a second opinion, and can also be used in emergency situations when a radiologist is not immediately available.
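The binary relevance strategy named above decomposes multi-label screening into one independent binary classifier per disease. A dependency-free sketch of the decomposition, with a toy nearest-centroid base learner standing in for the paper's CNNs, might look like this:

```python
class NearestCentroid:
    """Toy binary base learner: classify by the closer class centroid."""
    def fit(self, X, y):
        pos = [x for x, t in zip(X, y) if t]
        neg = [x for x, t in zip(X, y) if not t]
        mean = lambda pts: [sum(c) / len(pts) for c in zip(*pts)]
        self.pos = mean(pos) if pos else None
        self.neg = mean(neg) if neg else None
        return self

    def predict(self, x):
        if self.pos is None or self.neg is None:
            return self.pos is not None  # degenerate: only one class seen
        dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
        return dist(self.pos) < dist(self.neg)

class BinaryRelevance:
    """Multi-label classification via one independent binary model per label."""
    def fit(self, X, Y):  # Y: one set of labels per sample
        labels = sorted({l for y in Y for l in y})
        self.models = {l: NearestCentroid().fit(X, [l in y for y in Y])
                       for l in labels}
        return self

    def predict(self, x):
        return {l for l, m in self.models.items() if m.predict(x)}

# Toy usage: feature 0 indicates cardiomegaly, feature 1 indicates effusion.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
Y = [set(), {"effusion"}, {"cardiomegaly"}, {"cardiomegaly", "effusion"}]
clf = BinaryRelevance().fit(X, Y)
```

In the paper's setting, each per-label model would be a deep CNN rather than a centroid rule; the wrapper logic is the same.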
Collapse
|
18
|
Ueda D, Yamamoto A, Shimazaki A, Walston SL, Matsumoto T, Izumi N, Tsukioka T, Komatsu H, Inoue H, Kabata D, Nishiyama N, Miki Y. Artificial intelligence-supported lung cancer detection by multi-institutional readers with multi-vendor chest radiographs: a retrospective clinical validation study. BMC Cancer 2021; 21:1120. [PMID: 34663260 PMCID: PMC8524996 DOI: 10.1186/s12885-021-08847-9] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Accepted: 10/07/2021] [Indexed: 12/24/2022] Open
Abstract
BACKGROUND We investigated the performance improvement of physicians with varying levels of chest radiology experience when using commercially available artificial intelligence (AI)-based computer-assisted detection (CAD) software to detect lung cancer nodules on chest radiographs from multiple vendors. METHODS Chest radiographs and their corresponding chest CTs were retrospectively collected from one institution between July 2017 and June 2018. Two author radiologists annotated pathologically proven lung cancer nodules on the chest radiographs while referencing CT. Eighteen readers (nine general physicians and nine radiologists) from nine institutions interpreted the chest radiographs, first alone and then again referencing the CAD output. Suspected nodules were enclosed with a bounding box, which was judged correct if it overlapped substantially with the ground truth, specifically, if the intersection over union (IoU) was 0.3 or higher. The sensitivity, specificity, accuracy, PPV, and NPV of the readers' assessments were calculated. RESULTS In total, 312 chest radiographs were collected as a test dataset, including 59 malignant images (59 lung cancer nodules) and 253 normal images. The model provided a modest boost to readers' sensitivity, particularly helping general physicians. The performance of general physicians improved from 0.47 to 0.60 in sensitivity, 0.96 to 0.97 in specificity, 0.87 to 0.90 in accuracy, 0.75 to 0.82 in PPV, and 0.89 to 0.91 in NPV, while the performance of radiologists improved from 0.51 to 0.60 in sensitivity, 0.96 to 0.96 in specificity, 0.87 to 0.90 in accuracy, 0.76 to 0.80 in PPV, and 0.89 to 0.91 in NPV. The overall ratios of improvement in sensitivity, specificity, accuracy, PPV, and NPV with the CAD were 1.22 (1.14-1.30), 1.00 (1.00-1.01), 1.03 (1.02-1.04), 1.07 (1.03-1.11), and 1.02 (1.01-1.03), respectively. CONCLUSION The AI-based CAD improved physicians' ability to detect lung cancer nodules on chest radiographs and can indicate regions physicians may have overlooked during their initial assessment.
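The IoU ≥ 0.3 match criterion and the reader metrics reported above are straightforward to compute. A small sketch follows; the (x1, y1, x2, y2) box format is an assumption for illustration.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def box_is_correct(pred, truth, thresh=0.3):
    """Match criterion used in the study: IoU of 0.3 or higher."""
    return iou(pred, truth) >= thresh

def reader_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, accuracy, PPV and NPV from raw counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```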
Collapse
Affiliation(s)
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan.
| | - Akira Yamamoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
| | - Akitoshi Shimazaki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
| | - Shannon Leigh Walston
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
| | - Toshimasa Matsumoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
| | - Nobuhiro Izumi
- Department of Surgery, Graduate School of Medicine, Osaka City University, Osaka, Japan
| | - Takuma Tsukioka
- Department of Surgery, Graduate School of Medicine, Osaka City University, Osaka, Japan
| | - Hiroaki Komatsu
- Department of Surgery, Graduate School of Medicine, Osaka City University, Osaka, Japan
| | - Hidetoshi Inoue
- Department of Surgery, Graduate School of Medicine, Osaka City University, Osaka, Japan
| | - Daijiro Kabata
- Department of Medical Statistics, Graduate School of Medicine, Osaka City University, Osaka, Japan
| | - Noritoshi Nishiyama
- Department of Surgery, Graduate School of Medicine, Osaka City University, Osaka, Japan
| | - Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
| |
Collapse
|
19
|
Gómez Ó, Mesejo P, Ibáñez Ó. Automatic segmentation of skeletal structures in X-ray images using deep learning for comparative radiography. FORENSIC IMAGING 2021. [DOI: 10.1016/j.fri.2021.200458] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
20
|
Singh A, Lall B, Panigrahi B, Agrawal A, Agrawal A, Thangakunam B, Christopher D. Deep LF-Net: Semantic lung segmentation from Indian chest radiographs including severely unhealthy images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102666] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
21
|
Hržić F, Tschauner S, Sorantin E, Štajduhar I. XAOM: A method for automatic alignment and orientation of radiographs for computer-aided medical diagnosis. Comput Biol Med 2021; 132:104300. [PMID: 33714842 DOI: 10.1016/j.compbiomed.2021.104300] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2020] [Revised: 02/23/2021] [Accepted: 02/24/2021] [Indexed: 12/18/2022]
Abstract
BACKGROUND AND OBJECTIVES Computer-aided diagnosis relies on machine learning algorithms that require filtered and preprocessed data as input. Aligning the image in the desired direction is an additional manual post-processing step, commonly overlooked due to workload issues. Several state-of-the-art approaches for fracture detection and disease-struck region segmentation benefit from correctly oriented images, thus requiring such preprocessing of X-ray images. Furthermore, it is desirable to archive studies in a standardized format. Radiograph hanging protocols also differ from case to case, which means that images are not always aligned and oriented correctly. As a solution, the paper proposes XAOM, an X-ray Alignment and Orientation Method for images from 21 different body regions. METHODS Methods for this purpose are typically crafted to suit a specific body region and form of usage. In contrast, the method proposed in this paper is comprehensive and easily tuned to align and orient X-ray images of any body region. XAOM consists of two stages. For the first stage, aligning X-ray images, we experimented with the following approaches: the Hough transform, a fast line detection algorithm, and Principal Component Analysis (PCA). For the second stage, we experimented with adaptations of several well-known convolutional neural network topologies for correctly predicting image orientation: LeNet5, AlexNet, VGG16, VGG19, and ResNet50. RESULTS In the first stage, the PCA-based approach performed best. The average difference between the angle detected by the algorithm and the angle marked by experts on a test set of 200 pediatric X-ray images was 1.65°, and the median was 0.11°. In the second stage, the VGG16-based network topology achieved the best accuracy of 0.993 on a test set of 4,221 images. CONCLUSION XAOM is highly accurate at aligning and orienting pediatric X-ray images of 21 common body regions according to a set standard. The proposed method is also robust and can be easily adjusted to different alignment and rotation criteria. AVAILABILITY The Python source code of the best-performing implementation of XAOM is publicly available at https://github.com/fhrzic/XAOM.
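The PCA-based alignment stage that performed best reduces to finding the dominant axis of the foreground pixels. A dependency-free sketch of that idea follows; it illustrates the principle, not XAOM's exact pipeline.

```python
import math

def principal_axis_angle(mask):
    """Angle in degrees of the principal axis of the foreground pixels,
    from the eigen-direction of the 2x2 coordinate covariance matrix."""
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, v in enumerate(row) if v]
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts) / n
    syy = sum((p[1] - my) ** 2 for p in pts) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
    # closed-form orientation of the major axis of a 2x2 covariance matrix
    return math.degrees(0.5 * math.atan2(2 * sxy, sxx - syy))
```

Rotating the radiograph by the negative of the detected angle would then align its dominant axis with the horizontal.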
Collapse
Affiliation(s)
- Franko Hržić
- University of Rijeka, Faculty of Engineering, Department of Computer Engineering, Vukovarska 58, Rijeka, 51000, Croatia; University of Rijeka, Center for Artificial Intelligence and Cybersecurity, Radmile Matejčić 2, Rijeka, 51000, Croatia
| | - Sebastian Tschauner
- Medical University of Graz, Department of Radiology, Division of Pediatric Radiology, Auenbruggerplatz 34, Graz, 8036, Austria
| | - Erich Sorantin
- Medical University of Graz, Department of Radiology, Division of Pediatric Radiology, Auenbruggerplatz 34, Graz, 8036, Austria
| | - Ivan Štajduhar
- University of Rijeka, Faculty of Engineering, Department of Computer Engineering, Vukovarska 58, Rijeka, 51000, Croatia; University of Rijeka, Center for Artificial Intelligence and Cybersecurity, Radmile Matejčić 2, Rijeka, 51000, Croatia.
| |
Collapse
|
22
|
Aly GH, Marey M, El-Sayed SA, Tolba MF. YOLO Based Breast Masses Detection and Classification in Full-Field Digital Mammograms. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 200:105823. [PMID: 33190942 DOI: 10.1016/j.cmpb.2020.105823] [Citation(s) in RCA: 45] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/12/2020] [Accepted: 10/27/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE With the development of deep learning since 2012, the use of Convolutional Neural Networks (CNNs) in bioinformatics, especially medical imaging, has achieved tremendous success. Nevertheless, detecting breast masses in mammograms and classifying their pathology remain critical challenges. To date, screening mammograms are evaluated by human readers, a process that is monotonous, tiring, lengthy, costly, and significantly prone to errors. METHODS We propose an end-to-end computer-aided diagnosis system based on You Only Look Once (YOLO). The proposed system first preprocesses the mammograms from their DICOM format to images without losing data. It then detects masses in full-field digital mammograms and distinguishes between malignant and benign lesions without any human intervention. YOLO has three different architectures, and in this paper all three versions are used for mass detection and classification in mammograms to compare their performance. The use of anchors in YOLO-V3, on both the original data and its augmented version, is shown to improve detection accuracy, especially when k-means clustering is applied to generate anchors matched to the dataset. Finally, ResNet and Inception are used as feature extractors to compare their classification performance against YOLO. RESULTS Mammograms with different resolutions are used; based on YOLO-V3, the best results detect 89.4% of the masses in the INbreast mammograms, with average precisions of 94.2% and 84.6% for classifying masses as benign and malignant, respectively. Replacing YOLO's classification network with ResNet and InceptionV3 yields overall accuracies of 91.0% and 95.5%, respectively. CONCLUSION The experimental results demonstrate the impact of YOLO on breast mass detection and classification. In particular, using anchor boxes in YOLO-V3 generated by applying k-means clustering to the dataset, most challenging mass cases can be detected and classified correctly. Comparing different augmentation approaches with other recent YOLO-based studies also shows that augmenting only the training set is the fairest and most accurate choice for realistic scenarios.
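The k-means anchor generation mentioned for YOLO-V3 clusters the shapes of ground-truth boxes; in the YOLO literature the clustering distance is 1 − IoU between centre-aligned boxes. A small sketch follows, with a naive deterministic initialization for reproducibility (real pipelines typically use random restarts); the toy box data is illustrative.

```python
def iou_wh(a, b):
    """IoU of two boxes that share a centre, given as (width, height)."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    return inter / (a[0] * a[1] + b[0] * b[1] - inter)

def kmeans_anchors(boxes, k, n_iter=50):
    # naive spread-out initialization over the listed boxes
    centroids = [boxes[i * len(boxes) // k] for i in range(k)]
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            # distance d = 1 - IoU, i.e. assign to the most-overlapping centroid
            j = max(range(k), key=lambda i: iou_wh(b, centroids[i]))
            clusters[j].append(b)
        for i, c in enumerate(clusters):
            if c:  # new centroid = mean shape of the cluster
                centroids[i] = (sum(b[0] for b in c) / len(c),
                                sum(b[1] for b in c) / len(c))
    return sorted(centroids)

# Toy usage: two clearly separated box-shape clusters.
boxes = [(9, 9), (10, 10), (11, 11), (10, 9),
         (48, 78), (50, 80), (52, 82), (50, 79)]
anchors = kmeans_anchors(boxes, k=2)
```

The resulting (width, height) centroids become the detector's anchor boxes, so predicted offsets start from shapes that already resemble the dataset's masses.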
Collapse
Affiliation(s)
- Ghada Hamed Aly
- Faculty of Computer and Information Sciences, Ain Shams University, Cairo, Egypt.
| | - Mohammed Marey
- Faculty of Computer and Information Sciences, Ain Shams University, Cairo, Egypt
| | - Safaa Amin El-Sayed
- Faculty of Computer and Information Sciences, Ain Shams University, Cairo, Egypt
| | - Mohamed Fahmy Tolba
- Faculty of Computer and Information Sciences, Ain Shams University, Cairo, Egypt
| |
Collapse
|
23
|
Cao XF, Li Y, Xin HN, Zhang HR, Pai M, Gao L. Application of artificial intelligence in digital chest radiography reading for pulmonary tuberculosis screening. Chronic Dis Transl Med 2021; 7:35-40. [PMID: 34013178 PMCID: PMC8110935 DOI: 10.1016/j.cdtm.2021.02.001] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Indexed: 12/18/2022] Open
Abstract
Currently, the diagnosis of tuberculosis (TB) is mainly based on comprehensive consideration of the patient's symptoms and signs, laboratory examinations, and chest radiography (CXR). CXR plays a pivotal role in supporting the early diagnosis of TB, especially when used for TB screening and differential diagnosis. However, the high cost of CXR hardware and the shortage of certified radiologists pose major challenges for applying CXR to TB screening in resource-limited settings. The latest developments in artificial intelligence (AI), combined with the accumulation of large numbers of medical images, provide new opportunities for building computer-aided detection (CAD) systems for medical applications, especially in the era of deep learning (DL). Several CAD solutions are now commercially available, and there is growing evidence demonstrating their value in imaging diagnosis. Recently, the WHO published a rapid communication stating that CAD may be used as an alternative to human interpretation of plain digital CXRs for screening and triage of TB.
Collapse
Affiliation(s)
- Xue-Fang Cao
- NHC Key Laboratory of Systems Biology of Pathogens, Institute of Pathogen Biology, And Center for Tuberculosis Research, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
| | - Yuan Li
- JF Healthcare, Nanchang, Jiangxi 330072, China
| | - He-Nan Xin
- NHC Key Laboratory of Systems Biology of Pathogens, Institute of Pathogen Biology, And Center for Tuberculosis Research, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
| | - Hao-Ran Zhang
- NHC Key Laboratory of Systems Biology of Pathogens, Institute of Pathogen Biology, And Center for Tuberculosis Research, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
| | - Madhukar Pai
- McGill International TB Centre, McGill University, Montreal, Canada
| | - Lei Gao
- NHC Key Laboratory of Systems Biology of Pathogens, Institute of Pathogen Biology, And Center for Tuberculosis Research, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
| |
Collapse
|
24
|
Majumdar S, Verma R, Saha A, Bhattacharyya P, Maji P, Surjit M, Kundu M, Basu J, Saha S. Perspectives About Modulating Host Immune System in Targeting SARS-CoV-2 in India. Front Genet 2021; 12:637362. [PMID: 33664772 PMCID: PMC7921795 DOI: 10.3389/fgene.2021.637362] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2020] [Accepted: 01/19/2021] [Indexed: 12/16/2022] Open
Abstract
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causative agent of coronavirus disease 2019 (COVID-19), is responsible for a global pandemic that requires immediate containment measures. India has the world's largest population aged between 10 and 40 years. At the same time, India has a large number of individuals with diabetes, hypertension and kidney diseases, who are at high risk of developing COVID-19. A vaccine against SARS-CoV-2 may offer immediate protection from the causative agent of COVID-19, but the protective memory may be short-lived. Even if vaccination is broadly successful worldwide, India has a large and diverse population, with over one-third living below the poverty line. Therefore, the success of a vaccine, even when one becomes available, is uncertain, making it necessary to focus on alternative approaches to tackling the disease. In this review, we discuss the differences in the COVID-19 death/infection ratio between urban and rural India, and the probable roles of the immune system, co-morbidities and associated nutritional status in dictating the death rate of COVID-19 patients in rural and urban India. We also focus on strategies for developing masks, vaccines and diagnostics, and on the role of drugs targeting host-virus protein-protein interactions in enhancing host immunity. We further discuss India's strengths, including its resources of medicinal plants, good food habits and the role of information technology in combating COVID-19, as well as the Government of India's measures and strategies for creating awareness to contain COVID-19 infection across the country.
Collapse
Affiliation(s)
| | - Rohit Verma
- Virology Laboratory, Vaccine and Infectious Disease Research Centre, Translational Health Science and Technology Institute, NCR Biotech Science Cluster, Faridabad, India
| | - Avishek Saha
- Ubiquitous Analytical Techniques, CSIR-Central Scientific Instruments Organisation, Chandigarh, India
| | | | - Pradipta Maji
- Biomedical Imaging and Bioinformatics Lab, Machine Intelligence Unit, Indian Statistical Institute, Kolkata, India
| | - Milan Surjit
- Virology Laboratory, Vaccine and Infectious Disease Research Centre, Translational Health Science and Technology Institute, NCR Biotech Science Cluster, Faridabad, India
| | | | - Joyoti Basu
- Department of Chemistry, Bose Institute, Kolkata, India
| | - Sudipto Saha
- Division of Bioinformatics, Bose Institute, Kolkata, India
| |
Collapse
|
25
|
Abstract
The paper describes a computer tool dedicated to the comprehensive analysis of lung changes in computed tomography (CT) images. The correlation between the dose delivered during radiotherapy and pulmonary fibrosis is offered as an example analysis. The input data, in DICOM (Digital Imaging and Communications in Medicine) format, comes from CT images and dose distribution models of patients. The CT images are processed using convolutional neural networks, and the selected slices then go through segmentation and registration algorithms. The results of the analysis are visualized graphically and as numerical parameters calculated from the image analysis.
Collapse
|
26
|
Varela-Santos S, Melin P. A new approach for classifying coronavirus COVID-19 based on its manifestation on chest X-rays using texture features and neural networks. Inf Sci (N Y) 2021; 545:403-414. [PMID: 32999505 PMCID: PMC7513693 DOI: 10.1016/j.ins.2020.09.041] [Citation(s) in RCA: 73] [Impact Index Per Article: 24.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2020] [Revised: 09/17/2020] [Accepted: 09/19/2020] [Indexed: 01/22/2023]
Abstract
Since the recent challenge that humanity is facing against COVID-19, several initiatives have been put forward with the goal of creating measures to help control the spread of the pandemic. In this paper we present a series of experiments using supervised learning models in order to perform an accurate classification on datasets consisting of medical images from COVID-19 patients and medical images of several other related diseases affecting the lungs. This work represents an initial experimentation using image texture feature descriptors, feed-forward and convolutional neural networks on newly created databases with COVID-19 images. The goal was setting a baseline for the future development of a system capable of automatically detecting the COVID-19 disease based on its manifestation on chest X-rays and computerized tomography images of the lungs.
Collapse
Affiliation(s)
- Sergio Varela-Santos
- Division of Graduate Studies, Tijuana Institute of Technology, Tijuana, 22414 Baja CA, Mexico
| | - Patricia Melin
- Division of Graduate Studies, Tijuana Institute of Technology, Tijuana, 22414 Baja CA, Mexico
| |
Collapse
|
27
|
Desai M, Shah M. An anatomization on breast cancer detection and diagnosis employing multi-layer perceptron neural network (MLP) and Convolutional neural network (CNN). CLINICAL EHEALTH 2021. [DOI: 10.1016/j.ceh.2020.11.002] [Citation(s) in RCA: 57] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
|
28
|
Oliveira H, Mota V, Machado AM, dos Santos JA. From 3D to 2D: Transferring knowledge for rib segmentation in chest X-rays. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2020.09.021] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
29
|
Chung M, Lee J, Park S, Lee M, Lee CE, Lee J, Shin YG. Individual tooth detection and identification from dental panoramic X-ray images via point-wise localization and distance regularization. Artif Intell Med 2020; 111:101996. [PMID: 33461689 DOI: 10.1016/j.artmed.2020.101996] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Revised: 11/04/2020] [Accepted: 11/17/2020] [Indexed: 11/19/2022]
Abstract
Dental panoramic X-ray imaging is a popular diagnostic method owing to its very small dose of radiation. For an automated computer-aided diagnosis system in dental clinics, automatic detection and identification of individual teeth from panoramic X-ray images are critical prerequisites. In this study, we propose a point-wise tooth localization neural network by introducing a spatial distance regularization loss. The proposed network initially performs center point regression for all the anatomical teeth (i.e., 32 points), which automatically identifies each tooth. A novel distance regularization penalty is employed on the 32 points by considering L2 regularization loss of Laplacian on spatial distances. Subsequently, teeth boxes are individually localized using a multitask neural network on a patch basis. A multitask offset training is employed on the final output to improve the localization accuracy. Our method successfully localizes not only the existing teeth but also missing teeth; consequently, highly accurate detection and identification are achieved. The experimental results demonstrate that the proposed algorithm outperforms state-of-the-art approaches by increasing the average precision of teeth detection by 15.71 % compared to the best performing method. The accuracy of identification achieved a precision of 0.997 and recall value of 0.972. Moreover, the proposed network does not require any additional identification algorithm owing to the preceding regression of the fixed 32 points regardless of the existence of the teeth.
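The distance regularization penalty is described only at a high level in the abstract. One plausible reading, an L2 loss on the discrete Laplacian of consecutive inter-tooth distances, can be sketched as follows; this interpretation is an assumption for illustration, not the authors' exact formulation.

```python
import math

def distance_regularization(points):
    """Hypothetical penalty: zero when neighbouring centre points are
    evenly spaced, large when the gap pattern changes abruptly."""
    d = [math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)]
    # discrete Laplacian of the distance sequence: d[i-1] - 2*d[i] + d[i+1]
    lap = [d[i - 1] - 2 * d[i] + d[i + 1] for i in range(1, len(d) - 1)]
    return sum(v * v for v in lap)
```

In training, such a term would be added to the point-regression loss so that the 32 predicted tooth centres keep a smoothly varying spacing even where teeth are missing.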
Collapse
Affiliation(s)
- Minyoung Chung
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea.
| | - Jusang Lee
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea.
| | - Sanguk Park
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea.
| | - Minkyung Lee
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea.
| | - Chae Eun Lee
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea.
| | - Jeongjin Lee
- School of Computer Science and Engineering, Soongsil University, 369 Sangdo-Ro, Dongjak-Gu, Seoul, 06978, Republic of Korea.
| | - Yeong-Gil Shin
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea.
| |
Collapse
|
30
|
Transfer-to-Transfer Learning Approach for Computer Aided Detection of COVID-19 in Chest Radiographs. AI 2020. [DOI: 10.3390/ai1040032] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The coronavirus disease 2019 (COVID-19) global pandemic has severely impacted lives across the globe. Respiratory disorders in COVID-19 patients are caused by lung opacities similar to those of viral pneumonia. A Computer-Aided Detection (CAD) system for detecting COVID-19 in chest radiographs would provide a second opinion for radiologists. For this research, we utilize publicly available datasets that have been marked by radiologists into two classes (COVID-19 and non-COVID-19). We address the class imbalance in the training dataset by proposing a novel transfer-to-transfer learning approach, in which we break a highly imbalanced training dataset into a group of balanced mini-sets and apply transfer learning between them. We demonstrate the efficacy of the method using well-established deep convolutional neural networks. The proposed training mechanism is more robust to limited training data and class imbalance. We study the performance of our algorithms using 10-fold cross-validation and two hold-out validation experiments. We achieved overall sensitivities of 0.94 in the hold-out validation experiments, which contained 2,265 and 2,139 chest radiographs marked as COVID-19, respectively. For the 10-fold cross-validation experiment, we achieve an overall Area Under the Receiver Operating Characteristic curve (AUC) of 0.996 for COVID-19 detection. This paper serves as a proof of concept that an automated detection approach can be developed with a limited set of COVID-19 images, and in areas with a scarcity of trained radiologists.
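The transfer-to-transfer idea, splitting an imbalanced training set into balanced mini-sets and fine-tuning across them in sequence, can be sketched as follows. The chunking scheme and the `fine_tune` API in the comment are assumptions; the abstract does not spell out the exact split.

```python
def balanced_minisets(majority, minority):
    """Pair each minority-sized chunk of the majority class with the full
    minority class, producing roughly balanced mini-sets."""
    k = len(minority)
    return [(majority[i:i + k], minority)
            for i in range(0, len(majority), k)]

# Sequential fine-tuning would then transfer weights between mini-sets:
#   model = pretrained()                                  # hypothetical API
#   for maj_chunk, min_all in balanced_minisets(non_covid, covid):
#       model = fine_tune(model, maj_chunk + min_all)     # hypothetical API
minisets = balanced_minisets(majority=list(range(10)), minority=["a", "b", "c"])
```

Each mini-set is class-balanced, so no single fine-tuning stage is dominated by the majority class, while the full majority data is still seen across the sequence.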
|
31
|
Analysis of cancer in histological images: employing an approach based on genetic algorithm. Pattern Anal Appl 2020. [DOI: 10.1007/s10044-020-00931-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
32
|
Wu H, Xie P, Zhang H, Li D, Cheng M. Predict pneumonia with chest X-ray images based on convolutional deep neural learning networks. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2020. [DOI: 10.3233/jifs-191438] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Chest X-ray examination is one of the most important methods for screening and diagnosing many lung diseases, and diagnosing pneumonia from chest X-rays is common practice among medical experts. However, chest X-ray images have quality defects such as low contrast, overlapping organs and blurred boundaries, which seriously hinder the detection of pneumonia. Therefore, building a stable and accurate automatic pneumonia detection model from a large number of chest X-ray images has important medical value and application significance. In this paper, we propose a novel hybrid system for detecting pneumonia from chest X-ray images: ACNN-RF, an adaptive-median-filter Convolutional Neural Network (CNN) recognition model combined with a Random Forest (RF). First, improved adaptive median filtering is employed to remove noise from the chest X-ray image, making it easier to recognize. Second, we establish a Dropout-based CNN architecture to extract deep activation features from each chest X-ray image. Finally, we employ an RF classifier, tuned with the GridSearchCV class, on the deep activation features from the CNN model. This not only avoids over-fitting during training but also improves classification accuracy. The public chest X-ray dataset used in our experiments contains 5863 images, comprising 4265 frontal-view X-ray images of 1574 unique patients. The proposed ACNN-RF achieves an average pneumonia recognition rate of up to 97%. The experimental results show that the ACNN-RF identification system is more effective than previous traditional image identification systems.
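The first stage of the ACNN-RF pipeline, adaptive median filtering, can be illustrated with the classic two-stage algorithm; the paper's "improved" variant is not specified in the abstract, so this is a textbook sketch, not the authors' implementation.

```python
def _window(img, y, x, win):
    """Sorted pixel values in a (2*win+1)-sided window clipped to the image."""
    h, w = len(img), len(img[0])
    return sorted(img[j][i]
                  for j in range(max(0, y - win), min(h, y + win + 1))
                  for i in range(max(0, x - win), min(w, x + win + 1)))

def adaptive_median(img, max_win=3):
    """Two-stage adaptive median filter on a 2D list of ints.
    Stage A grows the window until the window median is not itself an
    extreme value; stage B replaces the centre pixel only if it looks
    like impulse noise (it equals the window min or max)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            win = 1
            vals = _window(img, y, x, win)
            # Stage A: enlarge the window while the median is an extreme
            while not (vals[0] < vals[len(vals) // 2] < vals[-1]) and win < max_win:
                win += 1
                vals = _window(img, y, x, win)
            # Stage B: replace only impulse-like pixels with the median
            if not (vals[0] < img[y][x] < vals[-1]):
                out[y][x] = vals[len(vals) // 2]
    return out
```

Unlike a plain median filter, this keeps ordinary edge pixels untouched and only rewrites values that sit at the extremes of their neighbourhood, which is why it is favoured for salt-and-pepper noise in radiographs.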
Affiliation(s)
- Huaiguang Wu
- School of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou, China
- Pengjie Xie
- School of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou, China
- Huiyi Zhang
- Henan Provincial People’s Hospital, Zhengzhou, China
- Daiyi Li
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Ming Cheng
- The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
|
33
|
Barinov L, Jairaj A, Becker M, Seymour S, Lee E, Schram A, Lane E, Goldszal A, Quigley D, Paster L. Impact of Data Presentation on Physician Performance Utilizing Artificial Intelligence-Based Computer-Aided Diagnosis and Decision Support Systems. J Digit Imaging 2020; 32:408-416. [PMID: 30324429 PMCID: PMC6499739 DOI: 10.1007/s10278-018-0132-5] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022] Open
Abstract
Ultrasound (US) is a valuable imaging modality used to detect primary breast malignancy. However, radiologists have a limited ability to distinguish between benign and malignant lesions on US, leading to false-positive and false-negative results, which limit the positive predictive value of lesions sent for biopsy (PPV3) and specificity. A recent study demonstrated that incorporating an AI-based decision support (DS) system into US image analysis could help improve US diagnostic performance. While the DS system is promising, its impact must also be measured when it is integrated into existing clinical workflows. The current study evaluates workflow schemas for DS integration and their impact on diagnostic accuracy. The impact on two different reading methodologies, sequential and independent, was assessed. This study demonstrates significant accuracy differences between the two workflow schemas as measured by the area under the receiver operating characteristic curve (AUC), as well as differences in inter-operator variability as measured by Kendall’s tau-b. This evaluation has practical implications for the utilization of such technologies in diagnostic environments as compared to previous studies.
Collapse
Affiliation(s)
- L Barinov
- Koios Medical, New York, NY, USA. .,Princeton University, Princeton, NJ, USA. .,Rutgers University Robert Wood Johnson Medical School, New Brunswick, NJ, USA.
| | - A Jairaj
- Koios Medical, New York, NY, USA
| | - M Becker
- Rutgers University Robert Wood Johnson Medical School, New Brunswick, NJ, USA.,University Radiology Group, East Brunswick, NJ, USA
| | | | - E Lee
- Rutgers University Robert Wood Johnson Medical School, New Brunswick, NJ, USA.,University Radiology Group, East Brunswick, NJ, USA
| | - A Schram
- Rutgers University Robert Wood Johnson Medical School, New Brunswick, NJ, USA.,University Radiology Group, East Brunswick, NJ, USA
| | - E Lane
- University Radiology Group, East Brunswick, NJ, USA
| | - A Goldszal
- Rutgers University Robert Wood Johnson Medical School, New Brunswick, NJ, USA.,University Radiology Group, East Brunswick, NJ, USA
| | - D Quigley
- University Radiology Group, East Brunswick, NJ, USA
| | - L Paster
- Rutgers University Robert Wood Johnson Medical School, New Brunswick, NJ, USA.,University Radiology Group, East Brunswick, NJ, USA
| |
Collapse
|
34
|
Guo W, Gu X, Fang Q, Li Q. Comparison of performances of conventional and deep learning-based methods in segmentation of lung vessels and registration of chest radiographs. Radiol Phys Technol 2020; 14:6-15. [PMID: 32918159 DOI: 10.1007/s12194-020-00584-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2020] [Revised: 08/31/2020] [Accepted: 09/01/2020] [Indexed: 12/27/2022]
Abstract
Conventional machine learning-based methods have been effective in assisting physicians in making accurate decisions and utilized in computer-aided diagnosis for more than 30 years. Recently, deep learning-based methods, and convolutional neural networks in particular, have rapidly become preferred options in medical image analysis because of their state-of-the-art performance. However, the performances of conventional and deep learning-based methods cannot be compared reliably because of their evaluations on different datasets. Hence, we developed both conventional and deep learning-based methods for lung vessel segmentation and chest radiograph registration, and subsequently compared their performances on the same datasets. The results strongly indicated the superiority of deep learning-based methods over their conventional counterparts.
Collapse
Affiliation(s)
- Wei Guo
- Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- School of Computer, Shenyang Aerospace University, Shenyang, 110136, Liaoning, China
| | - Xiaomeng Gu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
| | - Qiming Fang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
| | - Qiang Li
- Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China.
| |
Collapse
|
35
|
Afzali A, Babapour Mofrad F, Pouladian M. Contour-based lung shape analysis in order to tuberculosis detection: modeling and feature description. Med Biol Eng Comput 2020; 58:1965-1986. [PMID: 32572669 DOI: 10.1007/s11517-020-02192-y] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2019] [Accepted: 05/18/2020] [Indexed: 11/26/2022]
Abstract
Statistical shape analysis of lung is a reliable alternative method for diagnosing pulmonary diseases such as tuberculosis (TB). The 2D contour-based lung shape analysis is investigated and developed using Fourier descriptors (FDs). The proposed 2D lung shape analysis is carried out in threefold: (1) represent the normal and the abnormal (i.e. pulmonary tuberculosis (PTB)) lung shape models using Fourier descriptors modeling (FDM) framework from chest X-ray (CXR) images, (2) estimate and compare the 2D inter-patient lung shape variations for the normal and abnormal lungs by applying principal component analysis (PCA) techniques, and (3) describe the optimal type of contour-based feature vectors to train a classifier in order to detect TB using one publicly available dataset-namely the Montgomery dataset. Since almost all of the previous works in lung shape analysis are content-based analysis, we proposed contour-based lung shape analysis for statistical modeling and feature description of PTB cases. The results show that the proposed approach is able to explain more than 95% of total variations in both of the normal and PTB cases using only 6 and 7 principal component modes for the right and the left lungs, respectively. In case of PTB detection, using 138 lung cases (80 normal and 58 PTB cases), we achieved the accuracy (ACC) and the area under the curve (AUC) of 82.03% and 88.75%, respectively. In comparison with existing state-of-art studies in the same dataset, the proposed approach is a very promising supplement for diagnosis of PTB disease. The method is robust and valuable for application in 2D automatic segmentation, classification, and atlas registration. Moreover, the approach could be used for any kind of pulmonary diseases. Graphical abstract Contour-based lung shape analysis in order to detect tuberculosis: modeling and feature description.
Collapse
Affiliation(s)
- Ali Afzali
- Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
| | - Farshid Babapour Mofrad
- Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran.
| | - Majid Pouladian
- Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
| |
Collapse
|
36
|
Nahid AA, Sikder N, Bairagi AK, Razzaque MA, Masud M, Z. Kouzani A, Mahmud MAP. A Novel Method to Identify Pneumonia through Analyzing Chest Radiographs Employing a Multichannel Convolutional Neural Network. SENSORS 2020; 20:s20123482. [PMID: 32575656 PMCID: PMC7348917 DOI: 10.3390/s20123482] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/27/2020] [Revised: 06/07/2020] [Accepted: 06/10/2020] [Indexed: 11/16/2022]
Abstract
Pneumonia is a virulent disease that causes the death of millions of people around the world. Every year it kills more children than malaria, AIDS, and measles combined and it accounts for approximately one in five child-deaths worldwide. The invention of antibiotics and vaccines in the past century has notably increased the survival rate of Pneumonia patients. Currently, the primary challenge is to detect the disease at an early stage and determine its type to initiate the appropriate treatment. Usually, a trained physician or a radiologist undertakes the task of diagnosing Pneumonia by examining the patient's chest X-ray. However, the number of such trained individuals is nominal when compared to the 450 million people who get affected by Pneumonia every year. Fortunately, this challenge can be met by introducing modern computers and improved Machine Learning techniques in Pneumonia diagnosis. Researchers have been trying to develop a method to automatically detect Pneumonia using machines by analyzing and the symptoms of the disease and chest radiographic images of the patients for the past two decades. However, with the development of cogent Deep Learning algorithms, the formation of such an automatic system is very much within the realms of possibility. In this paper, a novel diagnostic method has been proposed while using Image Processing and Deep Learning techniques that are based on chest X-ray images to detect Pneumonia. The method has been tested on a widely used chest radiography dataset, and the obtained results indicate that the model is very much potent to be employed in an automatic Pneumonia diagnosis scheme.
Collapse
Affiliation(s)
- Abdullah-Al Nahid
- Electronics and Communication Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
- Correspondence: ; Tel.: +88-01948-820119
| | - Niloy Sikder
- Computer Science and Engineering Discipline, Khulna University, Khulna 9208, Bangladesh; (N.S.); (A.K.B.)
| | - Anupam Kumar Bairagi
- Computer Science and Engineering Discipline, Khulna University, Khulna 9208, Bangladesh; (N.S.); (A.K.B.)
| | - Md. Abdur Razzaque
- Department of Computer Science and Engineering, University of Dhaka, Dhaka 1000, Bangladesh;
| | - Mehedi Masud
- Department of Computer Science, Taif University, Taif 21944, Saudi Arabia;
| | - Abbas Z. Kouzani
- School of Engineering, Deakin University, Geelong, VIC 3216, Australia; (A.Z.K.); (M.A.P.M.)
| | - M. A. Parvez Mahmud
- School of Engineering, Deakin University, Geelong, VIC 3216, Australia; (A.Z.K.); (M.A.P.M.)
| |
Collapse
|
37
|
Hwang EJ, Park CM. Clinical Implementation of Deep Learning in Thoracic Radiology: Potential Applications and Challenges. Korean J Radiol 2020; 21:511-525. [PMID: 32323497 PMCID: PMC7183830 DOI: 10.3348/kjr.2019.0821] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2019] [Accepted: 01/31/2020] [Indexed: 12/25/2022] Open
Abstract
Chest X-ray radiography and computed tomography, the two mainstay modalities in thoracic radiology, are under active investigation with deep learning technology, which has shown promising performance in various tasks, including detection, classification, segmentation, and image synthesis, outperforming conventional methods and suggesting its potential for clinical implementation. However, the implementation of deep learning in daily clinical practice is in its infancy and facing several challenges, such as its limited ability to explain the output results, uncertain benefits regarding patient outcomes, and incomplete integration in daily workflow. In this review article, we will introduce the potential clinical applications of deep learning technology in thoracic radiology and discuss several challenges for its implementation in daily clinical practice.
Collapse
Affiliation(s)
- Eui Jin Hwang
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
| | - Chang Min Park
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea.
| |
Collapse
|
38
|
Chae KJ, Jin GY, Ko SB, Wang Y, Zhang H, Choi EJ, Choi H. Deep Learning for the Classification of Small (≤2 cm) Pulmonary Nodules on CT Imaging: A Preliminary Study. Acad Radiol 2020; 27:e55-e63. [PMID: 31780395 DOI: 10.1016/j.acra.2019.05.018] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2019] [Revised: 05/23/2019] [Accepted: 05/25/2019] [Indexed: 12/31/2022]
Abstract
RATIONALE AND OBJECTIVES We aimed to present a deep learning-based malignancy prediction model (CT-lungNET) that is simpler and faster to use in the diagnosis of small (≤2 cm) pulmonary nodules on nonenhanced chest CT and to preliminarily evaluate its performance and usefulness for human reviewers. MATERIALS AND METHODS A total of 173 whole nonenhanced chest CT images containing 208 pulmonary nodules (94 malignant and 11 benign nodules) ranging in size from 5 mm to 20 mm were collected. Pathologically confirmed nodules or nodules that remained unchanged for more than 1 year were included, and 30 benign and 30 malignant nodules were randomly assigned into the test set. We designed CT-lungNET to include three convolutional layers followed by two fully-connected layers and compared its diagnostic performance and processing time with those of AlexNET by using the area under the receiver operating curve (AUROC). An observer performance test was conducted involving eight human reviewers of four different groups (medical students, physicians, radiologic residents, and thoracic radiologists) at test 1 and test 2, referring to the CT-lungNET's malignancy prediction rate with pairwise comparison receiver operating curve analysis. RESULTS CT-lungNET showed an improved AUROC (0.85; 95% confidence interval: 0.74-0.93), compared to that of the AlexNET (0.82; 95% confidence interval: 0.71-0.91). The processing speed per one image slice for CT-lungNET was about 10 times faster than that for AlexNET (0.90 vs. 8.79 seconds). During the observer performance test, the classification performance of nonradiologists was increased with the aid of CTlungNET, (mean AUC improvement: 0.13; range: 0.03-0.19) but not significantly so in the radiologists group (mean AUC improvement: 0.02; range: -0.02 to 0.07). 
CONCLUSION CT-lungNET was able to provide better classification results with a significantly shorter amount of processing time as compared to AlexNET in the diagnosis of small pulmonary nodules on nonenhanced chest CT. In this preliminary observer performance test, CT-lungNET may have a role acting as a second reviewer for less experienced reviewers, resulting in enhanced performance in the diagnosis of early lung cancer.
Collapse
Affiliation(s)
- Kum J Chae
- Department of Radiology, Research Institute of Clinical Medicine of Chonbuk National University, Biomedical Research Institute of Chonbuk National University Hospital, 634-18 Keumam-Dong, Jeonju, Jeonbuk 561-712, South Korea
| | - Gong Y Jin
- Department of Radiology, Research Institute of Clinical Medicine of Chonbuk National University, Biomedical Research Institute of Chonbuk National University Hospital, 634-18 Keumam-Dong, Jeonju, Jeonbuk 561-712, South Korea.
| | - Seok B Ko
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Canada
| | - Yi Wang
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Canada
| | - Hao Zhang
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Canada
| | - Eun J Choi
- Department of Radiology, Research Institute of Clinical Medicine of Chonbuk National University, Biomedical Research Institute of Chonbuk National University Hospital, 634-18 Keumam-Dong, Jeonju, Jeonbuk 561-712, South Korea
| | - Hyemi Choi
- Department of Statistics and Institute of Applied Statistics, Chonbuk National University, Jeonju, South Korea
| |
Collapse
|
39
|
Computer-aided diagnosis for World Health Organization-defined chest radiograph primary-endpoint pneumonia in children. Pediatr Radiol 2020; 50:482-491. [PMID: 31930429 DOI: 10.1007/s00247-019-04593-0] [Citation(s) in RCA: 37] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/04/2019] [Revised: 09/26/2019] [Accepted: 11/28/2019] [Indexed: 12/25/2022]
Abstract
BACKGROUND The chest radiograph is the most common imaging modality to assess childhood pneumonia. It has been used in epidemiological and vaccine efficacy/effectiveness studies on childhood pneumonia. OBJECTIVE To develop computer-aided diagnosis (CAD4Kids) for chest radiography in children and to evaluate its accuracy in identifying World Health Organization (WHO)-defined chest radiograph primary-endpoint pneumonia compared to a consensus interpretation. MATERIALS AND METHODS Chest radiographs were independently evaluated by three radiologists based on WHO criteria. Automatic lung field segmentation was followed by manual inspection and correction, training, feature extraction and classification. Radiographs were filtered with Gaussian derivatives on multiple scales, extracting texture features to classify each pixel in the lung region. To obtain an image score, the 95th percentile score of the pixels was used. Training and testing were done in 10-fold cross validation. RESULTS The radiologist majority consensus reading of 858 interpretable chest radiographs included 333 (39%) categorised as primary-endpoint pneumonia, 208 (24%) as other infiltrate only and 317 (37%) as no primary-endpoint pneumonia or other infiltrate. Compared to the reference radiologist consensus reading, CAD4Kids had an area under the receiver operator characteristic (ROC) curve of 0.850 (95% confidence interval [CI] 0.823-0.876), with a sensitivity of 76% and specificity of 80% for identifying primary-endpoint pneumonia on chest radiograph. Furthermore, the ROC curve was 0.810 (95% CI 0.772-0.846) for CAD4Kids identifying primary-endpoint pneumonia compared to other infiltrate only. CONCLUSION Further development of the CAD4Kids software and validation in multicentre studies are important for future research on computer-aided diagnosis and artificial intelligence in paediatric radiology.
Collapse
|
40
|
A multi-level similarity measure for the retrieval of the common CT imaging signs of lung diseases. Med Biol Eng Comput 2020; 58:1015-1029. [PMID: 32124223 DOI: 10.1007/s11517-020-02146-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2019] [Accepted: 02/13/2020] [Indexed: 12/20/2022]
Abstract
The common CT imaging signs of lung diseases (CISLs) which frequently appear in lung CT images are widely used in the diagnosis of lung diseases. Computer-aided diagnosis (CAD) based on the CISLs can improve radiologists' performance in the diagnosis of lung diseases. Since similarity measure is important for CAD, we propose a multi-level method to measure the similarity between the CISLs. The CISLs are characterized in the low-level visual scale, mid-level attribute scale, and high-level semantic scale, for a rich representation. The similarity at multiple levels is calculated and combined in a weighted sum form as the final similarity. The proposed multi-level similarity method is capable of computing the level-specific similarity and optimal cross-level complementary similarity. The effectiveness of the proposed similarity measure method is evaluated on a dataset of 511 lung CT images from clinical patients for CISLs retrieval. It can achieve about 80% precision and take only 3.6 ms for the retrieval process. The extensive comparative evaluations on the same datasets are conducted to validate the advantages on retrieval performance of our multi-level similarity measure over the single-level measure and the two-level similarity methods. The proposed method can have wide applications in radiology and decision support. Graphical abstract.
Collapse
|
41
|
Kim J, Kim KH. Measuring the Effects of Education in Detecting Lung Cancer on Chest Radiographs: Utilization of a New Assessment Tool. JOURNAL OF CANCER EDUCATION : THE OFFICIAL JOURNAL OF THE AMERICAN ASSOCIATION FOR CANCER EDUCATION 2019; 34:1213-1218. [PMID: 30255391 DOI: 10.1007/s13187-018-1431-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
This study was designed to evaluate the effect of group and individualized educational lectures to accurately interpret chest radiographs of lung cancer patients and to introduce a new educational tool in evaluating skills for reading chest radiographs. Utilizing "hotspot" technology will be instrumental in measuring the effect of education in interpreting chest radiographs. There were 48 participants in the study. Chest radiographs of 100 lung cancer patients and 11 healthy patients taken at various time points were used for evaluation. Using "hotspot" technology, lesions on each radiograph were outlined. Values were taken at baseline, after which the group received lectures. Several days later, they underwent exam 2. Exam 3 was conducted after individualized lectures. A final exam was taken after the participants underwent individualized training within 2 months. Scores significantly improved after the individual lessons (p < 0.001). This improvement in performance decreased in the final examination. Statistically significant differences were observed between exam 2 vs. exam 3 and exam 3 vs. the final exam (p < 0.001, p < 0.001). Participants demonstrated more improvement in detecting lesions in abnormal chest radiographs than in identifying normal ones. Although there was significant improvement in detecting abnormal radiographs by the end of the study (p < 0.001), no improvement was observed in detecting normal ones. We measured lung cancer detection rate using a new "hotspot" detection tool for chest radiographs. With the proposed scoring system, this tool could be objectively used in evaluating the educational effects.
Collapse
Affiliation(s)
- Junghyun Kim
- Veterans Health Service Medical Center, Seoul, Republic of Korea
| | - Kwan Hyoung Kim
- Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, Uijeongbu St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea.
| |
Collapse
|
42
|
|
43
|
Hypervascular hepatic focal lesions on dynamic contrast-enhanced CT: preliminary data from arterial phase scans texture analysis for classification. Clin Radiol 2019; 74:653.e11-653.e18. [DOI: 10.1016/j.crad.2019.05.010] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2018] [Accepted: 05/16/2019] [Indexed: 01/08/2023]
|
44
|
Souza JC, Bandeira Diniz JO, Ferreira JL, França da Silva GL, Corrêa Silva A, de Paiva AC. An automatic method for lung segmentation and reconstruction in chest X-ray using deep neural networks. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 177:285-296. [PMID: 31319957 DOI: 10.1016/j.cmpb.2019.06.005] [Citation(s) in RCA: 93] [Impact Index Per Article: 18.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/11/2019] [Revised: 05/24/2019] [Accepted: 06/05/2019] [Indexed: 05/03/2023]
Abstract
BACKGROUND AND OBJECTIVE Chest X-ray (CXR) is one of the most used imaging techniques for detection and diagnosis of pulmonary diseases. A critical component in any computer-aided system, for either detection or diagnosis in digital CXR, is the automatic segmentation of the lung field. One of the main challenges inherent to this task is to include in the segmentation the lung regions overlapped by dense abnormalities, also known as opacities, which can be caused by diseases such as tuberculosis and pneumonia. This specific task is difficult because opacities frequently reach high intensity values which can be incorrectly interpreted by an automatic method as the lung boundary, and as a consequence, this creates a challenge in the segmentation process, because the chances of incomplete segmentations are increased considerably. The purpose of this work is to propose a method for automatic segmentation of lungs in CXR that addresses this problem by reconstructing the lung regions "lost" due to pulmonary abnormalities. METHODS The proposed method, which features two deep convolutional neural network models, consists of four steps main steps: (1) image acquisition, (2) initial segmentation, (3) reconstruction and (4) final segmentation. RESULTS The proposed method was experimented on 138 Chest X-ray images from Montgomery County's Tuberculosis Control Program, and has achieved as best result an average sensitivity of 97.54%, an average specificity of 96.79%, an average accuracy of 96.97%, an average Dice coefficient of 94%, and an average Jaccard index of 88.07%. CONCLUSIONS We demonstrate in our lung segmentation method that the problem of dense abnormalities in Chest X-rays can be efficiently addressed by performing a reconstruction step based on a deep convolutional neural network model.
Collapse
|
45
|
A Novel Computer-Aided Diagnosis Scheme on Small Annotated Set: G2C-CAD. BIOMED RESEARCH INTERNATIONAL 2019; 2019:6425963. [PMID: 31119180 PMCID: PMC6500711 DOI: 10.1155/2019/6425963] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/29/2018] [Accepted: 03/05/2019] [Indexed: 11/18/2022]
Abstract
Purpose Computer-aided diagnosis (CAD) can aid in improving diagnostic level; however, the main problem currently faced by CAD is that it cannot obtain sufficient labeled samples. To solve this problem, in this study, we adopt a generative adversarial network (GAN) approach and design a semisupervised learning algorithm, named G2C-CAD. Methods From the National Cancer Institute (NCI) Lung Image Database Consortium (LIDC) dataset, we extracted four types of pulmonary nodule sign images closely related to lung cancer: noncentral calcification, lobulation, spiculation, and nonsolid/ground-glass opacity (GGO) texture, obtaining a total of 3,196 samples. In addition, we randomly selected 2,000 non-lesion image blocks as negative samples. We split the data 90% for training and 10% for testing. We designed a DCGAN generative adversarial framework and trained it on the small sample set. We also trained our designed CNN-based fuzzy Co-forest on the labeled small sample set and obtained a preliminary classifier. Then, coupled with the simulated unlabeled samples generated by the trained DCGAN, we conducted iterative semisupervised learning, which continually improved the classification performance of the fuzzy Co-forest until the termination condition was reached. Finally, we tested the fuzzy Co-forest and compared its performance with that of a C4.5 random decision forest and the G2C-CAD system without the fuzzy scheme, using ROC and confusion matrix for evaluation. Results Four different types of lung cancer-related signs were used in the classification experiment: noncentral calcification, lobulation, spiculation, and nonsolid/ground-glass opacity (GGO) texture, along with negative image samples. For these five classes, the G2C-CAD system obtained AUCs of 0.946, 0.912, 0.908, 0.887, and 0.939, respectively. The average accuracy of G2C-CAD exceeded that of the C4.5 random decision tree by 14%. 
G2C-CAD also obtained promising test results on the LISS signs dataset; its AUCs for GGO, lobulation, spiculation, pleural indentation, and negative image samples were 0.972, 0.964, 0.941, 0.967, and 0.953, respectively. Conclusion The experimental results show that G2C-CAD is an appropriate method for addressing the problem of insufficient labeled samples in the medical image analysis field. Moreover, our system can be used to establish a training sample library for CAD classification diagnosis, which is important for future medical image analysis.
Collapse
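The iterative semisupervised scheme described above can be illustrated with a minimal pseudo-labeling loop. This is only a schematic sketch, not the authors' implementation: the DCGAN is replaced by a toy Gaussian sampler of synthetic unlabeled points, and the fuzzy Co-forest by a nearest-centroid classifier; all names and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for labeled nodule-sign features: two classes in 2-D space.
X_lab = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
y_lab = np.array([0] * 20 + [1] * 20)

def fit_centroids(X, y):
    # Nearest-centroid model stands in for the fuzzy Co-forest classifier.
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

centroids = fit_centroids(X_lab, y_lab)

# "Generator": synthetic unlabeled samples (stands in for the trained DCGAN).
X_unl = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

# Iterative semisupervised refinement: pseudo-label the synthetic pool,
# fold it into the training set, and refit until the model stabilizes.
for _ in range(5):
    pseudo = predict(centroids, X_unl)
    X_all = np.vstack([X_lab, X_unl])
    y_all = np.concatenate([y_lab, pseudo])
    new_centroids = fit_centroids(X_all, y_all)
    if np.allclose(new_centroids, centroids):
        break
    centroids = new_centroids

acc = (predict(centroids, X_lab) == y_lab).mean()
print(acc)
```

The essential idea carried over from the paper is that a generator enlarges the unlabeled pool, and each round of pseudo-labeling feeds more training data back into the classifier.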
|
46
|
Kumar SP, Latte MV. Fully Automated Segmentation of Lung Parenchyma Using Break and Repair Strategy. JOURNAL OF INTELLIGENT SYSTEMS 2019. [DOI: 10.1515/jisys-2017-0020] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Traditional segmentation methods for pulmonary parenchyma are not accurate because most of them misclassify nodules or tumors adhering to the lung pleural wall as fat and exclude them. In this paper, several techniques are used in succession across different phases, including two-dimensional (2D) optimal threshold selection and 2D reconstruction for lung parenchyma segmentation. Then, lung parenchyma boundaries are repaired using an improved chain code and Bresenham pixel interconnection. The proposed segmentation and repair method is fully automated. Here, 21 thoracic computed tomography slices containing juxtapleural nodules and 115 lung parenchyma scans are used to verify the robustness and accuracy of the proposed method. Results are compared with the most cited active contour methods. Empirical results show that the proposed fully automated method for segmenting lung parenchyma is more accurate. The proposed method is 100% sensitive to the inclusion of nodules/tumors adhering to the lung pleural wall, juxtapleural nodule segmentation accuracy is >98%, and lung parenchyma segmentation accuracy is >96%.
Collapse
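The Bresenham pixel interconnection mentioned in the abstract is the classic integer line-rasterization algorithm, here used to bridge gaps between repaired boundary points. A minimal sketch (the boundary-repair logic around it is the paper's own and is not reproduced here):

```python
def bresenham(x0, y0, x1, y1):
    """Integer-pixel line from (x0, y0) to (x1, y1) via Bresenham's algorithm."""
    points = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:   # step in x
            err -= dy
            x0 += sx
        if e2 < dx:    # step in y
            err += dx
            y0 += sy
    return points

# Connect two boundary pixels with an 8-connected pixel run.
print(bresenham(0, 0, 4, 2))  # → [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]
```

In the segmentation context, each returned pixel would be marked as boundary, closing the break left after excluding a juxtapleural nodule region.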
|
47
|
Candemir S, Antani S. A review on lung boundary detection in chest X-rays. Int J Comput Assist Radiol Surg 2019; 14:563-576. [PMID: 30730032 PMCID: PMC6420899 DOI: 10.1007/s11548-019-01917-1] [Citation(s) in RCA: 51] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2018] [Accepted: 01/16/2019] [Indexed: 01/22/2023]
Abstract
PURPOSE Chest radiography is the most common imaging modality for pulmonary diseases. Due to its wide usage, there is a rich literature addressing automated detection of cardiopulmonary diseases in digital chest X-rays (CXRs). One of the essential steps for automated analysis of CXRs is localizing the relevant region of interest, i.e., isolating the lung region from other, less relevant parts, so that decision-making algorithms can be applied there. This article provides an overview of the recent literature on lung boundary detection in CXR images. METHODS We review the leading lung segmentation algorithms proposed in the period 2006-2017. First, we present a review of articles for posterior-anterior view CXRs. Then, we cover studies that operate on lateral views. We pay particular attention to works that focus on deformed lungs and pediatric cases. We also highlight the radiographic measures extracted from the lung boundary and their use in automatically detecting cardiopulmonary abnormalities. Finally, we identify challenges in dataset curation and the expert delineation process, and we list publicly available CXR datasets. RESULTS (1) We classified algorithms into five categories: rule-based, pixel classification-based, model-based, hybrid, and deep learning-based algorithms. Based on the reviewed articles, hybrid methods and deep learning-based methods surpass the algorithms in other classes and achieve segmentation performance as good as inter-observer performance. However, they require a long training process and incur high computational complexity. (2) We found that most of the algorithms in the literature are evaluated on posterior-anterior view adult CXRs with a healthy lung anatomy appearance, without considering the challenges in abnormal CXRs. (3) We also found that studies of pediatric CXRs are limited. The lung appearance in pediatrics, especially in infant cases, deviates from adult lung appearance due to the pediatric development stages. 
Moreover, pediatric CXRs are noisier than adult CXRs due to interference by other objects, such as someone holding the child's arms or the child's body, and irregular body pose. Therefore, lung boundary detection algorithms developed on adult CXRs may not perform accurately in pediatric cases and need additional constraints suited to pediatric CXR imaging characteristics. (4) One of the main challenges in medical image analysis is accessing suitable datasets. We list benchmark CXR datasets for developing and evaluating lung boundary algorithms. However, the number of CXR images with reference boundaries is limited due to the cumbersome but necessary process of expert boundary delineation. CONCLUSIONS A reliable computer-aided diagnosis system would need to support a greater variety of lung and background appearances. To our knowledge, algorithms in the literature are evaluated on posterior-anterior view adult CXRs with a healthy lung anatomy appearance, without considering ambiguous lung silhouettes due to pathological deformities, anatomical alterations due to misaligned body positioning, the patient's development stage, and gross background noise such as holding hands, jewelry, or the patient's head and legs in the CXR. Considering all the challenges that are not yet well addressed in the literature, developing lung boundary detection algorithms that are robust to such interference remains a challenging task. We believe that a broad review of lung region detection algorithms would be useful for researchers working in the field of automated detection/diagnosis algorithms for lung/heart pathologies in CXRs.
Collapse
Affiliation(s)
- Sema Candemir
- Lister Hill National Center for Biomedical Communications, Communications Engineering Branch, National Library of Medicine, National Institutes of Health, Bethesda, USA
| | - Sameer Antani
- Lister Hill National Center for Biomedical Communications, Communications Engineering Branch, National Library of Medicine, National Institutes of Health, Bethesda, USA
| |
Collapse
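The review above compares segmentation algorithms against expert delineations; such comparisons are conventionally reported as Dice and Jaccard overlap scores. A minimal sketch of those two metrics (the toy masks below are illustrative, not data from any reviewed study):

```python
import numpy as np

def dice_jaccard(pred, ref):
    """Overlap between a predicted lung mask and an expert reference mask."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    inter = np.logical_and(pred, ref).sum()
    dice = 2 * inter / (pred.sum() + ref.sum())       # 2|A∩B| / (|A|+|B|)
    jaccard = inter / np.logical_or(pred, ref).sum()  # |A∩B| / |A∪B|
    return float(dice), float(jaccard)

# Toy 1-D "masks": 3 overlapping pixels, 4 predicted, 4 in the reference.
pred = [1, 1, 1, 1, 0, 0]
ref  = [0, 1, 1, 1, 1, 0]
d, j = dice_jaccard(pred, ref)
print(round(d, 3), round(j, 3))  # → 0.75 0.6
```

Inter-observer agreement, the ceiling the review cites for hybrid and deep methods, is measured the same way, with one expert's mask as `pred` and another's as `ref`.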
|
48
|
Navarro F, Escudero-Vinolo M, Bescos J. Accurate Segmentation and Registration of Skin Lesion Images to Evaluate Lesion Change. IEEE J Biomed Health Inform 2019; 23:501-508. [DOI: 10.1109/jbhi.2018.2825251] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
49
|
Hwang EJ, Park S, Jin KN, Kim JI, Choi SY, Lee JH, Goo JM, Aum J, Yim JJ, Cohen JG, Ferretti GR, Park CM. Development and Validation of a Deep Learning-Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs. JAMA Netw Open 2019; 2:e191095. [PMID: 30901052 PMCID: PMC6583308 DOI: 10.1001/jamanetworkopen.2019.1095] [Citation(s) in RCA: 226] [Impact Index Per Article: 45.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
IMPORTANCE Interpretation of chest radiographs is a challenging task prone to errors, requiring expert readers. An automated system that can accurately classify chest radiographs may help streamline the clinical workflow. OBJECTIVES To develop a deep learning-based algorithm that can classify normal and abnormal results from chest radiographs with major thoracic diseases including pulmonary malignant neoplasm, active tuberculosis, pneumonia, and pneumothorax and to validate the algorithm's performance using independent data sets. DESIGN, SETTING, AND PARTICIPANTS This diagnostic study developed a deep learning-based algorithm using single-center data collected between November 1, 2016, and January 31, 2017. The algorithm was externally validated with multicenter data collected between May 1 and July 31, 2018. A total of 54 221 chest radiographs with normal findings from 47 917 individuals (21 556 men and 26 361 women; mean [SD] age, 51 [16] years) and 35 613 chest radiographs with abnormal findings from 14 102 individuals (8373 men and 5729 women; mean [SD] age, 62 [15] years) were used to develop the algorithm. A total of 486 chest radiographs with normal results and 529 with abnormal results (1 from each participant; 628 men and 387 women; mean [SD] age, 53 [18] years) from 5 institutions were used for external validation. Fifteen physicians, including nonradiology physicians, board-certified radiologists, and thoracic radiologists, participated in observer performance testing. Data were analyzed in August 2018. EXPOSURES Deep learning-based algorithm. MAIN OUTCOMES AND MEASURES Image-wise classification performances measured by area under the receiver operating characteristic curve; lesion-wise localization performances measured by area under the alternative free-response receiver operating characteristic curve. 
RESULTS The algorithm demonstrated a median (range) area under the curve of 0.979 (0.973-1.000) for image-wise classification and 0.972 (0.923-0.985) for lesion-wise localization; the algorithm demonstrated significantly higher performance than all 3 physician groups in both image-wise classification (0.983 vs 0.814-0.932; all P < .005) and lesion-wise localization (0.985 vs 0.781-0.907; all P < .001). Significant improvements in both image-wise classification (0.814-0.932 to 0.904-0.958; all P < .005) and lesion-wise localization (0.781-0.907 to 0.873-0.938; all P < .001) were observed in all 3 physician groups with assistance of the algorithm. CONCLUSIONS AND RELEVANCE The algorithm consistently outperformed physicians, including thoracic radiologists, in the discrimination of chest radiographs with major thoracic diseases, demonstrating its potential to improve the quality and efficiency of clinical practice.
Collapse
Affiliation(s)
- Eui Jin Hwang
- Department of Radiology, Seoul National University College of Medicine, Seoul, South Korea
| | | | - Kwang-Nam Jin
- Department of Radiology, Seoul National University Boramae Medical Center, Seoul, South Korea
| | - Jung Im Kim
- Department of Radiology, Kyung Hee University Hospital at Gangdong, Kyung Hee University College of Medicine, Seoul, South Korea
| | - So Young Choi
- Department of Radiology, Eulji University Medical Center, College of Medicine, Seoul, South Korea
| | - Jong Hyuk Lee
- Department of Radiology, Seoul National University College of Medicine, Seoul, South Korea
| | - Jin Mo Goo
- Department of Radiology, Seoul National University College of Medicine, Seoul, South Korea
| | | | - Jae-Joon Yim
- Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, Seoul National University College of Medicine, Seoul, South Korea
| | - Julien G. Cohen
- Pôle Imagerie, Centre Hospitalier Universitaire de Grenoble, La Tronche, France
| | - Gilbert R. Ferretti
- Pôle Imagerie, Centre Hospitalier Universitaire de Grenoble, La Tronche, France
| | - Chang Min Park
- Department of Radiology, Seoul National University College of Medicine, Seoul, South Korea
| |
Collapse
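The image-wise classification results above are reported as areas under the ROC curve. The AUC can be computed directly from scores and labels via the rank-sum (Mann-Whitney) formulation; a short sketch with made-up scores (not data from the study):

```python
import numpy as np

def auc(scores, labels):
    """AUC as the fraction of (positive, negative) pairs ranked correctly."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    diff = pos[:, None] - neg[None, :]   # all pairwise score differences
    # Correctly ordered pairs count 1, ties count 0.5.
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

# Hypothetical algorithm confidence scores and ground-truth abnormal labels.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]
print(auc(scores, labels))  # 8 of 9 pairs ordered correctly → 8/9 ≈ 0.889
```

An AUC of 0.979, as the algorithm achieved for image-wise classification, means a randomly chosen abnormal radiograph is scored above a randomly chosen normal one about 98% of the time.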
|
50
|
Zhang J, Dashtbozorg B, Huang F, Tan T, ter Haar Romeny BM. A fully automated pipeline of extracting biomarkers to quantify vascular changes in retina-related diseases. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING-IMAGING AND VISUALIZATION 2018. [DOI: 10.1080/21681163.2018.1519851] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Affiliation(s)
- Jiong Zhang
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
| | - Behdad Dashtbozorg
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
| | - Fan Huang
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
| | - Tao Tan
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
| | - B. M. ter Haar Romeny
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
| |
Collapse
|