1. Pyrros A, Chen A, Rodríguez-Fernández JM, Borstelmann SM, Cole PA, Horowitz J, Chung J, Nikolaidis P, Boddipalli V, Siddiqui N, Willis M, Flanders AE, Koyejo S. Deep Learning-Based Digitally Reconstructed Tomography of the Chest in the Evaluation of Solitary Pulmonary Nodules: A Feasibility Study. Acad Radiol 2023;30:739-748. PMID: 35690536; PMCID: PMC9732145; DOI: 10.1016/j.acra.2022.05.005
Abstract
RATIONALE AND OBJECTIVES: Computed tomography (CT) is preferred for evaluating solitary pulmonary nodules (SPNs), but access or availability may be lacking; in addition, overlapping anatomy can hinder detection of SPNs on chest radiographs. We developed and evaluated the clinical feasibility of a deep learning algorithm that generates digitally reconstructed tomography (DRT) images of the chest from digitally reconstructed frontal and lateral radiographs (DRRs) and uses them to detect SPNs.
METHODS: This single-institution retrospective study included 637 patients with noncontrast helical CT of the chest (mean age 68 years, median age 69 years, standard deviation 11.7 years; 355 women) between 11/2012 and 12/2020, with SPNs measuring 10-30 mm. A deep learning model was trained on 562 patients, validated on 60 patients, and tested on the remaining 15 patients. Diagnostic performance (SPN detection) from planar radiography (DRRs and CT scanograms, PR) alone or with DRT was evaluated by two radiologists in an independent blinded fashion. The quality of the DRT SPN image in terms of nodule size and location, morphology, and opacity was also evaluated and compared to the ground-truth CT images.
RESULTS: Diagnostic performance was higher from DRT plus PR than from PR alone (area under the receiver operating characteristic curve 0.95-0.98 versus 0.80-0.85; p < 0.05). DRT plus PR enabled diagnosis of SPNs in 11 more patients than PR alone. Interobserver agreement was 0.82 for DRT plus PR and 0.89 for PR alone; interobserver agreement for size and location, morphology, and opacity of the DRT SPN was 0.94, 0.68, and 0.38, respectively.
CONCLUSION: For SPN detection, DRT plus PR showed better diagnostic performance than PR alone. Deep learning can be used to generate DRT images and improve detection of SPNs.
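The DRR generation step underlying this study, projecting a CT volume into frontal and lateral views, can be sketched with a parallel-beam ray summation. This is a minimal NumPy illustration of ours, not the authors' implementation; the function name and geometry are assumptions:

```python
import numpy as np

def drr_pair(ct_volume):
    """Simulate frontal and lateral digitally reconstructed radiographs
    (DRRs) from a CT attenuation volume indexed (z, y, x) by summing
    attenuation along one axis (parallel-beam approximation)."""
    frontal = ct_volume.sum(axis=1)  # integrate along the anterior-posterior axis
    lateral = ct_volume.sum(axis=2)  # integrate along the left-right axis

    def normalize(img):
        span = img.max() - img.min()
        return (img - img.min()) / span if span > 0 else np.zeros_like(img)

    return normalize(frontal), normalize(lateral)

volume = np.random.default_rng(0).random((64, 64, 64))
frontal, lateral = drr_pair(volume)
```

Real DRR pipelines use calibrated source geometry and exponential attenuation; the sum-and-normalize form above only conveys the projection idea.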
Affiliation(s)
- Ayis Pyrros
- Department of Radiology, Duly Health and Care, Hinsdale, Illinois
- Andrew Chen
- Department of Computer Science, University of Illinois at Urbana-Champaign, Champaign, Illinois
- Patrick A Cole
- Department of Computer Science, University of Illinois at Urbana-Champaign, Champaign, Illinois
- Jeanne Horowitz
- Department of Radiology, Northwestern Memorial Hospital, Northwestern University, Chicago, Illinois
- Jonathan Chung
- Department of Radiology, University of Chicago, Chicago, Illinois
- Paul Nikolaidis
- Department of Radiology, Northwestern Memorial Hospital, Northwestern University, Chicago, Illinois
- Nasir Siddiqui
- Department of Radiology, Duly Health and Care, Hinsdale, Illinois
- Melinda Willis
- Department of Radiology, Duly Health and Care, Hinsdale, Illinois
- Adam Eugene Flanders
- Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, Pennsylvania
- Sanmi Koyejo
- Department of Computer Science, University of Illinois at Urbana-Champaign, Champaign, Illinois
2. Liu Y, Zeng F, Ma M, Zheng B, Yun Z, Qin G, Yang W, Feng Q. Bone suppression of lateral chest x-rays with imperfect and limited dual-energy subtraction images. Comput Med Imaging Graph 2023;105:102186. PMID: 36731328; DOI: 10.1016/j.compmedimag.2023.102186
Abstract
Bone suppression aims to suppress the bone components superimposed over the soft tissues within the lung area of a chest X-ray (CXR), which is potentially useful for subsequent lung disease diagnosis by radiologists as well as by computer-aided systems. Although bone suppression for frontal CXRs has been well studied, it remains challenging for lateral CXRs because of the limited and imperfect dual-energy subtraction (DES) dataset containing paired lateral CXR and soft-tissue/bone images, and because of the more complex anatomical structures in the lateral view. In this work, we propose a bone suppression method for lateral CXRs that leverages a two-stage distillation learning strategy and a specific data correction method. First, a primary model is trained on a real DES dataset with limited samples. The bone-suppressed results produced by the primary model on a relatively large lateral CXR dataset are then improved by a designed gradient correction method. Second, the corrected results serve as training samples for the distilled model. By automatically learning knowledge from both the primary model and the extra correction procedure, the distilled model is expected to improve on the primary model while omitting the tedious correction procedure. For both the primary and distilled models, we adopt an ensemble model named MsDd-MAP, which learns complementary multi-scale and dual-domain (i.e., intensity and gradient) information and fuses it in a maximum-a-posteriori (MAP) framework. Our method is evaluated on a two-exposure lateral DES dataset of 46 subjects and a lateral CXR dataset of 240 subjects. The experimental results suggest that our method is superior to competing methods on the quantitative evaluation metrics. Furthermore, subjective evaluation by three experienced radiologists indicates that the distilled model produces more visually appealing soft-tissue images than the primary model, even comparable to real DES imaging for lateral CXRs.
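The two-stage idea (a primary model, a correction step, then a student distilled on the corrected outputs so inference no longer needs the correction) can be illustrated with a toy numeric sketch. The linear models and the fixed affine "correction" below are stand-ins of our own, not the paper's MsDd-MAP networks or gradient correction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: a "primary" model (a fixed linear map standing in for a network
# trained on the small DES dataset) produces imperfect bone-suppressed
# outputs on a larger unlabeled set.
X = rng.normal(size=(200, 8))          # stand-in image features
w_primary = rng.normal(size=8)
primary_out = X @ w_primary

# A correction step refines the primary outputs (toy affine adjustment
# standing in for the paper's gradient correction).
corrected = 0.9 * primary_out + 0.1

# Stage 2: distill a student model on the corrected outputs, so the
# correction procedure is no longer needed at inference time.
w_student, b_student, lr = np.zeros(8), 0.0, 0.05
for _ in range(2000):
    residual = X @ w_student + b_student - corrected
    w_student -= lr * X.T @ residual / len(X)
    b_student -= lr * residual.mean()

distill_error = float(np.abs(X @ w_student + b_student - corrected).mean())
```

After training, the student alone reproduces the corrected outputs closely, which is the point of the distillation stage.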
Affiliation(s)
- Yunbi Liu
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, Shenzhen, Guangdong 518172, China; Shenzhen Research Institute of Big Data, Shenzhen, China; University of Science and Technology of China, Hefei, China
- Fengxia Zeng
- Radiology Department, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Mengwei Ma
- Radiology Department, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Bowen Zheng
- Radiology Department, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Zhaoqiang Yun
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Genggeng Qin
- Radiology Department, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
3. Kim H, Lee KH, Han K, Lee JW, Kim JY, Im DJ, Hong YJ, Choi BW, Hur J. Development and Validation of a Deep Learning-Based Synthetic Bone-Suppressed Model for Pulmonary Nodule Detection in Chest Radiographs. JAMA Netw Open 2023;6:e2253820. PMID: 36719681; PMCID: PMC9890286; DOI: 10.1001/jamanetworkopen.2022.53820
Abstract
Importance: Dual-energy chest radiography exhibits better sensitivity than single-energy chest radiography, partly owing to its ability to remove overlying anatomical structures.
Objectives: To develop and validate a deep learning-based synthetic bone-suppressed (DLBS) nodule-detection algorithm for pulmonary nodule detection on chest radiographs.
Design, Setting, and Participants: This decision analytical modeling study used data from 1449 patients collected at 3 centers between November 2015 and July 2019. The DLBS nodule-detection algorithm was trained using single-center data (institute 1) of 998 chest radiographs and validated using 2 external data sets (institute 2, 246 patients; institute 3, 205 patients). Statistical analysis was performed from March to December 2021.
Exposures: DLBS nodule-detection algorithm.
Main Outcomes and Measures: The nodule-detection performance of the DLBS model was compared with that of a convolutional neural network nodule-detection algorithm (original model). Reader performance testing was conducted by 3 thoracic radiologists with and without assistance from the DLBS algorithm. Sensitivity and false-positive markings per image (FPPI) were compared.
Results: Training data consisted of 998 patients (539 men [54.0%]; mean [SD] age, 54.2 [9.82] years), and the 2 external validation data sets consisted of 246 patients (133 men [54.1%]; mean [SD] age, 55.3 [8.7] years) and 205 patients (105 men [51.2%]; mean [SD] age, 51.8 [9.1] years). On the external validation data set of institute 2, the bone-suppressed model showed higher sensitivity than the original model for nodule detection (91.5% [109 of 119] vs 79.8% [95 of 119]; P < .001). The overall mean FPPI was reduced with the bone-suppressed model compared with the original model (0.07 [17 of 246] vs 0.09 [23 of 246]; P < .001). In the observer performance testing with the data of institute 3, the mean sensitivity of the 3 radiologists was 77.5% (95% CI, 69.9%-85.2%), whereas that of radiologists assisted by the DLBS model was 92.1% (95% CI, 86.3%-97.3%; P < .001). The 3 radiologists had fewer FPPI when assisted by the DLBS model (0.071 [95% CI, 0.041-0.111] vs 0.151 [95% CI, 0.111-0.210]; P < .001).
Conclusions and Relevance: This decision analytical modeling study found that the DLBS model was more sensitive in detecting pulmonary nodules on chest radiographs than the original model. These findings suggest that the DLBS model could help radiologists detect lung nodules on chest radiographs without the need for specialized equipment or an increased radiation dose.
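Sensitivity and FPPI as reported in this abstract are simple ratios over the stated counts. A small sketch (the function name is ours) reproduces the institute 2 figures for the bone-suppressed model:

```python
def detection_metrics(n_nodules, n_detected, n_false_positives, n_images):
    """Per-nodule sensitivity and false-positive markings per image (FPPI)."""
    return n_detected / n_nodules, n_false_positives / n_images

# Counts reported above for the bone-suppressed model at institute 2:
# 109 of 119 nodules detected, 17 false positives across 246 images.
sensitivity, fppi = detection_metrics(119, 109, 17, 246)
```

This yields a sensitivity of about 0.916 and an FPPI of about 0.07, matching the abstract's 91.5% and 0.07 to rounding.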
Affiliation(s)
- Hwiyoung Kim
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Image Data Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Kye Ho Lee
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Image Data Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Department of Radiology, Dankook University Hospital, Cheonan, Chungnam Province, Republic of Korea
- Kyunghwa Han
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Image Data Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Ji Won Lee
- Department of Radiology, Pusan National University Hospital, Pusan National University School of Medicine, Busan, Korea
- Medical Research Institute, Busan, Korea
- Jin Young Kim
- Department of Radiology, Dongsan Medical Center, Keimyung University College of Medicine, Daegu, Korea
- Dong Jin Im
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Image Data Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Yoo Jin Hong
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Image Data Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Byoung Wook Choi
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Image Data Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Jin Hur
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Image Data Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
4. Ait Nasser A, Akhloufi MA. A Review of Recent Advances in Deep Learning Models for Chest Disease Detection Using Radiography. Diagnostics (Basel) 2023;13:159. PMID: 36611451; PMCID: PMC9818166; DOI: 10.3390/diagnostics13010159
Abstract
Chest X-ray radiography (CXR) is among the most frequently used medical imaging modalities and has preeminent value in the detection of multiple life-threatening diseases. Radiologists can visually inspect CXR images for the presence of disease, but most thoracic diseases have very similar patterns, which makes diagnosis prone to human error and leads to misdiagnosis. Computer-aided detection (CAD) of lung diseases in CXR images is therefore a popular topic in medical imaging research, and machine learning (ML) and deep learning (DL) provide techniques to make this task more efficient and faster; numerous experiments on the diagnosis of various diseases have demonstrated the potential of these techniques. In comparison to previous reviews, our study describes in detail several publicly available CXR datasets for different diseases. It presents an overview of recent deep learning models using CXR images to detect chest diseases, such as VGG, ResNet, DenseNet, Inception, EfficientNet, RetinaNet, and ensemble learning methods that combine multiple models. It summarizes the techniques used for CXR image preprocessing (enhancement, segmentation, bone suppression, and data augmentation) to improve image quality and address data imbalance, as well as the use of DL models to speed up the diagnosis process. The review also discusses challenges in the published literature, highlights the importance of interpretability and explainability for better understanding DL models' detections, and outlines directions to help researchers develop more effective models for early and automatic detection of chest diseases.
Affiliation(s)
- Moulay A. Akhloufi
- Perception, Robotics and Intelligent Machines Research Group (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1C 3E9, Canada
5. Sun H, Ren G, Teng X, Song L, Li K, Yang J, Hu X, Zhan Y, Wan SBN, Wong MFE, Chan KK, Tsang HCH, Xu L, Wu TC, Kong FM(S), Wang YXJ, Qin J, Chan WCL, Ying M, Cai J. Artificial intelligence-assisted multistrategy image enhancement of chest X-rays for COVID-19 classification. Quant Imaging Med Surg 2023;13:394-416. PMID: 36620146; PMCID: PMC9816729; DOI: 10.21037/qims-22-610
Abstract
Background: The coronavirus disease 2019 (COVID-19) pandemic led to a dramatic worldwide increase in pneumonia cases. In this study, we aimed to develop an AI-assisted multistrategy image enhancement technique for chest X-ray (CXR) images to improve the accuracy of COVID-19 classification.
Methods: Our new classification strategy consisted of 3 parts. First, an improved U-Net model with a variational encoder segmented the lung region in CXR images processed by histogram equalization. Second, a residual network (ResNet) model with multidilated-rate convolution layers was used to suppress the bone signals in the 217 lung-only CXR images; 80% of the available data were allocated for training and validation and the remaining 20% for testing, yielding enhanced CXR images containing only soft-tissue information. Third, a neural network model with a residual cascade was used for super-resolution reconstruction of the low-resolution bone-suppressed CXR images, with training and testing data of 1,200 and 100 CXR images, respectively. To evaluate the new strategy, improved visual geometry group (VGG)-16 and ResNet-18 models were used for the COVID-19 classification task on 2,767 CXR images. The accuracy of the multistrategy-enhanced CXR images was verified through comparative experiments with various enhancement approaches: 8-fold cross-validation was performed on the bone suppression model for quantitative verification, and the CXR images obtained by the improved method were used to train the 2 classification models.
Results: Compared with other methods, the CXR images obtained with the proposed model performed better on the peak signal-to-noise ratio and root mean square error metrics. The super-resolution bone-suppressed CXR images obtained with the neural network model were also anatomically close to the real CXR images. Compared with the initial CXR images, the classification accuracy rates on the internal and external testing data increased by 5.09% and 12.81%, respectively, for the VGG-16 model and by 3.51% and 18.20%, respectively, for the ResNet-18 model. These results were better than those of the single-enhancement, double-enhancement, and no-enhancement CXR images.
Conclusions: The multistrategy-enhanced CXR images can help to classify COVID-19 more accurately than the other existing methods.
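The histogram-equalization preprocessing used in the first step can be sketched in a few lines of NumPy. This is a global (not adaptive) variant and an illustration of ours, not the paper's code:

```python
import numpy as np

def histogram_equalize(img, n_bins=256):
    """Global histogram equalization for a grayscale image scaled to [0, 1]:
    map every pixel through the cumulative distribution of its intensity
    bin, spreading the occupied gray levels over the full range."""
    hist, _ = np.histogram(img, bins=n_bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                                  # normalize CDF to [0, 1]
    bin_idx = np.clip((img * n_bins).astype(int), 0, n_bins - 1)
    return cdf[bin_idx]

low_contrast = np.linspace(0.45, 0.55, 100).reshape(10, 10)
equalized = histogram_equalize(low_contrast)
```

Applied to the low-contrast ramp above, the output occupies nearly the full [0, 1] range, which is the contrast-stretching effect the preprocessing relies on.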
Affiliation(s)
- Hongfei Sun
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- School of Automation, Northwestern Polytechnical University, Xi’an, China
- Ge Ren
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Xinzhi Teng
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Liming Song
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Kang Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi’an, China
- Xiaofei Hu
- Department of Radiology, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing, China
- Yuefu Zhan
- Department of Radiology, Hainan Women and Children’s Medical Center, Hainan, China
- Shiu Bun Nelson Wan
- Department of Radiology, Pamela Youde Nethersole Eastern Hospital, Hong Kong, China
- Man Fung Esther Wong
- Department of Radiology, Pamela Youde Nethersole Eastern Hospital, Hong Kong, China
- King Kwong Chan
- Department of Radiology and Imaging, Queen Elizabeth Hospital, Hong Kong, China
- Lu Xu
- Department of Radiology and Imaging, Queen Elizabeth Hospital, Hong Kong, China
- Tak Chiu Wu
- Department of Medicine, Queen Elizabeth Hospital, Hong Kong, China
- Yi Xiang J. Wang
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong, China
- Jing Qin
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Wing Chi Lawrence Chan
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Michael Ying
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
6. Lim Y, Lee M, Cho H, Kim G, Choi J, Cha B, Kim S. Feasibility study of deep-learning-based bone suppression incorporated with single-energy material decomposition technique in chest X-rays. Br J Radiol 2022;95:20211182. PMID: 35993343; DOI: 10.1259/bjr.20211182
Abstract
OBJECTIVE: To improve the detection of lung abnormalities in chest X-rays by accurately suppressing overlapping bone structures in the lung area; according to the literature on missed lung cancer in chest X-rays, such structures are a significant cause of chest-related diagnostic errors.
METHODS: This study presents a deep-learning-based bone suppression method in which a residual U-Net model is trained for chest X-rays using a data set generated from CT with the single-energy material decomposition (SEMD) technique. Synthetic projection images and soft-tissue selective images were obtained from the CT data set via SEMD and then used as the input and label data of the U-Net network. The trained network was tested on synthetic chest X-rays and two real chest radiographs.
RESULTS: Bone-suppressed images of the real chest radiographs obtained by the proposed method were similar to the results from the American Association of Physicists in Medicine lung CT data; pulmonary nodules in the soft-tissue selective images appeared more clearly than in the synthetic projection images. The peak signal-to-noise ratio and structural similarity values measured between the output and the corresponding label images were approximately 17.85 and 0.90, respectively.
CONCLUSION: The proposed method effectively yielded bone-suppressed chest X-ray images, indicating its clinical usefulness, and can improve the detection of lung abnormalities in chest X-rays.
ADVANCES IN KNOWLEDGE: The idea of using SEMD to obtain large amounts of paired images for deep-learning-based bone suppression algorithms is novel.
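The core idea here, deriving paired (input, soft-tissue label) images from CT by projecting with and without bone voxels, can be sketched as follows. The fixed HU threshold and the crude attenuation surrogate are simplifying assumptions of ours, not the SEMD algorithm itself:

```python
import numpy as np

def paired_projection(ct_hu, bone_threshold_hu=300.0):
    """From a CT volume in Hounsfield units indexed (z, y, x), generate a
    synthetic projection radiograph and its soft-tissue-only label by
    zeroing voxels above a bone HU threshold before ray summation."""
    mu = ct_hu + 1000.0                                   # shift so air is ~0
    full_projection = mu.sum(axis=0)                      # bones included
    soft_only = np.where(ct_hu < bone_threshold_hu, mu, 0.0)
    soft_projection = soft_only.sum(axis=0)               # bones removed
    return full_projection, soft_projection

ct = np.zeros((3, 2, 2))       # water-equivalent background (0 HU)
ct[0, 0, 0] = 1000.0           # one bone-like voxel
full, soft = paired_projection(ct)
```

Columns whose rays pass through the bone-like voxel differ between the two projections; elsewhere the pair is identical, which is exactly the supervision signal a bone suppression network needs.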
Affiliation(s)
- Younghwan Lim
- Department of Radiation Convergence Engineering, Yonsei University, Wonju, Korea
- Minjae Lee
- Department of Radiation Convergence Engineering, Yonsei University, Wonju, Korea
- Hyosung Cho
- Department of Radiation Convergence Engineering, Yonsei University, Wonju, Korea
- Guna Kim
- Radiation Safety Management Division, Korea Atomic Energy Research Institute, Daejeon, Korea
- Jaegu Choi
- Electro-Medical Device Research Center, Korea Electrotechnology Research Institute, Ansan, Korea
- Bokyung Cha
- Electro-Medical Device Research Center, Korea Electrotechnology Research Institute, Ansan, Korea
- Sunkwon Kim
- Electro-Medical Device Research Center, Korea Electrotechnology Research Institute, Ansan, Korea
7. Cho K, Seo J, Kyung S, Kim M, Hong GS, Kim N. Bone suppression on pediatric chest radiographs via a deep learning-based cascade model. Comput Methods Programs Biomed 2022;215:106627. PMID: 35032722; DOI: 10.1016/j.cmpb.2022.106627
Abstract
BACKGROUND AND OBJECTIVE: Bone suppression images (BSIs) of chest radiographs (CXRs) have been proven to improve the diagnosis of pulmonary diseases. To acquire BSIs, dual-energy subtraction (DES) or a deep-learning-based model trained with DES-based BSIs has been used. However, neither technique can be applied to pediatric patients owing to the harmful effects of DES. In this study, we developed a novel method for bone suppression in pediatric CXRs.
METHODS: First, a model was developed by training a 2-channel contrastive-unpaired-image-translation network on digitally reconstructed radiographs (DRRs) of adults, which were used to generate pseudo-CXRs from computed tomography images. Second, this model was applied to 129 pediatric DRRs to generate paired training data of pseudo-pediatric CXRs. Finally, a bone suppression model for pediatric CXRs was developed by training a U-Net with these paired data.
RESULTS: The evaluation metrics were peak signal-to-noise ratio, root mean absolute error, and structural similarity index measure at the soft-tissue and bone regions of the lung. In addition, an expert radiologist scored the effectiveness of the BSIs on a scale of 1-5; the obtained score of 3.31 ± 0.48 indicates that the BSIs show homogeneous bone removal despite subtle residual bone shadow.
CONCLUSION: Our method preserves pixel intensity in soft-tissue regions while subtracting bones well; this can be useful for detecting early pulmonary disease in pediatric CXRs.
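Evaluating metrics separately at soft-tissue and bone regions, as done here, amounts to restricting the error computation to a region mask. A minimal sketch (the names are ours, not the paper's):

```python
import numpy as np

def masked_mae(pred, target, region_mask):
    """Mean absolute error restricted to a boolean region mask, so one
    output image can be scored separately at soft-tissue and bone regions."""
    return float(np.abs(pred[region_mask] - target[region_mask]).mean())

pred = np.array([[1.0, 2.0], [3.0, 5.0]])
target = np.zeros((2, 2))
bone_mask = np.array([[True, False], [False, True]])

bone_error = masked_mae(pred, target, bone_mask)     # bone-region score
soft_error = masked_mae(pred, target, ~bone_mask)    # soft-tissue score
```

In practice the masks would come from a lung/bone segmentation rather than being hand-written as here.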
Affiliation(s)
- Kyungjin Cho
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, College of Medicine, University of Ulsan, Seoul, Republic of Korea
- Jiyeon Seo
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, College of Medicine, University of Ulsan, Seoul, Republic of Korea
- Sunggu Kyung
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, College of Medicine, University of Ulsan, Seoul, Republic of Korea
- Mingyu Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 05505, Republic of Korea
- Gil-Sun Hong
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine and Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 05505, Republic of Korea
- Namkug Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 05505, Republic of Korea; Department of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 05505, Republic of Korea
8. Bae K, Oh DY, Yun ID, Jeon KN. Bone Suppression on Chest Radiographs for Pulmonary Nodule Detection: Comparison between a Generative Adversarial Network and Dual-Energy Subtraction. Korean J Radiol 2022;23:139-149. PMID: 34983100; PMCID: PMC8743147; DOI: 10.3348/kjr.2021.0146
Abstract
OBJECTIVE: To compare the effects of bone suppression imaging using deep learning (BSp-DL), based on a generative adversarial network (GAN), and bone subtraction imaging using a dual-energy technique (BSt-DE) on radiologists' performance for pulmonary nodule detection on chest radiographs (CXRs).
MATERIALS AND METHODS: A total of 111 adults, including 49 patients with 83 pulmonary nodules, who underwent both CXR using the dual-energy technique and chest CT, were enrolled. Using CT as the reference standard, two independent radiologists evaluated CXR images for the presence or absence of pulmonary nodules in three reading sessions (standard CXR, BSt-DE CXR, and BSp-DL CXR). Person-wise and nodule-wise performances were assessed using receiver operating characteristic (ROC) and alternative free-response ROC (AFROC) curve analyses, respectively. Subgroup analyses based on nodule size, location, and the presence of overlapping bones were performed.
RESULTS: BSt-DE, with an area under the AFROC curve (AUAFROC) of 0.996 and 0.976 for readers 1 and 2, respectively, and BSp-DL, with AUAFROC of 0.981 and 0.958, respectively, showed better nodule-wise performance than standard CXR (AUAFROC of 0.907 and 0.808, respectively; p ≤ 0.005). In the person-wise analysis, BSp-DL, with an area under the ROC curve (AUROC) of 0.984 and 0.931 for readers 1 and 2, respectively, showed better performance than standard CXR (AUROC of 0.915 and 0.798, respectively; p ≤ 0.011) and performance comparable to BSt-DE (AUROC of 0.988 and 0.974; p ≥ 0.064). BSt-DE and BSp-DL were superior to standard CXR for detecting nodules overlapping with bones (p < 0.017) or in the upper/middle lung zone (p < 0.017). BSt-DE was superior (p < 0.017) to BSp-DL in detecting peripheral and sub-centimeter nodules.
CONCLUSION: BSp-DL (GAN-based bone suppression) showed performance comparable to BSt-DE and can improve radiologists' performance in detecting pulmonary nodules on CXRs. Nevertheless, further technical improvements are required for better delineation of small and peripheral nodules.
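The person-wise AUROC comparisons above rest on the standard ROC-area definition, which is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative case. A small illustration of ours (not the study's analysis code):

```python
def auroc(scores_with_nodule, scores_without_nodule):
    """Area under the ROC curve via the Mann-Whitney statistic:
    pairwise comparisons of positive vs negative scores, ties count half."""
    wins = 0.0
    for pos in scores_with_nodule:
        for neg in scores_without_nodule:
            if pos > neg:
                wins += 1.0
            elif pos == neg:
                wins += 0.5
    return wins / (len(scores_with_nodule) * len(scores_without_nodule))
```

A reader who ranks every nodule-positive case above every negative case scores 1.0; chance-level ranking scores 0.5.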
Affiliation(s)
- Kyungsoo Bae
- Department of Radiology, Institute of Health Sciences, Gyeongsang National University School of Medicine, Jinju, Korea; Department of Radiology, Gyeongsang National University Changwon Hospital, Changwon, Korea
- Il Dong Yun
- Division of Computer and Electronic System Engineering, Hankuk University of Foreign Studies, Yongin, Korea
- Kyung Nyeo Jeon
- Department of Radiology, Institute of Health Sciences, Gyeongsang National University School of Medicine, Jinju, Korea; Department of Radiology, Gyeongsang National University Changwon Hospital, Changwon, Korea
9. Ren G, Xiao H, Lam SK, Yang D, Li T, Teng X, Qin J, Cai J. Deep learning-based bone suppression in chest radiographs using CT-derived features: a feasibility study. Quant Imaging Med Surg 2021;11:4807-4819. PMID: 34888191; DOI: 10.21037/qims-20-1230
Abstract
Background Bone suppression of chest X-ray holds the potential to improve the accuracy of target localization in image-guided radiation therapy (IGRT). However, the training dataset for bone suppression is limited because of the scarcity of bone-free radiographs. This study aims to develop a deep learning-based bone suppression method using CT-derived features to reduce the reliance on the bone-free dataset. Methods In this study, 59 high-resolution lung CT scans were processed to generate the lung digital radiographs (DRs), bone DRs, and bone-free DRs, for the training and internal validation of the proposed cascade convolutional neural network (CCNN). A three-stage image processing framework (CT segmentation, DR simulation, and feature expansion) was developed to expand simulated lung DRs with different weightings of bone intensity. The CCNN consists of a bone detection network and a bone suppression network. In external validation, the trained CCNN was evaluated using 30 chest radiographs. The synthesized bone-suppressed radiographs were compared with the bone-suppressed reference in terms of peak signal-to-noise ratio (PSNR), mean absolute error (MAE), structural similarity index measure (SSIM), and Spearman's correlation coefficient. Furthermore, the effectiveness of the proposed feature expansion method and CCNN model were assessed via the ablation experiment and replacement experiment, respectively. Results Evaluation on real chest radiographs showed that the bone-suppressed chest radiographs closely matched with the bone-suppressed reference, achieving an accuracy of MAE =0.0087±0.0030, SSIM =0.8458±0.0317, correlation of 0.9554±0.0170, and PNSR of 20.86±1.60. After removing the feature expansion from the CCNN model, the performance decreased in terms of MAE (0.0294±0.0093, -237.9%), SSIM (0.7747±0.0.0416, -8.4%), correlation (0.8772±0.0271, -8.2%), and PSNR (15.53±1.42, -25.5%) metrics. 
Conclusions We demonstrated a novel deep learning-based bone suppression method that uses CT-derived features to reduce reliance on bone-free datasets. Implementing the feature expansion procedure markedly improved model performance. For application to target localization in IGRT, clinical testing of the proposed method in a radiation therapy setting is needed to move from theory into practice.
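The DR simulation and feature expansion steps described in this abstract can be sketched in a few lines of numpy. This is an illustrative assumption, not the paper's implementation: `simulate_drr` uses a simple parallel-ray sum, and the 4×4×4 toy volume, `bone_mask`, and the weighting values are all made up for demonstration.

```python
import numpy as np

def simulate_drr(ct_volume, axis=1):
    """Simulate a digital radiograph (DR) by summing attenuation
    along one axis of the CT volume (parallel-ray simplification)."""
    return ct_volume.sum(axis=axis)

def expand_bone_weighting(soft_dr, bone_dr, weights):
    """Feature expansion: recombine the soft-tissue DR and bone DR
    with different bone-intensity weightings to enlarge the dataset."""
    return [soft_dr + w * bone_dr for w in weights]

# Toy example: a 4x4x4 "CT" volume with a fake bone plane.
ct = np.ones((4, 4, 4))
bone_mask = np.zeros_like(ct)
bone_mask[:, :, 1] = 1.0                       # pretend this plane is bone
soft_dr = simulate_drr(ct * (1 - bone_mask))   # bone-free DR
bone_dr = simulate_drr(ct * bone_mask)         # bone-only DR
expanded = expand_bone_weighting(soft_dr, bone_dr, weights=[0.5, 1.0, 1.5])
```

Each element of `expanded` is a simulated lung DR with a different bone intensity, which is the kind of augmented training pair the ablation experiment shows to be important.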
Affiliation(s)
- Ge Ren, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Haonan Xiao, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Sai-Kit Lam, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Dongrong Yang, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Tian Li, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Xinzhi Teng, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jing Qin, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Jing Cai, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
|
10
|
Çallı E, Sogancioglu E, van Ginneken B, van Leeuwen KG, Murphy K. Deep learning for chest X-ray analysis: A survey. Med Image Anal 2021; 72:102125. [PMID: 34171622 DOI: 10.1016/j.media.2021.102125] [Citation(s) in RCA: 103] [Impact Index Per Article: 34.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 05/17/2021] [Accepted: 05/27/2021] [Indexed: 12/14/2022]
Abstract
Recent advances in deep learning have led to promising performance in many medical image analysis tasks. As the most commonly performed radiological exam, chest radiography is a particularly important modality for which a variety of applications have been researched. The release of multiple large, publicly available chest X-ray datasets in recent years has encouraged research interest and boosted the number of publications. In this paper, we review all studies using deep learning on chest radiographs published before March 2021, categorizing works by task: image-level prediction (classification and regression), segmentation, localization, image generation, and domain adaptation. Detailed descriptions of all publicly available datasets are included, and commercial systems in the field are described. A comprehensive discussion of the current state of the art is provided, including caveats on the use of public datasets, the requirements of clinically useful systems, and gaps in the current literature.
Affiliation(s)
- Erdi Çallı, Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Ecem Sogancioglu, Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Bram van Ginneken, Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Kicky G van Leeuwen, Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Keelin Murphy, Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
|
11
|
Farhat H, Sakr GE, Kilany R. Deep learning applications in pulmonary medical imaging: recent updates and insights on COVID-19. Machine Vision and Applications 2020; 31:53. [PMID: 32834523 PMCID: PMC7386599 DOI: 10.1007/s00138-020-01101-5] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Revised: 06/21/2020] [Accepted: 07/07/2020] [Indexed: 05/07/2023]
Abstract
Shortly after deep learning algorithms were applied to image analysis, and more importantly to medical imaging, their applications increased significantly to become a trend. Likewise, deep learning (DL) applications on pulmonary medical images have achieved remarkable advances, leading to promising clinical trials. Yet, the coronavirus may be the real trigger that opens the route for fast integration of DL into hospitals and medical centers. This paper reviews the development of deep learning applications in medical image analysis targeting pulmonary imaging and gives insights into contributions to COVID-19. It covers more than 160 contributions and surveys in this field, all issued between February 2017 and May 2020 inclusive, highlighting various deep learning tasks such as classification, segmentation, and detection, as well as different pulmonary pathologies such as airway diseases, lung cancer, COVID-19, and other infections. It summarizes and discusses the current state-of-the-art approaches in this research domain, highlighting the challenges, especially those posed by the current COVID-19 pandemic.
Affiliation(s)
- Hanan Farhat, Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
- George E. Sakr, Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
- Rima Kilany, Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
|
12
|
Matsubara N, Teramoto A, Saito K, Fujita H. Bone suppression for chest X-ray image using a convolutional neural filter. Australasian Physical & Engineering Sciences in Medicine 2019; 43:10.1007/s13246-019-00822-w. [PMID: 31773501 DOI: 10.1007/s13246-019-00822-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/23/2019] [Accepted: 11/19/2019] [Indexed: 12/22/2022]
Abstract
Chest X-rays are used in mass screening for the early detection of lung cancer. However, lung nodules are often overlooked because of bones overlapping the lung fields. Bone suppression techniques based on artificial intelligence have been developed to address this problem, but their accuracy needs improvement. In this study, we propose a convolutional neural filter (CNF) for bone suppression based on a convolutional neural network, which is frequently used in the medical field and has excellent performance in image processing. The CNF outputs the bone component of a target pixel from the pixel values in the neighborhood of that pixel. By processing all positions in the input image, a bone-extracted image is generated. Finally, the bone-suppressed image is obtained by subtracting the bone-extracted image from the original chest X-ray image. Bone suppression was most accurate when the CNF had six convolutional layers, yielding a bone suppression rate of 89.2%. In addition, abnormalities, if present, were effectively imaged by suppressing only the bone components while maintaining the soft tissue. These results suggest that the chances of missing abnormalities may be reduced by the proposed method, which is useful for bone suppression in chest X-ray images.
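The sliding-window scheme this abstract describes (predict the bone component of each pixel from its neighborhood, then subtract the bone image from the original) can be sketched as follows. The trained CNN is replaced here by an arbitrary callable; `toy_predictor`, the 5×5 image, and the thresholding rule are assumptions made only to keep the sketch runnable.

```python
import numpy as np

def extract_bone(image, predict_patch, k=3):
    """Slide a (k x k) window over the image; at each position the
    predictor returns the bone component of the centre pixel.
    In the paper the predictor is a trained CNN; here any callable works."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    bone = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            bone[i, j] = predict_patch(padded[i:i + k, j:j + k])
    return bone

def suppress_bones(image, predict_patch):
    """Bone-suppressed image = original minus the bone-extracted image."""
    return image - extract_bone(image, predict_patch)

# Toy stand-in predictor: call everything above a threshold "bone".
toy_predictor = lambda patch: max(patch[1, 1] - 100.0, 0.0)
xray = np.full((5, 5), 80.0)
xray[2, :] = 180.0                     # a bright horizontal "rib"
suppressed = suppress_bones(xray, toy_predictor)
```

After suppression, the bright "rib" row is pulled toward the background while unaltered soft-tissue pixels keep their original values, which mirrors the subtraction step described in the abstract.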
Affiliation(s)
- Naoki Matsubara, Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi, 470-1192, Japan
- Atsushi Teramoto, Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi, 470-1192, Japan
- Kuniaki Saito, Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi, 470-1192, Japan
- Hiroshi Fujita, Department of Electrical, Electronic & Computer Engineering, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu-city, Gifu, 501-1194, Japan
|
13
|
Liu Y, Zhang X, Cai G, Chen Y, Yun Z, Feng Q, Yang W. Automatic delineation of ribs and clavicles in chest radiographs using fully convolutional DenseNets. Computer Methods and Programs in Biomedicine 2019; 180:105014. [PMID: 31430596 DOI: 10.1016/j.cmpb.2019.105014] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/17/2019] [Revised: 08/04/2019] [Accepted: 08/04/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE In chest radiographs (CXRs), all bones and soft tissues overlap with each other, which makes CXRs difficult for radiologists to read and interpret. Delineating the ribs and clavicles helps suppress them in chest radiographs so that their effect on chest radiography analysis can be reduced. However, delineating ribs and clavicles automatically is difficult for methods without deep learning models. Moreover, few such methods can delineate the anterior ribs effectively because of their faint edges in posterior-anterior (PA) CXRs. METHODS In this work, we present an effective deep learning method for automatically delineating posterior ribs, anterior ribs, and clavicles, using a fully convolutional DenseNet (FC-DenseNet) as the pixel classifier. We use a pixel-weighted loss function to mitigate the uncertainty of manual delineation and obtain robust predictions. RESULTS We conducted a comparative analysis against two other fully convolutional networks for edge detection and against the state-of-the-art method without deep learning models. The proposed method significantly outperforms these methods in terms of quantitative evaluation metrics and visual perception. On the test dataset, the proposed method achieved an average recall, precision, and F-measure of 0.773 ± 0.030, 0.861 ± 0.043, and 0.814 ± 0.023, respectively, with a mean boundary distance (MBD) of 0.855 ± 0.642 pixels. The proposed method also performs well on the JSRT and NIH Chest X-ray datasets, indicating its generalizability across multiple databases. In addition, a preliminary result of suppressing the bone components of CXRs was produced using our delineation system. CONCLUSIONS The proposed method can automatically delineate ribs and clavicles in CXRs and produce accurate edge maps.
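The pixel-weighted loss mentioned in the abstract can be illustrated with a weighted binary cross-entropy: pixels whose manual delineation is uncertain are down-weighted so they contribute less to training. This is a sketch of the general idea only; the paper's exact weighting scheme may differ, and the `pred`/`target`/weight arrays below are toy values.

```python
import numpy as np

def weighted_bce(pred, target, weights, eps=1e-7):
    """Pixel-weighted binary cross-entropy, normalized by the weight sum.
    `weights` lets uncertain (e.g. ambiguously delineated) pixels count less."""
    pred = np.clip(pred, eps, 1 - eps)       # avoid log(0)
    loss = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return float((weights * loss).sum() / weights.sum())

# Toy 2x2 edge-probability map vs. ground-truth edge labels.
pred = np.array([[0.9, 0.2], [0.6, 0.4]])
target = np.array([[1.0, 0.0], [1.0, 0.0]])
certain = np.ones_like(pred)                    # uniform weights
uncertain = np.array([[1.0, 1.0], [0.1, 0.1]])  # down-weight the bottom row

loss_uniform = weighted_bce(pred, target, certain)
loss_weighted = weighted_bce(pred, target, uncertain)
```

Here the bottom row has the poorer predictions, so down-weighting it lowers the loss; during training this keeps ambiguous boundary pixels from dominating the gradient.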
Affiliation(s)
- Yunbi Liu, School of Biomedical Engineering, Southern Medical University, 1023-1063 Shatai South Road, Baiyun District, 510515, Guangzhou, China
- Xiao Zhang, School of Biomedical Engineering, Southern Medical University, 1023-1063 Shatai South Road, Baiyun District, 510515, Guangzhou, China
- Guangwei Cai, School of Biomedical Engineering, Southern Medical University, 1023-1063 Shatai South Road, Baiyun District, 510515, Guangzhou, China
- Yingyin Chen, School of Biomedical Engineering, Southern Medical University, 1023-1063 Shatai South Road, Baiyun District, 510515, Guangzhou, China
- Zhaoqiang Yun, School of Biomedical Engineering, Southern Medical University, 1023-1063 Shatai South Road, Baiyun District, 510515, Guangzhou, China
- Qianjin Feng, School of Biomedical Engineering, Southern Medical University, 1023-1063 Shatai South Road, Baiyun District, 510515, Guangzhou, China
- Wei Yang, School of Biomedical Engineering, Southern Medical University, 1023-1063 Shatai South Road, Baiyun District, 510515, Guangzhou, China
|