1
Hanaoka S, Nomura Y, Yoshikawa T, Nakao T, Takenaga T, Matsuzaki H, Yamamichi N, Abe O. Detection of pulmonary nodules in chest radiographs: novel cost function for effective network training with purely synthesized datasets. Int J Comput Assist Radiol Surg 2024; 19:1991-2000. [PMID: 39003437] [PMCID: PMC11442563] [DOI: 10.1007/s11548-024-03227-7]
Abstract
PURPOSE Many large radiographic datasets of lung nodules are available, but small, hard-to-detect nodules are rarely validated by computed tomography. Such difficult nodules are crucial for training nodule detection methods, and their scarcity can be addressed by artificial nodule synthesis algorithms, which create artificially embedded nodules. This study aimed to develop and evaluate a novel cost function for training networks to detect such lesions. Embedding artificial lesions in healthy medical images is effective when positive cases are insufficient for network training. Although this approach provides both positive (lesion-embedded) images and the corresponding negative (lesion-free) images, no known method effectively uses these pairs for training. This paper presents a novel cost function for segmentation-based detection networks when positive-negative pairs are available. METHODS Based on the classic U-Net, new terms were added to the original Dice loss to reduce false positives and to enable contrastive learning of diseased regions in the image pairs. The experimental network was trained on 131,072 fully synthesized pairs of images simulating lung cancer and evaluated on real chest X-ray images from the Japanese Society of Radiological Technology dataset. RESULTS The proposed method outperformed RetinaNet and a single-shot multibox detector. At 0.2 false positives per image, the sensitivities were 0.688 with fine-tuning and 0.507 without, under the leave-one-case-out setting. CONCLUSION To our knowledge, this is the first study in which a method for detecting pulmonary nodules in chest X-ray images was evaluated on a real clinical dataset after being trained on fully synthesized images. The synthesized dataset is available at https://zenodo.org/records/10648433.
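The abstract describes a Dice loss extended with a false-positive term and a contrastive term over (lesion-embedded, lesion-free) image pairs. The NumPy sketch below illustrates one plausible form of such a paired loss; the term weights `w_fp`, `w_con` and the exact contrastive formulation are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary mask."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def paired_lesion_loss(pred_pos, pred_neg, target, w_fp=1.0, w_con=1.0, eps=1e-6):
    """Illustrative combined loss for a (lesion-embedded, lesion-free) image pair.

    dice_term: ordinary segmentation loss on the lesion-embedded image
    fp_term:   mean response on the lesion-free twin (all of it is spurious)
    con_term:  encourages the two predictions to differ inside the lesion mask
    """
    dice_term = dice_loss(pred_pos, target, eps)
    fp_term = float(np.mean(pred_neg))
    diff = np.abs(pred_pos - pred_neg)
    con_term = 1.0 - float(np.sum(diff * target) / (np.sum(target) + eps))
    return dice_term + w_fp * fp_term + w_con * con_term
```

A perfect prediction pair (lesion segmented on the positive image, silence on the negative one) drives all three terms toward zero, while any response on the lesion-free twin is penalized directly.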
Affiliation(s)
- Shouhei Hanaoka
- Department of Radiology, University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Yukihiro Nomura
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, Japan
- Department of Computational Diagnostic Radiology and Preventive Medicine, University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Takahiro Nakao
- Department of Computational Diagnostic Radiology and Preventive Medicine, University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Tomomi Takenaga
- Department of Radiology, University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Hirotaka Matsuzaki
- Center for Epidemiology and Preventive Medicine, University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Nobutake Yamamichi
- Center for Epidemiology and Preventive Medicine, University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Osamu Abe
- Department of Radiology, University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
2
Morís DI, Moura JD, Novo J, Ortega M. Adapted generative latent diffusion models for accurate pathological analysis in chest X-ray images. Med Biol Eng Comput 2024; 62:2189-2212. [PMID: 38499946] [PMCID: PMC11190015] [DOI: 10.1007/s11517-024-03056-5]
Abstract
Respiratory diseases have a significant global impact, and assessing these conditions is crucial for improving patient outcomes. Chest X-ray is widely used for diagnosis, but expert evaluation can be challenging. Automatic computer-aided diagnosis methods can support clinicians in these tasks. Deep learning has emerged as a set of algorithms with exceptional potential for such tasks; however, these algorithms require vast amounts of data, which are often scarce in medical imaging domains. In this work, a new data augmentation methodology based on adapted generative latent diffusion models is proposed to improve the performance of automatic pathological screening in two high-impact scenarios: tuberculosis and lung nodules. The methodology is evaluated using three publicly available datasets representative of real-world settings. An ablation study identified the highest-performing image generation configuration with respect to the number of training steps. The results demonstrate that the generated images can improve screening of these two highly relevant pathologies, obtaining accuracies of 97.09% and 92.14% on the two tuberculosis screening datasets, respectively, and 82.19% for lung nodules. The proposal notably improves on previous image generation methods for data augmentation, highlighting the importance of the contribution to these critical public health challenges.
Affiliation(s)
- Daniel I Morís
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
- Joaquim de Moura
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
- Jorge Novo
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
- Marcos Ortega
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
3
Woodworth CF, Frota Lima LM, Bartholmai BJ, Koo CW. Imaging of Solid Pulmonary Nodules. Clin Chest Med 2024; 45:249-261. [PMID: 38816086] [DOI: 10.1016/j.ccm.2023.08.013]
Abstract
Early detection with accurate classification of solid pulmonary nodules is critical in reducing lung cancer morbidity and mortality. Computed tomography (CT) remains the most widely used imaging examination for pulmonary nodule evaluation; however, other imaging modalities, such as PET/CT and MRI, are increasingly used for nodule characterization. Current advances in solid nodule imaging are largely due to developments in machine learning, including automated nodule segmentation and computer-aided detection. This review explores current multi-modality solid pulmonary nodule detection and characterization with discussion of radiomics and risk prediction models.
Affiliation(s)
- Claire F Woodworth
- Department of Radiology, Mayo Clinic, 200 First Street Southwest, Rochester, MN 55905, USA
- Livia Maria Frota Lima
- Department of Radiology, Mayo Clinic, 200 First Street Southwest, Rochester, MN 55905, USA
- Brian J Bartholmai
- Department of Radiology, Mayo Clinic, 200 First Street Southwest, Rochester, MN 55905, USA
- Chi Wan Koo
- Department of Radiology, Mayo Clinic, 200 First Street Southwest, Rochester, MN 55905, USA
4
Wu D, Smith D, VanBerlo B, Roshankar A, Lee H, Li B, Ali F, Rahman M, Basmaji J, Tschirhart J, Ford A, VanBerlo B, Durvasula A, Vannelli C, Dave C, Deglint J, Ho J, Chaudhary R, Clausdorff H, Prager R, Millington S, Shah S, Buchanan B, Arntfield R. Improving the Generalizability and Performance of an Ultrasound Deep Learning Model Using Limited Multicenter Data for Lung Sliding Artifact Identification. Diagnostics (Basel) 2024; 14:1081. [PMID: 38893608] [PMCID: PMC11172006] [DOI: 10.3390/diagnostics14111081]
Abstract
Deep learning (DL) models for medical image classification frequently struggle to generalize to data from outside institutions. Additional clinical data are also rarely collected to comprehensively assess and understand model performance amongst subgroups. Following the development of a single-center model to identify the lung sliding artifact on lung ultrasound (LUS), we pursued a validation strategy using external LUS data. As annotated LUS data are relatively scarce compared to other medical imaging data, we adopted a novel technique to optimize the use of limited external data to improve model generalizability. Externally acquired LUS data from three tertiary care centers, totaling 641 clips from 238 patients, were used to assess the baseline generalizability of our lung sliding model. We then employed our novel Threshold-Aware Accumulative Fine-Tuning (TAAFT) method to fine-tune the baseline model and determine the minimum amount of data required to achieve predefined performance goals. A subgroup analysis was also performed and Grad-CAM++ explanations were examined. The final model was fine-tuned on one-third of the external dataset to achieve 0.917 sensitivity, 0.817 specificity, and 0.920 area under the receiver operating characteristic curve (AUC) on the external validation dataset, exceeding our predefined performance goals. Subgroup analyses identified the LUS characteristics that most challenged the model's performance. Grad-CAM++ saliency maps highlighted clinically relevant regions on M-mode images. We report a multicenter study that exploits limited available external data to improve the generalizability and performance of our lung sliding model while identifying poorly performing subgroups to inform future iterative improvements. This approach may contribute to efficiencies for DL researchers working with smaller quantities of external validation data.
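The threshold-aware accumulative fine-tuning described above grows the external training subset until predefined performance goals are met. A minimal sketch of such a loop follows; the `fine_tune`/`evaluate` interface, the fraction schedule, and all names are assumptions for illustration, not the authors' TAAFT code.

```python
import random

def accumulative_fine_tune(clips, evaluate, fine_tune, goals,
                           fractions=(1/3, 2/3, 1.0), seed=0):
    """Fine-tune on growing fractions of external data until goals are met.

    clips:     externally acquired training examples
    fine_tune: callable mapping a data subset to a fine-tuned model
    evaluate:  callable mapping a model to a dict of metrics
    goals:     minimum acceptable value for each metric of interest
    Returns (model, fraction_used) for the first model meeting every goal,
    or the model trained on the full set if none does.
    """
    rng = random.Random(seed)
    shuffled = list(clips)
    rng.shuffle(shuffled)
    model, used = None, 0.0
    for frac in fractions:
        subset = shuffled[:round(len(shuffled) * frac)]
        model, used = fine_tune(subset), frac
        metrics = evaluate(model)
        if all(metrics.get(name, 0.0) >= target for name, target in goals.items()):
            break  # predefined performance goals reached
    return model, used
```

The early stop is what lets the abstract report a minimum data requirement (one-third of the external set) rather than always consuming all external data.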
Affiliation(s)
- Derek Wu
- Department of Medicine, Western University, London, ON N6A 5C1, Canada
- Delaney Smith
- Faculty of Mathematics, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Blake VanBerlo
- Faculty of Mathematics, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Amir Roshankar
- Faculty of Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Hoseok Lee
- Faculty of Mathematics, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Brian Li
- Faculty of Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Faraz Ali
- Faculty of Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Marwan Rahman
- Faculty of Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- John Basmaji
- Division of Critical Care Medicine, Western University, London, ON N6A 5C1, Canada
- Jared Tschirhart
- Schulich School of Medicine and Dentistry, Western University, London, ON N6A 5C1, Canada
- Alex Ford
- Independent Researcher, London, ON N6A 1L8, Canada
- Bennett VanBerlo
- Faculty of Engineering, Western University, London, ON N6A 5C1, Canada
- Ashritha Durvasula
- Schulich School of Medicine and Dentistry, Western University, London, ON N6A 5C1, Canada
- Claire Vannelli
- Schulich School of Medicine and Dentistry, Western University, London, ON N6A 5C1, Canada
- Chintan Dave
- Division of Critical Care Medicine, Western University, London, ON N6A 5C1, Canada
- Jason Deglint
- Faculty of Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Jordan Ho
- Department of Family Medicine, Western University, London, ON N6A 5C1, Canada
- Rushil Chaudhary
- Department of Medicine, Western University, London, ON N6A 5C1, Canada
- Hans Clausdorff
- Departamento de Medicina de Urgencia, Pontificia Universidad Católica de Chile, Santiago 8331150, Chile
- Ross Prager
- Division of Critical Care Medicine, Western University, London, ON N6A 5C1, Canada
- Scott Millington
- Department of Critical Care Medicine, University of Ottawa, Ottawa, ON K1N 6N5, Canada
- Samveg Shah
- Department of Medicine, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Brian Buchanan
- Department of Critical Care, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Robert Arntfield
- Division of Critical Care Medicine, Western University, London, ON N6A 5C1, Canada
5
Yuan Y, Liu L, Yang X, Liu L, Huang Q. Multi-scale Lesion Feature Fusion and Location-Aware for Chest Multi-disease Detection. J Imaging Inform Med 2024. [PMID: 38760643] [DOI: 10.1007/s10278-024-01133-7]
Abstract
Accurately identifying and locating lesions in chest X-rays has the potential to significantly enhance diagnostic efficiency, quality, and interpretability. However, current methods primarily focus on detecting specific diseases in chest X-rays, disregarding the presence of multiple diseases in a single scan. Moreover, the diversity in lesion locations and attributes introduces complexity in accurately discerning the specific traits of each lesion, reducing accuracy when detecting multiple diseases. To address these issues, we propose a novel detection framework that enhances multi-scale lesion feature extraction and fusion, improving lesion position perception and thereby boosting chest multi-disease detection performance. We first construct a multi-scale lesion feature extraction network to handle the uniqueness of various lesion features and locations, strengthening the global semantic correlation between lesion features and their positions. We then introduce an instance-aware semantic enhancement network that dynamically fuses instance-specific features with high-level semantic representations across scales; this adaptive integration mitigates the loss of detailed information within lesion regions. Additionally, we perform lesion region feature mapping using candidate boxes to preserve crucial positional information, enhancing the accuracy of chest disease detection across multiple scales. Experimental results on the VinDr-CXR dataset show a 6% increase in mean average precision (mAP) and an 8.4% improvement in mean recall (mR) over state-of-the-art baselines, demonstrating the model's effectiveness in accurately detecting multiple chest diseases by capturing specific features and location information.
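The reported mAP and mean recall rest on matching predicted boxes to ground-truth lesions by intersection-over-union (IoU). A minimal sketch of IoU and recall at a fixed IoU threshold follows; these are the standard definitions, and `recall_at_iou` is an illustrative helper, not code from the paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def recall_at_iou(gt_boxes, det_boxes, thr=0.5):
    """Fraction of ground-truth lesions matched by at least one detection."""
    if not gt_boxes:
        return 0.0
    hits = sum(1 for g in gt_boxes if any(iou(g, d) >= thr for d in det_boxes))
    return hits / len(gt_boxes)
```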
Affiliation(s)
- Yubo Yuan
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, 650500, China
- Lijun Liu
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, 650500, China
- Key Laboratory of Application in Computer Technology in Yunnan Province, Kunming, 650500, China
- Xiaobing Yang
- Department of State-Owned Assets and Laboratory Management, Kunming University of Science and Technology, Kunming, 650500, China
- Li Liu
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, 650500, China
- Qingsong Huang
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, 650500, China
- Department of State-Owned Assets and Laboratory Management, Kunming University of Science and Technology, Kunming, 650500, China
6
Liu C, Wu Z, Wang B, Zhu M. Pulmonary nodule detection in x-ray images by feature augmentation and context aggregation. Phys Med Biol 2024; 69:045002. [PMID: 38237183] [DOI: 10.1088/1361-6560/ad2013]
Abstract
Recent developments in x-ray image based pulmonary nodule detection have achieved remarkable results. However, existing methods focus on transferring off-the-shelf coarse-grained classification models and fine-grained detection models rather than developing a dedicated framework optimized for nodule detection. In this paper, we propose PN-DetX, which, to our knowledge, is the first dedicated pulmonary nodule detection framework. PN-DetX incorporates feature fusion and self-attention into x-ray based pulmonary nodule detection, achieving improved detection performance. Specifically, PN-DetX adopts a CSPDarknet backbone to extract features and utilizes a feature augmentation module to fuse features from different levels, followed by a context aggregation module to aggregate semantic information. To evaluate the efficacy of our method, we collect a LArge-scale Pulmonary NOdule Detection dataset, LAPNOD, comprising 2954 x-ray images along with expert-annotated ground truths. To our knowledge, this is the first large-scale chest x-ray pulmonary nodule detection dataset. Experiments demonstrate that our method outperforms the baseline by 3.8% mAP and 5.1% AP0.5. The generality of our approach is also evaluated on the publicly available NODE21 dataset. We hope our method serves as an inspiration for future research in pulmonary nodule detection. The dataset and code will be made public.
Affiliation(s)
- Chenglin Liu
- Department of Automation, University of Science and Technology of China, Hefei, People's Republic of China
- Zhi Wu
- School of Cyber Science and Technology, University of Science and Technology of China, Hefei, People's Republic of China
- Binquan Wang
- School of Cyber Science and Technology, University of Science and Technology of China, Hefei, People's Republic of China
- Ming Zhu
- Department of Automation, University of Science and Technology of China, Hefei, People's Republic of China
7
Srivastava D, Srivastava SK, Khan SB, Singh HR, Maakar SK, Agarwal AK, Malibari AA, Albalawi E. Early Detection of Lung Nodules Using a Revolutionized Deep Learning Model. Diagnostics (Basel) 2023; 13:3485. [PMID: 37998620] [PMCID: PMC10669960] [DOI: 10.3390/diagnostics13223485]
Abstract
According to the WHO (World Health Organization), lung cancer is the leading cause of cancer deaths globally. In 2020, more than 2.2 million people were diagnosed with lung cancer worldwide, accounting for 11.4% of all new cancer cases, and lung cancer was the biggest driver of cancer-related mortality, with an estimated 1.8 million fatalities. Statistics on lung cancer rates are not uniform among geographic areas, demographic subgroups, or age groups. The chance of an effective treatment outcome and the likelihood of patient survival can be greatly improved with the early identification of lung cancer. Lung cancer identification in medical images such as CT scans and MRIs is an area where deep learning (DL) algorithms have shown great potential. This study uses the Hybridized Faster R-CNN (HFRCNN) to identify lung cancer at an early stage. Faster R-CNN has been put to good use in identifying critical structures in medical imagery, such as MRIs and CT scans, and many research investigations in recent years have examined techniques for detecting lung nodules (possible indicators of lung cancer) in scanned images, which may help in the early identification of lung cancer. HFRCNN is a two-stage, region-based detector: it begins by generating a collection of proposed regions, which are subsequently classified and refined with the aid of a convolutional neural network (CNN). A distinct dataset is used in the model's training process, producing valuable outcomes. More than 97% detection accuracy was achieved with the proposed model, making it far more accurate than several previously reported methods.
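Region-based detectors such as the HFRCNN described above generate many overlapping candidate regions, which are typically pruned with greedy non-maximum suppression before the final output. A generic sketch of that step follows; this is the standard algorithm, not the paper's code, and `box_iou` is an illustrative helper.

```python
def box_iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, scores, iou_thr=0.5):
    """Greedy NMS: keep the highest-scoring box, drop others overlapping it."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(box_iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep
```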
Affiliation(s)
- Durgesh Srivastava
- Department of Computer Science and Engineering, Sharda School of Engineering and Technology, Sharda University, Greater Noida 201310, India
- Chitkara Institute of Engineering and Technology, Chitkara University, Punjab 140601, India
- Surbhi Bhatia Khan
- Department of Data Science, School of Science Engineering and Environment, University of Salford, Manchester M54WT, UK
- Department of Engineering and Environment, University of Religions and Denominations, Qom 37491-13357, Iran
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 13-5053, Lebanon
- Hare Ram Singh
- Department of Computer Science & Engineering, GNIOT, Greater Noida 201310, India
- Sunil K. Maakar
- School of Computing Science & Engineering, Galgotias University, Greater Noida 203201, India
- Ambuj Kumar Agarwal
- Department of Computer Science and Engineering, Sharda School of Engineering and Technology, Sharda University, Greater Noida 201310, India
- Areej A. Malibari
- Department of Industrial and Systems Engineering, College of Engineering, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Eid Albalawi
- Department of Computer Science, College of Computer Science and Information Technology, King Faisal University, Al Hofuf 36362, Saudi Arabia
8
Hong GS, Jang M, Kyung S, Cho K, Jeong J, Lee GY, Shin K, Kim KD, Ryu SM, Seo JB, Lee SM, Kim N. Overcoming the Challenges in the Development and Implementation of Artificial Intelligence in Radiology: A Comprehensive Review of Solutions Beyond Supervised Learning. Korean J Radiol 2023; 24:1061-1080. [PMID: 37724586] [PMCID: PMC10613849] [DOI: 10.3348/kjr.2023.0393]
Abstract
Artificial intelligence (AI) in radiology is a rapidly developing field with several prospective clinical studies demonstrating its benefits in clinical practice. In 2022, the Korean Society of Radiology held a forum to discuss the challenges and drawbacks in AI development and implementation. Various barriers hinder the successful application and widespread adoption of AI in radiology, such as limited annotated data, data privacy and security, data heterogeneity, imbalanced data, model interpretability, overfitting, and integration with clinical workflows. In this review, possible solutions to these challenges are presented and discussed; these include training with longitudinal and multimodal datasets, dense training with multitask learning and multimodal learning, self-supervised contrastive learning, various image modifications and syntheses using generative models, explainable AI, causal learning, federated learning with large data models, and digital twins.
Affiliation(s)
- Gil-Sun Hong
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Miso Jang
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sunggu Kyung
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Kyungjin Cho
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jiheon Jeong
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Grace Yoojin Lee
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Keewon Shin
- Laboratory for Biosignal Analysis and Perioperative Outcome Research, Biomedical Engineering Center, Asan Institute of Lifesciences, Asan Medical Center, Seoul, Republic of Korea
- Ki Duk Kim
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Seung Min Ryu
- Department of Orthopedic Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Joon Beom Seo
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sang Min Lee
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Namkug Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
9
Yanagawa M, Ito R, Nozaki T, Fujioka T, Yamada A, Fujita S, Kamagata K, Fushimi Y, Tsuboyama T, Matsui Y, Tatsugami F, Kawamura M, Ueda D, Fujima N, Nakaura T, Hirata K, Naganawa S. New trend in artificial intelligence-based assistive technology for thoracic imaging. Radiol Med 2023; 128:1236-1249. [PMID: 37639191] [PMCID: PMC10547663] [DOI: 10.1007/s11547-023-01691-w]
Abstract
Although there is no solid agreement on a definition of artificial intelligence (AI), the term refers to a computer system with intelligence similar to that of humans. Deep learning appeared in 2006, and more than 10 years have passed since the third AI boom was triggered by improvements in computing power, algorithm development, and the use of big data. In recent years, the application and development of AI technology in the medical field have intensified internationally. There is no doubt that AI will be used in clinical practice to assist diagnostic imaging in the future. In qualitative diagnosis, it is desirable to develop explainable AI that at least represents the basis of the diagnostic process. However, it must be kept in mind that AI is a physician-assistant system, and the final decision should be made by the physician with an understanding of AI's limitations. The aim of this article is to review applications of AI technology in diagnostic imaging, drawn from the PubMed database, with a particular focus on thoracic imaging such as lesion detection and qualitative diagnosis, in order to help radiologists and clinicians become more familiar with AI in the thorax.
Affiliation(s)
- Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita-City, Osaka, 565-0871, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-0016, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8519, Japan
- Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, 3-1-1 Asahi, Matsumoto, Nagano, 390-2621, Japan
- Shohei Fujita
- Department of Radiology, Graduate School of Medicine and Faculty of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo, 113-8421, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawaharacho, Sakyo-ku, Kyoto, 606-8507, Japan
- Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita-City, Osaka, 565-0871, Japan
- Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, 2-5-1 Shikata-cho, Kita-ku, Okayama, 700-8558, Japan
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-Machi, Abeno-ku, Osaka, 545-8585, Japan
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N15, W5, Kita-ku, Sapporo, 060-8638, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, 1-1-1 Honjo Chuo-ku, Kumamoto, 860-8556, Japan
- Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Kita 15, Nishi 7, Kita-ku, Sapporo, Hokkaido, 060-8648, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
10
Arslan M, Haider A, Khurshid M, Abu Bakar SSU, Jani R, Masood F, Tahir T, Mitchell K, Panchagnula S, Mandair S. From Pixels to Pathology: Employing Computer Vision to Decode Chest Diseases in Medical Images. Cureus 2023; 15:e45587. [PMID: 37868395] [PMCID: PMC10587792] [DOI: 10.7759/cureus.45587]
Abstract
Radiology has been a pioneer in the healthcare industry's digital transformation, incorporating digital imaging systems like picture archiving and communication system (PACS) and teleradiology over the past thirty years. This shift has reshaped radiology services, positioning the field at a crucial junction for potential evolution into an integrated diagnostic service through artificial intelligence and machine learning. These technologies offer advanced tools for radiology's transformation. The radiology community has advanced computer-aided diagnosis (CAD) tools using machine learning techniques, notably deep learning convolutional neural networks (CNNs), for medical image pattern recognition. However, the integration of CAD tools into clinical practice has been hindered by challenges in workflow integration, unclear business models, and limited clinical benefits, despite development dating back to the 1990s. This comprehensive review focuses on detecting chest-related diseases through techniques like chest X-rays (CXRs), magnetic resonance imaging (MRI), nuclear medicine, and computed tomography (CT) scans. It examines the utilization of computer-aided programs by researchers for disease detection, addressing key areas: the role of computer-aided programs in disease detection advancement, recent developments in MRI, CXR, radioactive tracers, and CT scans for chest disease identification, research gaps for more effective development, and the incorporation of machine learning programs into diagnostic tools.
Affiliation(s)
- Muhammad Arslan
- Department of Emergency Medicine, Royal Infirmary of Edinburgh, National Health Service (NHS) Lothian, Edinburgh, GBR
- Ali Haider
- Department of Allied Health Sciences, The University of Lahore, Gujrat Campus, Gujrat, PAK
- Mohsin Khurshid
- Department of Microbiology, Government College University Faisalabad, Faisalabad, PAK
- Rutva Jani
- Department of Internal Medicine, C. U. Shah Medical College and Hospital, Gujarat, IND
- Fatima Masood
- Department of Internal Medicine, Gulf Medical University, Ajman, ARE
- Tuba Tahir
- Department of Business Administration, Iqra University, Karachi, PAK
- Kyle Mitchell
- Department of Internal Medicine, University of Science, Arts and Technology, Olveston, MSR
- Smruthi Panchagnula
- Department of Internal Medicine, Ganni Subbalakshmi Lakshmi (GSL) Medical College, Hyderabad, IND
- Satpreet Mandair
- Department of Internal Medicine, Medical University of the Americas, Charlestown, KNA

11
Behrendt F, Bengs M, Bhattacharya D, Krüger J, Opfer R, Schlaefer A. A systematic approach to deep learning-based nodule detection in chest radiographs. Sci Rep 2023; 13:10120. [PMID: 37344565 DOI: 10.1038/s41598-023-37270-2]
Abstract
Lung cancer is a serious disease responsible for millions of deaths every year. Early stages of lung cancer can manifest as pulmonary lung nodules. To assist radiologists in reducing the number of overlooked nodules and to increase detection accuracy in general, automatic detection algorithms have been proposed. Deep learning methods in particular are promising. However, obtaining clinically relevant results remains challenging. While a variety of approaches have been proposed for general-purpose object detection, these are typically evaluated on benchmark datasets. Achieving competitive performance for specific real-world problems like lung nodule detection typically requires careful analysis of the problem at hand and the selection and tuning of suitable deep learning models. We present a systematic comparison of state-of-the-art object detection algorithms for the task of lung nodule detection. In this regard, we address the critical aspect of class imbalance and demonstrate a data augmentation approach as well as transfer learning to boost performance. We illustrate how this analysis and a combination of multiple architectures result in state-of-the-art performance for lung nodule detection, as demonstrated by the proposed model winning the detection track of the Node21 competition. The code for our approach is available at https://github.com/FinnBehrendt/node21-submit.
Affiliation(s)
- Finn Behrendt
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, 21073, Hamburg, Germany
- Marcel Bengs
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, 21073, Hamburg, Germany
- Debayan Bhattacharya
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, 21073, Hamburg, Germany
- Alexander Schlaefer
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, 21073, Hamburg, Germany

12
Mary Jaya VJ, Krishnakumar S. Multi-classification approach for lung nodule detection and classification with proposed texture feature in X-ray images. Multimedia Tools and Applications 2023:1-28. [PMID: 37362672 PMCID: PMC10188326 DOI: 10.1007/s11042-023-15281-5]
Abstract
Lung cancer is a widespread type of cancer around the world and, moreover, a lethal type of tumor. Nevertheless, analysis signifies that earlier recognition of lung cancer considerably improves the chances of survival. By deploying X-rays and Computed Tomography (CT) scans, radiologists can identify hazardous nodules at an earlier period. However, as more people adopt these diagnoses, the workload rises for radiologists. Computer Assisted Diagnosis (CAD)-based detection systems can identify these nodules automatically and could assist radiologists in reducing their workloads. However, they suffer from lower sensitivity and a higher count of false positives. The proposed work introduces a new approach for Lung Nodule (LN) detection. At first, Histogram Equalization (HE) is done during pre-processing. As the next step, improved Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH)-based segmentation is done. Then, the characteristics, including "Gray Level Run-Length Matrix (GLRM), Gray Level Co-Occurrence Matrix (GLCM), and the proposed Local Vector Pattern (LVP)," are retrieved. These features are then categorized utilizing an optimized Convolutional Neural Network (CNN), which detects nodule or non-nodule images. Subsequently, Long Short-Term Memory (LSTM) is deployed to categorize nodule types (benign, malignant, or normal). The CNN weights are fine-tuned by the Chaotic Population-based Beetle Swarm Algorithm (CP-BSA). Finally, the superiority of the proposed approach is confirmed across various measures: it exhibited a high precision value of 0.9575 in the best-case scenario and a high sensitivity value of 0.9646 in the mean-case scenario.
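The GLCM named in this abstract is a standard texture descriptor. As a rough illustration (not the authors' implementation), a gray-level co-occurrence matrix for a one-pixel horizontal offset can be computed as:

```python
# Illustrative sketch: GLCM counts for right-neighbor pixel pairs.
def glcm_horizontal(image, levels):
    """Count co-occurrences of gray levels (i, j) for horizontally adjacent pixels."""
    glcm = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            glcm[a][b] += 1
    return glcm

img = [
    [0, 0, 1],
    [1, 2, 2],
    [2, 2, 0],
]
print(glcm_horizontal(img, 3))  # → [[1, 1, 0], [0, 0, 1], [1, 0, 2]]
```

Each cell (i, j) counts how often gray level j appears immediately to the right of level i; texture statistics such as contrast or homogeneity are then derived from this matrix.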
Affiliation(s)
- Mary Jaya VJ
- Department of Computer Science, Assumption Autonomous College, Changanassery, Kerala, India
- Krishnakumar S
- Department of Electronics, School of Technology and Applied Sciences, Mahatma Gandhi University Research Centre, Kochi, Kerala, India

13
Juan J, Monsó E, Lozano C, Cufí M, Subías-Beltrán P, Ruiz-Dern L, Rafael-Palou X, Andreu M, Castañer E, Gallardo X, Ullastres A, Sans C, Lujàn M, Rubiés C, Ribas-Ripoll V. Computer-assisted diagnosis for an early identification of lung cancer in chest X rays. Sci Rep 2023; 13:7720. [PMID: 37173327 PMCID: PMC10182094 DOI: 10.1038/s41598-023-34835-z]
Abstract
Computer-assisted diagnosis (CAD) algorithms have shown their usefulness for the identification of pulmonary nodules in chest x-rays, but their capability to diagnose lung cancer (LC) is unknown. A CAD algorithm for the identification of pulmonary nodules was created and used on a retrospective cohort of patients with x-rays performed in 2008 and not examined by a radiologist when obtained. X-rays were sorted according to the probability of pulmonary nodule, read by a radiologist, and the evolution over the following three years was assessed. The CAD algorithm sorted 20,303 x-rays and defined four subgroups with 250 images each (percentiles ≥ 98, 66, 33 and 0). Fifty-eight pulmonary nodules were identified in the ≥ 98 percentile (23.2%), while only 64 were found in lower percentiles (8.5%) (p < 0.001). A pulmonary nodule was confirmed by the radiologist in 39 out of 173 patients in the high-probability group who had follow-up information (22.5%), and in 5 of them an LC was diagnosed with a delay of 11 months (12.8%). In one quarter of the chest x-rays considered high-probability for pulmonary nodule by a CAD algorithm, the finding is confirmed and corresponds to an undiagnosed LC in one tenth of the cases.
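The percentile-based triage described here is straightforward to sketch. The function below is an illustrative assumption about how such ranking might look, not the study's code:

```python
# Hedged sketch: rank images by CAD nodule probability and keep the
# slice at or above a chosen percentile (the study used >= 98).
def top_percentile(scores, pct):
    """Return indices of scores at or above the given percentile (0-100)."""
    ranked = sorted(scores)
    cutoff = ranked[min(len(ranked) - 1, int(len(ranked) * pct / 100))]
    return [i for i, s in enumerate(scores) if s >= cutoff]

scores = [0.05, 0.90, 0.10, 0.99, 0.20]
print(top_percentile(scores, 80))  # indices of the top slice of scores
```

Images in the top slice would then be prioritized for radiologist review, mirroring the abstract's high-probability subgroup.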
Affiliation(s)
- Judith Juan
- Innovation Department, Institut d'Investigació i Innovació Parc Taulí (I3PT), Sabadell, Spain
- Eduard Monsó
- Airway Inflammation Research Group, Institut d'Investigació i Innovació Parc Taulí (I3PT), Parc Taulí 1, 08208, Sabadell, Spain
- Carme Lozano
- Diagnostic Imaging Department, Parc Taulí Hospital Universitari, Institut d'Investigació i Innovació Parc Taulí (I3PT), Sabadell, Spain
- Marta Cufí
- Diagnostic Imaging Department, Parc Taulí Hospital Universitari, Institut d'Investigació i Innovació Parc Taulí (I3PT), Sabadell, Spain
- Marta Andreu
- Diagnostic Imaging Department, Parc Taulí Hospital Universitari, Institut d'Investigació i Innovació Parc Taulí (I3PT), Sabadell, Spain
- Eva Castañer
- Diagnostic Imaging Department, Parc Taulí Hospital Universitari, Institut d'Investigació i Innovació Parc Taulí (I3PT), Sabadell, Spain
- Xavier Gallardo
- Diagnostic Imaging Department, Parc Taulí Hospital Universitari, Institut d'Investigació i Innovació Parc Taulí (I3PT), Sabadell, Spain
- Anna Ullastres
- Innovation Department, Institut d'Investigació i Innovació Parc Taulí (I3PT), Sabadell, Spain
- Carles Sans
- Eurecat, Centre Tecnològic de Catalunya, Barcelona, Spain
- Manel Lujàn
- Respiratory Diseases Department, Parc Taulí Hospital Universitari, Institut d'Investigació i Innovació Parc Taulí (I3PT), Sabadell, Spain
- Carles Rubiés
- Informatics and Systems Department, Granollers General Hospital, Granollers, Barcelona, Spain

14
Sebastian AE, Dua D. Lung Nodule Detection via Optimized Convolutional Neural Network: Impact of Improved Moth Flame Algorithm. Sensing and Imaging 2023; 24:11. [PMID: 36936054 PMCID: PMC10009866 DOI: 10.1007/s11220-022-00406-1]
Abstract
Lung cancer is a high-risk disease that affects people all over the world, and lung nodules are the most common sign of early lung cancer. Since early identification of lung cancer can considerably improve a patient's chances of survival, an accurate and efficient nodule detection system is essential. Automatic lung nodule recognition decreases radiologists' effort, as well as the risk of misdiagnosis and missed diagnoses. Hence, this article developed a new lung nodule detection model with four stages: "image pre-processing, segmentation, feature extraction, and classification". Pre-processing is the first step, in which the input image is subjected to a series of operations. Then, the "Otsu Thresholding model" is used to segment the pre-processed images. In the third stage, LBP features are extracted and then classified via an optimized Convolutional Neural Network (CNN). Here, the activation function and convolutional layer count of the CNN are optimally tuned via a proposed algorithm known as Improved Moth Flame Optimization (IMFO). Finally, the merit of the scheme is validated through analysis of several measures. In particular, the accuracy of the proposed work is 6.85%, 2.91%, 1.75%, 0.73%, 1.83%, and 4.05% higher than the existing SVM, KNN, CNN, MFO, WTEEB, and GWO + FRVM methods, respectively.
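Otsu thresholding, the segmentation step named above, picks the gray level that maximizes between-class variance. A minimal pure-Python version (real pipelines would use a library such as scikit-image) is:

```python
# Minimal Otsu threshold over a flat list of integer pixel intensities.
def otsu_threshold(pixels, levels=256):
    """Return the threshold t maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0      # background pixel count so far
    sum0 = 0.0  # background intensity sum so far
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A strongly bimodal image separates cleanly at the lower mode.
print(otsu_threshold([10] * 50 + [200] * 50))  # → 10
```

On a chest radiograph, pixels at or below the returned threshold would form one class (e.g. background) and the rest the other.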
Affiliation(s)
- Disha Dua
- Indira Gandhi Delhi Technical University for Women, Delhi, Delhi, India

15
Leveraging Deep Learning Decision-Support System in Specialized Oncology Center: A Multi-Reader Retrospective Study on Detection of Pulmonary Lesions in Chest X-ray Images. Diagnostics (Basel) 2023; 13:1043. [PMID: 36980351 PMCID: PMC10047277 DOI: 10.3390/diagnostics13061043]
Abstract
Chest X-ray (CXR) is considered to be the most widely used modality for detecting and monitoring various thoracic findings, including lung carcinoma and other pulmonary lesions. However, X-ray imaging shows particular limitations when detecting primary and secondary tumors and is prone to reading errors due to limited resolution and disagreement between radiologists. To address these issues, we developed a deep-learning-based automatic detection algorithm (DLAD) to automatically detect and localize suspicious lesions on CXRs. Five radiologists were invited to retrospectively evaluate 300 CXR images from a specialized oncology center, and the performance of individual radiologists was subsequently compared with that of DLAD. The proposed DLAD achieved significantly higher sensitivity (0.910 (0.854–0.966)) than all assessed radiologists (RAD 1: 0.290 (0.201–0.379), p < 0.001; RAD 2: 0.450 (0.352–0.548), p < 0.001; RAD 3: 0.670 (0.578–0.762), p < 0.001; RAD 4: 0.810 (0.733–0.887), p = 0.025; RAD 5: 0.700 (0.610–0.790), p < 0.001). The DLAD specificity (0.775 (0.717–0.833)) was significantly lower than for all assessed radiologists (RAD 1: 1.000 (0.984–1.000), p < 0.001; RAD 2: 0.970 (0.946–1.000), p < 0.001; RAD 3: 0.980 (0.961–1.000), p < 0.001; RAD 4: 0.975 (0.953–0.997), p < 0.001; RAD 5: 0.995 (0.985–1.000), p < 0.001). The study results demonstrate that the proposed DLAD could be utilized as a decision-support system to reduce radiologists' false negative rate.
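For readers unfamiliar with the metrics compared above, sensitivity and specificity follow directly from the 2×2 confusion table. The counts below are invented for illustration and are not the study's data:

```python
# Sensitivity: fraction of true lesions detected (TP / (TP + FN)).
def sensitivity(tp, fn):
    return tp / (tp + fn)

# Specificity: fraction of normals correctly cleared (TN / (TN + FP)).
def specificity(tn, fp):
    return tn / (tn + fp)

# e.g. 91 of 100 lesions found, 155 of 200 normals correctly cleared
print(round(sensitivity(91, 9), 3), round(specificity(155, 45), 3))
```

A reader with high sensitivity but low specificity detects most lesions at the cost of more false alarms, which is exactly the trade-off the abstract reports for the DLAD versus the radiologists.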
16
Anomaly Detection in Chest X-rays Based on Dual-Attention Mechanism and Multi-Scale Feature Fusion. Symmetry (Basel) 2023. [DOI: 10.3390/sym15030668]
Abstract
The efficient and automatic detection of chest abnormalities is vital for the auxiliary diagnosis of medical images. Many studies utilize computer vision and deep learning approaches involving symmetry and asymmetry concepts to detect chest abnormalities, and achieve promising findings. However, an accurate instance-level and multi-label detection of abnormalities in chest X-rays remains a significant challenge. Here, a novel anomaly detection method for symmetric chest X-rays using dual-attention and multi-scale feature fusion is proposed. Three aspects of our method should be noted in comparison with the previous approaches. We improved the deep neural network with channel-dimensional and spatial-dimensional attention to capture the abundant contextual features. We then used an optimized multi-scale learning framework for feature fusion to adapt to the scale variation in the abnormalities. Considering the influence of the data imbalance and other factors, we introduced a seesaw loss function to flexibly adjust the sample weights and enhance the model learning efficiency. The rigorous experimental evaluation of a public chest X-ray dataset with fourteen different types of abnormalities demonstrates that our model has a mean average precision of 0.362 and outperforms existing methods.
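The seesaw loss mentioned here rebalances gradients between frequent and rare classes. The sketch below reproduces only its mitigation factor as we understand the published formulation; the class counts and exponent p are illustrative assumptions, not values from this paper:

```python
# Hedged sketch of the seesaw-loss mitigation factor: gradients that a
# more frequent class i pushes onto a rarer class j are scaled down by
# (N_j / N_i) ** p, while the reverse direction is left at 1.0.
def seesaw_mitigation(counts, p=0.8):
    """Pairwise mitigation factors from per-class sample counts."""
    n = len(counts)
    return [[(counts[j] / counts[i]) ** p if counts[j] < counts[i] else 1.0
             for j in range(n)]
            for i in range(n)]

# With a 10:1 head/tail imbalance and p = 1, the head class's push on
# the tail class is scaled to 0.1; the tail's push on the head stays 1.0.
print(seesaw_mitigation([100, 10], p=1))
```

The full loss also includes a compensation factor driven by misclassification; this sketch covers only the count-based reweighting idea.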
17
Niehoff JH, Kalaitzidis J, Kroeger JR, Schoenbeck D, Borggrefe J, Michael AE. Evaluation of the clinical performance of an AI-based application for the automated analysis of chest X-rays. Sci Rep 2023; 13:3680. [PMID: 36872333 PMCID: PMC9985819 DOI: 10.1038/s41598-023-30521-2]
Abstract
The AI-Rad Companion Chest X-ray (AI-Rad, Siemens Healthineers) is an artificial-intelligence-based application for the analysis of chest X-rays. The purpose of the present study is to evaluate the performance of the AI-Rad. In total, 499 radiographs were retrospectively included. Radiographs were independently evaluated by radiologists and the AI-Rad. Findings indicated by the AI-Rad and findings described in the written report (WR) were compared to the findings of a ground truth reading (consensus decision of two radiologists after assessing additional radiographs and CT scans). The AI-Rad can offer superior sensitivity for the detection of lung lesions (0.83 versus 0.52), consolidations (0.88 versus 0.78) and atelectasis (0.54 versus 0.43) compared to the WR. However, the superior sensitivity is accompanied by higher false-detection rates. The sensitivity of the AI-Rad for the detection of pleural effusions is lower compared to the WR (0.74 versus 0.88). The negative predictive values (NPV) of the AI-Rad for the detection of all pre-defined findings are at a high level and comparable to the WR. The seemingly advantageous high sensitivity of the AI-Rad is partially offset by the disadvantage of a high false-detection rate. At the current stage of development, therefore, the high NPVs may be the greatest benefit of the AI-Rad, giving radiologists the possibility to double-check their own negative search for pathologies and thus boosting their confidence in their reports.
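The negative predictive value emphasized in this abstract depends on disease prevalence as well as on sensitivity and specificity. A small sketch via Bayes' rule (the numbers are illustrative, not the study's):

```python
# NPV from sensitivity, specificity and prevalence:
# P(truly negative | test negative) = TN rate / (TN rate + FN rate).
def npv_from_rates(sens, spec, prevalence):
    tn = spec * (1 - prevalence)        # true-negative probability mass
    fn = (1 - sens) * prevalence        # false-negative probability mass
    return tn / (tn + fn)

# At low prevalence, even a moderately sensitive test yields a high NPV.
print(round(npv_from_rates(0.83, 0.90, 0.05), 3))
```

This is why NPV can remain high (and clinically reassuring) even when sensitivity is imperfect, provided the finding is rare in the screened population.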
Affiliation(s)
- Julius Henning Niehoff
- Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
- Jana Kalaitzidis
- Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
- Jan Robert Kroeger
- Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
- Denise Schoenbeck
- Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
- Jan Borggrefe
- Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
- Arwed Elias Michael
- Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany

18
Shen Z, Ouyang X, Xiao B, Cheng JZ, Shen D, Wang Q. Image synthesis with disentangled attributes for chest X-ray nodule augmentation and detection. Med Image Anal 2023; 84:102708. [PMID: 36516554 DOI: 10.1016/j.media.2022.102708]
Abstract
Lung nodule detection in chest X-ray (CXR) images is common to early screening of lung cancers. Deep-learning-based Computer-Assisted Diagnosis (CAD) systems can support radiologists for nodule screening in CXR images. However, it requires large-scale and diverse medical data with high-quality annotations to train such robust and accurate CADs. To alleviate the limited availability of such datasets, lung nodule synthesis methods are proposed for the sake of data augmentation. Nevertheless, previous methods lack the ability to generate nodules that are realistic with the shape/size attributes desired by the detector. To address this issue, we introduce a novel lung nodule synthesis framework in this paper, which decomposes nodule attributes into three main aspects including the shape, the size, and the texture, respectively. A GAN-based Shape Generator firstly models nodule shapes by generating diverse shape masks. The following Size Modulation then enables quantitative control on the diameters of the generated nodule shapes in pixel-level granularity. A coarse-to-fine gated convolutional Texture Generator finally synthesizes visually plausible nodule textures conditioned on the modulated shape masks. Moreover, we propose to synthesize nodule CXR images by controlling the disentangled nodule attributes for data augmentation, in order to better compensate for the nodules that are easily missed in the detection task. Our experiments demonstrate the enhanced image quality, diversity, and controllability of the proposed lung nodule synthesis framework. We also validate the effectiveness of our data augmentation strategy on greatly improving nodule detection performance.
Affiliation(s)
- Zhenrong Shen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Xi Ouyang
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Bin Xiao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Jie-Zhi Cheng
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Dinggang Shen
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; Shanghai Clinical Research and Trial Center, Shanghai 201210, China
- Qian Wang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; Shanghai Clinical Research and Trial Center, Shanghai 201210, China

19
Ait Nasser A, Akhloufi MA. A Review of Recent Advances in Deep Learning Models for Chest Disease Detection Using Radiography. Diagnostics (Basel) 2023; 13:159. [PMID: 36611451 PMCID: PMC9818166 DOI: 10.3390/diagnostics13010159]
Abstract
Chest X-ray radiography (CXR) is among the most frequently used medical imaging modalities. It has a preeminent value in the detection of multiple life-threatening diseases. Radiologists can visually inspect CXR images for the presence of diseases. Most thoracic diseases have very similar patterns, which makes diagnosis prone to human error and leads to misdiagnosis. Computer-aided detection (CAD) of lung diseases in CXR images is among the popular topics in medical imaging research. Machine learning (ML) and deep learning (DL) provide techniques to make this task more efficient and faster, and numerous experiments in the diagnosis of various diseases have proved their potential. In comparison to previous reviews, our study describes in detail several publicly available CXR datasets for different diseases. It presents an overview of recent deep learning models using CXR images to detect chest diseases, such as VGG, ResNet, DenseNet, Inception, EfficientNet, RetinaNet, and ensemble learning methods that combine multiple models. It summarizes the techniques used for CXR image preprocessing (enhancement, segmentation, bone suppression, and data augmentation) to improve image quality and address data imbalance issues, as well as the use of DL models to speed up the diagnosis process. This review also discusses the challenges present in the published literature and highlights the importance of interpretability and explainability to better understand the DL models' detections. In addition, it outlines a direction for researchers to help develop more effective models for early and automatic detection of chest diseases.
Affiliation(s)
- Moulay A. Akhloufi
- Perception, Robotics and Intelligent Machines Research Group (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1C 3E9, Canada

20
Bi M, Zheng S, Li X, Liu H, Feng X, Fan Y, Shen L. MIB-ANet: A novel multi-scale deep network for nasal endoscopy-based adenoid hypertrophy grading. Front Med (Lausanne) 2023; 10:1142261. [PMID: 37122318 PMCID: PMC10140414 DOI: 10.3389/fmed.2023.1142261]
Abstract
Introduction: To develop a novel deep learning model to automatically grade adenoid hypertrophy based on nasal endoscopy, and to assess its performance against that of E.N.T. clinicians. Methods: A total of 3,179 nasoendoscopic images, covering four grades of adenoid hypertrophy (Parikh grading standard, 2006), were collected to develop and test deep neural networks. MIB-ANet, a novel multi-scale grading network, was created for adenoid hypertrophy grading, and a comparison between MIB-ANet and E.N.T. clinicians was conducted. Results: On the SYSU-SZU-EA dataset, MIB-ANet achieved an F1 score of 0.76251 and an accuracy of 0.76807, the best classification performance among all of the networks. The visualized heatmaps show that MIB-ANet can detect whether the adenoid contacts adjacent tissues, which is interpretable for clinical decision-making. MIB-ANet achieved at least a 6.38% higher F1 score and 4.31% higher accuracy than the junior E.N.T. clinician, with a much higher (80× faster) diagnosing speed. Discussion: The novel multi-scale grading network MIB-ANet, designed for adenoid hypertrophy, achieved better classification performance than four classical CNNs and the junior E.N.T. clinician. Nonetheless, further studies are required to improve the accuracy of MIB-ANet.
Affiliation(s)
- Mingmin Bi
- Department of Otolaryngology, The Seventh Affiliated Hospital of Sun Yat-sen University, Shenzhen, China
- Siting Zheng
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
- AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen University, Shenzhen, China
- Xuechen Li
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
- AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen University, Shenzhen, China
- Haiyan Liu
- Department of Otolaryngology, The Seventh Affiliated Hospital of Sun Yat-sen University, Shenzhen, China
- Xiaoshan Feng
- Department of Otolaryngology, The Seventh Affiliated Hospital of Sun Yat-sen University, Shenzhen, China
- Yunping Fan
- Department of Otolaryngology, The Seventh Affiliated Hospital of Sun Yat-sen University, Shenzhen, China
- *Correspondence: Yunping Fan; Linlin Shen
- Linlin Shen
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
- AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen University, Shenzhen, China

21
Islam R, Tarique M. Chest X-Ray Images to Differentiate COVID-19 from Pneumonia with Artificial Intelligence Techniques. Int J Biomed Imaging 2022; 2022:5318447. [PMID: 36588667 PMCID: PMC9800093 DOI: 10.1155/2022/5318447]
Abstract
This paper presents an automated and noninvasive technique to discriminate COVID-19 patients from pneumonia patients using chest X-ray images and artificial intelligence. The reverse transcription-polymerase chain reaction (RT-PCR) test is commonly administered to detect COVID-19. However, the RT-PCR test necessitates person-to-person contact to administer, requires variable time to produce results, and is expensive. Moreover, this test is still unreachable to a significant share of the global population. Chest X-ray images can play an important role here, as X-ray machines are commonly available at any healthcare facility. However, the chest X-ray images of COVID-19 and viral pneumonia patients are very similar and often lead to subjective misdiagnosis. This investigation employed two algorithms to solve this problem objectively. One algorithm uses lower-dimension encoded features extracted from the X-ray images and applies them to machine learning algorithms for final classification. The other algorithm relies on an inbuilt feature extractor network to extract features from the X-ray images and classifies them with the pretrained deep neural network VGG16. The simulation results show that the two proposed algorithms can distinguish COVID-19 patients from pneumonia patients with best accuracies of 100% and 98.1%, employing VGG16 and the machine learning algorithm, respectively. The performances of these two algorithms have also been collated with those of other existing state-of-the-art methods.
Affiliation(s)
- Rumana Islam
- Department of ECE, University of Windsor, ON, Canada N9B 3P4
- Mohammed Tarique
- Department of ECE, University of Science and Technology of Fujairah, UAE

22
Bilal A, Shafiq M, Fang F, Waqar M, Ullah I, Ghadi YY, Long H, Zeng R. IGWO-IVNet3: DL-Based Automatic Diagnosis of Lung Nodules Using an Improved Gray Wolf Optimization and InceptionNet-V3. Sensors (Basel) 2022; 22:9603. [PMID: 36559970 PMCID: PMC9786099 DOI: 10.3390/s22249603]
Abstract
Artificial intelligence plays an essential role in diagnosing lung cancer. Lung cancer is notoriously difficult to diagnose until it has progressed to a late stage, making it a leading cause of cancer-related mortality; it is fatal if not treated early. Initial diagnosis of malignant nodules is often made using chest radiography (X-ray) and computed tomography (CT) scans; nevertheless, the possibility of benign nodules leads to wrong choices, as benign and malignant nodules appear very similar in their first phases. Additionally, radiologists have a hard time viewing and categorizing lung abnormalities, so they often perform lung cancer screening with the help of computer-aided diagnostic technologies. Computer scientists have presented many methods for identifying lung cancer in recent years, but low-quality images compromise the segmentation process, rendering traditional lung cancer prediction algorithms inaccurate. This article suggests a highly effective strategy for identifying and categorizing lung cancer. Noise in the images was reduced using a weighted filter, and the improved Gray Wolf Optimization method was applied before segmentation with watershed modification and dilation operations. We used InceptionNet-V3 to classify lung cancer into three groups, and it performed well compared to prior studies: 98.96% accuracy, 94.74% specificity, and 100% sensitivity.
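The "weighted filter" denoising step is not specified further in the abstract. As an assumption-laden sketch, a simple normalized weighted smoothing in 1D (a real pipeline would apply a 2D kernel to the image) looks like:

```python
# Illustrative weighted smoothing: convolve a signal with a normalized
# kernel, clamping indices at the borders. Kernel weights are an assumption.
def weighted_filter(signal, weights):
    """Return the signal smoothed by the given (odd-length) kernel."""
    k = len(weights) // 2
    norm = sum(weights)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(weights):
            idx = min(max(i + j - k, 0), len(signal) - 1)  # clamp at edges
            acc += w * signal[idx]
        out.append(acc / norm)
    return out

# A lone spike is spread over its neighbors, suppressing impulse noise.
print(weighted_filter([0, 0, 10, 0, 0], [1, 2, 1]))  # → [0.0, 2.5, 5.0, 2.5, 0.0]
```

The choice of weights trades noise suppression against blurring of fine structures such as small nodules.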
Affiliation(s)
- Anas Bilal
- College of Information Science and Technology, Hainan Normal University, Haikou 571158, China
- Muhammad Shafiq
- School of Information Engineering, Qujing Normal University, Qujing 655011, China
- Fang Fang
- College of Information Engineering, Hainan Vocational University of Science and Technology, Haikou 571126, China
- Muhammad Waqar
- Department of Computer Science, COMSATS University, Islamabad 45550, Pakistan
- Inam Ullah
- BK21 Chungbuk Information Technology Education and Research Center, Chungbuk National University, Cheongju-si 28644, Republic of Korea
- Yazeed Yasin Ghadi
- Department of Computer Science, Al Ain University, Abu Dhabi 64141, United Arab Emirates
- Haixia Long
- College of Information Science and Technology, Hainan Normal University, Haikou 571158, China
- Rao Zeng
- College of Information Science and Technology, Hainan Normal University, Haikou 571158, China

23
Rahman T, Akinbi A, Chowdhury MEH, Rashid TA, Şengür A, Khandakar A, Islam KR, Ismael AM. COV-ECGNET: COVID-19 detection using ECG trace images with deep convolutional neural network. Health Inf Sci Syst 2022; 10:1. [PMID: 35096384 PMCID: PMC8785028 DOI: 10.1007/s13755-021-00169-1] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2021] [Accepted: 12/27/2021] [Indexed: 12/25/2022] Open
Abstract
Reliable and rapid identification of COVID-19 has become crucial to prevent the rapid spread of the disease, ease lockdown restrictions, and reduce pressure on public health infrastructures. Recently, several methods and techniques have been proposed to detect the SARS-CoV-2 virus using different images and data. However, this is the first study to explore the possibility of using deep convolutional neural network (CNN) models to detect COVID-19 from electrocardiogram (ECG) trace images. In this work, COVID-19 and other cardiovascular diseases (CVDs) were detected using deep-learning techniques. A public dataset of 1937 ECG images from five distinct categories was used: normal, COVID-19, myocardial infarction (MI), abnormal heartbeat (AHB), and recovered myocardial infarction (RMI). Six deep CNN models (ResNet18, ResNet50, ResNet101, InceptionV3, DenseNet201, and MobileNetv2) were used to investigate three classification schemes: (i) two-class classification (normal vs COVID-19); (ii) three-class classification (normal, COVID-19, and other CVDs); and (iii) five-class classification (normal, COVID-19, MI, AHB, and RMI). For two-class and three-class classification, DenseNet201 outperformed the other networks with accuracies of 99.1% and 97.36%, respectively, while for five-class classification, InceptionV3 outperformed the others with an accuracy of 97.83%. ScoreCAM visualization confirms that the networks learn from the relevant areas of the trace images. Because the proposed method uses ECG trace images, which can be captured by smartphones and are readily available in low-resource countries, this study will help enable faster computer-aided diagnosis of COVID-19 and other cardiac abnormalities.
Affiliation(s)
- Tawsifur Rahman
- Department of Electrical Engineering, Qatar University, 2713 Doha, Qatar
- Alex Akinbi
- School of Computer Science and Mathematics, Liverpool John Moores University, Liverpool, UK
- Tarik A. Rashid
- Computer Science and Engineering Department, School of Science and Engineering, University of Kurdistan Hewler, Erbīl, KRG Iraq
- Abdulkadir Şengür
- Electrical-Electronics Engineering Department, Technology Faculty, Firat University, Elazig, Turkey
- Amith Khandakar
- Department of Electrical Engineering, Qatar University, 2713 Doha, Qatar
- Aras M. Ismael
- Information Technology Department, College of Informatics, Sulaimani Polytechnic University, Sulaymaniyah, Iraq

24
Chiu HY, Peng RHT, Lin YC, Wang TW, Yang YX, Chen YY, Wu MH, Shiao TH, Chao HS, Chen YM, Wu YT. Artificial Intelligence for Early Detection of Chest Nodules in X-ray Images. Biomedicines 2022; 10:2839. [PMID: 36359360 PMCID: PMC9687210 DOI: 10.3390/biomedicines10112839] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Revised: 11/02/2022] [Accepted: 11/04/2022] [Indexed: 09/06/2024] Open
Abstract
Early detection increases overall survival among patients with lung cancer. This study formulated a machine learning method that processes chest X-rays (CXRs) to detect lung cancer early. After preprocessing the dataset with monochrome and brightness correction, we applied several preprocessing methods to enhance image contrast and then used U-Net to perform lung segmentation. We used 559 CXRs, each with a single expert-labeled lung nodule, to train a You Only Look Once version 4 (YOLOv4) deep-learning architecture to detect lung nodules. On a testing dataset of 100 CXRs from patients at Taipei Veterans General Hospital and 154 CXRs from the Japanese Society of Radiological Technology dataset, the AI model using a combination of preprocessing methods performed best, with a sensitivity of 79% at 3.04 false positives per image. We then tested the AI on 383 sets of CXRs obtained in the 5 years preceding lung cancer diagnoses. The median time from detection to diagnosis was 46 (3-523) days for radiologists assisted by AI, longer than the 8 (0-263) days for radiologists alone. The AI model can assist radiologists in the early detection of lung nodules.
Affiliation(s)
- Hwa-Yen Chiu
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Division of Internal Medicine, Hsinchu Branch, Taipei Veterans General Hospital, Hsinchu 310, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Rita Huan-Ting Peng
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Yi-Chian Lin
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Ting-Wei Wang
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Ya-Xuan Yang
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Ying-Ying Chen
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Department of Critical Care Medicine, Taiwan Adventist Hospital, Taipei 105, Taiwan
- Mei-Han Wu
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Department of Medical Imaging, Cheng Hsin General Hospital, Taipei 112, Taiwan
- Department of Radiology, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Tsu-Hui Shiao
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Heng-Sheng Chao
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Yuh-Min Chen
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Yu-Te Wu
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Brain Research Center, National Yang Ming Chiao Tung University, Taipei 112, Taiwan

25
Govindarajan A, Govindarajan A, Tanamala S, Chattoraj S, Reddy B, Agrawal R, Iyer D, Srivastava A, Kumar P, Putha P. Role of an Automated Deep Learning Algorithm for Reliable Screening of Abnormality in Chest Radiographs: A Prospective Multicenter Quality Improvement Study. Diagnostics (Basel) 2022; 12:2724. [PMID: 36359565 PMCID: PMC9689183 DOI: 10.3390/diagnostics12112724] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2022] [Revised: 11/01/2022] [Accepted: 11/03/2022] [Indexed: 11/10/2023] Open
Abstract
In medical practice, chest X-rays are the most ubiquitous diagnostic imaging tests. However, the current workload in large health care facilities and the shortage of well-trained radiologists pose a significant challenge in the patient care pathway. Therefore, an accurate, reliable, and fast computer-aided diagnosis (CAD) system capable of detecting abnormalities in chest X-rays is crucial for improving the radiological workflow. In this prospective multicenter quality-improvement study, we evaluated whether artificial intelligence (AI) can be used as a chest X-ray screening tool in real clinical settings. Methods: A team of radiologists used the AI-based chest X-ray screening tool (qXR) as part of their daily reporting routine to report consecutive chest X-rays for this prospective multicenter study, which took place in a large radiology network in India between June 2021 and March 2022. Results: A total of 65,604 chest X-rays were processed during the study period. The overall performance of the AI in distinguishing normal from abnormal chest X-rays was good, with a high negative predictive value (NPV) of 98.9%. The AI's area under the curve (AUC) and NPV for the corresponding subabnormalities were: blunted CP angle (0.97, 99.5%), hilar dysmorphism (0.86, 99.9%), cardiomegaly (0.96, 99.7%), reticulonodular pattern (0.91, 99.9%), rib fracture (0.98, 99.9%), scoliosis (0.98, 99.9%), atelectasis (0.96, 99.9%), calcification (0.96, 99.7%), consolidation (0.95, 99.6%), emphysema (0.96, 99.9%), fibrosis (0.95, 99.7%), nodule (0.91, 99.8%), opacity (0.92, 99.2%), pleural effusion (0.97, 99.7%), and pneumothorax (0.99, 99.9%). Additionally, the turnaround time (TAT) decreased by about 40.63% from the pre-qXR period to the post-qXR period. Conclusions: The AI-based chest X-ray solution (qXR) screened chest X-rays and ruled out normal patients with high confidence, allowing radiologists to focus on assessing pathology on abnormal chest X-rays and on treatment pathways.
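The negative predictive value on which this rule-out workflow rests is a simple confusion-matrix ratio; a minimal sketch, with hypothetical counts rather than the study's data:

```python
def npv(tn, fn):
    # Negative predictive value: of all X-rays the AI calls normal,
    # the fraction that are truly normal (TN / (TN + FN)).
    return tn / (tn + fn)

# Hypothetical counts for illustration only (not from the study).
value = npv(tn=989, fn=11)  # 989 / 1000 = 0.989, i.e. 98.9%
```

A high NPV is what justifies using the tool to clear normal studies: very few AI-"normal" X-rays hide a true abnormality.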
26
Wang R, Fu G, Li J, Pei Y. Diagnosis after zooming in: A multilabel classification model by imitating doctor reading habits to diagnose brain diseases. Med Phys 2022; 49:7054-7070. [PMID: 35880443 DOI: 10.1002/mp.15871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2021] [Revised: 03/18/2022] [Accepted: 06/28/2022] [Indexed: 12/13/2022] Open
Abstract
PURPOSE Computed tomography (CT) is low cost and noninvasive and is a primary diagnostic method for brain diseases. However, diagnosing CT images accurately and comprehensively is a challenge for junior radiologists, so a system is needed that can help doctors diagnose and explain its predictions. Despite the success of deep learning algorithms in medical image analysis, brain disease classification still faces challenges: researchers pay little attention to the burden of manual labeling, and prediction explanations remain incomplete. More importantly, most studies only measure the performance of the algorithm; they do not measure its effectiveness in doctors' actual diagnoses. METHODS In this paper, we propose a model called DrCT2 that can detect brain diseases without using image-level labels and provides a more comprehensive explanation at both the slice and sequence levels. The model achieves reliable performance by imitating the reading habits of human experts: targeted scaling of primary images from the full slice scans and observation of suspicious lesions for diagnosis. We evaluated our model on two open-access datasets: CQ500 and the RSNA Intracranial Hemorrhage Detection Challenge. In addition, we defined three tasks to comprehensively evaluate model interpretability by measuring whether the algorithm can select key images containing lesions. To verify the algorithm from the perspective of practical application, three junior radiologists participated in experiments comparing diagnosis before and after human-computer cooperation. RESULTS The method achieved F1-scores of 0.9370 on CQ500 and 0.8700 on the RSNA dataset. The results show that our model has good interpretability along with good performance, and the radiologist evaluation experiments showed that the model can effectively improve diagnostic accuracy and efficiency. CONCLUSIONS We proposed a model that can simultaneously detect multiple brain diseases. The report generated by the model can help doctors avoid missed diagnoses, and it has good clinical application value.
Affiliation(s)
- Ruiqian Wang
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Guanghui Fu
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, F-75013, Paris, France
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Yan Pei
- Computer Science Division, University of Aizu, Aizuwakamatsu, Japan

27
Frequency of Missed Findings on Chest Radiographs (CXRs) in an International, Multicenter Study: Application of AI to Reduce Missed Findings. Diagnostics (Basel) 2022; 12:diagnostics12102382. [PMID: 36292071 PMCID: PMC9600490 DOI: 10.3390/diagnostics12102382] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Revised: 09/21/2022] [Accepted: 09/26/2022] [Indexed: 11/25/2022] Open
Abstract
Background: Missed findings in chest X-ray interpretation are common and can have serious consequences. Methods: Our study included 2407 chest radiographs (CXRs) acquired at three Indian and five US sites. To identify CXRs reported as normal, we used a proprietary radiology report search engine based on natural language processing (mPower, Nuance). Two thoracic radiologists reviewed all CXRs and recorded the presence and clinical significance of abnormal findings on a 5-point scale (1 = not important; 5 = critically important). All CXRs were processed with the AI model (Qure.ai), and its outputs were recorded for the presence of findings. Data were analyzed to obtain the area under the ROC curve (AUC). Results: Of the 2407 CXRs, 410 (18.9%) had unreported/missed findings, of which 312 (76.1%) were clinically important: pulmonary nodules (n = 157), consolidation (60), linear opacities (37), mediastinal widening (21), hilar enlargement (17), pleural effusions (11), rib fractures (6), and pneumothoraces (3). The AI detected 69 of the missed findings (69/131, 53%) with an AUC of up to 0.935, and the model was generalizable across sites, geographic locations, patient genders, and age groups. Conclusion: A substantial number of important CXR findings are missed; the AI model can help identify them and reduce their frequency in a generalizable manner.
28
Rani G, Misra A, Dhaka VS, Zumpano E, Vocaturo E. Spatial feature and resolution maximization GAN for bone suppression in chest radiographs. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 224:107024. [PMID: 35863123 DOI: 10.1016/j.cmpb.2022.107024] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Revised: 06/29/2022] [Accepted: 07/12/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Chest radiographs (CXRs) are in great demand for visualizing lung pathology. However, the appearance of bones in the lung region hinders the localization of any lesion or nodule present in the CXR, so bone suppression is an important task for effective screening of lung diseases. At the same time, it is equally important to preserve spatial information and image quality, because they provide crucial insights into the size and area of infection, color accuracy, structural quality, and so on. Many researchers have treated bone suppression as an image denoising problem and proposed conditional Generative Adversarial Network (cGAN) models for generating bone-suppressed images from CXRs, but these works do not focus on the retention of spatial features and image quality. The authors developed the Spatial Feature and Resolution Maximization (SFRM) GAN to minimize the visibility of bones in CXRs while retaining as much critical information as possible. METHOD This is achieved by modifying the discriminator and generator architectures of the pix2pix model. The discriminator is combined with the Wasserstein GAN with Gradient Penalty to increase performance and training stability. For the generator, a combination of task-specific loss functions, viz., L1, Perceptual, and Sobel loss, is employed to capture the intrinsic information in the image. RESULT The proposed model achieved a mean PSNR of 43.588, mean NMSE of 0.00025, mean SSIM of 0.989, and mean entropy of 0.454 bits/pixel on a test set of 100 images. The combination of δ=104, α=1, β=10, and γ=10 provided the best trade-off between image denoising and quality retention. CONCLUSION The degree of bone suppression and the preservation of spatial information can be improved by adding the Sobel and Perceptual losses, respectively. SFRM-GAN not only suppresses bones but also retains image quality and intrinsic information. Student's t-test shows that SFRM-GAN yields statistically significant improvements over state-of-the-art models at the 0.95 confidence level. Thus, it may be used for denoising and preprocessing of images.
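Two of the generator's task-specific terms can be written directly in numpy; a minimal sketch of the L1 and Sobel losses only (the perceptual term needs a pretrained CNN and the adversarial term a discriminator, so both are omitted, and the weights and function names here are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np
from scipy.ndimage import sobel

def l1_loss(pred, target):
    # Pixel-wise mean absolute error between generated and reference images.
    return np.mean(np.abs(pred - target))

def sobel_loss(pred, target):
    # Compare Sobel edge maps so fine structures (vessels, nodule borders)
    # survive bone suppression.
    def edge_map(img):
        gx = sobel(img, axis=0)
        gy = sobel(img, axis=1)
        return np.hypot(gx, gy)
    return np.mean(np.abs(edge_map(pred) - edge_map(target)))

def generator_pixel_losses(pred, target, alpha=1.0, gamma=10.0):
    # Weighted sum of two task-specific terms; weights are illustrative.
    return alpha * l1_loss(pred, target) + gamma * sobel_loss(pred, target)
```

The Sobel term is what penalizes blurred or missing edges even when the plain L1 term is already small, which matches the paper's finding that adding it improves bone suppression.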
Affiliation(s)
- Geeta Rani
- Department of Computer and Communication Engineering, Manipal University Jaipur, India.
- Ankit Misra
- Department of Computer Science and Engineering, Manipal University Jaipur, India; Goergen Institute for Data Science, University of Rochester, USA.
- Vijaypal Singh Dhaka
- Department of Computer and Communication Engineering, Manipal University Jaipur, India.
- Ester Zumpano
- Department of Computer Engineering, Modeling, Electronics and Systems Engineering, University of Calabria, Italy.
- Eugenio Vocaturo
- Department of Computer Engineering, Modeling, Electronics and Systems Engineering, University of Calabria, Italy.

29
Performance of a Chest Radiography AI Algorithm for Detection of Missed or Mislabeled Findings: A Multicenter Study. Diagnostics (Basel) 2022; 12:diagnostics12092086. [PMID: 36140488 PMCID: PMC9497851 DOI: 10.3390/diagnostics12092086] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Revised: 08/23/2022] [Accepted: 08/25/2022] [Indexed: 12/02/2022] Open
Abstract
Purpose: We assessed whether a chest radiograph (CXR) AI algorithm could detect missed or mislabeled CXR findings in radiology reports. Methods: We queried a multi-institutional radiology report search database of 13 million reports to identify all CXR reports with addenda from 1999-2021. Of the 3469 CXR reports with an addendum, a thoracic radiologist excluded reports whose addenda were created for typographic errors, wrong report templates, missing sections, or uninterpreted signoffs. The remaining reports (279 patients) contained addenda with errors related to side discrepancies or missed findings such as pulmonary nodules, consolidation, pleural effusions, pneumothorax, and rib fractures. All CXRs were processed with an AI algorithm, and descriptive statistics were computed to determine its sensitivity, specificity, and accuracy in detecting missed or mislabeled findings. Results: The AI had high sensitivity (96%), specificity (100%), and accuracy (96%) for detecting all missed and mislabeled CXR findings. The corresponding finding-specific statistics (sensitivity, specificity, accuracy) were: nodules (96%, 100%, 96%), pneumothorax (84%, 100%, 85%), pleural effusion (100%, 17%, 67%), consolidation (98%, 100%, 98%), and rib fractures (87%, 100%, 94%). Conclusions: The CXR AI could accurately detect mislabeled and missed findings. Clinical Relevance: The CXR AI can reduce the frequency of errors in the detection and side-labeling of radiographic findings.
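The three metrics reported above follow directly from a confusion matrix; a minimal sketch with hypothetical counts (not the study's raw data):

```python
def sensitivity(tp, fn):
    # Fraction of true findings the AI flags: TP / (TP + FN).
    return tp / (tp + fn)

def specificity(tn, fp):
    # Fraction of finding-free cases the AI correctly clears: TN / (TN + FP).
    return tn / (tn + fp)

def accuracy(tp, tn, fp, fn):
    # Overall fraction of correct calls.
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical confusion-matrix counts for illustration only.
tp, fn, tn, fp = 96, 4, 50, 0
```

Note how a low specificity with a high sensitivity (as reported for pleural effusion) indicates the model flags nearly everything in that category, catching true cases at the cost of false alarms.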
30
Neural architecture search for pneumonia diagnosis from chest X-rays. Sci Rep 2022; 12:11309. [PMID: 35788644 PMCID: PMC9252574 DOI: 10.1038/s41598-022-15341-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2021] [Accepted: 06/22/2022] [Indexed: 11/25/2022] Open
Abstract
Pneumonia is among the diseases that cause the most fatalities worldwide, especially in children, and pneumonia-caused deaths have recently increased dramatically due to the novel coronavirus pandemic. Chest X-ray (CXR) images are one of the most readily available and common imaging modalities for the detection and identification of pneumonia, yet detecting pneumonia from chest radiography is a difficult task even for experienced radiologists. Artificial intelligence (AI) based systems have great potential to assist in quick and accurate diagnosis of pneumonia from chest X-rays. The aim of this study is to develop a Neural Architecture Search (NAS) method to find the best convolutional architecture for detecting pneumonia from chest X-rays. We propose a Learning by Teaching framework inspired by the teaching-driven learning methodology of humans and conduct experiments on a pneumonia chest X-ray dataset with over 5000 images. Our proposed method yields an area under the ROC curve (AUC) of 97.6% for pneumonia detection, improving upon previous NAS methods by 5.1% (absolute).
31
Yu AC, Mohajer B, Eng J. External Validation of Deep Learning Algorithms for Radiologic Diagnosis: A Systematic Review. Radiol Artif Intell 2022; 4:e210064. [PMID: 35652114 DOI: 10.1148/ryai.210064] [Citation(s) in RCA: 94] [Impact Index Per Article: 47.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Revised: 03/09/2022] [Accepted: 04/12/2022] [Indexed: 01/17/2023]
Abstract
Purpose To assess the generalizability of published deep learning (DL) algorithms for radiologic diagnosis. Materials and Methods In this systematic review, the PubMed database was searched for peer-reviewed studies of DL algorithms for image-based radiologic diagnosis that included external validation, published from January 1, 2015, through April 1, 2021. Studies using nonimaging features or incorporating non-DL methods for feature extraction or classification were excluded. Two reviewers independently evaluated studies for inclusion, and any discrepancies were resolved by consensus. Internal and external performance measures and pertinent study characteristics were extracted, and relationships among these data were examined using nonparametric statistics. Results Eighty-three studies reporting 86 algorithms were included. The vast majority (70 of 86, 81%) reported at least some decrease in external performance compared with internal performance, with nearly half (42 of 86, 49%) reporting at least a modest decrease (≥0.05 on the unit scale) and nearly a quarter (21 of 86, 24%) reporting a substantial decrease (≥0.10 on the unit scale). No study characteristics were found to be associated with the difference between internal and external performance. Conclusion Among published external validation studies of DL algorithms for image-based radiologic diagnosis, the vast majority demonstrated diminished algorithm performance on the external dataset, with some reporting a substantial performance decrease. Keywords: Meta-Analysis, Computer Applications-Detection/Diagnosis, Neural Networks, Computer Applications-General (Informatics), Epidemiology, Technology Assessment, Diagnosis, Informatics. Supplemental material is available for this article. © RSNA, 2022.
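The review's decrease thresholds amount to a tiny classification rule; a sketch assuming unit-scale metrics such as AUC (the function name and the labels for sub-threshold drops are ours):

```python
def categorize_performance_drop(internal, external):
    # Thresholds from the review: >= 0.10 substantial, >= 0.05 modest,
    # both on the unit scale of the reported metric (e.g. AUC).
    drop = internal - external
    if drop >= 0.10:
        return "substantial decrease"
    if drop >= 0.05:
        return "modest decrease"
    if drop > 0:
        return "some decrease"
    return "no decrease"
```

For example, an algorithm with internal AUC 0.95 that falls to 0.80 on an external cohort would land in the "substantial decrease" bucket counted for 24% of the reviewed algorithms.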
Affiliation(s)
- Alice C Yu
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 1800 Orleans St, Baltimore, MD 21287
- Bahram Mohajer
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 1800 Orleans St, Baltimore, MD 21287
- John Eng
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 1800 Orleans St, Baltimore, MD 21287

32
Dense Convolutional Network and Its Application in Medical Image Analysis. BIOMED RESEARCH INTERNATIONAL 2022; 2022:2384830. [PMID: 35509707 PMCID: PMC9060995 DOI: 10.1155/2022/2384830] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Accepted: 03/23/2022] [Indexed: 12/28/2022]
Abstract
The dense convolutional network (DenseNet) has been a hot topic in deep learning research in recent years and has found good applications in medical image analysis. In this paper, DenseNet is surveyed as follows. First, its basic principle is introduced; second, its development is summarized and analyzed from five aspects: broadened DenseNet structures, lightweight DenseNet structures, dense units, dense connection modes, and attention mechanisms; finally, its applications in medical image analysis are summarized from three aspects: pattern recognition, image segmentation, and object detection. The DenseNet network structures are systematically summarized in this paper, which has positive significance for the research and development of DenseNet.
33
Wang X, Wang L, Sheng Y, Zhu C, Jiang N, Bai C, Xia M, Shao Z, Gu Z, Huang X, Zhao R, Liu Z. Automatic and accurate segmentation of peripherally inserted central catheter (PICC) from chest X-rays using multi-stage attention-guided learning. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.01.040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
34
Khanam N, Kumar R. Recent Applications of Artificial Intelligence in Early Cancer Detection. Curr Med Chem 2022; 29:4410-4435. [PMID: 35196970 DOI: 10.2174/0929867329666220222154733] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Revised: 11/30/2021] [Accepted: 12/08/2021] [Indexed: 11/22/2022]
Abstract
Cancer is a deadly disease often caused by the accumulation of genetic mutations and pathological alterations. The death rate can be reduced only when cancer is detected in its early stages, because treatment is more effective before the tumor has metastasized to many regions of the body. However, early cancer detection is fraught with difficulties. Advances in artificial intelligence (AI) have opened new scope for efficient and early detection of this fatal disease. AI algorithms have a remarkable ability to perform well on a variety of tasks presented to the system. Numerous studies have produced machine learning and deep learning-assisted cancer prediction models that detect cancer from previously accessible data with better accuracy, sensitivity, and specificity. It has been observed that the accuracy of prediction models in classifying data as benign, malignant, or normal is improved by implementing efficient image processing techniques and data segmentation and augmentation methodologies, along with advanced algorithms. In this review, recent AI-based models for the diagnosis of the most prevalent cancers of the breast, lung, brain, and skin are analyzed. Available AI techniques, data preparation, modeling processes, and performance assessments are included in the review.
Affiliation(s)
- Nausheen Khanam
- Amity Institute of Biotechnology, Amity University Uttar Pradesh Lucknow Campus, Uttar Pradesh, India
- Rajnish Kumar
- Amity Institute of Biotechnology, Amity University Uttar Pradesh Lucknow Campus, Uttar Pradesh, India

35
Agrawal T, Choudhary P. Segmentation and classification on chest radiography: a systematic survey. THE VISUAL COMPUTER 2022; 39:875-913. [PMID: 35035008 PMCID: PMC8741572 DOI: 10.1007/s00371-021-02352-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 11/01/2021] [Indexed: 06/14/2023]
Abstract
Chest radiography (X-ray) is the most common diagnostic method for pulmonary disorders, and a trained radiologist is required to interpret the radiographs. But sometimes even experienced radiologists can misinterpret the findings, which motivates computer-aided detection and diagnosis. For decades, researchers detected pulmonary disorders automatically using traditional computer vision (CV) methods. The availability of large annotated datasets and computing hardware has since allowed deep learning to dominate the area; it is now the modus operandi for feature extraction, segmentation, detection, and classification tasks in medical image analysis. This paper focuses on research that uses chest X-rays for lung segmentation and the detection/classification of pulmonary disorders on publicly available datasets. Studies using Generative Adversarial Network (GAN) models for segmentation and classification on chest X-rays are also included, as GAN has gained the interest of the CV community for its ability to mitigate medical data scarcity. We also include research conducted before the rise of deep learning models to give a clear picture of the field. Many surveys have been published, but none is dedicated to chest X-rays. This study will help readers learn about the existing techniques and approaches and their significance.
Affiliation(s)
- Tarun Agrawal
- Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005 India
- Prakash Choudhary
- Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005 India

36
Lin C, Zheng Y, Xiao X, Lin J. CXR-RefineDet: Single-Shot Refinement Neural Network for Chest X-Ray Radiograph Based on Multiple Lesions Detection. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:4182191. [PMID: 35035832 PMCID: PMC8759881 DOI: 10.1155/2022/4182191] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/24/2021] [Revised: 12/03/2021] [Accepted: 12/06/2021] [Indexed: 01/25/2023]
Abstract
The workload of radiologists has dramatically increased in the context of the COVID-19 pandemic, causing misdiagnoses and missed diagnoses. Artificial intelligence can assist doctors in locating and identifying lesions in medical images. To improve the accuracy of disease diagnosis in medical imaging, we propose a lung disease detection neural network that outperforms the current mainstream object detection models. By combining the advantages of the RepVGG block and the Resblock in information fusion and information extraction, we design a backbone, RRNet, with few parameters and strong feature extraction capabilities. We then propose a structure called Information Reuse, which addresses the low utilization of the original network's output features by connecting the normalized features back to the network. Combining the RRNet backbone with an improved RefineDet, we obtain the overall network, called CXR-RefineDet. Extensive experiments on the largest public chest radiograph lesion detection dataset, VinDr-CXR, show that CXR-RefineDet reaches 0.1686 mAP at an inference speed of 6.8 fps, which is better than two-stage object detection algorithms using strong backbones such as ResNet-50 and ResNet-101. In addition, the fast inference speed of CXR-RefineDet makes the practical deployment of a computer-aided diagnosis system feasible.
Affiliation(s)
- Cong Lin, Yongbin Zheng, Xiuchun Xiao
- College of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang 524025, China
- Jialun Lin
- College of Biomedical Information and Engineering, Hainan Medical University, Haikou 571199, China

37

Anwar SM. AIM and Explainable Methods in Medical Imaging and Diagnostics. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_293] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]

38
Peng T, Gu Y, Wang J. Lung contour detection in Chest X-ray images using Mask Region-based Convolutional Neural Network and Adaptive Closed Polyline Searching Method. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:2839-2842. [PMID: 34891839 DOI: 10.1109/embc46164.2021.9630012] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
Abstract
Detection of the lung contour on chest X-ray images (CXRs) is a necessary step for computer-aided medical image analysis. Because of the low intensity contrast around the lung boundary and the large inter-subject variance, it is challenging to detect the lungs accurately in structural CXR images. To tackle this problem, we design an automatic, hybrid, two-stage detection network for lung contour detection on CXRs. In the first stage, an image preprocessing step based on a deep learning model automatically extracts coarse lung contours. In the second stage, a refinement step fine-tunes the coarse segmentation results using an improved principal curve-based method coupled with an improved machine learning method. The model is evaluated on several public datasets, and experiments demonstrate that the proposed method outperforms state-of-the-art methods. Clinical Relevance: This can help radiologists by automatically separating the lungs, decreasing the workload of manually delineating lung contours in CXRs.
39
Horry M, Chakraborty S, Pradhan B, Paul M, Gomes D, Ul-Haq A, Alamri A. Deep Mining Generation of Lung Cancer Malignancy Models from Chest X-ray Images. SENSORS 2021; 21:s21196655. [PMID: 34640976 PMCID: PMC8513105 DOI: 10.3390/s21196655] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Revised: 09/28/2021] [Accepted: 10/05/2021] [Indexed: 12/19/2022]
Abstract
Lung cancer is the leading cause of cancer death and morbidity worldwide. Many studies have shown machine learning models to be effective in detecting lung nodules from chest X-ray images. However, these techniques have yet to be embraced by the medical community due to several practical, ethical, and regulatory constraints stemming from the “black-box” nature of deep learning models. Additionally, most lung nodules visible on chest X-rays are benign; therefore, the narrow task of computer vision-based lung nodule detection cannot be equated with automated lung cancer detection. Addressing both concerns, this study introduces a novel hybrid deep learning and decision tree-based computer vision model, which presents lung cancer malignancy predictions as interpretable decision trees. The deep learning component of this process is trained using a large publicly available dataset of pathological biomarkers associated with lung cancer. These models are then used to infer biomarker scores for chest X-ray images from two independent datasets for which malignancy metadata is available. Next, multivariate predictive models were mined by fitting shallow decision trees to the malignancy-stratified datasets and interrogating a range of metrics to determine the best model. The best decision tree model achieved sensitivity and specificity of 86.7% and 80.0%, respectively, with a positive predictive value of 92.9%. Decision trees mined using this method may be considered a starting point for refinement into clinically useful multivariate lung cancer malignancy models, implemented as a workflow augmentation tool to improve the efficiency of human radiologists.
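The sensitivity, specificity, and positive predictive value reported above all follow from a single confusion matrix. A minimal sketch of the standard definitions; the counts below are illustrative (one of many combinations consistent with the reported 86.7%/80.0%/92.9%), not the study's actual data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-accuracy metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # true-positive rate (recall)
    specificity = tn / (tn + fp)  # true-negative rate
    ppv = tp / (tp + fp)          # positive predictive value (precision)
    return sensitivity, specificity, ppv

# Illustrative counts consistent with the reported percentages:
# 13/15 malignant cases detected, 4/5 benign cases correctly excluded.
sens, spec, ppv = diagnostic_metrics(tp=13, fp=1, tn=4, fn=2)
```

Note that PPV, unlike sensitivity and specificity, depends on the prevalence of malignancy in the test set, which is why it cannot be judged in isolation.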
Affiliation(s)
- Michael Horry
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia
- IBM Australia Ltd., Sydney, NSW 2000, Australia
- Subrata Chakraborty
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia
- Correspondence: (S.C.); (B.P.)
- Biswajeet Pradhan
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia
- Earth Observation Centre, Institute of Climate Change, Universiti Kebangsaan Malaysia (UKM), Bangi 43600, Malaysia
- Correspondence: (S.C.); (B.P.)
- Manoranjan Paul, Douglas Gomes, Anwaar Ul-Haq
- Machine Vision and Digital Health (MaViDH), School of Computing and Mathematics, Charles Sturt University, Bathurst, NSW 2795, Australia
- Abdullah Alamri
- Department of Geology and Geophysics, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia

40
Subhalakshmi RT, Appavu Alias Balamurugan S, Sasikala S. Automatic Segmentation and Classification of COVID-19 CT Image Using Deep Learning and Multi-Scale Recurrent Neural Network Based Classifier. JOURNAL OF MEDICAL IMAGING AND HEALTH INFORMATICS 2021. [DOI: 10.1166/jmihi.2021.3850] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
In recent times, the COVID-19 epidemic has grown at an extreme rate, while only an inadequate number of rapid testing kits are available. Consequently, it is essential to develop automated techniques for COVID-19 detection that recognize the presence of the disease from radiological images. The most common symptoms of COVID-19 are sore throat, fever, and dry cough, and symptoms can progress to a severe form of pneumonia with serious complications. As medical imaging is not currently recommended in Canada for primary COVID-19 diagnosis, computer-aided diagnosis systems might aid in the early detection of COVID-19 abnormalities, help monitor disease progression, and potentially reduce mortality rates. In this approach, a deep learning based design for feature extraction and classification is employed for automatic COVID-19 diagnosis from computed tomography (CT) images. The proposed model comprises three main processes: pre-processing, feature extraction, and classification. The design incorporates the fusion of deep features using GoogLeNet models. Finally, a multi-scale Recurrent Neural Network (RNN) based classifier is applied to identify and classify the test CT images into distinct class labels. The experimental validation of the proposed model takes place using the open-source COVID-CT dataset, which comprises a total of 760 CT images. The experimental outcome demonstrated superior performance with maximum sensitivity, specificity, and accuracy.
Affiliation(s)
- R. T. Subhalakshmi
- Department of Information Technology, Sethu Institute of Technology, Virudhunagar 626115, India
- S. Sasikala
- Department of Computer Science and Engineering, Velammal College of Engineering and Technology, Madurai 625009, Tamil Nadu, India

41
Zhou L, Yin X, Zhang T, Feng Y, Zhao Y, Jin M, Peng M, Xing C, Li F, Wang Z, Wei G, Jia X, Liu Y, Wu X, Lu L. Detection and Semiquantitative Analysis of Cardiomegaly, Pneumothorax, and Pleural Effusion on Chest Radiographs. Radiol Artif Intell 2021; 3:e200172. [PMID: 34350406 PMCID: PMC8328111 DOI: 10.1148/ryai.2021200172] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2020] [Revised: 04/14/2021] [Accepted: 04/23/2021] [Indexed: 11/11/2022]
Abstract
PURPOSE To develop and evaluate deep learning models for the detection and semiquantitative analysis of cardiomegaly, pneumothorax, and pleural effusion on chest radiographs. MATERIALS AND METHODS In this retrospective study, models were trained for lesion detection or for lung segmentation. The first dataset for lesion detection consisted of 2838 chest radiographs from 2638 patients (obtained between November 2018 and January 2020) containing findings positive for cardiomegaly, pneumothorax, and pleural effusion that were used in developing Mask region-based convolutional neural networks plus Point-based Rendering models. Separate detection models were trained for each disease. The second dataset was from two public datasets, which included 704 chest radiographs for training and testing a U-Net for lung segmentation. Based on accurate detection and segmentation, semiquantitative indexes were calculated for cardiomegaly (cardiothoracic ratio), pneumothorax (lung compression degree), and pleural effusion (grade of pleural effusion). Detection performance was evaluated by average precision (AP) and free-response receiver operating characteristic (FROC) curve score with the intersection over union greater than 75% (AP75; FROC score75). Segmentation performance was evaluated by Dice similarity coefficient. RESULTS The detection models achieved high accuracy for detecting cardiomegaly (AP75, 98.0%; FROC score75, 0.985), pneumothorax (AP75, 71.2%; FROC score75, 0.728), and pleural effusion (AP75, 78.2%; FROC score75, 0.802), and they also weakened boundary aliasing. The segmentation effect of the lung field (Dice, 0.960), cardiomegaly (Dice, 0.935), pneumothorax (Dice, 0.827), and pleural effusion (Dice, 0.826) was good, which provided important support for semiquantitative analysis. 
CONCLUSION The developed models could detect cardiomegaly, pneumothorax, and pleural effusion, and semiquantitative indexes could be calculated from the segmentations. Keywords: Computer-Aided Diagnosis (CAD), Thorax, Cardiac. Supplemental material is available for this article. © RSNA, 2021.
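The cardiothoracic ratio used above as the semiquantitative index for cardiomegaly is the maximal horizontal cardiac width divided by the maximal internal thoracic width. Given binary segmentation masks such as those the study's networks produce, it can be sketched as below; the mask names, the bounding-box simplification of "width", and the toy example are assumptions for illustration, not the paper's code:

```python
import numpy as np

def width(mask):
    """Horizontal extent (in pixels) of the bounding box of a binary mask."""
    cols = np.where(mask.any(axis=0))[0]  # columns containing any foreground
    return 0 if cols.size == 0 else int(cols.max() - cols.min() + 1)

def cardiothoracic_ratio(heart_mask, thorax_mask):
    """CTR = widest cardiac span / widest thoracic span.

    On a PA radiograph a CTR above roughly 0.5 is the usual cue for cardiomegaly.
    """
    return width(heart_mask) / width(thorax_mask)

# Toy masks: heart spans 6 columns, thorax spans 12 columns.
heart = np.zeros((10, 16), dtype=bool); heart[4:7, 5:11] = True
thorax = np.zeros((10, 16), dtype=bool); thorax[1:9, 2:14] = True
ctr = cardiothoracic_ratio(heart, thorax)  # 6 / 12 = 0.5
```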
Affiliation(s)
- Leilei Zhou, Xindao Yin, Tao Zhang, Yuan Feng, Ying Zhao, Mingxu Jin, Mingyang Peng, Chunhua Xing, Fengfang Li, Ziteng Wang, Guoliang Wei, Xiao Jia, Yujun Liu, Xinying Wu, Lingquan Lu
- From the Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing 210006, China (L.Z., X.Y., T.Z., Y.F., Y.Z., M.J., M.P., C.X., F.L., X.W., L.L.); and Yizhun Medical AI Co., Ltd., Beijing, China (Z.W., G.W., X.J., Y.L.)

42
Çallı E, Sogancioglu E, van Ginneken B, van Leeuwen KG, Murphy K. Deep learning for chest X-ray analysis: A survey. Med Image Anal 2021; 72:102125. [PMID: 34171622 DOI: 10.1016/j.media.2021.102125] [Citation(s) in RCA: 106] [Impact Index Per Article: 35.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 05/17/2021] [Accepted: 05/27/2021] [Indexed: 12/14/2022]
Abstract
Recent advances in deep learning have led to a promising performance in many medical image analysis tasks. As the most commonly performed radiological exam, chest radiographs are a particularly important modality for which a variety of applications have been researched. The release of multiple, large, publicly available chest X-ray datasets in recent years has encouraged research interest and boosted the number of publications. In this paper, we review all studies using deep learning on chest radiographs published before March 2021, categorizing works by task: image-level prediction (classification and regression), segmentation, localization, image generation and domain adaptation. Detailed descriptions of all publicly available datasets are included and commercial systems in the field are described. A comprehensive discussion of the current state of the art is provided, including caveats on the use of public datasets, the requirements of clinically useful systems and gaps in the current literature.
Affiliation(s)
- Erdi Çallı, Ecem Sogancioglu, Bram van Ginneken, Kicky G van Leeuwen, Keelin Murphy
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands

43
Polat H, Özerdem MS, Ekici F, Akpolat V. Automatic detection and localization of COVID-19 pneumonia using axial computed tomography images and deep convolutional neural networks. INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY 2021; 31:509-524. [PMID: 33821092 PMCID: PMC8013431 DOI: 10.1002/ima.22558] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Revised: 01/09/2021] [Accepted: 01/27/2021] [Indexed: 05/13/2023]
Abstract
COVID-19 was first reported as an unknown cluster of pneumonia in Wuhan City, Hubei province of China, in late December 2019. The rapid increase in the number of cases diagnosed with COVID-19 and the lack of experienced radiologists can cause diagnostic errors in the interpretation of the images, along with the exceptional workload occurring in this process. Therefore, the urgent development of automated diagnostic systems that can scan radiological images quickly and accurately is important in combating the pandemic. With this motivation, a deep convolutional neural network (CNN)-based model that can automatically detect patterns related to lesions caused by COVID-19 in chest computed tomography (CT) images is proposed in this study. In this context, the image ground truth for the COVID-19 lesions delineated by the radiologist was used as the main criterion of the segmentation process. A total of 16,040 CT image segments were obtained by applying segmentation to the 102 raw CT images. Then, 10,420 CT image segments related to healthy lung regions were labeled as COVID-negative, and 5620 CT image segments, in which findings related to lesions were detected in various forms, were labeled as COVID-positive. With the proposed CNN architecture, 93.26% diagnostic accuracy was achieved. The sensitivity and specificity of the proposed automatic diagnosis model were 93.27% and 93.24%, respectively. Additionally, it has been shown that by scanning small regions of the lungs, COVID-19 pneumonia can be localized automatically with high resolution and the lesion densities can be successfully evaluated quantitatively.
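The patch-wise labeling above (splitting each CT image into small segments and labeling each as COVID-positive or COVID-negative) can be illustrated with a simple non-overlapping tiling. The 4×4 tile size, the any-lesion-pixel labeling rule, and the toy arrays are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def tile(image, size):
    """Split a 2-D image into non-overlapping size x size patches (dims must divide evenly)."""
    h, w = image.shape
    assert h % size == 0 and w % size == 0
    return (image.reshape(h // size, size, w // size, size)
                 .transpose(0, 2, 1, 3)   # order axes as (row block, col block, y, x)
                 .reshape(-1, size, size))

def label_patches(lesion_mask_patches):
    """Label a patch positive (1) if any lesion pixel falls inside it, else negative (0)."""
    return np.array([p.any() for p in lesion_mask_patches], dtype=int)

img = np.arange(64, dtype=float).reshape(8, 8)          # stand-in for a CT slice
lesion = np.zeros((8, 8), dtype=bool); lesion[0:2, 0:2] = True  # lesion in top-left tile only
patches = tile(img, 4)                                  # 4 patches of 4x4
labels = label_patches(tile(lesion, 4))                 # [1, 0, 0, 0]
```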
Affiliation(s)
- Hasan Polat
- Department of Electrical and Energy, Bingol University, Bingöl, Turkey
- Mehmet Siraç Özerdem
- Department of Electrical and Electronics Engineering, Dicle University, Diyarbakır, Turkey
- Faysal Ekici
- Department of Radiology, Dicle University, Diyarbakır, Turkey
- Veysi Akpolat
- Department of Biophysics, Dicle University, Diyarbakır, Turkey

44
Peters AA, Decasper A, Munz J, Klaus J, Loebelenz LI, Hoffner MKM, Hourscht C, Heverhagen JT, Christe A, Ebner L. Performance of an AI based CAD system in solid lung nodule detection on chest phantom radiographs compared to radiology residents and fellow radiologists. J Thorac Dis 2021; 13:2728-2737. [PMID: 34164165 PMCID: PMC8182550 DOI: 10.21037/jtd-20-3522] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
Abstract
Background Despite the decreasing relevance of chest radiography in lung cancer screening, chest radiography is still frequently applied to assess for lung nodules. The aim of the current study was to determine the accuracy of a commercial AI-based CAD system for the detection of artificial lung nodules on chest radiograph phantoms and to compare its performance to that of radiologists in training. Methods Sixty-one anthropomorphic lung phantoms were equipped with 140 randomly deployed artificial lung nodules (5, 8, 10, 12 mm). A random generator chose nodule size and distribution before a two-plane chest X-ray (CXR) of each phantom was performed. Seven blinded radiologists in training (2 fellows, 5 residents) with 2 to 5 years of experience in chest imaging read the CXRs independently on a PACS workstation. Results of the software were recorded separately. The McNemar test was used to compare each radiologist's results to the AI computer-aided-diagnosis (CAD) software in a per-nodule and a per-phantom approach, and Fleiss' kappa was applied for inter-rater and intra-observer agreement. Results Five out of seven readers showed a significantly higher accuracy than the AI algorithm. The pooled accuracies of the radiologists in the nodule-based and phantom-based approaches were 0.59 and 0.82, respectively, whereas the AI-CAD showed accuracies of 0.47 and 0.67. The radiologists' average sensitivity for 10 and 12 mm nodules was 0.80 and dropped to 0.66 for 8 mm (P=0.04) and 0.14 for 5 mm nodules (P<0.001). Both the radiologists and the algorithm demonstrated significantly higher sensitivity for peripheral than for central nodules (0.66 vs. 0.48; P=0.004 and 0.64 vs. 0.094; P=0.025, respectively). Inter-rater agreement was moderate among the radiologists and between radiologists and the AI-CAD software (K'=0.58±0.13 and 0.51±0.1). Intra-observer agreement was calculated for two readers and was almost perfect for the phantom-based approach (K'=0.85±0.05; K'=0.80±0.02) and substantial to almost perfect for the nodule-based approach (K'=0.83±0.02; K'=0.78±0.02). Conclusions As a primary reader, the AI-based CAD system performed worse than radiologists in lung nodule detection on chest phantoms. Chest radiography has reasonable accuracy in lung nodule detection when read by a radiologist alone and may be further improved by an AI-based CAD system as a second reader.
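The McNemar test used above compares two readers on the same cases using only their discordant pairs. A stdlib-only sketch of the continuity-corrected chi-square version; the discordant counts in the example are made up for illustration, not the study's data:

```python
import math

def mcnemar(b, c):
    """Continuity-corrected McNemar test on paired binary outcomes.

    b: cases reader A got right and reader B got wrong
    c: cases reader B got right and reader A got wrong
    Returns (chi-square statistic, two-sided p-value); 1 degree of freedom.
    """
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical example: of 35 nodules rated differently by a radiologist
# and the CAD system, the radiologist was correct on 25, the CAD on 10.
stat, p = mcnemar(b=25, c=10)
```

Because each nodule is judged by both readers, the paired McNemar test is the appropriate comparison here rather than an unpaired test of two proportions.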
Affiliation(s)
- Alan A Peters, Amanda Decasper, Jaro Munz, Jeremias Klaus, Laura I Loebelenz, Maximilian Korbinian Michael Hoffner, Cynthia Hourscht, Andreas Christe, Lukas Ebner
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Johannes T Heverhagen
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Department of BioMedical Research, Experimental Radiology, University of Bern, Bern, Switzerland; Department of Radiology, The Ohio State University, Columbus, OH, USA

45
On the performance of lung nodule detection, segmentation and classification. Comput Med Imaging Graph 2021; 89:101886. [PMID: 33706112 DOI: 10.1016/j.compmedimag.2021.101886] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Revised: 01/11/2021] [Accepted: 02/02/2021] [Indexed: 01/10/2023]
Abstract
Computed tomography (CT) screening is an effective way for early detection of lung cancer in order to improve the survival rate of such a deadly disease. For more than two decades, image processing techniques such as nodule detection, segmentation, and classification have been extensively studied to assist physicians in identifying nodules from hundreds of CT slices to measure shapes and HU distributions of nodules automatically and to distinguish their malignancy. Thanks to new parallel computation, multi-layer convolution, nonlinear pooling operation, and the big data learning strategy, recent development of deep-learning algorithms has shown great progress in lung nodule screening and computer-assisted diagnosis (CADx) applications due to their high sensitivity and low false positive rates. This paper presents a survey of state-of-the-art deep-learning-based lung nodule screening and analysis techniques focusing on their performance and clinical applications, aiming to help better understand the current performance, the limitation, and the future trends of lung nodule analysis.
46
Elzeki OM, Abd Elfattah M, Salem H, Hassanien AE, Shams M. A novel perceptual two layer image fusion using deep learning for imbalanced COVID-19 dataset. PeerJ Comput Sci 2021; 7:e364. [PMID: 33817014 PMCID: PMC7959632 DOI: 10.7717/peerj-cs.364] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2020] [Accepted: 12/30/2020] [Indexed: 05/31/2023]
Abstract
BACKGROUND AND PURPOSE COVID-19 is a new strain of virus that has brought life to a standstill worldwide. At this time, the new coronavirus COVID-19 is spreading rapidly across the world and poses a threat to people's health. Experimental medical tests and analysis have shown that lung infection occurs in almost all COVID-19 patients. Although computed tomography of the chest is a useful imaging method for diagnosing lung-related diseases, chest X-ray (CXR) is more widely available, mainly due to its lower cost and faster results. Deep learning (DL), one of the most popular artificial intelligence techniques, is an effective way to help doctors analyze the large numbers of CXR images that are crucial to performance. MATERIALS AND METHODS In this article, we propose a novel perceptual two-layer image fusion using DL to obtain more informative CXR images for a COVID-19 dataset. To assess the proposed algorithm's performance, the dataset used for this work includes 87 CXR images acquired from 25 cases, all of which were confirmed with COVID-19. Dataset preprocessing is needed to facilitate the role of convolutional neural networks (CNN). Thus, a hybrid decomposition and fusion of the Nonsubsampled Contourlet Transform (NSCT) and CNN_VGG19 as feature extractor was used. RESULTS Our experimental results show that imbalanced COVID-19 datasets can be reliably generated by the algorithm established here. Compared to the COVID-19 dataset used, the fused images have more features and characteristics. Six metrics are applied to evaluate performance, namely QAB/F, QMI, PSNR, SSIM, SF, and STD, to assess various medical image fusion (MIF) methods. In QMI, PSNR, and SSIM, the proposed algorithm NSCT + CNN_VGG19 achieves the greatest values, and the feature characteristics found in the fused image are the largest. We can deduce that the proposed fusion algorithm is efficient enough to generate CXR COVID-19 images that are more useful for the examiner to explore patient status. CONCLUSIONS A novel image fusion algorithm using DL for an imbalanced COVID-19 dataset is the crucial contribution of this work. Extensive experimental results display that the proposed algorithm NSCT + CNN_VGG19 outperforms competitive image fusion algorithms.
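Of the fusion metrics listed, PSNR has the simplest precise definition: it is derived from the mean squared error between a reference image and the fused image. A minimal stdlib sketch, assuming 8-bit images; the toy pixel values are illustrative:

```python
import math

def psnr(reference, fused, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images (nested lists)."""
    flat_r = [p for row in reference for p in row]
    flat_f = [p for row in fused for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_r, flat_f)) / len(flat_r)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

ref = [[0, 255], [128, 64]]
out = [[0, 251], [132, 64]]  # small distortions of two pixels
value = psnr(ref, out)       # MSE = 8, so about 39.1 dB
```

Higher PSNR means the fused image deviates less from the reference; an infinite value marks a pixel-perfect match.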
Affiliation(s)
- Omar M. Elzeki
- Faculty of Computers and Information Sciences, Mansoura University, Mansoura, Egypt
- Hanaa Salem
- Communications and Computers Engineering Department, Faculty of Engineering, Delta University for Science and Technology, Gamasa, Egypt
- Aboul Ella Hassanien
- Faculty of Computers and Artificial Intelligence, Cairo University, Cairo, Egypt
- Scientific Research Group in Egypt (SRGE), Cairo, Egypt
- Mahmoud Shams
- Faculty of Artificial Intelligence, Kafrelsheikh University, Kafrelsheikh, Egypt

47
Multiscale CNN with compound fusions for false positive reduction in lung nodule detection. Artif Intell Med 2021; 113:102017. [PMID: 33685584 DOI: 10.1016/j.artmed.2021.102017] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2019] [Revised: 07/18/2020] [Accepted: 07/21/2020] [Indexed: 12/20/2022]
Abstract
Pulmonary lung nodules are often benign at an early stage, but they can easily become malignant and metastasize to other locations in later stages. The morphological characteristics of these nodule instances vary widely in size, shape, and texture. Co-existing lung anatomical structures, such as lung walls and blood vessels surrounding these nodules, add complex contextual information. As a result, early diagnosis that enables decisive intervention using Computer-Aided Diagnosis (CAD) systems faces serious challenges, especially at low false-positive rates. In this paper, we propose a new Convolutional Neural Network (CNN) architecture called Multiscale CNN with Compound Fusions (MCNN-CF), which uses multiscale 3D patches as inputs and fuses intermediate features at two different depths of the network in two diverse fashions. The network is trained by a new iterative training procedure adapted to circumvent the class-imbalance problem and obtained a Competitive Performance Metric (CPM) score of 0.948 when tested on the LUNA16 dataset. Experimental results illustrate the robustness of the proposed system, which increases the confidence of the prediction probabilities in detecting a wide variety of nodules.
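The multiscale-input idea can be sketched as follows: cubic patches of several sizes are cut around each candidate voxel and resampled to a shared grid before being fed to the network branches. The specific patch sizes, nearest-neighbour resampling, and edge padding here are illustrative assumptions, not the authors' exact preprocessing.

```python
import numpy as np

def multiscale_patches(volume, center, sizes=(16, 24, 32), out_size=16):
    """Extract cubic patches of several sizes around a candidate voxel
    and resample each to a common grid, as multiscale network inputs."""
    patches = []
    for s in sizes:
        half = s // 2
        # Pad so candidates near the border still yield full cubes.
        padded = np.pad(volume, half, mode="edge")
        cz, cy, cx = (c + half for c in center)
        cube = padded[cz - half:cz + half, cy - half:cy + half, cx - half:cx + half]
        # Nearest-neighbour resampling to the shared input resolution.
        idx = (np.arange(out_size) * s) // out_size
        patches.append(cube[np.ix_(idx, idx, idx)])
    return np.stack(patches)  # shape: (len(sizes), out_size, out_size, out_size)
```

The larger patches trade spatial resolution for surrounding context (vessels, lung wall), which is the motivation for fusing the branches' intermediate features.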
|
48
|
Sari S, Soesanti I, Setiawan NA. Development of CAD System for Automatic Lung Nodule Detection: A Review. BIO WEB OF CONFERENCES 2021. [DOI: 10.1051/bioconf/20214104001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
Lung cancer is a type of cancer that spreads rapidly and is the leading cause of mortality globally. Computer-Aided Detection (CAD) systems for automatic lung cancer detection have a significant influence on human survival. In this article, we summarize the relevant literature on CAD systems for lung cancer detection. A CAD system includes preprocessing techniques, segmentation, lung nodule detection, and false-positive reduction with feature extraction. In evaluating work on this topic, we considered the selected literature, the dataset used for method validation, the number of cases, the image size, the techniques used for nodule detection and feature extraction, sensitivity, and false-positive rates. Our analysis shows that the best-performing CAD systems combine high sensitivity with low false-positive rates for lung nodule detection. Furthermore, they use large datasets, so future systems can achieve improved accuracy and precision in detection. CNN is the best lung nodule detection method and merits further development; it is preferable because it has seen substantial growth in recent years and has yielded impressive outcomes. We hope this article will help professional researchers and radiologists in developing CAD systems for lung cancer detection.
|
49
|
|
50
|
NSCR-Based DenseNet for Lung Tumor Recognition Using Chest CT Image. BIOMED RESEARCH INTERNATIONAL 2020; 2020:6636321. [PMID: 33490248 PMCID: PMC7787714 DOI: 10.1155/2020/6636321] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/24/2020] [Revised: 11/15/2020] [Accepted: 12/04/2020] [Indexed: 01/10/2023]
Abstract
Nonnegative sparse representation has become a popular methodology in medical analysis and diagnosis in recent years. To resolve network degradation, high dimensionality in feature extraction, data redundancy, and other issues faced when medical image parameters are trained using convolutional neural networks, this paper proposes a nonnegative, sparse, and collaborative representation classification of DenseNet (DenseNet-NSCR) for lung tumors in chest CT images: firstly, the parameters of a pretrained DenseNet model are initialized using transfer learning; secondly, DenseNet is trained on CT images to extract feature vectors from the fully connected layer; thirdly, a nonnegative, sparse, and collaborative representation (NSCR) is used to represent the feature vector and solve the coding coefficient matrix; fourthly, residual similarity is used for classification. The experimental results show that DenseNet-NSCR classification outperforms the other models; evaluation indexes such as specificity and sensitivity are also high; and comparison experiments against the AlexNet, GoogleNet, and DenseNet-201 models show that the method has better robustness and generalization ability.
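The NSCR classification step can be illustrated with a minimal sketch: code the feature vector over the concatenated class dictionaries under a nonnegativity constraint, then assign the class whose atoms reconstruct it with the smallest residual. The projected-gradient solver, the ridge weight, and the dictionary shapes below are assumptions made for illustration, not the authors' actual solver or settings.

```python
import numpy as np

def nnls_pg(A, b, lam=0.01, iters=2000):
    """Nonnegative ridge-regularized least squares via projected gradient:
    min_x 0.5*||A x - b||^2 + 0.5*lam*||x||^2  subject to  x >= 0."""
    x = np.zeros(A.shape[1])
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)  # 1 / Lipschitz constant
    for _ in range(iters):
        grad = A.T @ (A @ x - b) + lam * x
        x = np.maximum(x - step * grad, 0.0)  # gradient step, then project onto x >= 0
    return x

def nscr_classify(feature, class_dicts, lam=0.01):
    """Code the feature over the concatenated class dictionaries, then assign
    the class whose coefficient slice reconstructs it with the smallest residual."""
    A = np.hstack(class_dicts)
    x = nnls_pg(A, feature, lam)
    residuals, start = [], 0
    for D in class_dicts:
        coef = x[start:start + D.shape[1]]
        residuals.append(np.linalg.norm(feature - D @ coef))
        start += D.shape[1]
    return int(np.argmin(residuals))
```

In the paper's pipeline, `feature` would be the DenseNet fully-connected-layer vector and each dictionary would hold training features of one class; here random matrices serve as stand-ins.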
|