1. Lan T, Zeng F, Yi Z, Xu X, Zhu M. ICNoduleNet: Enhancing Pulmonary Nodule Detection Performance on Sharp Kernel CT Imaging. IEEE J Biomed Health Inform 2024;28:4751-4760. PMID: 38758615. DOI: 10.1109/jbhi.2024.3402186.
Abstract
Thoracic computed tomography (CT) currently plays the primary role in pulmonary nodule detection, and the reconstruction kernel significantly affects the performance of computer-aided pulmonary nodule detectors. The effect of kernel selection on performance has been overlooked in pulmonary nodule detection. This paper first introduces a novel pulmonary nodule detection dataset, Reconstruction Kernel Imaging for Pulmonary Nodule Detection (RKPN), for quantifying algorithm differences between the two imaging types. The dataset contains pairs of images taken from the same patient on the same date, featuring both smooth (B31f) and sharp (B60f) kernel reconstructions; all other imaging parameters and pulmonary nodule labels are identical across these pairs. Extensive quantification reveals that mainstream detectors perform better on smooth kernel imaging than on sharp kernel imaging. To address suboptimal detection on sharp kernel imaging, we further propose an image conversion-based pulmonary nodule detector called ICNoduleNet. A lightweight 3D slice-channel converter (LSCC) module converts sharp kernel images into smooth kernel images, sufficiently learning inter-slice and inter-channel feature information without introducing excessive parameters. Thorough experiments validate the effectiveness of ICNoduleNet: taking sharp kernel images as input, it achieves detection performance comparable or even superior to a baseline that uses smooth kernel images.
2. Xu R, Liu Z, Luo Y, Hu H, Shen L, Du B, Kuang K, Yang J. SGDA: Towards 3-D Universal Pulmonary Nodule Detection via Slice Grouped Domain Attention. IEEE/ACM Trans Comput Biol Bioinform 2024;21:1093-1105. PMID: 37028322. DOI: 10.1109/tcbb.2023.3253713.
Abstract
Lung cancer is the leading cause of cancer death worldwide. The best defense against lung cancer is to diagnose pulmonary nodules at an early stage, which is usually accomplished with the aid of thoracic computed tomography (CT). As deep learning thrives, convolutional neural networks (CNNs) have been introduced into pulmonary nodule detection to help doctors with this labor-intensive task and have proved very effective. However, current pulmonary nodule detection methods are usually domain-specific and cannot satisfy the requirements of diverse real-world scenarios. To address this issue, we propose a slice grouped domain attention (SGDA) module to enhance the generalization capability of pulmonary nodule detection networks. The attention module works in the axial, coronal, and sagittal directions. In each direction, the input feature is divided into groups, and for each group a universal adapter bank captures the feature subspaces of the domains spanned by all pulmonary nodule datasets. The bank outputs are then combined from the domain perspective to modulate the input group. Extensive experiments demonstrate that SGDA enables substantially better multi-domain pulmonary nodule detection performance than state-of-the-art multi-domain learning methods.
3. Mao K, Jing X, Wang G, Chang Y, Liu J, Zhao Y, Yu S, Liu J. A novel open-source CADs platform for 3D CT pulmonary analysis. Comput Biol Med 2024;169:107878. PMID: 38141446. DOI: 10.1016/j.compbiomed.2023.107878.
Abstract
Computer-aided diagnosis (CAD) systems play a vital role in the early detection of pulmonary nodules, helping to reduce lung cancer mortality. To better serve clinicians, this paper proposes an efficient open-source CAD platform with flexible equipment support, user-friendly interfaces, and complete functionality for 3D CT pulmonary nodule analysis. The platform's design and implementation fully consider application scenarios and system requirements. The platform supplies core functions for (1) Basic Image Processing, (2) Intelligent Image Analysis, (3) Multi-View Image Visualization, (4) Report Editing and Generation, (5) User Information Management, and (6) Inference Service Monitoring. Notably, other state-of-the-art or user-defined algorithms can be integrated as plugin modules without interfering with the system architecture. System evaluation with use-case testing demonstrates the effectiveness and universality of the proposed platform.
Affiliation(s)
- Keming Mao: Software College, Northeastern University, Shenyang, China
- Xin Jing: Software College, Northeastern University, Shenyang, China
- Gao Wang: Software College, Northeastern University, Shenyang, China
- Yachen Chang: School of Software Technology, Zhejiang University, Ningbo, China
- Jiale Liu: Software College, Northeastern University, Shenyang, China
- Yuhai Zhao: College of Computer Science and Engineering, Northeastern University, Shenyang, China
- Shiyu Yu: China Mobile Group Liaoning Company Limited, Shenyang, China
- Jingyu Liu: China Mobile Group Liaoning Company Limited, Shenyang, China
4. Saikia S, Si T, Deb D, Bora K, Mallik S, Maulik U, Zhao Z. Lesion detection in women breast's dynamic contrast-enhanced magnetic resonance imaging using deep learning. Sci Rep 2023;13:22555. PMID: 38110462. PMCID: PMC10728155. DOI: 10.1038/s41598-023-48553-z.
Abstract
Breast cancer is one of the most common cancers in women and the second leading cause of cancer death in women after lung cancer. Recent technological advances in breast cancer treatment offer hope to millions of women worldwide. Segmentation of breast Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) is one of the necessary tasks in the diagnosis and detection of breast cancer. Currently, U-Net, a popular deep learning model, is extensively used in biomedical image segmentation. This article aims to advance the state of the art through an in-depth analysis of various U-Net models for lesion detection in women's breast DCE-MRI. We perform an empirical study of the effectiveness and efficiency of U-Net and its derived deep learning models, including ResUNet, Dense UNet, DUNet, Attention U-Net, UNet++, MultiResUNet, RAUNet, Inception U-Net, and U-Net GAN, for lesion detection in breast DCE-MRI. All models are applied to the benchmark set of 100 sagittal T2-weighted fat-suppressed DCE-MRI slices from 20 patients, and their performance is compared. A comparative study is also conducted with V-Net, W-Net, and DeepLabV3+. The non-parametric Wilcoxon Signed Rank Test is used to analyze the significance of the quantitative results. Furthermore, Multi-Criteria Decision Analysis (MCDA) is used to evaluate overall performance across accuracy, precision, sensitivity, F1-score, specificity, Geometric-Mean, DSC, and false-positive rate. The RAUNet segmentation model achieved a high accuracy of 99.76%, sensitivity of 85.04%, precision of 90.21%, and Dice Similarity Coefficient (DSC) of 85.04%, whereas ResUNet achieved 99.62% accuracy, 62.26% sensitivity, 99.56% precision, and 72.86% DSC. ResUNet is found to be the most effective model based on MCDA, while U-Net GAN takes the least computational time to perform the segmentation task. Both quantitative and qualitative results demonstrate that the ResUNet model performs better than the other models in segmentation and lesion detection, though the computational time required to achieve this varies.
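For readers comparing the figures above, the reported metrics follow the standard pixel-wise definitions over a binary segmentation's confusion counts. The sketch below uses those textbook formulas; it is not the authors' evaluation code.

```python
# Textbook pixel-wise metrics from a binary segmentation confusion matrix
# (tp/fp/tn/fn = true/false positive/negative pixel counts).
import math

def segmentation_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    sensitivity = tp / (tp + fn)                 # a.k.a. recall, TPR
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    dsc = 2 * tp / (2 * tp + fp + fn)            # Dice similarity coefficient
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    g_mean = math.sqrt(sensitivity * specificity)
    fpr = fp / (fp + tn)                         # false-positive rate
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy, "dsc": dsc,
            "f1": f1, "g_mean": g_mean, "fpr": fpr}
```

Note that for binary masks the DSC and the F1-score coincide algebraically, so a model's DSC can be read directly as its pixel-level F1.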
Affiliation(s)
- Sudarshan Saikia: Information Technology Department, Oil India Limited, Duliajan, Assam 786602, India
- Tapas Si: AI Innovation Lab, Department of Computer Science & Engineering, University of Engineering & Management, Jaipur, GURUKUL, Jaipur, Rajasthan 303807, India
- Darpan Deb: Department of Computer Application, Christ University, Bengaluru 560029, India
- Kangkana Bora: Department of Computer Science and Information Technology, Cotton University, Guwahati, Assam 781001, India
- Saurav Mallik: Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA 02115, USA
- Ujjwal Maulik: Department of Computer Science and Engineering, Jadavpur University, Kolkata, India
- Zhongming Zhao: Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, USA
5. Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. Wiley Interdiscip Rev Data Min Knowl Discov 2023;13:e1510. PMID: 38249785. PMCID: PMC10796150. DOI: 10.1002/widm.1510.
Abstract
Over the last decade, deep learning (DL) has driven a paradigm shift in computer vision and image recognition, creating widespread opportunities for artificial intelligence in research as well as industrial applications. DL has been extensively studied in medical imaging, including applications related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world population. Pulmonary imaging has been widely investigated toward improving our understanding of disease etiologies, early diagnosis, and assessment of disease progression and clinical outcomes. DL has been broadly applied to pulmonary image processing challenges including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases, the roles of imaging in translational and clinical pulmonary research, and applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Affiliation(s)
- Punam K Saha: Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242
6. Jenkin Suji R, Bhadauria SS, Wilfred Godfrey W. A survey and taxonomy of 2.5D approaches for lung segmentation and nodule detection in CT images. Comput Biol Med 2023;165:107437. PMID: 37717526. DOI: 10.1016/j.compbiomed.2023.107437.
Abstract
CAD systems for lung cancer diagnosis and detection can offer unbiased, tireless diagnostics with minimal variance, decreasing the mortality rate and improving the five-year survival rate. Lung segmentation and lung nodule detection are critical steps in the lung cancer CAD pipeline. The literature on lung segmentation and lung nodule detection mostly comprises techniques that process 3-D volumes or 2-D slices, along with surveys of those techniques; however, surveys that highlight 2.5D techniques for lung segmentation and lung nodule detection are still lacking. This paper presents background and discussion on 2.5D methods to fill this gap. It also gives a taxonomy of 2.5D approaches with a detailed description of each. Based on the taxonomy, various 2.5D techniques for lung segmentation and lung nodule detection are clustered into these approaches, followed by possible future work in this direction.
7. Armato SG, Drukker K, Hadjiiski L. AI in medical imaging grand challenges: translation from competition to research benefit and patient care. Br J Radiol 2023;96:20221152. PMID: 37698542. PMCID: PMC10546459. DOI: 10.1259/bjr.20221152.
Abstract
Artificial intelligence (AI), in one form or another, has been a part of medical imaging for decades. The recent evolution of AI into approaches such as deep learning has dramatically accelerated its application across a wide range of radiologic settings. Despite the promise of AI, developers and users of AI technology must be fully aware of its potential biases and pitfalls, and this knowledge must be incorporated throughout the AI system development pipeline of training, validation, and testing. Grand challenges offer an opportunity to advance the development of AI methods for targeted applications and provide a mechanism for both directing and facilitating the development of AI systems. In the process, a grand challenge centralizes with the challenge organizers the burden of providing a valid benchmark test set to assess the performance and generalizability of participants' models, along with the collection and curation of image metadata, clinical/demographic information, and the required reference standard. The most relevant grand challenges are those designed to maximize the open-science nature of the competition, with code and trained models deposited for future public access. The ultimate goal of AI grand challenges is to foster the translation of AI systems from competition to research benefit and patient care. Rather than reference the many medical imaging grand challenges organized by groups such as MICCAI, RSNA, AAPM, and grand-challenge.org, this review assesses the role of grand challenges in promoting AI technologies for research advancement and eventual clinical implementation, including their promises and limitations.
Affiliation(s)
- Samuel G Armato: Department of Radiology, The University of Chicago, Chicago, Illinois, USA
- Karen Drukker: Department of Radiology, The University of Chicago, Chicago, Illinois, USA
- Lubomir Hadjiiski: Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
8. Cellina M, Cacioppa LM, Cè M, Chiarpenello V, Costa M, Vincenzo Z, Pais D, Bausano MV, Rossini N, Bruno A, Floridi C. Artificial Intelligence in Lung Cancer Screening: The Future Is Now. Cancers (Basel) 2023;15:4344. PMID: 37686619. PMCID: PMC10486721. DOI: 10.3390/cancers15174344.
Abstract
Lung cancer has one of the worst morbidity and fatality rates of any malignant tumour. Most lung cancers are discovered in the middle and late stages of the disease, when treatment choices are limited and patients' survival rates are low. The aim of lung cancer screening is the identification of lung malignancies in the early stage of the disease, when more options for effective treatment are available, to improve patients' outcomes. The desire to improve the efficacy and efficiency of clinical care continues to drive multiple innovations into practice for better patient management, and in this context artificial intelligence (AI) plays a key role. AI may have a role in each step of the lung cancer screening workflow. First, in the acquisition of low-dose computed tomography for screening programs, AI-based reconstruction allows a further dose reduction while maintaining optimal image quality. Second, AI can help personalize screening programs through risk stratification based on the collection and analysis of large amounts of imaging and clinical data. Third, a computer-aided detection (CAD) system provides automatic detection of potential lung nodules with high sensitivity, working as a concurrent or second reader and reducing the time needed for image interpretation. Finally, once a nodule has been detected, it should be characterized as benign or malignant. Two AI-based approaches are available for this task: the first is automatic segmentation with a consequent assessment of the lesion's size, volume, and densitometric features; the second consists of segmentation followed by radiomic feature extraction to characterize the whole abnormality, providing the so-called "virtual biopsy". This narrative review aims to provide an overview of all possible AI applications in lung cancer screening.
Affiliation(s)
- Michaela Cellina: Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, 20121 Milano, Italy
- Laura Maria Cacioppa: Department of Clinical, Special and Dental Sciences, University Politecnica delle Marche, 60126 Ancona, Italy; Division of Interventional Radiology, Department of Radiological Sciences, University Hospital “Azienda Ospedaliera Universitaria delle Marche”, 60126 Ancona, Italy
- Maurizio Cè: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy
- Vittoria Chiarpenello: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy
- Marco Costa: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy
- Zakaria Vincenzo: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy
- Daniele Pais: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy
- Maria Vittoria Bausano: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy
- Nicolò Rossini: Department of Clinical, Special and Dental Sciences, University Politecnica delle Marche, 60126 Ancona, Italy
- Alessandra Bruno: Department of Clinical, Special and Dental Sciences, University Politecnica delle Marche, 60126 Ancona, Italy
- Chiara Floridi: Department of Clinical, Special and Dental Sciences, University Politecnica delle Marche, 60126 Ancona, Italy; Division of Interventional Radiology, Department of Radiological Sciences, University Hospital “Azienda Ospedaliera Universitaria delle Marche”, 60126 Ancona, Italy; Division of Radiology, Department of Radiological Sciences, University Hospital “Azienda Ospedaliera Universitaria delle Marche”, 60126 Ancona, Italy
9. Si T, Patra DK, Mallik S, Bandyopadhyay A, Sarkar A, Qin H. Identification of breast lesion through integrated study of gorilla troops optimization and rotation-based learning from MRI images. Sci Rep 2023;13:11577. PMID: 37463919. PMCID: PMC10354050. DOI: 10.1038/s41598-023-36300-3.
Abstract
Breast cancer has emerged as the most life-threatening disease among women around the world. Early detection and treatment of breast cancer are thought to reduce the need for surgery and boost the survival rate. This article investigates Magnetic Resonance Imaging (MRI) segmentation techniques for breast cancer diagnosis. Kapur's entropy-based multilevel thresholding with Gorilla Troops Optimization (GTO) is used to determine optimal thresholds for breast DCE-MRI lesion segmentation. An improved GTO, called GTORBL, is developed by incorporating rotational opposition-based learning (RBL) into GTO and is applied to the same problem. The proposed approaches are tested on 100 T2-weighted sagittal (T2 WS) DCE-MRI slices from 20 patients and compared with the Tunicate Swarm Algorithm (TSA), Particle Swarm Optimization (PSO), Arithmetic Optimization Algorithm (AOA), Slime Mould Algorithm (SMA), Multi-verse Optimization (MVO), Hidden Markov Random Field (HMRF), Improved Markov Random Field (IMRF), and Conventional Markov Random Field (CMRF). The proposed GTO-based approach achieves a Dice Similarity Coefficient (DSC), sensitivity, and accuracy of [Formula: see text], [Formula: see text], and [Formula: see text], respectively, while the GTORBL-based method achieves an accuracy of [Formula: see text], sensitivity of [Formula: see text], and DSC of [Formula: see text]. A one-way ANOVA test followed by Tukey HSD, together with the Wilcoxon Signed Rank Test, is used to examine the results. Furthermore, Multi-Criteria Decision Making is used to evaluate overall performance across sensitivity, accuracy, false-positive rate, precision, specificity, [Formula: see text]-score, Geometric-Mean, and DSC. According to both quantitative and qualitative findings, the proposed strategies outperform the other compared methodologies.
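Kapur's entropy criterion, which GTO and GTORBL optimize here, scores a candidate set of thresholds by the summed Shannon entropies of the histogram classes they induce; the optimizer searches for the threshold set that maximizes this score. Below is a compact sketch of the objective only — the authors' GTO search and implementation details are not shown.

```python
# Kapur's entropy objective for multilevel thresholding (illustrative).
# `hist` is an intensity histogram; `thresholds` are bin indices that cut
# it into classes. An optimizer (e.g. GTO) MAXIMIZES this score.
import math

def kapur_entropy(hist, thresholds):
    total = sum(hist)
    probs = [h / total for h in hist]
    cuts = [0] + sorted(thresholds) + [len(hist)]
    score = 0.0
    for lo, hi in zip(cuts, cuts[1:]):
        w = sum(probs[lo:hi])  # probability mass of this class
        if w <= 0:
            continue
        # Shannon entropy of the normalized within-class distribution
        score += -sum((p / w) * math.log(p / w) for p in probs[lo:hi] if p > 0)
    return score
```

A metaheuristic like GTO then proposes threshold sets, evaluates each with this function, and keeps the best-scoring set.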
Affiliation(s)
- Tapas Si: Department of Computer Science & Engineering, University of Engineering & Management, Jaipur, GURUKUL, Sikar Road (NH-11), Udaipuria Mod, Jaipur, Rajasthan 303807, India
- Dipak Kumar Patra: Department of Computer Science, Hijli College, Kharagpur, West Bengal 721306, India
- Saurav Mallik: Department of Environmental Health, Harvard T H Chan School of Public Health, Boston, MA, USA
- Anjan Bandyopadhyay: School of Computer Engineering, Kalinga Institute of Industrial Technology (KIIT), Bhubaneswar, Odisha, India
- Achyuth Sarkar: Department of Computer Science & Engineering, National Institute of Technology Arunachal Pradesh, Arunachal Pradesh 791113, India
- Hong Qin: Department of Computer Science and Engineering, University of Tennessee at Chattanooga, Chattanooga, TN, USA
10. Modak S, Abdel-Raheem E, Rueda L. Applications of Deep Learning in Disease Diagnosis of Chest Radiographs: A Survey on Materials and Methods. Biomed Eng Adv 2023. DOI: 10.1016/j.bea.2023.100076.
11. Zhang H, Chen L, Gu X, Zhang M, Qin Y, Yao F, Wang Z, Gu Y, Yang GZ. Trustworthy learning with (un)sure annotation for lung nodule diagnosis with CT. Med Image Anal 2023;83:102627. PMID: 36283199. DOI: 10.1016/j.media.2022.102627.
Abstract
Recent evolution in deep learning has proven its value for CT-based lung nodule classification. Most current techniques are intrinsically black-box systems and suffer from two generalizability issues in clinical practice. First, benign-malignant discrimination is often assessed by human observers without pathologic diagnoses at the nodule level; we term such data "unsure-annotation data". Second, with only patch-level labels, a classifier does not necessarily acquire reliable nodule features for stable learning and robust prediction. In this study, we construct a sure-annotation dataset with pathologically confirmed labels and propose a collaborative learning framework that facilitates sure nodule classification by integrating knowledge from unsure-annotation data through nodule segmentation and malignancy score regression. A loss function is designed to learn reliable features by introducing interpretability constraints regulated with nodule segmentation maps. Furthermore, based on model inference results that reflect the understanding of both machine and experts, we explore a new nodule analysis method for similar historical nodule retrieval and interpretable diagnosis. Detailed experimental results demonstrate that our approach achieves improved performance coupled with trustworthy model reasoning for lung cancer prediction with limited data. Extensive cross-evaluation results further illustrate the effect of unsure-annotation data on deep-learning-based lung nodule classification.
Affiliation(s)
- Hanxiao Zhang: Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Liang Chen: Department of Thoracic Surgery, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Xiao Gu: Imperial College London, London, UK
- Minghui Zhang: Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Feng Yao: Department of Thoracic Surgery, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Zhexin Wang: Department of Thoracic Surgery, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Yun Gu: Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China; Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai, China
- Guang-Zhong Yang: Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
12. Jin H, Yu C, Gong Z, Zheng R, Zhao Y, Fu Q. Machine learning techniques for pulmonary nodule computer-aided diagnosis using CT images: A systematic review. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104104.
13. Wang H, Tang N, Zhang C, Hao Y, Meng X, Li J. Practice toward standardized performance testing of computer-aided detection algorithms for pulmonary nodule. Front Public Health 2022;10:1071673. PMID: 36568775. PMCID: PMC9768365. DOI: 10.3389/fpubh.2022.1071673.
Abstract
This study implemented a practical, standardized protocol for testing the performance of computer-aided detection (CAD) algorithms for pulmonary nodules. A test dataset was established according to a standardized procedure covering data collection, curation, and annotation; six types of pulmonary nodules were manually annotated as the reference standard. Three rules for matching algorithm output against the reference standard were applied and compared: (1) "center hit" (whether the center of the algorithm-highlighted region of interest (ROI) falls within the reference-standard ROI); (2) "center distance" (whether the distance between the algorithm-highlighted ROI center and the reference-standard center is below a certain threshold); (3) "area overlap" (whether the overlap between the algorithm-highlighted ROI and the reference standard is above a certain threshold). Performance metrics were calculated and compared across ten algorithms under test (AUTs). The test set currently consists of CT sequences from 593 patients. Under the "center hit" rule, the average recall, average precision, and average F1 score of the ten AUTs were 54.68%, 38.19%, and 42.39%, respectively; under the "center distance" rule, 55.43%, 38.69%, and 42.96%; and under the "area overlap" rule, 40.35%, 27.75%, and 31.13%. Among the six nodule types, the AUTs showed the highest miss rate for pure ground-glass nodules (59.32% on average), followed by pleural nodules (49.80%) and solid nodules (42.21%). The testing results changed with the specific matching method adopted, and the AUTs showed uneven performance across nodule types. This centralized testing protocol supports comparison between algorithms with similar intended use and helps evaluate algorithm performance.
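The three matching rules described in this abstract can be made concrete for axis-aligned ROIs. The sketch below illustrates the rule definitions and is not the study's test harness; in particular, measuring "area overlap" as intersection over the reference area is an assumption, since the abstract does not specify the overlap measure.

```python
# Sketch of the three ROI-matching rules for axis-aligned 2-D boxes
# given as (x_min, y_min, x_max, y_max). Thresholds and the overlap
# measure (intersection / reference area) are illustrative assumptions.
import math
from dataclasses import dataclass

@dataclass
class ROI:
    x0: float
    y0: float
    x1: float
    y1: float

    @property
    def center(self):
        return ((self.x0 + self.x1) / 2, (self.y0 + self.y1) / 2)

    @property
    def area(self) -> float:
        return max(0.0, self.x1 - self.x0) * max(0.0, self.y1 - self.y0)

def center_hit(pred: ROI, ref: ROI) -> bool:
    """Rule 1: the predicted ROI's center falls inside the reference ROI."""
    cx, cy = pred.center
    return ref.x0 <= cx <= ref.x1 and ref.y0 <= cy <= ref.y1

def center_distance(pred: ROI, ref: ROI, threshold: float) -> bool:
    """Rule 2: the distance between the two ROI centers is below a threshold."""
    (px, py), (rx, ry) = pred.center, ref.center
    return math.hypot(px - rx, py - ry) < threshold

def area_overlap(pred: ROI, ref: ROI, threshold: float) -> bool:
    """Rule 3: overlap between predicted and reference ROI exceeds a threshold."""
    ix = max(0.0, min(pred.x1, ref.x1) - max(pred.x0, ref.x0))
    iy = max(0.0, min(pred.y1, ref.y1) - max(pred.y0, ref.y0))
    return (ix * iy) / ref.area > threshold
```

Once a predicted ROI is matched to a reference nodule under the chosen rule, it counts as a true positive; unmatched predictions and unmatched reference nodules feed the precision, recall, and F1 figures reported above.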
Affiliation(s)
- Hao Wang: Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Na Tang: School of Bioengineering, Chongqing University, Chongqing, China
- Chao Zhang: Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Ye Hao: Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Xiangfeng Meng (corresponding author): Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Jiage Li: Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
14. Nadeem SA, Comellas AP, Hoffman EA, Saha PK. Airway Detection in COPD at Low-Dose CT Using Deep Learning and Multiparametric Freeze and Grow. Radiol Cardiothorac Imaging 2022;4:e210311. PMID: 36601453. PMCID: PMC9806731. DOI: 10.1148/ryct.210311.
Abstract
PURPOSE To present and validate a fully automated airway detection method at low-dose CT in patients with chronic obstructive pulmonary disease (COPD). MATERIALS AND METHODS In this retrospective study, deep learning (DL) and freeze-and-grow (FG) methods were optimized and applied to automatically detect airways at low-dose CT. Four data sets were used: two data sets consisting of matching standard- and low-dose CT scans from the Genetic Epidemiology of COPD (COPDGene) phase II (2014-2017) cohort (n = 2 × 236; mean age ± SD, 70 years ± 9; 123 women); one data set consisting of low-dose CT scans from the COPDGene phase III (2018-2020) cohort (n = 335; mean age ± SD, 73 years ± 8; 173 women); and one data set consisting of low-dose, anonymized CT scans from the 2003 Dutch-Belgian Randomized Lung Cancer Screening trial (n = 55) acquired by using different CT scanners. Performance measures for different methods were computed and compared by using the Wilcoxon signed rank test. RESULTS At low-dose CT, 56 294 of 62 480 (90.1%) airways of the reference total airway count (TAC) and 32 109 of 37 864 (84.8%) airways of the peripheral TAC (TACp), detected at standard-dose CT, were detected. Significant losses (P < .001) of 14 526 of 76 453 (19.0%) airways and 884 of 6908 (12.8%) airways in the TAC and 12 256 of 43 462 (28.2%) airways and 699 of 3882 (18.0%) airways in the TACp were observed, respectively, for the multiprotocol and multiscanner data without retraining. When using the automated low-dose CT method, TAC values of 347, 342, 323, and 266 and TACp values of 205, 202, 289, and 141 were observed for those who have never smoked and participants at Global Initiative for Chronic Obstructive Lung Disease stages 0, 1, and 2, respectively, which were superior to the respective values previously reported for matching groups when using a semiautomated method at standard-dose CT. 
CONCLUSION A low-cost, automated CT-based airway detection method was suitable for investigation of airway phenotypes at low-dose CT. Keywords: Airway, Airway Count, Airway Detection, Chronic Obstructive Pulmonary Disease, CT, Deep Learning, Generalizability, Low-Dose CT, Segmentation, Thorax, Lung. Clinical trial registration no.: NCT00608764. Supplemental material is available for this article. © RSNA, 2022.
15
Katase S, Ichinose A, Hayashi M, Watanabe M, Chin K, Takeshita Y, Shiga H, Tateishi H, Onozawa S, Shirakawa Y, Yamashita K, Shudo J, Nakamura K, Nakanishi A, Kuroki K, Yokoyama K. Development and performance evaluation of a deep learning lung nodule detection system. BMC Med Imaging 2022; 22:203. [PMID: 36419044 PMCID: PMC9682774 DOI: 10.1186/s12880-022-00938-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Accepted: 11/14/2022] [Indexed: 11/25/2022] Open
Abstract
BACKGROUND Lung cancer is the leading cause of cancer-related deaths throughout the world. Chest computed tomography (CT) is now widely used in the screening and diagnosis of lung cancer due to its effectiveness. Radiologists must identify each small nodule shadow in 3D volume images, which is burdensome and often results in missed nodules. To address these challenges, we developed a computer-aided detection (CAD) system that automatically detects lung nodules in CT images. METHODS A total of 1997 chest CT scans were collected for algorithm development. The algorithm was designed using deep learning technology. In addition to evaluating detection performance on various public datasets, its robustness to changes in radiation dose was assessed in a phantom study. To investigate the clinical usefulness of the CAD system, a reader study was conducted with 10 doctors, including inexperienced and expert readers, examining whether using the CAD system as a second reader could prevent lung lesions requiring follow-up examinations from being overlooked. Analysis was performed using the jackknife free-response receiver operating characteristic (JAFROC) method. RESULTS The CAD system achieved sensitivities of 0.98 and 0.96 at 3.1 and 7.25 false positives per case, respectively, on two public datasets. In the phantom study, sensitivity did not change within the range of practical doses. The reader study showed that using the system as a second reader significantly improved detection of nodules requiring clinical follow-up (p = 0.026). CONCLUSIONS We developed a deep learning-based CAD system that is robust to imaging conditions. Using this system as a second reader increased detection performance.
Affiliation(s)
- Shichiro Katase
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Akimichi Ichinose
- Imaging Technology Center, ICT Strategy Division, Fujifilm Corporation, 2-26-30, Nishi-Azabu, Minato-ku, Tokyo, Japan
- Mahiro Hayashi
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Masanaka Watanabe
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Kinka Chin
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Yuhei Takeshita
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Hisae Shiga
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Hidekatsu Tateishi
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Shiro Onozawa
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Yuya Shirakawa
- Department of Radiology, Kyorin University Hospital, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Koji Yamashita
- Department of Radiology, Kyorin University Hospital, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Jun Shudo
- Department of Radiology, Kyorin University Hospital, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Keigo Nakamura
- Imaging Technology Center, ICT Strategy Division, Fujifilm Corporation, 2-26-30, Nishi-Azabu, Minato-ku, Tokyo, Japan
- Akihito Nakanishi
- Department of Radiology, Kyorin University Hospital, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Kazunori Kuroki
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Kenichi Yokoyama
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
16
Sekeroglu K, Soysal ÖM. Multi-Perspective Hierarchical Deep-Fusion Learning Framework for Lung Nodule Classification. SENSORS (BASEL, SWITZERLAND) 2022; 22:8949. [PMID: 36433541 PMCID: PMC9697252 DOI: 10.3390/s22228949] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Revised: 11/09/2022] [Accepted: 11/10/2022] [Indexed: 06/16/2023]
Abstract
Lung cancer is the leading cancer type causing mortality in both men and women. Computer-aided detection (CAD) and diagnosis systems can play a very important role in helping physicians with cancer treatment. This study proposes a hierarchical deep-fusion learning scheme in a CAD framework for the detection of nodules from computed tomography (CT) scans. In the proposed hierarchical approach, a decision is made at each level individually, employing the decisions from the previous level. Further, individual decisions are computed for several perspectives of a volume of interest. This study explores three different approaches to obtaining decisions in a hierarchical fashion. The first model utilizes raw images. The second model uses a single type of feature image with salient content. The last model employs multi-type feature images. All models learn their parameters by means of supervised learning. The proposed CAD frameworks are tested using lung CT scans from the LIDC/IDRI database. The experimental results showed that the proposed multi-perspective hierarchical fusion approach significantly improves classification performance: the proposed hierarchical deep-fusion learning model achieved a sensitivity of 95% with only 0.4 false positives per scan.
Affiliation(s)
- Kazim Sekeroglu
- Department of Computer Science, Southeastern Louisiana University, Hammond, LA 70402, USA
- Ömer Muhammet Soysal
- Department of Computer Science, Southeastern Louisiana University, Hammond, LA 70402, USA
- School of Electrical Engineering and Computer Science, Louisiana State University, Baton Rouge, LA 70803, USA
17
Wang L. Deep Learning Techniques to Diagnose Lung Cancer. Cancers (Basel) 2022; 14:5569. [PMID: 36428662 PMCID: PMC9688236 DOI: 10.3390/cancers14225569] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Revised: 11/11/2022] [Accepted: 11/11/2022] [Indexed: 11/15/2022] Open
Abstract
Medical imaging tools are essential in early-stage lung cancer diagnostics and the monitoring of lung cancer during treatment. Various medical imaging modalities, such as chest X-ray, magnetic resonance imaging, positron emission tomography, computed tomography, and molecular imaging techniques, have been extensively studied for lung cancer detection. These techniques have limitations, however, including the inability to classify cancer images automatically, which makes them unsuitable for patients with other pathologies. It is urgently necessary to develop a sensitive and accurate approach to the early diagnosis of lung cancer. Deep learning is one of the fastest-growing topics in medical imaging, with rapidly emerging applications spanning medical image-based and textural data modalities. With the help of deep learning-based medical imaging tools, clinicians can detect and classify lung nodules more accurately and quickly. This paper presents recent developments in deep learning-based imaging techniques for early lung cancer detection.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen 518118, China
18
Mei J, Cheng MM, Xu G, Wan LR, Zhang H. SANet: A Slice-Aware Network for Pulmonary Nodule Detection. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2022; 44:4374-4387. [PMID: 33687839 DOI: 10.1109/tpami.2021.3065086] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Lung cancer is the most common cause of cancer death worldwide. Timely diagnosis of pulmonary nodules makes it possible to detect lung cancer at an early stage, and thoracic computed tomography (CT) provides a convenient way to diagnose nodules. However, it is hard even for experienced doctors to identify nodules among the massive number of CT slices. Currently existing nodule datasets are limited in both scale and category, which greatly restricts their applications. In this paper, we collect PN9, by far the largest and most diverse dataset for pulmonary nodule detection. Specifically, it contains 8,798 CT scans and 40,439 annotated nodules from 9 common classes. We further propose a slice-aware network (SANet) for pulmonary nodule detection. A slice grouped non-local (SGNL) module is developed to capture long-range dependencies among any positions and any channels of one slice group in the feature map. We also introduce a 3D region proposal network to generate pulmonary nodule candidates with high sensitivity; since this detection stage usually comes with many false positives, a false positive reduction (FPR) module is subsequently proposed using multi-scale feature maps. To verify the performance of SANet and the significance of PN9, we perform extensive experiments compared with several state-of-the-art 2D CNN-based and 3D CNN-based detection methods. Promising evaluation results on PN9 prove the effectiveness of our proposed SANet. The dataset and source code are available at https://mmcheng.net/SANet/.
19
Son JW, Hong JY, Kim Y, Kim WJ, Shin DY, Choi HS, Bak SH, Moon KM. How Many Private Data Are Needed for Deep Learning in Lung Nodule Detection on CT Scans? A Retrospective Multicenter Study. Cancers (Basel) 2022; 14:3174. [PMID: 35804946 PMCID: PMC9265117 DOI: 10.3390/cancers14133174] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2022] [Revised: 06/23/2022] [Accepted: 06/24/2022] [Indexed: 12/24/2022] Open
Abstract
Simple Summary: The early detection of lung nodules is important for patient treatment and follow-up. Many researchers are investigating deep-learning-based lung nodule detection to ease the burden on radiologists. The purpose of this paper is to provide guidelines for collecting lung nodule data to facilitate research. We collected chest computed tomography scans reviewed by radiologists at three hospitals. In addition, several experiments were conducted using the large-scale open dataset LUNA16. The experiments demonstrated the value of the collected data compared with LUNA16, as well as the effectiveness of transfer learning from weights pre-trained on LUNA16. Finally, our study provides information on the amount of lung nodule data that must be collected to stabilize lung nodule detection performance.
Abstract: Early detection of lung nodules is essential for preventing lung cancer. However, the number of radiologists who can diagnose lung nodules is limited, and considerable effort and time are required. To address this problem, researchers are investigating the automation of deep-learning-based lung nodule detection. However, deep learning requires large amounts of data, which can be difficult to collect. Therefore, data collection should be optimized to facilitate experiments at the beginning of lung nodule detection studies. We collected chest computed tomography scans from 515 patients with lung nodules at three hospitals, with high-quality lung nodule annotations reviewed by radiologists. We conducted several experiments using the collected datasets and the publicly available LUNA16 data, with the object detection model YOLOX used in the lung nodule detection experiments. Training on the collected data yielded performance similar to or better than training on the much larger LUNA16 dataset. We also show that transfer learning from weights pre-trained on open data is very useful when it is difficult to collect large amounts of data, and that good performance can be expected once data from more than 100 patients are collected. This study offers valuable insights for guiding data collection in future lung nodule studies.
Affiliation(s)
- Ji Young Hong
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Chuncheon Sacred Heart Hospital, Hallym University Medical Center, Chuncheon 24253, Korea
- Yoon Kim
- ZIOVISION, Chuncheon 24341, Korea
- Department of Computer Science and Engineering, College of IT, Kangwon National University, Chuncheon 24341, Korea
- Woo Jin Kim
- Department of Internal Medicine, Kangwon National University, Chuncheon 24341, Korea
- Dae-Yong Shin
- KNU-Industry Cooperation Foundation, Kangwon National University, Chuncheon 24341, Korea
- Hyun-Soo Choi
- ZIOVISION, Chuncheon 24341, Korea
- Department of Computer Science and Engineering, College of IT, Kangwon National University, Chuncheon 24341, Korea
- Correspondence: (H.-S.C.); (S.H.B.); (K.M.M.); Tel.: +82-33-250-8452 (H.-S.C.); +82-2-3010-3491 (S.H.B.); +82-33-610-3058 (K.M.M.)
- So Hyeon Bak
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Korea
- Kyoung Min Moon
- Department of Pulmonary, Allergy and Critical Care Medicine, Gangneung Asan Hospital, University of Ulsan College of Medicine, Gangneung 25440, Korea
20
Explainable Machine Learning Solution for Observing Optimal Surgery Timings in Thoracic Cancer Diagnosis. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12136506] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
In this paper, we introduce an AI-based procedure to estimate and assist in choosing the optimal surgery timing for a thoracic cancer diagnosis, based on an explainable machine learning model trained on a knowledge base. This decision is usually taken by the surgeon after examining a set of clinical parameters and their evolution in time. It is therefore sometimes subjective, depends heavily on the surgeon's previous experience, and might not be confirmed by the histopathological exam. We therefore propose a pipeline of automatic processing steps to infer the prospective result of the histopathological exam, generate an explanation of why this inference holds, and finally evaluate it against the conclusive opinion of an experienced surgeon. To obtain an accurate practical result, the training dataset is labeled manually by the thoracic surgeon, creating a training knowledge base that is not biased towards clinical practice. The resulting intelligent system benefits from both the precision of a classical expert system and the flexibility of deep neural networks; it is designed to minimize possible human misinterpretation and provide a factual estimate of the proper timing for surgical intervention. Overall, the experiments showed a 7% improvement on the test set compared with the medical opinion alone. To enable reproducibility of the AI system, the complete handling of a case study is presented from both the medical and technical aspects.
21
de Vente C, Boulogne LH, Venkadesh KV, Sital C, Lessmann N, Jacobs C, Sanchez CI, van Ginneken B. Automated COVID-19 Grading With Convolutional Neural Networks in Computed Tomography Scans: A Systematic Comparison. IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE 2022; 3:129-138. [PMID: 35582210 PMCID: PMC9014473 DOI: 10.1109/tai.2021.3115093] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/28/2021] [Revised: 06/02/2021] [Accepted: 09/18/2021] [Indexed: 11/08/2022]
Abstract
Amidst the ongoing pandemic, the assessment of computed tomography (CT) images for COVID-19 presence can exceed the workload capacity of radiologists. Several studies addressed this issue by automating COVID-19 classification and grading from CT images with convolutional neural networks (CNNs). Many of these studies reported initial results of algorithms that were assembled from commonly used components; however, the choice of components was often pragmatic rather than systematic, and systems were not compared to each other across papers in a fair manner. We systematically investigated the effectiveness of using 3-D CNNs instead of 2-D CNNs for seven commonly used architectures, including DenseNet, Inception, and ResNet variants. For the best-performing architecture, we furthermore investigated the effect of initializing the network with pretrained weights, providing automatically computed lesion maps as additional network input, and predicting a continuous instead of a categorical output. A 3-D DenseNet-201 with these components achieved an area under the receiver operating characteristic curve (AUC) of 0.930 on our test set of 105 CT scans and an AUC of 0.919 on a publicly available set of 742 CT scans, a substantial improvement over a previously published 2-D CNN. This article provides insights into the performance benefits of various components for COVID-19 classification and grading systems. We have created a challenge on grand-challenge.org to allow for a fair comparison between the results of this and future research.
Affiliation(s)
- Coen de Vente
- Radboud University Medical Center, Donders Institute for Brain, Cognition and Behaviour, Department of Medical Imaging, 6525 GA Nijmegen, The Netherlands; Informatics Institute, Faculty of Science, University of Amsterdam, 1012 WX Amsterdam, The Netherlands
- Luuk H Boulogne
- Radboud University Medical Center, Radboud Institute for Health Sciences, Department of Medical Imaging, 6525 GA Nijmegen, The Netherlands
- Kiran Vaidhya Venkadesh
- Radboud University Medical Center, Radboud Institute for Health Sciences, Department of Medical Imaging, 6525 GA Nijmegen, The Netherlands
- Cheryl Sital
- Radboud University Medical Center, Radboud Institute for Health Sciences, Department of Medical Imaging, 6525 GA Nijmegen, The Netherlands
- Nikolas Lessmann
- Radboud University Medical Center, Radboud Institute for Health Sciences, Department of Medical Imaging, 6525 GA Nijmegen, The Netherlands
- Colin Jacobs
- Radboud University Medical Center, Radboud Institute for Health Sciences, Department of Medical Imaging, 6525 GA Nijmegen, The Netherlands
- Clara I Sanchez
- Informatics Institute, Faculty of Science, University of Amsterdam, 1012 WX Amsterdam, The Netherlands
- Bram van Ginneken
- Radboud University Medical Center, Radboud Institute for Health Sciences, Department of Medical Imaging, 6525 GA Nijmegen, The Netherlands
22
Chiu HY, Chao HS, Chen YM. Application of Artificial Intelligence in Lung Cancer. Cancers (Basel) 2022; 14:1370. [PMID: 35326521 PMCID: PMC8946647 DOI: 10.3390/cancers14061370] [Citation(s) in RCA: 30] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2022] [Accepted: 03/07/2022] [Indexed: 12/12/2022] Open
Abstract
Lung cancer is the leading cause of malignancy-related mortality worldwide due to its heterogeneous features and diagnosis at a late stage. Artificial intelligence (AI) is good at handling large volumes of computational and repetitive labor and is suitable for assisting doctors in analyzing image-dominant diseases like lung cancer. Scientists have made long-standing efforts to apply AI to lung cancer screening via CXR and chest CT since the 1960s, and several grand challenges were held to find the best AI model. Currently, the FDA has approved several AI programs for CXR and chest CT reading, which enables AI systems to take part in lung cancer detection. Following the success of AI applications in radiology, AI was applied to digitized whole slide imaging (WSI) annotation. By integrating additional information, such as demographics and clinical data, AI systems could play a role in decision-making by classifying EGFR mutations and PD-L1 expression. AI systems also help clinicians estimate a patient's prognosis by predicting drug response, the tumor recurrence rate after surgery, radiotherapy response, and side effects. Though there are still some obstacles, deploying AI systems in the clinical workflow is vital for the foreseeable future.
Affiliation(s)
- Hwa-Yen Chiu
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Division of Internal Medicine, Hsinchu Branch, Taipei Veterans General Hospital, Hsinchu 310, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Heng-Sheng Chao
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Yuh-Min Chen
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
23
Suzuki K, Otsuka Y, Nomura Y, Kumamaru KK, Kuwatsuru R, Aoki S. Development and Validation of a Modified Three-Dimensional U-Net Deep-Learning Model for Automated Detection of Lung Nodules on Chest CT Images From the Lung Image Database Consortium and Japanese Datasets. Acad Radiol 2022; 29 Suppl 2:S11-S17. [PMID: 32839096 DOI: 10.1016/j.acra.2020.07.030] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2020] [Revised: 07/13/2020] [Accepted: 07/22/2020] [Indexed: 12/17/2022]
Abstract
RATIONALE AND OBJECTIVES A more accurate lung nodule detection algorithm is needed. We developed a modified three-dimensional (3D) U-net deep-learning model for the automated detection of lung nodules on chest CT images. The purpose of this study was to evaluate the accuracy of the developed modified 3D U-net deep-learning model. MATERIALS AND METHODS In this Health Insurance Portability and Accountability Act-compliant, Institutional Review Board-approved retrospective study, the 3D U-net based deep-learning model was trained using the Lung Image Database Consortium and Image Database Resource Initiative dataset. For internal model validation, we used 89 chest CT scans that were not used for model training. For external model validation, we used 450 chest CT scans taken at an urban university hospital in Japan. Each case included at least one nodule of >5 mm identified by an experienced radiologist. We evaluated model accuracy using the competition performance metric (CPM) (average sensitivity at 1/8, 1/4, 1/2, 1, 2, 4, and 8 false-positives per scan). The 95% confidence interval (CI) was computed by bootstrapping 1000 times. RESULTS In the internal validation, the CPM was 94.7% (95% CI: 89.1%-98.6%). In the external validation, the CPM was 83.3% (95% CI: 79.4%-86.1%). CONCLUSION The modified 3D U-net deep-learning model showed high performance in both internal and external validation.
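The competition performance metric (CPM) used in the study above is simply the mean sensitivity over seven fixed false-positive rates, with a bootstrap confidence interval. A minimal sketch in Python (the sensitivity values below are illustrative placeholders, not results from the study):

```python
# Competition performance metric (CPM): average sensitivity at
# 1/8, 1/4, 1/2, 1, 2, 4, and 8 false positives per scan.
FP_RATES = (0.125, 0.25, 0.5, 1, 2, 4, 8)

def cpm(sensitivities):
    """Mean of the seven operating-point sensitivities."""
    if len(sensitivities) != len(FP_RATES):
        raise ValueError("expected one sensitivity per FP rate")
    return sum(sensitivities) / len(sensitivities)

# Illustrative values only (not from the study):
print(cpm([0.88, 0.91, 0.93, 0.95, 0.96, 0.97, 0.98]))
```

In the study, the 95% CI around this statistic was obtained by recomputing it on 1000 bootstrap resamples of the scans.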
Affiliation(s)
- Kazuhiro Suzuki
- Department of Radiology, Juntendo University Faculty of Medicine and Graduate School of Medicine, 3-1-3, Hongo, Bunkyo-ku, Tokyo 113-8431, Japan
- Yujiro Otsuka
- Department of Radiology, Juntendo University Faculty of Medicine and Graduate School of Medicine, 3-1-3, Hongo, Bunkyo-ku, Tokyo 113-8431, Japan; Plusmann LLC, Tokyo, Japan; Milliman, Inc., Tokyo, Japan
- Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Kanako K Kumamaru
- Department of Radiology, Juntendo University Faculty of Medicine and Graduate School of Medicine, 3-1-3, Hongo, Bunkyo-ku, Tokyo 113-8431, Japan
- Ryohei Kuwatsuru
- Department of Radiology, Juntendo University Faculty of Medicine and Graduate School of Medicine, 3-1-3, Hongo, Bunkyo-ku, Tokyo 113-8431, Japan
- Shigeki Aoki
- Department of Radiology, Juntendo University Faculty of Medicine and Graduate School of Medicine, 3-1-3, Hongo, Bunkyo-ku, Tokyo 113-8431, Japan
24
Lindsay WD, Sachs N, Gee JC, Mortani Barbosa EJ. Transparent Machine Learning Models to Diagnose Suspicious Thoracic Lesions Leveraging CT Guided Biopsy Data. Acad Radiol 2022; 29 Suppl 2:S156-S164. [PMID: 34373194 DOI: 10.1016/j.acra.2021.07.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2020] [Revised: 06/28/2021] [Accepted: 07/03/2021] [Indexed: 11/30/2022]
Abstract
RATIONALE AND OBJECTIVES To train and validate machine learning models capable of classifying suspicious thoracic lesions as benign or malignant, to further classify malignant lesions by pathologic subtype, and to quantify feature importance for each classification. MATERIALS AND METHODS 796 patients who had undergone CT-guided thoracic biopsy for a concerning thoracic lesion (79.3% lung, 11.4% mediastinum, 6.5% pleura, 2.7% chest wall) were retrospectively enrolled. Lesions were classified as malignant or benign based on the ground-truth pathology result, and malignant lesions were classified as primary or secondary cancer. Clinical variables were extracted from the EMR and radiology reports. Supervised binary and multiclass classification models were trained to classify lesions based on the input features and evaluated on a held-out test set. Model-specific feature analyses were performed to identify the variables most predictive of each class and to assess the independent importance of clinical and imaging features. RESULTS Binary classification models achieved a top accuracy of 80.6%, with predictive features including smoking history, age, lesion size, and lesion location. Multiclass classification models achieved a top weighted-average F1-score of 0.73. Features predictive of primary cancer included smoking history, race, and age, while features predictive of secondary cancer included lesion location and a history of cancer. CONCLUSION Machine learning models enable classification of suspicious thoracic lesions based on clinical and imaging variables, achieving clinically useful performance while identifying the importance of individual input features on a pathology-proven dataset. We believe models such as these are more likely to be trusted and adopted by clinicians.
Affiliation(s)
- William D Lindsay
- Perelman School of Medicine, University of Pennsylvania Health System, Philadelphia, Pennsylvania; Department of Bioengineering, School of Applied Sciences, University of Pennsylvania, Philadelphia, Pennsylvania
- Nicholas Sachs
- Perelman School of Medicine, University of Pennsylvania Health System, Philadelphia, Pennsylvania
- James C Gee
- Department of Bioengineering, School of Applied Sciences, University of Pennsylvania, Philadelphia, Pennsylvania
- Eduardo J Mortani Barbosa
- Department of Bioengineering, School of Applied Sciences, University of Pennsylvania, Philadelphia, Pennsylvania
25
Deep Learning Applications in Computed Tomography Images for Pulmonary Nodule Detection and Diagnosis: A Review. Diagnostics (Basel) 2022; 12:diagnostics12020298. [PMID: 35204388 PMCID: PMC8871398 DOI: 10.3390/diagnostics12020298] [Citation(s) in RCA: 34] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 01/21/2022] [Accepted: 01/22/2022] [Indexed: 12/26/2022] Open
Abstract
Lung cancer has one of the highest mortality rates of all cancers and poses a severe threat to people's health. Therefore, diagnosing lung nodules at an early stage is crucial to improving patient survival rates. Numerous computer-aided diagnosis (CAD) systems have been developed to detect and classify such nodules in their early stages. Currently, CAD systems for pulmonary nodules comprise data acquisition, pre-processing, lung segmentation, nodule detection, false-positive reduction, segmentation, and classification. A number of review articles have considered various components of such systems, but this review focuses on the segmentation and classification parts. Specifically, it categorizes segmentation methods based on lung nodule type and network architecture, i.e., general neural network and multiview convolutional neural network (CNN) architectures, and organizes the classification literature based on nodule versus non-nodule and benign versus malignant. The essential CT lung datasets and evaluation metrics used in the detection and diagnosis of lung nodules are systematically summarized as well. Thus, this review provides a baseline understanding of the topic for interested readers.
26
Balagurunathan Y, Beers A, McNitt-Gray M, Hadjiiski L, Napel S, Goldgof D, Perez G, Arbelaez P, Mehrtash A, Kapur T, Yang E, Moon JW, Bernardino G, Delgado-Gonzalo R, Farhangi MM, Amini AA, Ni R, Feng X, Bagari A, Vaidhya K, Veasey B, Safta W, Frigui H, Enguehard J, Gholipour A, Castillo LS, Daza LA, Pinsky P, Kalpathy-Cramer J, Farahani K. Lung Nodule Malignancy Prediction in Sequential CT Scans: Summary of ISBI 2018 Challenge. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3748-3761. [PMID: 34264825 PMCID: PMC9531053 DOI: 10.1109/tmi.2021.3097665] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Lung cancer is by far the leading cause of cancer death in the US. Recent studies have demonstrated the effectiveness of screening using low-dose CT (LDCT) in reducing lung cancer related mortality. While lung nodules are detected with a high rate of sensitivity, this exam has a low specificity rate and it is still difficult to separate benign from malignant lesions. The ISBI 2018 Lung Nodule Malignancy Prediction Challenge, developed by a team from the Quantitative Imaging Network of the National Cancer Institute, focused on the prediction of lung nodule malignancy from two sequential LDCT screening exams using automated (non-manual) algorithms. We curated a cohort of 100 subjects who participated in the National Lung Screening Trial and had established pathological diagnoses. Data from 30 subjects were randomly selected for training and the remainder were used for testing. Participants were evaluated based on the area under the receiver operating characteristic curve (AUC) of nodule-wise malignancy scores generated by their algorithms on the test set. The challenge had 17 participants, with 11 teams submitting reports with method descriptions, as mandated by the challenge rules. Participants used quantitative methods, with reported test AUCs ranging from 0.698 to 0.913. The top five contestants used deep learning approaches, reporting AUCs between 0.87 and 0.91. The teams' predictors did not differ significantly from each other nor from a volume change estimate (p = .05 with Bonferroni-Holm correction).
Affiliation(s)
- Sandy Napel
- Dept. of Radiology, School of Medicine, Stanford University (SU), CA
- Gustavo Perez
- Biomedical computer vision lab (BCV), Universidad de los Andes, Colombia
- Pablo Arbelaez
- Biomedical computer vision lab (BCV), Universidad de los Andes, Colombia
- Alireza Mehrtash
- Robotics and Control Laboratory (RCL), Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC
- Surgical Planning Laboratory (SPL), Radiology Department, Brigham and Women's Hospital, Boston, MA, 02130
- Tina Kapur
- Surgical Planning Laboratory (SPL), Radiology Department, Brigham and Women's Hospital, Boston, MA, 02130
- Ehwa Yang
- Sungkyunkwan University School of Medicine, Seoul 06351, Korea
- Jung Won Moon
- Human Medical Imaging & Intervention Center, Seoul 06524, Korea
- Gabriel Bernardino
- Centre Suisse d'Électronique et de Microtechnique, Neuchâtel, Switzerland
- M. Mehdi Farhangi
- Medical Imaging Laboratory, University of Louisville, Louisville, KY, USA
- Computer Engineering and Computer Science, University of Louisville
- Amir A. Amini
- Medical Imaging Laboratory, University of Louisville, Louisville, KY, USA
- Electrical and Computer Engineering Department, University of Louisville, Louisville, KY, USA
- Xue Feng
- Springbok Inc
- Department of Biomedical Engineering, University of Virginia, Charlottesville
- Benjamin Veasey
- Medical Imaging Laboratory, University of Louisville, Louisville, KY, USA
- Electrical and Computer Engineering Department, University of Louisville, Louisville, KY, USA
- Wiem Safta
- Computer Engineering and Computer Science, University of Louisville
- Hichem Frigui
- Computer Engineering and Computer Science, University of Louisville
- Joseph Enguehard
- Department of Radiology, Boston Children's Hospital, and Harvard Medical School
- Ali Gholipour
- Department of Radiology, Boston Children's Hospital, and Harvard Medical School
- Laura Alexandra Daza
- Department of Biomedical Engineering, Universidad de los Andes, Bogota, Colombia
- Paul Pinsky
- Division of Cancer Prevention, National Cancer Institute (NCI), Washington DC
- Keyvan Farahani
- Center for Biomedical Informatics and Information Technology, National Cancer Institute (NCI), Washington DC
27
Guo Z, Zhao L, Yuan J, Yu H. MSANet: Multi-Scale Aggregation Network Integrating Spatial and Channel Information for Lung Nodule Detection. IEEE J Biomed Health Inform 2021; 26:2547-2558. [PMID: 34847048 DOI: 10.1109/jbhi.2021.3131671] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Improving the detection accuracy of pulmonary nodules plays an important role in the diagnosis and early treatment of lung cancer. In this paper, a multiscale aggregation network (MSANet), which integrates spatial and channel information, is proposed for 3D pulmonary nodule detection. MSANet is designed to improve the network's ability to extract information and realize multiscale information fusion. First, multiscale aggregation interaction strategies are used to extract multilevel features and avoid feature fusion interference caused by large resolution differences. These strategies can effectively integrate the contextual information of adjacent resolutions and help to detect different sized nodules. Second, the feature extraction module is designed for efficient channel attention and self-calibrated convolutions (ECA-SC) to enhance the interchannel and local spatial information. ECA-SC also recalibrates the features in the feature extraction process, which can realize adaptive learning of feature weights and enhance the information extraction ability of features. Third, the distribution ranking (DR) loss is introduced as the classification loss function to solve the problem of imbalanced data between positive and negative samples. The proposed MSANet is comprehensively compared with other pulmonary nodule detection networks on the LUNA16 dataset, and a CPM score of 0.920 is obtained. The results show that the sensitivity for detecting pulmonary nodules is improved and that the average number of false positives is effectively reduced. The proposed method has advantages in pulmonary nodule detection and can effectively assist radiologists in pulmonary nodule detection.
28
Retico A, Avanzo M, Boccali T, Bonacorsi D, Botta F, Cuttone G, Martelli B, Salomoni D, Spiga D, Trianni A, Stasi M, Iori M, Talamonti C. Enhancing the impact of Artificial Intelligence in Medicine: A joint AIFM-INFN Italian initiative for a dedicated cloud-based computing infrastructure. Phys Med 2021; 91:140-150. [PMID: 34801873 DOI: 10.1016/j.ejmp.2021.10.005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Revised: 10/04/2021] [Accepted: 10/05/2021] [Indexed: 12/23/2022] Open
Abstract
Artificial Intelligence (AI) techniques have been implemented in the field of Medical Imaging for more than forty years. Medical Physicists, Clinicians and Computer Scientists have been collaborating since the beginning to realize software solutions to enhance the informative content of medical images, including AI-based support systems for image interpretation. Despite the recent massive progress in this field due to the current emphasis on Radiomics, Machine Learning and Deep Learning, there are still some barriers to overcome before these tools are fully integrated into the clinical workflows to finally enable a precision medicine approach to patients' care. Nowadays, as Medical Imaging has entered the Big Data era, innovative solutions to efficiently deal with huge amounts of data and to exploit large and distributed computing resources are urgently needed. In the framework of a collaboration agreement between the Italian Association of Medical Physicists (AIFM) and the National Institute for Nuclear Physics (INFN), we propose a model of an intensive computing infrastructure, especially suited for training AI models, equipped with secure storage systems, compliant with data protection regulation, which will accelerate the development and extensive validation of AI-based solutions in the Medical Imaging field of research. This solution can be developed and made operational by Physicists and Computer Scientists working on complementary fields of research in Physics, such as High Energy Physics and Medical Physics, who have all the necessary skills to tailor the AI-technology to the needs of the Medical Imaging community and to shorten the pathway towards the clinical applicability of AI-based decision support systems.
Affiliation(s)
- Alessandra Retico
- National Institute for Nuclear Physics (INFN), Pisa Division, 56127 Pisa, Italy
- Michele Avanzo
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Tommaso Boccali
- National Institute for Nuclear Physics (INFN), Pisa Division, 56127 Pisa, Italy
- Daniele Bonacorsi
- University of Bologna, 40126 Bologna, Italy; INFN, Bologna Division, 40126 Bologna, Italy
- Francesca Botta
- Medical Physics Unit, Istituto Europeo di Oncologia IRCCS, 20141 Milan, Italy
- Giacomo Cuttone
- INFN, Southern National Laboratory (LNS), 95123 Catania, Italy
- Annalisa Trianni
- Medical Physics Unit, Ospedale Santa Chiara APSS, 38122 Trento, Italy
- Michele Stasi
- Medical Physics Unit, A.O. Ordine Mauriziano di Torino, 10128 Torino, Italy
- Mauro Iori
- Medical Physics Unit, Azienda USL-IRCCS di Reggio Emilia, 42122 Reggio Emilia, Italy
- Cinzia Talamonti
- Department Biomedical Experimental and Clinical Science "Mario Serio", University of Florence, 50134 Florence, Italy; INFN, Florence Division, 50134 Florence, Italy
29
Fusco R, Grassi R, Granata V, Setola SV, Grassi F, Cozzi D, Pecori B, Izzo F, Petrillo A. Artificial Intelligence and COVID-19 Using Chest CT Scan and Chest X-ray Images: Machine Learning and Deep Learning Approaches for Diagnosis and Treatment. J Pers Med 2021; 11:993. [PMID: 34683133 PMCID: PMC8540782 DOI: 10.3390/jpm11100993] [Citation(s) in RCA: 43] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2021] [Revised: 09/22/2021] [Accepted: 09/28/2021] [Indexed: 12/26/2022] Open
Abstract
OBJECTIVE To report an overview and update on Artificial Intelligence (AI) and COVID-19 using chest Computed Tomography (CT) scan and chest X-ray (CXR) images. Machine learning and deep learning approaches for diagnosis and treatment were identified. METHODS Several electronic databases were analyzed. The search covered the years from January 2019 to June 2021. The inclusion criteria were studies evaluating the use of AI methods in COVID-19 disease that reported performance results in terms of accuracy, precision, or area under the Receiver Operating Characteristic (ROC) curve (AUC). RESULTS Twenty-two studies met the inclusion criteria: 13 papers were based on AI in CXR and 10 on AI in CT. The summarized mean accuracy and precision of CXR in COVID-19 disease were 93.7% ± 10.0% standard deviation (range 68.4-99.9%) and 95.7% ± 7.1% standard deviation (range 83.0-100.0%), respectively. The summarized mean accuracy and specificity of CT in COVID-19 disease were 89.1% ± 7.3% standard deviation (range 78.0-99.9%) and 94.5% ± 6.4% standard deviation (range 86.0-100.0%), respectively. No statistically significant difference in summarized mean accuracy between CXR and CT was observed using the chi-square test (p value > 0.05). CONCLUSIONS The summarized accuracy of the selected papers is high but shows considerable variability, which is lower in CT studies than in CXR studies. Nonetheless, AI approaches could be used in the identification of disease clusters, monitoring of cases, prediction of future outbreaks, mortality risk, COVID-19 diagnosis, and disease management.
Affiliation(s)
- Roberta Fusco
- IGEA SpA Medical Division—Oncology, Via Casarea 65, Casalnuovo di Napoli, 80013 Naples, Italy
- Roberta Grassi
- Division of Radiology, Università Degli Studi Della Campania Luigi Vanvitelli, 80138 Naples, Italy
- Italian Society of Medical and Interventional Radiology (SIRM), SIRM Foundation, 20122 Milan, Italy
- Vincenza Granata
- Division of Radiology, Istituto Nazionale Tumori IRCCS Fondazione Pascale—IRCCS di Napoli, 80131 Naples, Italy
- Sergio Venanzio Setola
- Division of Radiology, Istituto Nazionale Tumori IRCCS Fondazione Pascale—IRCCS di Napoli, 80131 Naples, Italy
- Francesca Grassi
- Division of Radiology, Università Degli Studi Della Campania Luigi Vanvitelli, 80138 Naples, Italy
- Diletta Cozzi
- Division of Radiology, Azienda Ospedaliera Universitaria Careggi, 50134 Florence, Italy
- Biagio Pecori
- Division of Radiotherapy and Innovative Technologies, Istituto Nazionale Tumori IRCCS Fondazione Pascale—IRCCS di Napoli, 80131 Naples, Italy
- Francesco Izzo
- Division of Hepatobiliary Surgery, Istituto Nazionale Tumori IRCCS Fondazione Pascale—IRCCS di Napoli, 80131 Naples, Italy
- Antonella Petrillo
- Division of Radiology, Istituto Nazionale Tumori IRCCS Fondazione Pascale—IRCCS di Napoli, 80131 Naples, Italy
30
Gu Y, Chi J, Liu J, Yang L, Zhang B, Yu D, Zhao Y, Lu X. A survey of computer-aided diagnosis of lung nodules from CT scans using deep learning. Comput Biol Med 2021; 137:104806. [PMID: 34461501 DOI: 10.1016/j.compbiomed.2021.104806] [Citation(s) in RCA: 47] [Impact Index Per Article: 15.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2021] [Revised: 08/23/2021] [Accepted: 08/23/2021] [Indexed: 12/17/2022]
Abstract
Lung cancer has one of the highest mortalities of all cancers. According to the National Lung Screening Trial, patients who underwent low-dose computed tomography (CT) scanning once a year for 3 years showed a 20% decline in lung cancer mortality. To further improve the survival rate of lung cancer patients, computer-aided diagnosis (CAD) technology shows great potential. In this paper, we summarize existing CAD approaches applying deep learning to CT scan data for pre-processing, lung segmentation, false positive reduction, lung nodule detection, segmentation, classification and retrieval. Selected papers are drawn from academic journals and conferences up to November 2020. We discuss the development of deep learning, describe several important aspects of lung nodule CAD systems and assess the performance of the selected studies on various datasets, which include LIDC-IDRI, LUNA16, LIDC, DSB2017, NLST, TianChi, and ELCAP. Overall, in the detection studies reviewed, the sensitivity of these techniques is found to range from 61.61% to 98.10%, and the value of the FPs per scan is between 0.125 and 32. In the selected classification studies, the accuracy ranges from 75.01% to 97.58%. The precision of the selected retrieval studies is between 71.43% and 87.29%. Based on performance, deep learning based CAD technologies for detection and classification of pulmonary nodules achieve satisfactory results. However, there are still many challenges and limitations remaining including over-fitting, lack of interpretability and insufficient annotated data. This review helps researchers and radiologists to better understand CAD technology for pulmonary nodule detection, segmentation, classification and retrieval. We summarize the performance of current techniques, consider the challenges, and propose directions for future high-impact research.
Affiliation(s)
- Yu Gu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Jingqian Chi
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Jiaqi Liu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Lidong Yang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Baohua Zhang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Dahua Yu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Ying Zhao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Xiaoqi Lu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China; College of Information Engineering, Inner Mongolia University of Technology, Hohhot, 010051, China
31
Li W, Liang Y, Zhang X, Liu C, He L, Miao L, Sun W. A deep learning approach to automatic gingivitis screening based on classification and localization in RGB photos. Sci Rep 2021; 11:16831. [PMID: 34413332 PMCID: PMC8376991 DOI: 10.1038/s41598-021-96091-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2020] [Accepted: 07/26/2021] [Indexed: 12/22/2022] Open
Abstract
Routine dental visits are the most common approach to detecting gingivitis. However, such diagnosis can be unavailable in areas with limited medical resources and costly for low-income populations. This study proposes to screen for the existence of gingivitis and its irritants, i.e., dental calculus and soft deposits, from oral photos with a novel Multi-Task Learning convolutional neural network (CNN) model. The study can be meaningful for promoting public dental health, since it sheds light on a cost-effective and ubiquitous solution for the early detection of dental issues. With 625 patients included in this study, the classification Area Under the Curve (AUC) for detecting gingivitis, dental calculus, and soft deposits was 87.11%, 80.11%, and 78.57%, respectively. Meanwhile, according to our experiments, the model can also localize the three types of findings on oral photos with moderate accuracy, which enables the model to explain the screening results. Compared with general-purpose CNNs, our model performed significantly better on both classification and localization tasks, which indicates the effectiveness of Multi-Task Learning for dental disease detection. In all, the study shows the potential of deep learning for enabling the screening of dental diseases among large populations.
Affiliation(s)
- Wen Li
- Department of Endodontics, Nanjing Stomatological Hospital, Medical School of Nanjing University, No.30 Zhongyang Road, Xuanwu District, Nanjing, Jiangsu, People's Republic of China
- Yuan Liang
- University of California, Los Angeles, USA
- Xuan Zhang
- Department of Periodontics, Nanjing Stomatological Hospital, Medical School of Nanjing University, No.30 Zhongyang Road, Xuanwu District, Nanjing, Jiangsu, People's Republic of China
- Chao Liu
- Department of Orthodontics, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, People's Republic of China
- Lei He
- University of California, Los Angeles, USA
- Leiying Miao
- Department of Endodontics, Nanjing Stomatological Hospital, Medical School of Nanjing University, No.30 Zhongyang Road, Xuanwu District, Nanjing, Jiangsu, People's Republic of China
- Weibin Sun
- Department of Periodontics, Nanjing Stomatological Hospital, Medical School of Nanjing University, No.30 Zhongyang Road, Xuanwu District, Nanjing, Jiangsu, People's Republic of China
32
Lung Nodule Detection from Feature Engineering to Deep Learning in Thoracic CT Images: a Comprehensive Review. J Digit Imaging 2021; 33:655-677. [PMID: 31997045 DOI: 10.1007/s10278-020-00320-6] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
This paper presents a systematic review of the literature focused on lung nodule detection in chest computed tomography (CT) images. Manual detection of lung nodules by the radiologist is a sequential and time-consuming process. The detection is subjective and depends on the radiologist's experience. Owing to the variation in shapes and appearances of lung nodules, it is very difficult to identify the proper location of a nodule among the huge number of slices generated by the CT scanner. Small nodules (< 10 mm in diameter) may be missed by this manual detection process. Therefore, a computer-aided diagnosis (CAD) system acts as a "second opinion" for radiologists, making the final decision quickly with higher accuracy and greater confidence. The goal of this survey is to present the current state-of-the-art works and their progress towards lung nodule detection to researchers and readers in this domain. This review covers published works from 2009 to April 2018. Different nodule detection approaches are described elaborately in this work. Recently, it has been observed that deep learning (DL)-based approaches are applied extensively for nodule detection and characterization. Therefore, emphasis has been given to convolutional neural network (CNN)-based DL approaches by describing different CNN-based networks.
33
Zuo W, Zhou F, He Y. An Embedded Multi-branch 3D Convolution Neural Network for False Positive Reduction in Lung Nodule Detection. J Digit Imaging 2021; 33:846-857. [PMID: 32095944 DOI: 10.1007/s10278-020-00326-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022] Open
Abstract
Numerous lung nodule candidates can be produced by an automated lung nodule detection system. Classifying these candidates to reduce false positives is an important step in the detection process. The objective of this paper is to identify real nodules among a large number of pulmonary nodule candidates. Facing the challenge of this classification task, we propose a novel 3D convolutional neural network (CNN) to reduce false positives in lung nodule detection. The novel 3D CNN embeds multiple branches in its structure. Each branch processes a feature map from a layer at a different depth. All of these branches are cascaded at their ends; thus, features from layers at different depths are combined to predict the categories of candidates. The proposed method obtains a competitive score in lung nodule candidate classification on the LUNA16 dataset with an accuracy of 0.9783, a sensitivity of 0.8771, a precision of 0.9426, and a specificity of 0.9925. Moreover, a good performance on the competition performance metric (CPM) is also obtained with a score of 0.830. As a 3D CNN, the proposed model can learn complete, three-dimensional discriminative information about nodules and non-nodules, avoiding the misidentification problems caused by the lack of spatial correlation information in traditional methods or 2D networks. As an embedded multi-branch structure, the model is also more effective in recognizing nodules of various shapes and sizes. As a result, the proposed method achieves a competitive score on false positive reduction in lung nodule detection and can be used as a reference for classifying nodule candidates.
Affiliation(s)
- Wangxia Zuo
- The School of Instrumentation and Optoelectronics Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing, 100083, China; The College of Electrical Engineering, University of South China, Hengyang, 421001, Hunan, China
- Fuqiang Zhou
- The School of Instrumentation and Optoelectronics Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing, 100083, China
- Yuzhu He
- The School of Instrumentation and Optoelectronics Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing, 100083, China
34
Huang G, Wei X, Tang H, Bai F, Lin X, Xue D. A systematic review and meta-analysis of diagnostic performance and physicians' perceptions of artificial intelligence (AI)-assisted CT diagnostic technology for the classification of pulmonary nodules. J Thorac Dis 2021; 13:4797-4811. [PMID: 34527320 PMCID: PMC8411165 DOI: 10.21037/jtd-21-810] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Accepted: 07/09/2021] [Indexed: 12/26/2022]
Abstract
BACKGROUND Lung cancer was the second most commonly diagnosed cancer and the leading cause of cancer death in 2020. Although artificial intelligence (AI)-assisted diagnostic technologies have shown promise and have been used in clinical practice in recent years, no products related to AI-assisted CT diagnostic technologies for the classification of pulmonary nodules have been approved by the National Medical Products Administration in China. The objective of this article was to systematically review the diagnostic performance of AI-assisted CT diagnostic technology for the classification of pulmonary nodules as benign or malignant and to analyze physicians' perceptions of this technology in China. METHODS All relevant studies from 6 literature databases were searched and screened according to the inclusion and exclusion criteria. Data were extracted and study quality was assessed by two reviewers. Study heterogeneity and publication bias were estimated. A questionnaire survey on the perceptions of physicians was conducted in 9 public tertiary hospitals in China. A meta-analysis, meta-regression, and a univariate logistic model were used in the systematic review and to explore the association of physicians' perceptions with their rate of support for the clinical application of the technology. RESULTS Twenty-seven studies with 5,727 pulmonary nodules were included in the meta-analysis. We found that the quality of the included studies was generally acceptable and that the pooled sensitivity and specificity of AI-assisted CT diagnostic technology for the classification of pulmonary nodules as benign or malignant were 0.90 and 0.89, respectively. The pooled diagnostic odds ratio (DOR) was 70.33. The majority of the surveyed physicians in China perceived "reduced workload for radiologists" and "improved diagnostic efficiency" as the important benefits of this technology. In addition, diagnostic accuracy (including misdiagnosis) and practical experience were significantly associated with whether physicians supported its clinical application. CONCLUSIONS In the context of lung cancer diagnosis, AI-assisted CT diagnostic technology for the classification of pulmonary nodules as benign or malignant has good diagnostic performance, but its specificity needs to be improved.
Affiliation(s)
- Guo Huang
- NHC Key Laboratory of Health Technology Assessment (Fudan University), Department of Hospital Management, School of Public Health, Fudan University, Shanghai, China
- Xuefeng Wei
- Health Commission of Gansu Province, Lanzhou, China
- Huiqin Tang
- Health Commission of Hubei Province, Wuhan, China
- Fei Bai
- National Center for Medical Service Administration, Beijing, China
- Xia Lin
- National Center for Medical Service Administration, Beijing, China
- Di Xue
- NHC Key Laboratory of Health Technology Assessment (Fudan University), Department of Hospital Management, School of Public Health, Fudan University, Shanghai, China
35
Morozov SP, Gombolevskiy VA, Elizarov AB, Gusev MA, Novik VP, Prokudaylo SB, Bardin AS, Popov EV, Ledikhova NV, Chernina VY, Blokhin IA, Nikolaev AE, Reshetnikov RV, Vladzymyrskyy AV, Kulberg NS. A simplified cluster model and a tool adapted for collaborative labeling of lung cancer CT scans. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 206:106111. [PMID: 33957377 DOI: 10.1016/j.cmpb.2021.106111] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/01/2020] [Accepted: 04/07/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Lung cancer is the most common type of cancer with a high mortality rate. Early detection using medical imaging is critically important for the long-term survival of patients. Computer-aided diagnosis (CAD) tools can potentially reduce the number of incorrect interpretations of medical image data by radiologists. Datasets with adequate sample size, annotation, and truth are the dominant factors in developing and training effective CAD algorithms. The objective of this study was to produce a practical approach and a tool for the creation of medical image datasets. METHODS The proposed model uses a modified maximum transverse diameter approach to mark a putative lung nodule. The modification involves the possibility of using a set of overlapping spheres of appropriate size to approximate the shape of the nodule. The algorithm embedded in the model also groups the marks made by different readers for the same lesion. We used the data of 536 randomly selected patients of Moscow outpatient clinics to create a dataset of standard-dose chest computed tomography (CT) scans utilizing the double-reading approach with arbitration. Six volunteer radiologists independently produced a report for each scan using the proposed model, with the main focus on the detection of lesions with sizes ranging from 3 to 30 mm. After this, an arbitrator reviewed their marks and annotations. RESULTS The maximum transverse diameter approach outperformed the alternative methods (3D box, ellipsoid, and complete outline construction) in a study of 10,000 computer-generated tumor models of different shapes in terms of accuracy and speed of nodule shape approximation. The markup and annotation of the CTLungCa-500 dataset revealed 72 studies containing no lung nodules. The remaining 464 CT scans contained 3151 lesions marked by at least one radiologist: 56%, 14%, and 29% of the lesions were malignant, benign, and non-nodular, respectively. A total of 2887 lesions had the target size of 3-30 mm. Only 70 nodules were uniformly identified by all six readers. An increase in the number of independent readers providing CT scan interpretations led to an increase in accuracy, accompanied by a decrease in agreement. The dataset markup process took three working weeks. CONCLUSIONS The developed cluster model simplifies the collaborative and crowdsourced creation of image repositories and makes it time-efficient. Our proof-of-concept dataset provides a valuable source of annotated medical imaging data for training CAD algorithms aimed at early detection of lung nodules. The tool and the dataset are publicly available at https://github.com/Center-of-Diagnostics-and-Telemedicine/FAnTom.git and https://mosmed.ai/en/datasets/ct_lungcancer_500/, respectively.
Affiliation(s)
- S P Morozov
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia
- V A Gombolevskiy
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia
- A B Elizarov
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia
- M A Gusev
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia; Federal State Budgetary Educational Institution of Higher Education "Moscow Polytechnic University", Tverskaya str., 11, Moscow, 125993, Russia
- V P Novik
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia
- S B Prokudaylo
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia
- A S Bardin
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia
- E V Popov
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia
- N V Ledikhova
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia
- V Y Chernina
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia
- I A Blokhin
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia
- A E Nikolaev
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia
- R V Reshetnikov
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia; Institute of Molecular Medicine, Sechenov First Moscow State Medical University, Trubetskaya str. 8-2, Moscow, 119991, Russia
- A V Vladzymyrskyy
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia
- N S Kulberg
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia; Federal Research Center "Computer Science and Control" of Russian Academy of Sciences, Vavilova str., 44/2, Moscow, 119333, Russia
36
Shen L, Liu M, Wang C, Guo C, Meijering E, Wang Y. Efficient 3D Junction Detection in Biomedical Images Based on a Circular Sampling Model and Reverse Mapping. IEEE J Biomed Health Inform 2021; 25:1612-1623. [PMID: 33166258 DOI: 10.1109/jbhi.2020.3036743] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Detection and localization of terminations and junctions is a key step in the morphological reconstruction of tree-like structures in images. Previously, a ray-shooting model was proposed to detect termination points automatically. In this paper, we propose an automatic method for 3D junction point detection in biomedical images, relying on a circular sampling model and a 2D-to-3D reverse mapping approach. First, the existing ray-shooting model is improved to a circular sampling model that extracts the pixel intensity distribution across the potential branches around the point of interest; the computation cost is reduced dramatically compared to the ray-shooting model. Then, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm is employed to detect 2D junction points in maximum intensity projections (MIPs) of sub-volume images in a given 3D image, by determining the number of branches in the candidate junction region. Further, a 2D-to-3D reverse mapping approach maps these detected 2D junction points in MIPs to the 3D junction points in the original 3D images. The proposed 3D junction point detection method is implemented as a built-in tool in the Vaa3D platform. Experiments on multiple 2D and 3D images show average precision and recall rates of 87.11% and 88.33%, respectively. In addition, the proposed algorithm is dozens of times faster than the existing deep-learning-based model. The proposed method has excellent performance in both detection precision and computation efficiency, even in large-scale biomedical images.
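The DBSCAN step named in the abstract can be sketched in a few lines; this toy, pure-Python version (parameter names are illustrative, and real implementations use spatial indexing for speed) clusters candidate 2D junction points so that each dense cluster stands in for one branch region:

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 2D points: returns one label per point,
    with -1 marking noise and 0, 1, ... marking clusters."""
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbors = [j for j in range(len(points))
                     if math.dist(points[i], points[j]) <= eps]
        if len(neighbors) < min_pts:
            labels[i] = -1                  # not dense enough: noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = list(neighbors)
        while queue:                        # grow the cluster outward
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster         # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neigh = [k for k in range(len(points))
                       if math.dist(points[j], points[k]) <= eps]
            if len(j_neigh) >= min_pts:     # core point: keep expanding
                queue.extend(j_neigh)
    return labels
```

Counting the distinct non-negative labels then gives the number of candidate branch clusters in a MIP.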
37
Schreuder A, Scholten ET, van Ginneken B, Jacobs C. Artificial intelligence for detection and characterization of pulmonary nodules in lung cancer CT screening: ready for practice? Transl Lung Cancer Res 2021; 10:2378-2388. [PMID: 34164285 PMCID: PMC8182724 DOI: 10.21037/tlcr-2020-lcs-06] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
Lung cancer computed tomography (CT) screening trials using low-dose CT have repeatedly demonstrated a reduction in the number of lung cancer deaths in the screening group compared to a control group. With various countries currently considering the implementation of lung cancer screening, recurring discussion points are, among others, the potentially high false positive rates, cost-effectiveness, and the availability of radiologists for scan interpretation. Artificial intelligence (AI) has the potential to increase the efficiency of lung cancer screening. We discuss the performance levels of AI algorithms for various tasks related to the interpretation of lung screening CT scans, how they compare to human experts, and how AI and humans may complement each other. We discuss how AI may be used in the lung cancer CT screening workflow according to the current evidence and describe the additional research that will be required before AI can take a more prominent role in the analysis of lung screening CT scans.
Affiliation(s)
- Anton Schreuder
- Department of Radiology, Nuclear Medicine, and Anatomy, Radboudumc, Nijmegen, The Netherlands
- Ernst T Scholten
- Department of Radiology, Nuclear Medicine, and Anatomy, Radboudumc, Nijmegen, The Netherlands
- Bram van Ginneken
- Department of Radiology, Nuclear Medicine, and Anatomy, Radboudumc, Nijmegen, The Netherlands; Fraunhofer MEVIS, Bremen, Germany
- Colin Jacobs
- Department of Radiology, Nuclear Medicine, and Anatomy, Radboudumc, Nijmegen, The Netherlands
38
Grassi R, Belfiore MP, Montanelli A, Patelli G, Urraro F, Giacobbe G, Fusco R, Granata V, Petrillo A, Sacco P, Mazzei MA, Feragalli B, Reginelli A, Cappabianca S. COVID-19 pneumonia: computer-aided quantification of healthy lung parenchyma, emphysema, ground glass and consolidation on chest computed tomography (CT). LA RADIOLOGIA MEDICA 2021; 126:553-560. [PMID: 33206301 PMCID: PMC7673247 DOI: 10.1007/s11547-020-01305-9] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/21/2020] [Accepted: 10/29/2020] [Indexed: 01/08/2023]
Abstract
OBJECTIVE To calculate, by means of a computer-aided tool, the volumes of healthy residual lung parenchyma, emphysema, ground glass opacity (GGO), and consolidation on chest computed tomography (CT) in patients with suspected COVID-19 viral pneumonia. MATERIALS AND METHODS This study included 116 patients with suspected COVID-19 infection who underwent the reverse transcription real-time fluorescence polymerase chain reaction (RT-PCR) test. A computer-aided tool was used to calculate healthy residual lung parenchyma, emphysema, GGO, and consolidation volumes on chest CT images for both the right and left lung. Expert radiologists, in consensus, assessed the CT images using a structured report and attributed a radiological severity score to the pulmonary involvement of the disease using a five-level scale. A nonparametric test was performed to assess statistically significant differences among groups. RESULTS GGO was the most represented feature in CT scans suspicious for COVID-19 infection; it was present in 102/109 (93.6%) patients, with a median volume percentage of 19.50% and a median volume of 0.64 L, while the emphysema and consolidation volumes were low (0.01 L and 0.03 L, respectively). Among the quantified volumes, only the GGO volume differed significantly between patients with CT findings suspicious versus non-suspicious for COVID-19 (p < 0.01). There were statistically significant differences among the groups based on radiological severity score in terms of healthy residual parenchyma volume, GGO volume, and consolidation volume (p < 0.001). CONCLUSION We demonstrated that, using a computer-aided tool, COVID-19 pneumonia presented a median GGO volume percentage of 19.50%, and that only the GGO volume differed significantly between patients with CT findings suspicious versus non-suspicious for COVID-19 infection.
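Tools of this kind typically classify lung voxels by Hounsfield-unit (HU) range and multiply the counts by the voxel volume; a minimal sketch, where the threshold values are common literature choices and not necessarily those of the tool used in the study:

```python
def quantify(hu_values, spacing_mm=(0.7, 0.7, 1.0)):
    """Classify lung voxels by Hounsfield-unit range and return the
    volume of each tissue class in litres (1 L = 1e6 mm^3)."""
    vx_l = spacing_mm[0] * spacing_mm[1] * spacing_mm[2] / 1e6
    ranges = {
        "emphysema":     (-1000, -950),   # illustrative HU cut-offs
        "healthy":       (-950, -700),
        "ground_glass":  (-700, -250),
        "consolidation": (-250, 50),
    }
    vols = {name: 0.0 for name in ranges}
    for hu in hu_values:
        for name, (lo, hi) in ranges.items():
            if lo <= hu < hi:
                vols[name] += vx_l
                break
    return vols
```

In practice `hu_values` would be the voxels inside a lung segmentation mask, not the whole scan.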
Affiliation(s)
- Roberto Grassi
- Division of Radiodiagnostic, Università Degli Studi Della Campania Luigi Vanvitelli, Naples, Italy
- Maria Paola Belfiore
- Division of Radiodiagnostic, Università Degli Studi Della Campania Luigi Vanvitelli, Naples, Italy
- Fabrizio Urraro
- Division of Radiodiagnostic, Università Degli Studi Della Campania Luigi Vanvitelli, Naples, Italy
- Giuliana Giacobbe
- Division of Radiodiagnostic, Università Degli Studi Della Campania Luigi Vanvitelli, Naples, Italy
- Roberta Fusco
- Radiology Division, Istituto Nazionale Tumori IRCCS Fondazione Pascale - IRCCS di Napoli, Naples, Italy
- Vincenza Granata
- Radiology Division, Istituto Nazionale Tumori IRCCS Fondazione Pascale - IRCCS di Napoli, Naples, Italy
- Antonella Petrillo
- Radiology Division, Istituto Nazionale Tumori IRCCS Fondazione Pascale - IRCCS di Napoli, Naples, Italy
- Palmino Sacco
- Department of Radiological Sciences, Diagnostic Imaging Unit, Azienda Ospedaliera Universitaria Senese, Siena, Italy
- Maria Antonietta Mazzei
- Department of Radiological Sciences, Diagnostic Imaging Unit, Azienda Ospedaliera Universitaria Senese, Siena, Italy
- Beatrice Feragalli
- Department of Medical, Oral and Biotechnological Sciences - Radiology Unit "G. D'Annunzio", University of Chieti-Pescara, Chieti, Italy
- Alfonso Reginelli
- Division of Radiodiagnostic, Università Degli Studi Della Campania Luigi Vanvitelli, Naples, Italy
- Salvatore Cappabianca
- Division of Radiodiagnostic, Università Degli Studi Della Campania Luigi Vanvitelli, Naples, Italy
39
Pedrosa J, Aresta G, Ferreira C, Atwal G, Phoulady HA, Chen X, Chen R, Li J, Wang L, Galdran A, Bouchachia H, Kaluva KC, Vaidhya K, Chunduru A, Tarai S, Nadimpalli SPP, Vaidya S, Kim I, Rassadin A, Tian Z, Sun Z, Jia Y, Men X, Ramos I, Cunha A, Campilho A. LNDb challenge on automatic lung cancer patient management. Med Image Anal 2021; 70:102027. [PMID: 33740739 DOI: 10.1016/j.media.2021.102027] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2020] [Revised: 01/18/2021] [Accepted: 02/26/2021] [Indexed: 12/21/2022]
Abstract
Lung cancer is the deadliest type of cancer worldwide, and late detection is the major factor behind the low survival rate of patients. Low-dose computed tomography has been suggested as a potential screening tool, but manual screening is costly and time-consuming. This has fuelled the development of automatic methods for the detection, segmentation, and characterisation of pulmonary nodules. In spite of promising results, the application of automatic methods to clinical routine is not straightforward, and only a limited number of studies have addressed the problem in a holistic way. With the goal of advancing the state of the art, the Lung Nodule Database (LNDb) Challenge on automatic lung cancer patient management was organized. The LNDb Challenge addressed lung nodule detection, segmentation, and characterization, as well as prediction of patient follow-up according to the 2017 Fleischner Society pulmonary nodule guidelines. 294 CT scans were collected retrospectively at the Centro Hospitalar e Universitário de São João in Porto, Portugal, and each CT was annotated by at least one radiologist. Annotations comprised nodule centroids, segmentations, and subjective characterization. 58 CTs and the corresponding annotations were withheld as a separate test set. A total of 947 users registered for the challenge, and 11 successful submissions for at least one of the sub-challenges were received. For patient follow-up prediction, a maximum quadratic weighted Cohen's kappa of 0.580 was obtained. In terms of nodule detection, a sensitivity below 0.4 (and 0.7) at 1 false positive per scan was obtained for nodules identified by at least one (and two) radiologist(s). For nodule segmentation, a maximum Jaccard score of 0.567 was obtained, surpassing the interobserver variability. In terms of nodule texture characterization, a maximum quadratic weighted Cohen's kappa of 0.733 was obtained, with part-solid nodules being particularly challenging to classify correctly. Detailed analysis of the proposed methods and the differences in performance allows identification of the major remaining challenges and future directions: data collection, augmentation/generation and evaluation of under-represented classes, the incorporation of scan-level information for better decision-making, and the development of tools and challenges with clinically oriented goals. The LNDb Challenge and associated data remain publicly available so that future methods can be tested and benchmarked, promoting the development of new algorithms in lung cancer medical image analysis and patient follow-up recommendation.
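The quadratic weighted Cohen's kappa used to score follow-up and texture prediction is a standard agreement statistic; a self-contained sketch of the computation (ratings as integer class indices):

```python
def quadratic_weighted_kappa(a, b, n_classes):
    """Quadratic-weighted Cohen's kappa between two integer rating
    lists: 1 = perfect agreement, 0 = chance-level, negative = worse
    than chance."""
    n = len(a)
    # Observed joint counts and marginal histograms.
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for x, y in zip(a, b):
        obs[x][y] += 1
    hist_a = [a.count(c) for c in range(n_classes)]
    hist_b = [b.count(c) for c in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            # Quadratic penalty grows with the squared rating distance.
            w = (i - j) ** 2 / (n_classes - 1) ** 2
            num += w * obs[i][j] / n
            den += w * hist_a[i] * hist_b[j] / (n * n)
    return 1.0 - num / den
```

The quadratic weights mean that predicting an adjacent follow-up class costs far less than predicting a distant one, which suits ordinal scales like the Fleischner categories.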
Affiliation(s)
- João Pedrosa
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal
- Guilherme Aresta
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal; Faculty of Engineering of the University of Porto (FEUP), Porto, Portugal
- Carlos Ferreira
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal; Faculty of Engineering of the University of Porto (FEUP), Porto, Portugal
- Gurraj Atwal
- Department of Computer Science, California State University, Sacramento, USA
- Xiaoyu Chen
- Department of Computer Science, School of Informatics, Xiamen University, China
- Rongzhen Chen
- Department of Computer Science, School of Informatics, Xiamen University, China
- Jiaoliang Li
- Department of Computer Science, School of Informatics, Xiamen University, China
- Liansheng Wang
- Department of Computer Science, School of Informatics, Xiamen University, China
- Adrian Galdran
- Department of Computing and Informatics, Bournemouth University, UK
- Hamid Bouchachia
- Department of Computing and Informatics, Bournemouth University, UK
- Ildoo Kim
- Kakao Brain, Seongnam-si, South Korea
- Zhenhuan Tian
- Department of Thoracic Surgery, Peking Union Medical College Hospital, Peking Union Medical College, Beijing, China
- Yizhuan Jia
- Mediclouds Medical Technology, Beijing, China
- Xuejun Men
- Mediclouds Medical Technology, Beijing, China
- Isabel Ramos
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal; Department of Radiology, Centro Hospitalar e Universitário de S. João, Porto, Portugal
- António Cunha
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal; University of Trás-os-Montes e Alto Douro (UTAD), Vila Real, Portugal
- Aurélio Campilho
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal; Faculty of Engineering of the University of Porto (FEUP), Porto, Portugal
|
40
|
Su Y, Li D, Chen X. Lung Nodule Detection based on Faster R-CNN Framework. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 200:105866. [PMID: 33309304 DOI: 10.1016/j.cmpb.2020.105866] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/06/2020] [Accepted: 11/17/2020] [Indexed: 06/12/2023]
Abstract
BACKGROUND Lung cancer is a worldwide high-risk disease, and lung nodules are the main manifestation of early lung cancer. Automatic detection of lung nodules reduces the workload of radiologists and the rates of misdiagnosis and missed diagnosis. For this purpose, we propose a Faster R-CNN algorithm for the detection of these lung nodules. METHOD The Faster R-CNN algorithm can detect lung nodules, and the training set is used to demonstrate the feasibility of this technique. In theory, parameter optimization can improve the network structure as well as the detection accuracy. RESULT Through experiments, the best parameters are a base learning rate of 0.001, a step size of 70,000, a decay coefficient of 0.1, a dropout rate of 0.5, and a batch size of 64. Compared with other networks for detecting lung nodules, the optimized and improved algorithm proposed in this paper generally improves detection accuracy by more than 20% relative to traditional algorithms. CONCLUSION Our experimental results have proved that the method of detecting lung nodules based on the Faster R-CNN algorithm has good accuracy and therefore presents potential clinical value in lung disease diagnosis. This method can further assist radiologists, as well as researchers designing and developing lung nodule detection systems.
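The reported hyperparameters (base learning rate 0.001, step size 70,000, decay coefficient 0.1) correspond to the common step-decay learning-rate schedule; a sketch, with the function name ours:

```python
def step_decay_lr(iteration, base_lr=0.001, step=70_000, gamma=0.1):
    """Step-decay schedule: multiply the learning rate by `gamma`
    once every `step` training iterations."""
    return base_lr * gamma ** (iteration // step)
```

So training starts at 0.001 and drops by a factor of ten each time 70,000 iterations elapse.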
Affiliation(s)
- Ying Su
- Department of Nursing, Shengjing Hospital of China Medical University, Shenyang, Liaoning, 110000, China
- Dan Li
- Department of Nursing, Shengjing Hospital of China Medical University, Shenyang, Liaoning, 110000, China
- Xiaodong Chen
- Department of Oncology, Shengjing Hospital of China Medical University, Shenyang, Liaoning, 110000, China
41
Shirokikh B, Shevtsov A, Dalechina A, Krivov E, Kostjuchenko V, Golanov A, Gombolevskiy V, Morozov S, Belyaev M. Accelerating 3D Medical Image Segmentation by Adaptive Small-Scale Target Localization. J Imaging 2021; 7:35. [PMID: 34460634 PMCID: PMC8321270 DOI: 10.3390/jimaging7020035] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Revised: 01/28/2021] [Accepted: 02/05/2021] [Indexed: 11/25/2022] Open
Abstract
The prevailing approach for three-dimensional (3D) medical image segmentation is to use convolutional networks. Recently, deep learning methods have achieved human-level performance in several important applied problems, such as volumetry for lung cancer diagnosis or delineation for radiation therapy planning. However, state-of-the-art architectures, such as U-Net and DeepMedic, are computationally heavy and require workstations accelerated with graphics processing units for fast inference, yet scarce research has been conducted on enabling fast central processing unit computations for such networks. Our paper fills this gap. We propose a new segmentation method that mimics a human-like technique for segmenting a 3D study: first, we analyze the image at a small scale to identify areas of interest, and then process only the relevant feature-map patches. Our method not only reduces the inference time from 10 min to 15 s but also preserves state-of-the-art segmentation quality, as we illustrate in a set of experiments with two large datasets.
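The coarse-to-fine idea — run a cheap low-resolution pass first, then apply the expensive model only where that pass fires — can be shown with a toy 1D sketch, where `coarse_score` and `fine_segment` are stand-ins for the small-scale localizer and the full segmentation network:

```python
def coarse_to_fine(signal, coarse_score, fine_segment, patch=8, thr=0.5):
    """Toy 1D adaptive localization: score each patch cheaply and run
    the expensive segmentation only on patches scoring above `thr`."""
    out = [0.0] * len(signal)
    for start in range(0, len(signal), patch):
        chunk = signal[start:start + patch]
        if coarse_score(chunk) >= thr:      # cheap low-resolution pass
            seg = fine_segment(chunk)       # expensive pass, only here
            out[start:start + len(seg)] = seg
    return out
```

Most patches of a CT volume contain no target, so skipping them is where the 10 min to 15 s speed-up comes from.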
Affiliation(s)
- Boris Shirokikh
- Center for Neurobiology and Brain Restoration, Skolkovo Institute of Science and Technology, 121205 Moscow, Russia
- Alexey Shevtsov
- Center for Neurobiology and Brain Restoration, Skolkovo Institute of Science and Technology, 121205 Moscow, Russia
- Sector of Data Analysis for Neuroscience, Kharkevich Institute for Information Transmission Problems, 127051 Moscow, Russia
- Department of Radio Engineering and Cybernetics, Moscow Institute of Physics and Technology, 141701 Moscow, Russia
- Egor Krivov
- Sector of Data Analysis for Neuroscience, Kharkevich Institute for Information Transmission Problems, 127051 Moscow, Russia
- Department of Radio Engineering and Cybernetics, Moscow Institute of Physics and Technology, 141701 Moscow, Russia
- Andrey Golanov
- Department of Radiosurgery and Radiation, Burdenko Neurosurgery Institute, 125047 Moscow, Russia
- Victor Gombolevskiy
- Medical Research Department, Research and Practical Clinical Center of Diagnostics and Telemedicine Technologies of the Department of Health Care of Moscow, 127051 Moscow, Russia
- Sergey Morozov
- Medical Research Department, Research and Practical Clinical Center of Diagnostics and Telemedicine Technologies of the Department of Health Care of Moscow, 127051 Moscow, Russia
- Mikhail Belyaev
- Center for Neurobiology and Brain Restoration, Skolkovo Institute of Science and Technology, 121205 Moscow, Russia
42
Transfer-to-Transfer Learning Approach for Computer Aided Detection of COVID-19 in Chest Radiographs. AI 2020. [DOI: 10.3390/ai1040032] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The coronavirus disease 2019 (COVID-19) global pandemic has severely impacted lives across the globe. Respiratory disorders in COVID-19 patients are caused by lung opacities similar to those of viral pneumonia. A Computer-Aided Detection (CAD) system for the detection of COVID-19 using chest radiographs would provide a second opinion for radiologists. For this research, we utilize publicly available datasets that have been marked by radiologists into two classes (COVID-19 and non-COVID-19). We address the class imbalance problem associated with the training dataset by proposing a novel transfer-to-transfer learning approach, in which we break a highly imbalanced training dataset into a group of balanced mini-sets and apply transfer learning between them. We demonstrate the efficacy of the method using well-established deep convolutional neural networks. Our proposed training mechanism is more robust to limited training data and class imbalance. We study the performance of our algorithms based on 10-fold cross-validation and two hold-out validation experiments. We achieved an overall sensitivity of 0.94 for the hold-out validation experiments, containing 2265 and 2139 chest radiographs marked as COVID-19, respectively. For the 10-fold cross-validation experiment, we achieve an overall Area under the Receiver Operating Characteristic curve (AUC) value of 0.996 for COVID-19 detection. This paper serves as a proof of concept that an automated detection approach can be developed with a limited set of COVID-19 images, and in areas with a scarcity of trained radiologists.
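The balanced mini-set construction described above can be sketched as follows; the function name and the remainder-handling choice are ours:

```python
def balanced_minisets(majority, minority):
    """Split the majority-class samples into chunks the size of the
    minority class and pair each chunk with the full minority class,
    yielding balanced mini-sets for sequential (transfer-to-transfer)
    fine-tuning."""
    k = len(minority)
    sets = []
    for start in range(0, len(majority), k):
        chunk = majority[start:start + k]
        if len(chunk) < k:      # drop (or merge) the short remainder
            break
        sets.append(chunk + minority)
    return sets
```

A model is then fine-tuned on the mini-sets one after another, transferring weights between stages, so every stage sees a balanced class distribution.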
43
Liu Z, Li L, Li T, Luo D, Wang X, Luo D. Does a Deep Learning-Based Computer-Assisted Diagnosis System Outperform Conventional Double Reading by Radiologists in Distinguishing Benign and Malignant Lung Nodules? Front Oncol 2020; 10:545862. [PMID: 33163395 PMCID: PMC7581733 DOI: 10.3389/fonc.2020.545862] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2020] [Accepted: 09/14/2020] [Indexed: 01/10/2023] Open
Abstract
Background In differentiating indeterminate pulmonary nodules, multiple studies have indicated the superiority of deep learning-based computer-assisted diagnosis systems (DL-CADx) over conventional double reading by radiologists, which needs external validation. Therefore, our aim was to externally validate the performance of a commercial DL-CADx in differentiating benign and malignant lung nodules. Methods In this retrospective study, 233 patients with 261 pathologically confirmed lung nodules were enrolled. Double reading was used to rate each nodule using a four-scale malignancy score system: unlikely (0-25%), malignancy cannot be completely excluded (25-50%), highly likely (50-75%), and considered malignant (75-100%), with any disagreement resolved through discussion. The DL-CADx automatically rated each nodule with a malignancy likelihood ranging from 0 to 100%, which was then quadrichotomized accordingly. The intraclass correlation coefficient (ICC) was used to evaluate the agreement in malignancy risk rating between the DL-CADx and double reading, with ICC values of <0.5, 0.5 to 0.75, 0.75 to 0.9, and >0.9 indicating poor, moderate, good, and perfect agreement, respectively. With malignancy likelihood >50% as the cut-off value for malignancy and pathological results as the gold standard, sensitivity, specificity, and accuracy were calculated for double reading and the DL-CADx separately. Results Among the 261 nodules, 247 were successfully detected by the DL-CADx, a detection rate of 94.7%. Regarding malignancy rating, the DL-CADx was in moderate agreement with double reading (ICC = 0.555, 95% CI 0.424 to 0.655). The DL-CADx misdiagnosed 40 true malignant nodules as benign and 30 true benign nodules as malignant, with sensitivity, specificity, and accuracy of 79.2%, 45.5%, and 71.7%, respectively. In contrast, double reading achieved better performance, misdiagnosing 16 true malignant nodules as benign and 26 true benign nodules as malignant, with sensitivity, specificity, and accuracy of 91.7%, 52.7%, and 83.0%, respectively. Conclusion Compared with double reading, the DL-CADx we used still shows inferior performance in differentiating malignant from benign nodules.
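The three metrics compared above follow directly from the confusion counts; a small sketch (the example counts are illustrative, not the study's):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion counts:
    tp/fn are malignant nodules called malignant/benign, and tn/fp are
    benign nodules called benign/malignant."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy
```

The pattern in the study, high sensitivity with low specificity, means most malignant nodules are caught at the cost of many false alarms on benign ones.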
Affiliation(s)
- Zhou Liu
- Department of Radiology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Shenzhen, China
- Li Li
- Department of Radiology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Shenzhen, China
- Tianran Li
- Department of Radiology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Shenzhen, China
- Douqiang Luo
- Department of Radiology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Shenzhen, China; Department of Radiology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Xiaoliang Wang
- Department of Pathology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Shenzhen, China
- Dehong Luo
- Department of Radiology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Shenzhen, China; Department of Radiology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
44
Gao R, Tang Y, Xu K, Huo Y, Bao S, Antic SL, Epstein ES, Deppen S, Paulson AB, Sandler KL, Massion PP, Landman BA. Time-distanced gates in long short-term memory networks. Med Image Anal 2020; 65:101785. [PMID: 32745977 PMCID: PMC7484010 DOI: 10.1016/j.media.2020.101785] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2020] [Revised: 06/26/2020] [Accepted: 07/16/2020] [Indexed: 01/02/2023]
Abstract
The Long Short-Term Memory (LSTM) network is widely used for modeling sequential observations in fields ranging from natural language processing to medical imaging. The LSTM has shown promise for interpreting computed tomography (CT) in lung screening protocols. Yet, traditional image-based LSTM models ignore interval differences, while recently proposed interval-modeled LSTM variants are limited in their ability to interpret temporal proximity. Meanwhile, clinical imaging acquisition may be irregularly sampled, and such sampling patterns may be commingled with clinical usage. In this paper, we propose the Distanced LSTM (DLSTM) by introducing time-distanced (i.e., time distance to the last scan) gates with a temporal emphasis model (TEM), targeting lung cancer diagnosis (i.e., evaluating the malignancy of pulmonary nodules). Briefly, (1) the time distance of every scan to the last scan is modeled explicitly, (2) time-distanced input and forget gates in the DLSTM are introduced across regular and irregular sampling sequences, and (3) the newer scan in serial data is emphasized by the TEM. The DLSTM algorithm is evaluated on both simulated data and real CT images (from 1794 National Lung Screening Trial (NLST) patients with longitudinal scans and 1420 clinically studied patients). Experimental results on simulated data indicate that the DLSTM can capture families of temporal relationships that cannot be detected with a traditional LSTM. Cross-validation on empirical CT datasets demonstrates that the DLSTM achieves leading performance on both regularly and irregularly sampled data (e.g., improving the LSTM from 0.6785 to 0.7085 on F1 score in NLST). In external validation on irregularly acquired data, the benchmarks achieved AUC scores of 0.8350 (CNN feature) and 0.8380 (with LSTM), while the proposed DLSTM achieves 0.8905. In conclusion, the DLSTM approach is shown to be compatible with families of linear, quadratic, exponential, and log-exponential temporal models. The DLSTM can be readily extended with other temporal dependence interactions while hardly increasing overall model complexity.
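The core idea, down-weighting a scan's contribution as its time distance to the most recent scan grows, can be illustrated with a toy temporal-emphasis weight. This is a scalar stand-in for modulating LSTM gates, using the exponential form as an example of the temporal-model families the paper names; the function names and rate parameter are ours:

```python
import math

def temporal_emphasis(time_distance, rate=0.5):
    """Toy temporal-emphasis weight: scans close to the most recent
    acquisition (small time distance) get weights near 1; older scans
    are exponentially down-weighted."""
    return math.exp(-rate * time_distance)

def emphasized_average(features, time_distances, rate=0.5):
    """Weight each scan's scalar feature by its temporal emphasis, a
    stand-in for time-distanced input/forget gating over a sequence."""
    weights = [temporal_emphasis(d, rate) for d in time_distances]
    return sum(w * f for w, f in zip(weights, features)) / sum(weights)
```

With irregular sampling, the weights depend on the actual acquisition dates rather than on the scan's position in the sequence.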
Affiliation(s)
- Riqiang Gao
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Yucheng Tang
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Kaiwen Xu
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Yuankai Huo
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Shunxing Bao
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Sanja L Antic
- Medicine, Vanderbilt University School of Medicine, Nashville, TN 37235, USA
- Emily S Epstein
- Medicine, Vanderbilt University School of Medicine, Nashville, TN 37235, USA
- Steve Deppen
- Thoracic Surgery, Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Alexis B Paulson
- Radiology, Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Kim L Sandler
- Radiology, Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Pierre P Massion
- Medicine, Vanderbilt University School of Medicine, Nashville, TN 37235, USA
- Bennett A Landman
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA; Radiology, Vanderbilt University Medical Center, Nashville, TN 37235, USA
45
Farhat H, Sakr GE, Kilany R. Deep learning applications in pulmonary medical imaging: recent updates and insights on COVID-19. Mach Vis Appl 2020; 31:53. [PMID: 32834523 PMCID: PMC7386599 DOI: 10.1007/s00138-020-01101-5] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Revised: 06/21/2020] [Accepted: 07/07/2020] [Indexed: 05/07/2023]
Abstract
Shortly after deep learning algorithms were applied to image analysis, and more importantly to medical imaging, their applications increased significantly and became a trend. Likewise, deep learning (DL) applications on pulmonary medical images have achieved remarkable advances leading to promising clinical trials. The COVID-19 pandemic may prove to be the real trigger that opens the route for fast integration of DL in hospitals and medical centers. This paper reviews the development of deep learning applications in medical image analysis, targeting pulmonary imaging and giving insights into contributions to COVID-19. It covers more than 160 contributions and surveys in this field, all published between February 2017 and May 2020 inclusive, highlighting various deep learning tasks such as classification, segmentation, and detection, as well as different pulmonary pathologies such as airway diseases, lung cancer, COVID-19, and other infections. It summarizes and discusses the current state-of-the-art approaches in this research domain, highlighting the challenges, especially in light of the ongoing COVID-19 pandemic.
Affiliation(s)
- Hanan Farhat
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
- George E. Sakr
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
- Rima Kilany
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
46
Gao R, Huo Y, Bao S, Tang Y, Antic SL, Epstein ES, Deppen S, Paulson AB, Sandler KL, Massion PP, Landman BA. Multi-path x-D Recurrent Neural Networks for Collaborative Image Classification. Neurocomputing 2020; 397:48-59. [PMID: 32863584 PMCID: PMC7454345 DOI: 10.1016/j.neucom.2020.02.033] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
With the rapid development of image acquisition and storage, multiple images per class are commonly available for computer vision tasks (e.g., face recognition, object detection, and medical imaging). Recently, the recurrent neural network (RNN) has been widely integrated with convolutional neural networks (CNNs) to perform image classification on ordered (sequential) data. In this paper, by permuting multiple images into multiple dummy orders, we generalize the ordered "RNN+CNN" (longitudinal) design to a novel unordered fashion, called the Multi-path x-D Recurrent Neural Network (MxDRNN), for image classification. To the best of our knowledge, few (if any) existing studies have deployed the RNN framework on unordered intra-class images to improve classification performance. Specifically, multiple learning paths are introduced in the MxDRNN to extract discriminative features by permuting input dummy orders. Eight datasets from five different fields (MNIST, 3D-MNIST, CIFAR, VGGFace2, and lung screening computed tomography) are included to evaluate the performance of our method. The proposed MxDRNN improves the baseline performance by a large margin across the different application fields (e.g., accuracy from 46.40% to 76.54% on the VGGFace2 test pose set, AUC from 0.7418 to 0.8162 on the NLST lung dataset). Additionally, empirical experiments show the MxDRNN is more robust to category-irrelevant attributes (e.g., expression and pose in face images), which may introduce difficulties for image classification and algorithm generalizability. The code is publicly available.
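The unordered multi-path idea in the abstract can be sketched as below. The toy `rnn_path_score` is an assumption standing in for a real CNN-feature-plus-RNN path; the point illustrated is that aggregating an order-sensitive score over multiple dummy orders yields an order-invariant prediction for unordered intra-class images.

```python
from itertools import permutations

def rnn_path_score(sequence):
    # Stand-in for one learning path (in the paper, CNN features fed
    # through an RNN). This toy score is deliberately order-sensitive:
    # earlier items in the dummy order are weighted more heavily.
    return sum(x / step for step, x in enumerate(sequence, start=1))

def mxdrnn_score(images):
    # Permute the unordered intra-class images into multiple dummy
    # orders, score each path, and aggregate. Averaging over all
    # permutations makes the final prediction order-invariant.
    perms = list(permutations(images))
    return sum(rnn_path_score(p) for p in perms) / len(perms)
```

Enumerating every permutation is only practical for tiny inputs; a real model would presumably sample a fixed set of dummy orders (an assumption here), but full enumeration makes the invariance easy to verify.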
Affiliation(s)
- Riqiang Gao
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA; Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Yuankai Huo
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA; Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Shunxing Bao
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA; Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Yucheng Tang
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA; Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Sanja L Antic
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA; Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Emily S Epstein
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA; Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Steve Deppen
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA; Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Alexis B Paulson
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA; Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Kim L Sandler
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA; Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Pierre P Massion
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA; Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Bennett A Landman
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA; Vanderbilt University Medical Center, Nashville, TN 37235, USA
47
Perez G, Arbelaez P. Automated lung cancer diagnosis using three-dimensional convolutional neural networks. Med Biol Eng Comput 2020; 58:1803-1815. [PMID: 32504345 DOI: 10.1007/s11517-020-02197-7] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2019] [Accepted: 05/20/2020] [Indexed: 11/24/2022]
Abstract
Lung cancer is the deadliest cancer worldwide. It has been shown that early detection using low-dose computed tomography (LDCT) scans can reduce deaths caused by this disease. We present a general framework for the detection of lung cancer in chest LDCT images. Our method consists of a nodule detector trained on the LIDC-IDRI dataset, followed by a cancer predictor trained on the Kaggle DSB 2017 dataset and evaluated on the IEEE International Symposium on Biomedical Imaging (ISBI) 2018 Lung Nodule Malignancy Prediction test set. Our candidate extraction approach effectively produces accurate candidates with a recall of 99.6%. In addition, our false positive reduction stage successfully classifies the candidates and increases precision by a factor of 2000. Our cancer predictor obtained an ROC AUC of 0.913 and was ranked first at the ISBI 2018 Lung Nodule Malignancy Prediction challenge.
Affiliation(s)
- Gustavo Perez
- Universidad de los Andes, Cra 1 N 18A-12, Bogota, 111711, Colombia
- Pablo Arbelaez
- Universidad de los Andes, Cra 1 N 18A-12, Bogota, 111711, Colombia
48
Gao XW, James-Reynolds C, Currie E. Analysis of tuberculosis severity levels from CT pulmonary images based on enhanced residual deep learning architecture. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.12.086] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
49
50
Tan Y, Liu M, Chen W, Wang X, Peng H, Wang Y. DeepBranch: Deep Neural Networks for Branch Point Detection in Biomedical Images. IEEE Trans Med Imaging 2020; 39:1195-1205. [PMID: 31603774 DOI: 10.1109/tmi.2019.2945980] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Morphology reconstruction of tree-like structures in volumetric images, such as neurons, retinal blood vessels, and bronchi, is of fundamental interest for biomedical research. 3D branch points play an important role in many reconstruction applications, especially for graph-based or seed-based reconstruction methods, and can help to visualize morphological structures. A few hand-crafted models have been proposed to detect branch points; however, they are highly dependent on empirically set parameters for different images. In this paper, we propose the DeepBranch model for branch point detection with two levels of purpose-designed convolutional networks: a candidate region segmenter and a false positive reducer. On the first level, an improved 3D U-Net model with anisotropic convolution kernels is employed to detect initial candidates. Compared with the traditional sliding window strategy, the improved 3D U-Net avoids massive redundant computation and dramatically speeds up detection by employing dense inference with fully convolutional networks (FCNs). On the second level, a method based on multi-scale multi-view convolutional neural networks (MSMV-Net) is proposed for false positive reduction by feeding multi-scale views of 3D volumes into multiple streams of 2D convolutional neural networks (CNNs), which can take full advantage of spatial contextual information and accommodate branch points of different sizes. Experiments on multiple 3D biomedical images of neurons, retinal blood vessels, and bronchi confirm that the proposed 3D branch point detection method outperforms other state-of-the-art detection methods and is helpful for graph-based or seed-based reconstruction methods.
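The two-level detect-then-verify structure described above can be sketched schematically. Everything here is a stand-in for illustration: `candidate_segmenter` abstracts the improved 3D U-Net as a thresholded probability map (giving high-recall candidates), and `false_positive_reducer` abstracts MSMV-Net as a second, stricter verifier.

```python
def candidate_segmenter(prob_map, threshold=0.5):
    # Level 1 stand-in for the improved 3D U-Net: dense inference yields a
    # per-voxel branch-point probability map; a permissive threshold keeps
    # recall high at the cost of false positives.
    return [i for i, p in enumerate(prob_map) if p > threshold]

def false_positive_reducer(candidates, verify):
    # Level 2 stand-in for MSMV-Net: re-examine each candidate with a
    # stricter multi-scale, multi-view classifier and keep confirmations.
    return [c for c in candidates if verify(c)]

# Toy 1D "volume" of branch-point probabilities.
prob_map = [0.1, 0.9, 0.6, 0.2, 0.8]
candidates = candidate_segmenter(prob_map)                       # high recall
confirmed = false_positive_reducer(candidates,
                                   lambda i: prob_map[i] > 0.7)  # high precision
```

The design choice mirrors the cascade pattern common to detection pipelines: a cheap, exhaustive first stage so no true branch point is missed, then an expensive second stage applied only to the surviving candidates.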