1
Sun D, Hadjiiski L, Gormley J, Chan HP, Caoili E, Cohan R, Alva A, Bruno G, Mihalcea R, Zhou C, Gulani V. Outcome Prediction Using Multi-Modal Information: Integrating Large Language Model-Extracted Clinical Information and Image Analysis. Cancers (Basel) 2024; 16:2402. PMID: 39001463; PMCID: PMC11240460; DOI: 10.3390/cancers16132402.
Abstract
Survival prediction post-cystectomy is essential for the follow-up care of bladder cancer patients. This study aimed to evaluate artificial intelligence (AI) large language models (LLMs) for extracting clinical information and improving image analysis, with an initial application to predicting five-year survival rates of patients after radical cystectomy for bladder cancer. Data were retrospectively collected from medical records and CT urograms (CTUs) of bladder cancer patients between 2001 and 2020. Of 781 patients, 163 underwent chemotherapy, had pre- and post-chemotherapy CTUs, underwent radical cystectomy, and had an available post-surgery five-year survival follow-up. Five AI-LLMs (Dolly-v2, Vicuna-13b, Llama-2.0-13b, GPT-3.5, and GPT-4.0) were used to extract clinical descriptors from each patient's medical records. As a reference standard, clinical descriptors were also extracted manually. Radiomics and deep learning descriptors were extracted from CTU images. The developed multi-modal predictive model, CRD, was based on the clinical (C), radiomics (R), and deep learning (D) descriptors. The LLM retrieval accuracy was assessed. The performances of the survival predictive models were evaluated using AUC and Kaplan-Meier analysis. For the 163 patients (mean age 64 ± 9 years; M:F 131:32), the LLMs achieved extraction accuracies of 74-87% (Dolly), 76-83% (Vicuna), 82-93% (Llama), 85-91% (GPT-3.5), and 94-97% (GPT-4.0). For a test dataset of 64 patients, the CRD model achieved AUCs of 0.89 ± 0.04 (manually extracted information), 0.87 ± 0.05 (Dolly), 0.83 ± 0.06 to 0.84 ± 0.05 (Vicuna), 0.81 ± 0.06 to 0.86 ± 0.05 (Llama), 0.85 ± 0.05 to 0.88 ± 0.05 (GPT-3.5), and 0.87 ± 0.05 to 0.88 ± 0.05 (GPT-4.0). This study demonstrates the use of LLM-extracted clinical information, in conjunction with imaging analysis, to improve the prediction of clinical outcomes, with bladder cancer as an initial example.
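The abstract reports performance as AUCs with standard errors. As a reference for how an AUC can be computed from predicted risk scores, here is a minimal sketch (our illustration with made-up numbers, not code or data from the paper) using the rank-based Mann-Whitney formulation:

```python
import numpy as np

def auc_mann_whitney(labels, scores):
    """AUC as the probability that a randomly chosen positive case
    outranks a randomly chosen negative one; ties count as half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([1, 1, 0, 0, 0])            # 1 = event (e.g., deceased)
scores = np.array([0.9, 0.6, 0.7, 0.3, 0.2])  # hypothetical model risk scores
print(auc_mann_whitney(labels, scores))       # 5/6 ≈ 0.833
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation of the two outcome groups.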
Affiliation(s)
- Di Sun
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Lubomir Hadjiiski
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- John Gormley
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Elaine Caoili
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Richard Cohan
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Ajjai Alva
- Department of Internal Medicine-Hematology/Oncology, University of Michigan, Ann Arbor, MI 48109, USA
- Grace Bruno
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Rada Mihalcea
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, USA
- Chuan Zhou
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Vikas Gulani
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
2
Roy R, Mazumdar S, Chowdhury AS. ADGAN: Attribute-Driven Generative Adversarial Network for Synthesis and Multiclass Classification of Pulmonary Nodules. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:2484-2495. PMID: 35853058; DOI: 10.1109/tnnls.2022.3190331.
Abstract
Lung cancer is the leading cause of cancer-related deaths worldwide. According to the American Cancer Society, early diagnosis of pulmonary nodules in computed tomography (CT) scans can improve the five-year survival rate up to 70% with proper treatment planning. In this article, we propose an attribute-driven Generative Adversarial Network (ADGAN) for the synthesis and multiclass classification of pulmonary nodules. A self-attention U-Net (SaUN) architecture is proposed to improve the generation mechanism of the network. The generator is designed with two modules, namely, a self-attention attribute module (SaAM) and a self-attention spatial module (SaSM). SaAM generates a nodule image based on given attributes, whereas SaSM specifies the nodule region of the input image to be altered. A reconstruction loss along with an attention localization loss (AL) is used to produce an attention map prioritizing the nodule regions. To avoid resemblance between a generated image and a real image, we further introduce an adversarial loss containing a regularization term based on KL divergence. The discriminator part of the proposed model is designed to achieve the multiclass nodule classification task. Our proposed approach is validated on two challenging publicly available datasets, namely LIDC-IDRI and LUNGx. Exhaustive experimentation on these two datasets clearly indicates that we achieve promising classification accuracy compared to other state-of-the-art methods.
3
Zhang X, Yang P, Tian J, Wen F, Chen X, Muhammad T. Classification of benign and malignant pulmonary nodule based on local-global hybrid network. Journal of X-Ray Science and Technology 2024; 32:689-706. PMID: 38277335; DOI: 10.3233/xst-230291.
Abstract
BACKGROUND: The accurate classification of pulmonary nodules has great application value in assisting doctors in diagnosing conditions and meeting clinical needs. However, the complexity and heterogeneity of pulmonary nodules make it difficult to extract valuable characteristics, so high-accuracy classification remains challenging.
OBJECTIVE: We propose a local-global hybrid network (LGHNet) to jointly model local and global information and improve the classification of benign and malignant pulmonary nodules.
METHODS: First, we introduce the multi-scale local (MSL) block, which splits the input tensor into multiple channel groups, utilizing dilated convolutions with different dilation rates and efficient channel attention to extract fine-grained local information at different scales. Second, we design the hybrid attention (HA) block to capture long-range dependencies in the spatial and channel dimensions and enhance the representation of global features.
RESULTS: Experiments were carried out on the publicly available LIDC-IDRI and LUNGx datasets. On LIDC-IDRI, the accuracy, sensitivity, precision, specificity, and area under the curve (AUC) are 94.42%, 94.25%, 93.05%, 92.87%, and 97.26%, respectively; the AUC on LUNGx is 79.26%.
CONCLUSION: These results are superior to those of state-of-the-art methods, indicating that the network has better classification performance and generalization ability.
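The dilated convolutions at the heart of the MSL block enlarge the receptive field by spacing kernel taps `rate` pixels apart, at no extra parameter cost. A minimal NumPy sketch of the operation (illustrative only; the paper's block additionally uses channel grouping and efficient channel attention, which are omitted here):

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """Valid-mode 2D convolution with a dilation rate: kernel taps are
    spaced `rate` pixels apart, enlarging the receptive field."""
    kh, kw = kernel.shape
    eh, ew = (kh - 1) * rate + 1, (kw - 1) * rate + 1  # effective kernel size
    H, W = x.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + eh:rate, j:j + ew:rate]  # sample dilated taps
            out[i, j] = (patch * kernel).sum()
    return out

x = np.arange(36, dtype=float).reshape(6, 6)
k = np.ones((3, 3))
print(dilated_conv2d(x, k, rate=1).shape)  # (4, 4)
print(dilated_conv2d(x, k, rate=2).shape)  # (2, 2)
```

With a 3x3 kernel, rate 1 sees a 3x3 neighborhood while rate 2 sees 5x5, which is how a multi-scale block can gather context at several scales from the same kernel size.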
Affiliation(s)
- Xin Zhang
- Smart City College, Beijing Union University, Beijing, China
- Ping Yang
- Smart City College, Beijing Union University, Beijing, China
- Ji Tian
- Smart City College, Beijing Union University, Beijing, China
- Fan Wen
- Smart City College, Beijing Union University, Beijing, China
- Xi Chen
- Smart City College, Beijing Union University, Beijing, China
- Tayyab Muhammad
- School of Electrical and Electronic Engineering, North China Electric Power University, Beijing, China
4
Armato SG, Drukker K, Hadjiiski L. AI in medical imaging grand challenges: translation from competition to research benefit and patient care. Br J Radiol 2023; 96:20221152. PMID: 37698542; PMCID: PMC10546459; DOI: 10.1259/bjr.20221152.
Abstract
Artificial intelligence (AI), in one form or another, has been a part of medical imaging for decades. The recent evolution of AI into approaches such as deep learning has dramatically accelerated the application of AI across a wide range of radiologic settings. Despite the promises of AI, developers and users of AI technology must be fully aware of its potential biases and pitfalls, and this knowledge must be incorporated throughout the AI system development pipeline that involves training, validation, and testing. Grand challenges offer an opportunity to advance the development of AI methods for targeted applications and provide a mechanism for both directing and facilitating the development of AI systems. In the process, a grand challenge centralizes (with the challenge organizers) the burden of providing a valid benchmark test set to assess performance and generalizability of participants' models and the collection and curation of image metadata, clinical/demographic information, and the required reference standard. The most relevant grand challenges are those designed to maximize the open-science nature of the competition, with code and trained models deposited for future public access. The ultimate goal of AI grand challenges is to foster the translation of AI systems from competition to research benefit and patient care. Rather than reference the many medical imaging grand challenges that have been organized by groups such as MICCAI, RSNA, AAPM, and grand-challenge.org, this review assesses the role of grand challenges in promoting AI technologies for research advancement and for eventual clinical implementation, including their promises and limitations.
Affiliation(s)
- Samuel G Armato
- Department of Radiology, The University of Chicago, Chicago, Illinois, USA
- Karen Drukker
- Department of Radiology, The University of Chicago, Chicago, Illinois, USA
- Lubomir Hadjiiski
- Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
5
Sun D, Hadjiiski L, Gormley J, Chan HP, Caoili EM, Cohan RH, Alva A, Gulani V, Zhou C. Survival Prediction of Patients with Bladder Cancer after Cystectomy Based on Clinical, Radiomics, and Deep-Learning Descriptors. Cancers (Basel) 2023; 15:4372. PMID: 37686647; PMCID: PMC10486459; DOI: 10.3390/cancers15174372.
Abstract
Accurate survival prediction for bladder cancer patients who have undergone radical cystectomy can improve their treatment management. However, existing predictive models do not take advantage of both clinical and radiological imaging data. This study aimed to fill this gap by developing an approach that leverages the strengths of clinical (C), radiomics (R), and deep-learning (D) descriptors to improve survival prediction. The dataset comprised 163 patients, including clinical and histopathological information and CT urography scans. The data were divided by patient into training, validation, and test sets. We analyzed the clinical data with a nomogram and the image data with radiomics and deep-learning models. The descriptors were input into a back-propagation neural network (BPNN) for survival prediction. The AUCs on the test set were (C): 0.82 ± 0.06, (R): 0.73 ± 0.07, (D): 0.71 ± 0.07, (CR): 0.86 ± 0.05, (CD): 0.86 ± 0.05, and (CRD): 0.87 ± 0.05. The predictions based on the D and CRD descriptors showed a significant difference (p = 0.007). For Kaplan-Meier survival analysis, the deceased and alive groups were stratified successfully by C (p < 0.001) and CRD (p < 0.001), with CRD predicting the alive group more accurately. The results highlight the potential of combining C, R, and D descriptors to accurately predict the survival of bladder cancer patients after cystectomy.
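Kaplan-Meier analysis, used above to stratify the deceased and alive groups, estimates a survival curve while correctly handling censored follow-up. A minimal NumPy sketch on toy, hypothetical follow-up data (not the study's data):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve from follow-up times and event
    indicators (1 = death observed, 0 = censored)."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    n_at_risk = len(times)
    curve = []  # (event time, survival probability)
    s = 1.0
    for t in np.unique(times):
        mask = times == t
        deaths = events[mask].sum()
        if deaths:
            s *= 1.0 - deaths / n_at_risk
            curve.append((float(t), s))
        n_at_risk -= mask.sum()  # deaths and censored cases leave the risk set
    return curve

# Toy follow-up data in years (hypothetical, for illustration only)
times = np.array([1.0, 2.0, 2.0, 3.0, 5.0])
events = np.array([1, 1, 0, 1, 0])
print(kaplan_meier(times, events))
```

At each observed event time the survival probability is multiplied by (1 - deaths / number at risk); censored patients leave the risk set without pulling the curve down.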
Affiliation(s)
- Di Sun
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Lubomir Hadjiiski
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- John Gormley
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Elaine M. Caoili
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Richard H. Cohan
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Ajjai Alva
- Department of Internal Medicine-Hematology/Oncology, University of Michigan, Ann Arbor, MI 48109, USA
- Vikas Gulani
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Chuan Zhou
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
6
Brocki L, Chung NC. Integration of Radiomics and Tumor Biomarkers in Interpretable Machine Learning Models. Cancers (Basel) 2023; 15:2459. PMID: 37173930; PMCID: PMC10177141; DOI: 10.3390/cancers15092459.
Abstract
Despite the unprecedented performance of deep neural networks (DNNs) in computer vision, their clinical application in the diagnosis and prognosis of cancer using medical imaging has been limited. One of the critical challenges for integrating diagnostic DNNs into radiological and oncological applications is their lack of interpretability, preventing clinicians from understanding the model predictions. Therefore, we studied and propose the integration of expert-derived radiomics and DNN-predicted biomarkers in interpretable classifiers, which we refer to as ConRad, for computerized tomography (CT) scans of lung cancer. Importantly, the tumor biomarkers can be predicted from a concept bottleneck model (CBM) such that once trained, our ConRad models do not require labor-intensive and time-consuming biomarkers. In our evaluation and practical application, the only input to ConRad is a segmented CT scan. The proposed model was compared to convolutional neural networks (CNNs) which act as a black box classifier. We further investigated and evaluated all combinations of radiomics, predicted biomarkers and CNN features in five different classifiers. We found the ConRad models using nonlinear SVM and the logistic regression with the Lasso outperformed the others in five-fold cross-validation, with the interpretability of ConRad being its primary advantage. The Lasso is used for feature selection, which substantially reduces the number of nonzero weights while increasing the accuracy. Overall, the proposed ConRad model combines CBM-derived biomarkers and radiomics features in an interpretable ML model which demonstrates excellent performance for lung nodule malignancy classification.
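The feature-selection behavior of the Lasso noted above (most weights driven exactly to zero) can be reproduced in a few lines of proximal gradient descent (ISTA). This is an illustrative sketch on synthetic data, not the ConRad implementation:

```python
import numpy as np

def lasso_ista(X, y, lam, lr=0.01, iters=2000):
    """Lasso linear regression via ISTA: a gradient step on the squared
    error followed by soft-thresholding, which drives uninformative
    weights exactly to zero (the feature-selection effect)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft threshold
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true_w = np.zeros(10)
true_w[:2] = [3.0, -2.0]                   # only 2 of 10 features informative
y = X @ true_w + 0.1 * rng.normal(size=200)
w = lasso_ista(X, y, lam=0.5)
print(np.count_nonzero(np.abs(w) > 1e-8))  # most weights shrink to zero
```

The soft-thresholding step is what substantially reduces the number of nonzero weights; here only the two truly informative coefficients survive.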
Affiliation(s)
- Lennart Brocki
- Institute of Informatics, University of Warsaw, Banacha 2, 02-097 Warsaw, Poland
7
Alhares H, Tanha J, Balafar MA. AMTLDC: a new adversarial multi-source transfer learning framework to diagnosis of COVID-19. Evolving Systems 2023; 14:1-15. PMID: 38625255; PMCID: PMC9838404; DOI: 10.1007/s12530-023-09484-2.
Abstract
In recent years, deep learning techniques have been widely used to diagnose diseases. However, in some tasks, such as the diagnosis of COVID-19, insufficient data can leave the model improperly trained and, as a result, reduce its generalizability. For example, a model trained on one CT scan dataset and tested on another predicts near-random results. To address this, data from several different sources can be combined using transfer learning, taking into account the intrinsic and natural differences in existing datasets obtained with different medical imaging tools and approaches. In this paper, to improve the transfer learning technique and achieve better generalizability across multiple data sources, we propose a multi-source adversarial transfer learning model, namely AMTLDC. In AMTLDC, representations are learned that are similar among the sources; in other words, the extracted representations are general and not dependent on the particular dataset domain. We apply AMTLDC to predict COVID-19 from medical images using a convolutional neural network. We show that accuracy can be improved using the AMTLDC framework, surpassing the results of current successful transfer learning approaches. In particular, we show that AMTLDC works well when using different dataset domains, or when there is insufficient data.
Affiliation(s)
- Hadi Alhares
- Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, 29th Bahman Blvd, Tabriz, 5166616471, Iran
- Jafar Tanha
- Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, 29th Bahman Blvd, Tabriz, 5166616471, Iran
- Mohammad Ali Balafar
- Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, 29th Bahman Blvd, Tabriz, 5166616471, Iran
8
Liu J, Cao L, Akin O, Tian Y. Robust and accurate pulmonary nodule detection with self-supervised feature learning on domain adaptation. Frontiers in Radiology 2022; 2:1041518. PMID: 37492669; PMCID: PMC10365286; DOI: 10.3389/fradi.2022.1041518.
Abstract
Medical imaging data annotation is expensive and time-consuming. Supervised deep learning approaches may encounter overfitting if trained with limited medical data, which further affects the robustness of computer-aided diagnosis (CAD) on CT scans collected by various scanner vendors. Additionally, the high false-positive rate of automatic lung nodule detection methods prevents their application in daily clinical routine diagnosis. To tackle these issues, we first introduce a novel self-learning schema to train a pre-trained model by learning rich feature representations from large-scale unlabeled data without extra annotation, which guarantees consistent detection performance over novel datasets. Then, a 3D feature pyramid network (3DFPN) is proposed for high-sensitivity nodule detection by extracting multi-scale features, where the weights of the backbone network are initialized by the pre-trained model and then fine-tuned in a supervised manner. Further, a High Sensitivity and Specificity (HS2) network is proposed to reduce false positives by tracking the appearance changes among continuous CT slices on Location History Images (LHI) for the detected nodule candidates. The proposed method's performance and robustness are evaluated on several publicly available datasets, including LUNA16, SPIE-AAPM, LungTIME, and HMS. Our proposed detector achieves a state-of-the-art 90.6% sensitivity at 1/8 false positives per scan on the LUNA16 dataset. The framework's generalizability has been evaluated on three additional datasets (i.e., SPIE-AAPM, LungTIME, and HMS) captured by different types of CT scanners.
Affiliation(s)
- Jingya Liu
- The City College of New York, New York, NY, USA
- Oguz Akin
- Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Yingli Tian
- The City College of New York, New York, NY, USA
9
Katase S, Ichinose A, Hayashi M, Watanabe M, Chin K, Takeshita Y, Shiga H, Tateishi H, Onozawa S, Shirakawa Y, Yamashita K, Shudo J, Nakamura K, Nakanishi A, Kuroki K, Yokoyama K. Development and performance evaluation of a deep learning lung nodule detection system. BMC Med Imaging 2022; 22:203. PMID: 36419044; PMCID: PMC9682774; DOI: 10.1186/s12880-022-00938-8.
Abstract
BACKGROUND: Lung cancer is the leading cause of cancer-related deaths throughout the world. Chest computed tomography (CT) is now widely used in the screening and diagnosis of lung cancer due to its effectiveness. Radiologists must identify each small nodule shadow from 3D volume images, which is very burdensome and often results in missed nodules. To address these challenges, we developed a computer-aided detection (CAD) system that automatically detects lung nodules in CT images.
METHODS: A total of 1997 chest CT scans were collected for algorithm development. The algorithm was designed using deep learning technology. In addition to evaluating detection performance on various public datasets, its robustness to changes in radiation dose was assessed by a phantom study. To investigate the clinical usefulness of the CAD system, a reader study was conducted with 10 doctors, including inexperienced and expert readers. This study investigated whether use of the CAD as a second reader could prevent nodular lesions in lungs that require follow-up examinations from being overlooked. Analysis was performed using the Jackknife Free-Response Receiver-Operating Characteristic (JAFROC) method.
RESULTS: The CAD system achieved sensitivities of 0.98 and 0.96 at 3.1 and 7.25 false positives per case on two public datasets. Sensitivity did not change within the range of practical doses in the phantom study. The reader study showed that use of this system as a second reader significantly improved the detection of nodules that could be picked up clinically (p = 0.026).
CONCLUSIONS: We developed a deep learning-based CAD system that is robust to imaging conditions. Using this system as a second reader increased detection performance.
Affiliation(s)
- Shichiro Katase
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Akimichi Ichinose
- Imaging Technology Center, ICT Strategy Division, Fujifilm Corporation, 2-26-30, Nishi-Azabu, Minato-ku, Tokyo, Japan
- Mahiro Hayashi
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Masanaka Watanabe
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Kinka Chin
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Yuhei Takeshita
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Hisae Shiga
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Hidekatsu Tateishi
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Shiro Onozawa
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Yuya Shirakawa
- Department of Radiology, Kyorin University Hospital, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Koji Yamashita
- Department of Radiology, Kyorin University Hospital, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Jun Shudo
- Department of Radiology, Kyorin University Hospital, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Keigo Nakamura
- Imaging Technology Center, ICT Strategy Division, Fujifilm Corporation, 2-26-30, Nishi-Azabu, Minato-ku, Tokyo, Japan
- Akihito Nakanishi
- Department of Radiology, Kyorin University Hospital, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Kazunori Kuroki
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Kenichi Yokoyama
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
10
Niu C, Wang G. Unsupervised contrastive learning based transformer for lung nodule detection. Phys Med Biol 2022; 67. PMID: 36113445; PMCID: PMC10040209; DOI: 10.1088/1361-6560/ac92ba.
Abstract
Objective: Early detection of lung nodules with computed tomography (CT) is critical for the longer survival of lung cancer patients and better quality of life. Computer-aided detection/diagnosis (CAD) is proven valuable as a second or concurrent reader in this context. However, accurate detection of lung nodules remains a challenge for such CAD systems and even radiologists, due not only to the variability in size, location, and appearance of lung nodules but also the complexity of lung structures. This leads to a high false-positive rate with CAD, compromising its clinical efficacy.
Approach: Motivated by recent computer vision techniques, here we present a self-supervised region-based 3D transformer model to identify lung nodules among a set of candidate regions. Specifically, a 3D vision transformer is developed that divides a CT volume into a sequence of non-overlapping cubes, extracts embedding features from each cube with an embedding layer, and analyzes all embedding features with a self-attention mechanism for the prediction. To effectively train the transformer model on a relatively small dataset, a region-based contrastive learning method is used to boost performance by pre-training the 3D transformer on public CT images.
Results: Our experiments show that the proposed method can significantly improve the performance of lung nodule screening in comparison with commonly used 3D convolutional neural networks.
Significance: This study demonstrates a promising direction for improving the performance of current CAD systems for lung nodule detection.
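The tokenization step described above (dividing a CT volume into a sequence of non-overlapping cubes) reduces to a reshape and transpose. A minimal NumPy sketch (our illustration; the embedding layer and self-attention that follow are not shown):

```python
import numpy as np

def volume_to_cubes(vol, cube):
    """Divide a 3D volume into a sequence of non-overlapping cubes, the
    tokenization step of a 3D vision transformer. Dimensions must be
    divisible by the cube size (pad or crop beforehand otherwise)."""
    D, H, W = vol.shape
    assert D % cube == 0 and H % cube == 0 and W % cube == 0
    x = vol.reshape(D // cube, cube, H // cube, cube, W // cube, cube)
    x = x.transpose(0, 2, 4, 1, 3, 5)   # group the three cube axes last
    return x.reshape(-1, cube ** 3)     # (num_cubes, voxels_per_cube)

vol = np.arange(4 * 4 * 4, dtype=np.float32).reshape(4, 4, 4)
tokens = volume_to_cubes(vol, cube=2)
print(tokens.shape)  # (8, 8): 8 cubes of 2x2x2 = 8 voxels each
```

Each row of `tokens` is one cube's voxels, ready to be projected by an embedding layer into the transformer's input sequence.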
Affiliation(s)
- Chuang Niu
- Biomedical Imaging Center, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, New York, United States of America
- Ge Wang
- Biomedical Imaging Center, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, New York, United States of America
11
Saihood A, Karshenas H, Nilchi ARN. Deep fusion of gray level co-occurrence matrices for lung nodule classification. PLoS One 2022; 17:e0274516. PMID: 36174073; PMCID: PMC9521911; DOI: 10.1371/journal.pone.0274516.
Abstract
Lung cancer is a serious threat to human health, with millions dying because of its late diagnosis. The computerized tomography (CT) scan of the chest is an efficient method for early detection and classification of lung nodules. The requirement for high accuracy in analyzing CT scan images is a significant challenge in detecting and classifying lung cancer. In this paper, a new deep fusion structure based on long short-term memory (LSTM) is introduced, applied to texture features computed from lung nodules through new volumetric grey-level co-occurrence matrices (GLCMs), classifying the nodules into benign, malignant, and ambiguous. Also, an improved Otsu segmentation method combined with the water strider optimization algorithm (WSA) is proposed to detect the lung nodules. WSA-Otsu thresholding can overcome the fixed-threshold and time-requirement restrictions of previous thresholding methods. Extended experiments assess this fusion structure by considering 2D-GLCMs based on 2D slices and approximating the proposed 3D-GLCM computations with volumetric 2.5D-GLCMs. The proposed methods are trained and assessed on the LIDC-IDRI dataset. The accuracy, sensitivity, and specificity obtained for 2D-GLCM fusion are 94.4%, 91.6%, and 95.8%, respectively. For 2.5D-GLCM fusion, they are 97.33%, 96%, and 98%, respectively. For 3D-GLCM, the accuracy, sensitivity, and specificity of the proposed fusion structure reach 98.7%, 98%, and 99%, respectively, outperforming most state-of-the-art counterparts. The results also indicate that the WSA-Otsu method requires a shorter execution time and yields a more accurate thresholding process.
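A grey-level co-occurrence matrix counts how often pairs of quantized gray levels co-occur at a fixed pixel offset; texture statistics (contrast, homogeneity, etc.) are then derived from it. A minimal 2D sketch in NumPy (illustrative; the paper's volumetric 2.5D/3D-GLCMs extend the same counting across slice stacks):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy):
    M[i, j] counts how often level i co-occurs with level j."""
    H, W = img.shape
    M = np.zeros((levels, levels), dtype=np.int64)
    for y in range(H - dy):
        for x in range(W - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])
print(glcm(img, levels=3))
# [[1 2 0]
#  [0 1 0]
#  [0 0 2]]
```

In practice images are first quantized to a small number of gray levels, and GLCMs for several offsets and directions are combined to describe a nodule's texture.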
Affiliation(s)
- Ahmed Saihood
- Artificial Intelligence Department, Faculty of Computer Engineering, University of Isfahan, Isfahan, Iran
- Faculty of Computer Science and Mathematics, University of Thi-Qar, Nasiriyah, Thi-Qar, Iraq
- Hossein Karshenas
- Artificial Intelligence Department, Faculty of Computer Engineering, University of Isfahan, Isfahan, Iran
- Ahmad Reza Naghsh Nilchi
- Artificial Intelligence Department, Faculty of Computer Engineering, University of Isfahan, Isfahan, Iran
12
Vliegenthart R, Fouras A, Jacobs C, Papanikolaou N. Innovations in thoracic imaging: CT, radiomics, AI and x-ray velocimetry. Respirology 2022; 27:818-833. [PMID: 35965430 PMCID: PMC9546393 DOI: 10.1111/resp.14344] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Accepted: 07/08/2022] [Indexed: 12/11/2022]
Abstract
In recent years, pulmonary imaging has seen enormous progress, with the introduction, validation and implementation of new hardware and software. There is a general trend from mere visual evaluation of radiological images to quantification of abnormalities and biomarkers, and assessment of ‘non-visual’ markers that contribute to establishing diagnosis or prognosis. Important catalysts to these developments in thoracic imaging include new indications (like computed tomography [CT] lung cancer screening) and the COVID‐19 pandemic. This review focuses on developments in CT, radiomics, artificial intelligence (AI) and x‐ray velocimetry for imaging of the lungs. Recent developments in CT include the potential for ultra‐low‐dose CT imaging for lung nodules, and the advent of a new generation of CT systems based on photon‐counting detector technology. Radiomics has demonstrated potential towards predictive and prognostic tasks particularly in lung cancer, previously not achievable by visual inspection by radiologists, exploiting high dimensional patterns (mostly texture related) on medical imaging data. Deep learning technology has revolutionized the field of AI and as a result, performance of AI algorithms is approaching human performance for an increasing number of specific tasks. X‐ray velocimetry integrates x‐ray (fluoroscopic) imaging with unique image processing to produce quantitative four‐dimensional measurement of lung tissue motion, and accurate calculations of lung ventilation.
Affiliation(s)
- Rozemarijn Vliegenthart
- Department of Radiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Data Science in Health (DASH), University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Colin Jacobs
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Nickolas Papanikolaou
- Champalimaud Research, Champalimaud Foundation, Lisbon, Portugal
- AI Hub, The Royal Marsden NHS Foundation Trust, London, UK
- The Institute of Cancer Research, London, UK
13
Jassim MM, Jaber MM. Systematic review for lung cancer detection and lung nodule classification: Taxonomy, challenges, and recommendation future works. JOURNAL OF INTELLIGENT SYSTEMS 2022. [DOI: 10.1515/jisys-2022-0062] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Nowadays, lung cancer is one of the most dangerous diseases and requires early diagnosis. Artificial intelligence has played an essential role in the medical field in general, and in analyzing medical images and diagnosing diseases in particular, as it can reduce the human errors that can occur when a medical expert analyzes medical images. In this research study, we conducted a systematic survey of the research published during the last 5 years on the diagnosis of lung cancer and the classification of lung nodules, drawing on 4 reliable databases (Science Direct, Scopus, Web of Science, and IEEE); we selected 50 research papers using a systematic literature review. The goal of this review is to provide a concise overview of recent advancements in lung cancer diagnosis using machine learning and deep learning algorithms. This article summarizes the present state of knowledge on the subject, and addressing the findings offered in recent research publications gives researchers a better grasp of the topic. Challenges and recommendations for future work are analyzed in detail, and the published datasets and their sources are presented to facilitate researchers’ access to them for use in building on previously achieved results.
Affiliation(s)
- Mustafa Mohammed Jassim
- Department of Computer Science, Informatics Institute for Postgraduate Studies (IIPS), Iraqi Commission for Computers and Informatics (ICCI), Baghdad, Iraq
- Mustafa Musa Jaber
- Department of Medical Instruments Engineering Techniques, Dijlah University College, Baghdad 10021, Iraq
- Department of Medical Instruments Engineering Techniques, Al-Farahidi University, Baghdad 10021, Iraq
14
Bianconi F, Palumbo I, Fravolini ML, Rondini M, Minestrini M, Pascoletti G, Nuvoli S, Spanu A, Scialpi M, Aristei C, Palumbo B. Form Factors as Potential Imaging Biomarkers to Differentiate Benign vs. Malignant Lung Lesions on CT Scans. SENSORS (BASEL, SWITZERLAND) 2022; 22:5044. [PMID: 35808538 PMCID: PMC9269784 DOI: 10.3390/s22135044] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Revised: 06/28/2022] [Accepted: 07/02/2022] [Indexed: 06/15/2023]
Abstract
Indeterminate lung nodules detected on CT scans are common findings in clinical practice. Their correct assessment is critical, as early diagnosis of malignancy is crucial to maximise the treatment outcome. In this work, we evaluated the role of form factors as imaging biomarkers to differentiate benign vs. malignant lung lesions on CT scans. We tested a total of three conventional imaging features, six form factors, and two shape features for significant differences between benign and malignant lung lesions on CT scans. The study population consisted of 192 lung nodules from two independent datasets, containing 109 (38 benign, 71 malignant) and 83 (42 benign, 41 malignant) lung lesions, respectively. The standard of reference was either histological evaluation or stability on radiological follow-up. The statistical significance was determined via the Mann-Whitney U nonparametric test, and the ability of the form factors to discriminate a benign vs. a malignant lesion was assessed through multivariate prediction models based on Support Vector Machines. The univariate analysis returned four form factors (Angelidakis compactness and flatness, Kong flatness, and maximum projection sphericity) that were significantly different between the benign and malignant groups in both datasets. In particular, we found that the benign lesions were on average flatter than the malignant ones; conversely, the malignant ones were on average more compact (isotropic) than the benign ones. The multivariate prediction models showed that adding form factors to conventional imaging features improved the prediction accuracy by up to 14.5 percentage points. We conclude that form factors evaluated on lung nodules on CT scans can improve the differential diagnosis between benign and malignant lesions.
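Form factors such as flatness are commonly derived from the principal axes of the segmented lesion. The sketch below follows that common convention (eigenvalues of the voxel-coordinate covariance matrix, as in PyRadiomics-style shape features); it is only an approximation of the Angelidakis and Kong definitions the paper actually tests, and the function names are illustrative.

```python
import numpy as np

def axis_lengths(mask):
    """Principal axis lengths (descending) of a 3D binary lesion mask,
    from the eigenvalues of the voxel-coordinate covariance matrix."""
    coords = np.argwhere(mask)                   # N x 3 voxel coordinates
    cov = np.cov(coords.T)                       # 3 x 3 covariance
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1] # major, minor, least
    return np.sqrt(np.maximum(eig, 0.0))         # guard tiny negatives

def flatness(mask):
    """Least/major axis ratio: 1 = isotropic (compact), near 0 = flat."""
    a = axis_lengths(mask)
    return float(a[2] / a[0])

def elongation(mask):
    """Minor/major axis ratio: 1 = round cross-section, near 0 = elongated."""
    a = axis_lengths(mask)
    return float(a[1] / a[0])
```

Under this convention the paper's finding reads naturally: benign lesions tend to have lower flatness values (flatter), malignant ones values closer to 1 (more isotropic).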
Affiliation(s)
- Francesco Bianconi
- Department of Engineering, Università degli Studi di Perugia, Via Goffredo Duranti 93, 06125 Perugia, Italy
- Isabella Palumbo
- Section of Radiation Oncology, Department of Medicine and Surgery, Università degli Studi di Perugia, Piazza Lucio Severi 1, 06132 Perugia, Italy
- Mario Luca Fravolini
- Department of Engineering, Università degli Studi di Perugia, Via Goffredo Duranti 93, 06125 Perugia, Italy
- Maria Rondini
- Unit of Nuclear Medicine, Department of Medical, Surgical and Experimental Sciences, Università degli Studi di Sassari, Viale San Pietro 8, 07100 Sassari, Italy
- Matteo Minestrini
- Section of Nuclear Medicine and Health Physics, Department of Medicine and Surgery, Università degli Studi di Perugia, Piazza Lucio Severi 1, 06132 Perugia, Italy
- Giulia Pascoletti
- Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Corso Duca Degli Abruzzi 24, 10129 Torino, Italy
- Susanna Nuvoli
- Unit of Nuclear Medicine, Department of Medical, Surgical and Experimental Sciences, Università degli Studi di Sassari, Viale San Pietro 8, 07100 Sassari, Italy
- Angela Spanu
- Unit of Nuclear Medicine, Department of Medical, Surgical and Experimental Sciences, Università degli Studi di Sassari, Viale San Pietro 8, 07100 Sassari, Italy
- Michele Scialpi
- Division of Diagnostic Imaging, Department of Medicine and Surgery, Piazza Lucio Severi 1, 06132 Perugia, Italy
- Cynthia Aristei
- Section of Radiation Oncology, Department of Medicine and Surgery, Università degli Studi di Perugia, Piazza Lucio Severi 1, 06132 Perugia, Italy
- Barbara Palumbo
- Section of Nuclear Medicine and Health Physics, Department of Medicine and Surgery, Università degli Studi di Perugia, Piazza Lucio Severi 1, 06132 Perugia, Italy
15
Liao Z, Xie Y, Hu S, Xia Y. Learning From Ambiguous Labels for Lung Nodule Malignancy Prediction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1874-1884. [PMID: 35130152 DOI: 10.1109/tmi.2022.3149344] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Lung nodule malignancy prediction is an essential step in the early diagnosis of lung cancer. Besides the difficulties commonly discussed, the challenges of this task also come from the ambiguous labels provided by annotators, since deep learning models have in some cases been found to reproduce or amplify human biases. In this paper, we propose a multi-view 'divide-and-rule' (MV-DAR) model to learn from both reliable and ambiguous annotations for lung nodule malignancy prediction on chest CT scans. According to the consistency and reliability of their annotations, we divide nodules into three sets: a consistent and reliable set (CR-Set), an inconsistent set (IC-Set), and a low-reliability set (LR-Set). A nodule in IC-Set is annotated by multiple radiologists inconsistently, and a nodule in LR-Set is annotated by only one radiologist. Although ambiguous, inconsistent labels tell which label(s) are consistently excluded by all annotators, and the unreliable labels of a cohort of nodules are largely correct from the statistical point of view. Hence, both IC-Set and LR-Set can be used to facilitate the training of MV-DAR. Our MV-DAR contains three DAR models to characterize a lung nodule from three orthographic views and is trained following a two-stage procedure. Each DAR consists of three networks with the same architecture, including a prediction network (Prd-Net), a counterfactual network (CF-Net), and a low-reliability network (LR-Net), which are trained on CR-Set, IC-Set, and LR-Set, respectively, in the pretraining phase. In the fine-tuning phase, the image representation ability learned by CF-Net and LR-Net is transferred to Prd-Net by a negative-attention module (NA-Module) and a consistent-attention module (CA-Module), aiming to boost the prediction ability of Prd-Net. The MV-DAR model has been evaluated on the LIDC-IDRI and LUNGx datasets. Our results indicate not only the effectiveness of MV-DAR in learning from ambiguous labels but also its superiority over present noisy-label-learning models in lung nodule malignancy prediction.
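The CR/IC/LR partition the abstract describes can be expressed compactly. This sketch assumes each nodule comes with a list of per-reader labels; the function name is illustrative, not from the paper.

```python
def partition_nodules(annotations):
    """Split nodules by annotation consistency and reliability, as the
    abstract describes: CR-Set (multiple readers, all agree), IC-Set
    (multiple readers, disagreement), LR-Set (only one reader)."""
    cr, ic, lr = {}, {}, {}
    for nodule_id, labels in annotations.items():
        if len(labels) == 1:          # a single reader: low reliability
            lr[nodule_id] = labels
        elif len(set(labels)) == 1:   # several readers, unanimous
            cr[nodule_id] = labels
        else:                         # several readers, inconsistent
            ic[nodule_id] = labels
    return cr, ic, lr
```

In the paper's scheme the three subsets then train the Prd-Net, CF-Net, and LR-Net branches, respectively, during pretraining.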
16
An improved CNN-based architecture for automatic lung nodule classification. Med Biol Eng Comput 2022; 60:1977-1986. [PMID: 35524089 DOI: 10.1007/s11517-022-02578-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2021] [Accepted: 04/22/2022] [Indexed: 10/18/2022]
Abstract
Lung cancer is one of the most critical diseases due to its significant death rate compared to all other types of cancer. The early diagnosis of lung cancer, which improves a patient's chance of surviving, is mostly done in two phases: screening through the CT scan imaging modality and, more importantly, the medical expert's reading of the scan, which is a time-consuming task vulnerable to errors. It is difficult to differentiate between malignant and benign nodules; biopsies are highly invasive, and patients with benign nodules may undergo unnecessary procedures. In this study, we propose a CNN-based computer-aided diagnosis system to automatically classify pulmonary nodules as benign or malignant. The proposed network architecture is based on the AlexNet architecture and experiments with several types of layer ordering, hyperparameters, and functions for the various parts of the network. To build a well-trained model, several pre-processing steps are applied to the entire dataset, for instance, segmentation, normalization, and zero centering. Finally, the proposed system obtained results with 98.7% accuracy, 98.6% sensitivity, and 98.9% specificity, superior performance compared to the original AlexNet. The modifications to the original AlexNet were made to obtain a reasonable structure with high nodule analysis sensitivity.
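The normalization and zero-centering steps named here can be sketched as follows; the Hounsfield-unit window values are illustrative assumptions for lung CT, not taken from the paper.

```python
import numpy as np

def preprocess(volume, hu_min=-1000.0, hu_max=400.0):
    """Typical CT pre-processing sketch: clip to a lung HU window,
    scale to [0, 1], then zero-center. The window bounds are
    illustrative assumptions, not the paper's values."""
    v = np.clip(volume.astype(np.float32), hu_min, hu_max)
    v = (v - hu_min) / (hu_max - hu_min)  # normalize to [0, 1]
    return v - v.mean()                   # zero-center
```

Zero-centering the inputs keeps early-layer activations balanced around zero, which typically speeds up convergence of AlexNet-style networks.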
17
Awassa L, Jdey I, Dhahri H, Hcini G, Mahmood A, Othman E, Haneef M. Study of Different Deep Learning Methods for Coronavirus (COVID-19) Pandemic: Taxonomy, Survey and Insights. SENSORS (BASEL, SWITZERLAND) 2022; 22:1890. [PMID: 35271037 PMCID: PMC8915023 DOI: 10.3390/s22051890] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Revised: 02/12/2022] [Accepted: 02/21/2022] [Indexed: 12/15/2022]
Abstract
COVID-19 has evolved into one of the most severe and acute illnesses. The number of deaths continues to climb despite the development of vaccines, and new strains of the virus have appeared. The early and precise recognition of COVID-19 is key to viably treating patients and containing the pandemic as a whole. Deep learning technology has been shown to be a significant tool in diagnosing COVID-19 and in assisting radiologists to detect anomalies and numerous diseases during this epidemic. This research seeks to provide an overview of novel deep learning-based applications of medical imaging modalities, computed tomography (CT) and chest X-rays (CXR), for the detection and classification of COVID-19. First, we give an overview of the taxonomy of medical imaging and present a summary of types of deep learning (DL) methods. Then, utilizing deep learning techniques, we present an overview of systems created for COVID-19 detection and classification. We also give a rundown of the most well-known databases used to train these networks. Finally, we explore the challenges of using deep learning algorithms to detect COVID-19, as well as future research prospects in this field.
Affiliation(s)
- Lamia Awassa
- Faculty of Sciences and Technology of Sidi Bouzid, University of Kairouan, Kairouan 3100, Tunisia
- Imen Jdey
- Faculty of Sciences and Technology of Sidi Bouzid, University of Kairouan, Kairouan 3100, Tunisia
- Habib Dhahri
- Faculty of Sciences and Technology of Sidi Bouzid, University of Kairouan, Kairouan 3100, Tunisia
- Department of Information Science, College of Applied Computer Sciences, King Saud University, Riyadh 11451, Saudi Arabia
- Ghazala Hcini
- Faculty of Sciences and Technology of Sidi Bouzid, University of Kairouan, Kairouan 3100, Tunisia
- Awais Mahmood
- Department of Information Science, College of Applied Computer Sciences, King Saud University, Riyadh 11451, Saudi Arabia
- Esam Othman
- Department of Information Science, College of Applied Computer Sciences, King Saud University, Riyadh 11451, Saudi Arabia
- Muhammad Haneef
- Department of Electrical Engineering, Foundation University Islamabad, Islamabad 44000, Pakistan
18
Aria M, Nourani E, Golzari Oskouei A. ADA-COVID: Adversarial Deep Domain Adaptation-Based Diagnosis of COVID-19 from Lung CT Scans Using Triplet Embeddings. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:2564022. [PMID: 35154300 PMCID: PMC8826267 DOI: 10.1155/2022/2564022] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/13/2021] [Revised: 12/08/2021] [Accepted: 01/07/2022] [Indexed: 12/12/2022]
Abstract
Rapid diagnosis of COVID-19 with high reliability is essential in the early stages. To this end, recent research often uses medical imaging combined with machine vision methods to diagnose COVID-19. However, the scarcity of medical images and the inherent differences in existing datasets that arise from different medical imaging tools, methods, and specialists may affect the generalization of machine learning-based methods. Also, most of these methods are trained and tested on the same dataset, reducing the generalizability and causing low reliability of the obtained model in real-world applications. This paper introduces an adversarial deep domain adaptation-based approach for diagnosing COVID-19 from lung CT scan images, termed ADA-COVID. Domain adaptation-based training process receives multiple datasets with different input domains to generate domain-invariant representations for medical images. Also, due to the excessive structural similarity of medical images compared to other image data in machine vision tasks, we use the triplet loss function to generate similar representations for samples of the same class (infected cases). The performance of ADA-COVID is evaluated and compared with other state-of-the-art COVID-19 diagnosis algorithms. The obtained results indicate that ADA-COVID achieves classification improvements of at least 3%, 20%, 20%, and 11% in accuracy, precision, recall, and F1 score, respectively, compared to the best results of competitors, even without directly training on the same data. The implementation source code of the ADA-COVID is publicly available at https://github.com/MehradAria/ADA-COVID.
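The triplet loss mentioned here has a standard form: keep the anchor-positive distance at least a margin smaller than the anchor-negative distance, so same-class embeddings cluster together. A minimal NumPy sketch using squared Euclidean distances; ADA-COVID's exact distance and triplet-mining strategy may differ.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss on embedding vectors: pull the anchor toward
    the positive (same class) and push it at least `margin` beyond the
    negative (different class)."""
    d_ap = float(np.sum((anchor - positive) ** 2))  # anchor-positive distance
    d_an = float(np.sum((anchor - negative) ** 2))  # anchor-negative distance
    return max(0.0, d_ap - d_an + margin)           # zero once well separated
```

The loss is zero for "easy" triplets that already satisfy the margin, so only hard or semi-hard triplets drive the representation toward class-wise similarity.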
Affiliation(s)
- Mehrad Aria
- Faculty of Information Technology and Computer Engineering, Azarbaijan Shahid Madani University, Tabriz, Iran
- Esmaeil Nourani
- Faculty of Information Technology and Computer Engineering, Azarbaijan Shahid Madani University, Tabriz, Iran
- Amin Golzari Oskouei
- Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
19
Cloud-Based Lung Tumor Detection and Stage Classification Using Deep Learning Techniques. BIOMED RESEARCH INTERNATIONAL 2022; 2022:4185835. [PMID: 35047635 PMCID: PMC8763490 DOI: 10.1155/2022/4185835] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/23/2021] [Revised: 11/30/2021] [Accepted: 12/07/2021] [Indexed: 02/01/2023]
Abstract
Artificial intelligence (AI), the Internet of Things (IoT), and cloud computing have recently become widely used in the healthcare sector, aiding better decision-making for radiologists. Positron emission tomography (PET) imaging is one of the most reliable approaches for a radiologist to diagnose many cancers, including lung tumors. In this work, we propose stage classification of lung tumors, a more challenging task in computer-aided diagnosis. Such a modified computer-aided diagnosis system is considered a way to reduce heavy workloads and to provide a second opinion to radiologists. In this paper, we present a strategy for classifying and validating different stages of lung tumor progression, as well as a deep neural model and data collection using a cloud system for categorizing phases of pulmonary illness. The proposed system presents a Cloud-based Lung Tumor Detector and Stage Classifier (Cloud-LTDSC) as a hybrid technique for PET/CT images. The proposed Cloud-LTDSC initially uses an active contour model for lung tumor segmentation; a multilayer convolutional neural network (M-CNN) for classifying different stages of lung cancer has been modelled and validated with standard benchmark images. The performance of the presented technique is evaluated using the benchmark LIDC-IDRI dataset of 50 low-dose scans, along with lung CT DICOM images. Compared with existing techniques in the literature, our proposed method achieved good results for the performance metrics of accuracy, recall, and precision, producing superior outcomes on all of the applied dataset images. Furthermore, the experiments achieve an average lung tumor stage classification accuracy of 97%-99.1% (98.6% on average), which is significantly higher than that of other existing techniques.
20
Abstract
PURPOSE OF REVIEW In this article, we focus on the role of artificial intelligence in the management of lung cancer. We summarized commonly used algorithms, current applications and challenges of artificial intelligence in lung cancer. RECENT FINDINGS Feature engineering for tabular data and computer vision for image data are commonly used algorithms in lung cancer research. Furthermore, the use of artificial intelligence in lung cancer has extended to the entire clinical pathway including screening, diagnosis and treatment. Lung cancer screening mainly focuses on two aspects: identifying high-risk populations and the automatic detection of lung nodules. Artificial intelligence diagnosis of lung cancer covers imaging diagnosis, pathological diagnosis and genetic diagnosis. The artificial intelligence clinical decision-support system is the main application of artificial intelligence in lung cancer treatment. Currently, the challenges of artificial intelligence applications in lung cancer mainly focus on the interpretability of artificial intelligence models and limited annotated datasets; and recent advances in explainable machine learning, transfer learning and federated learning might solve these problems. SUMMARY Artificial intelligence shows great potential in many aspects of the management of lung cancer, especially in screening and diagnosis. Future studies on interpretability and privacy are needed for further application of artificial intelligence in lung cancer.
Affiliation(s)
- Kai Zhang
- Department of Thoracic Surgery, Peking University People's Hospital, Beijing, China
21
Balagurunathan Y, Beers A, McNitt-Gray M, Hadjiiski L, Napel S, Goldgof D, Perez G, Arbelaez P, Mehrtash A, Kapur T, Yang E, Moon JW, Bernardino G, Delgado-Gonzalo R, Farhangi MM, Amini AA, Ni R, Feng X, Bagari A, Vaidhya K, Veasey B, Safta W, Frigui H, Enguehard J, Gholipour A, Castillo LS, Daza LA, Pinsky P, Kalpathy-Cramer J, Farahani K. Lung Nodule Malignancy Prediction in Sequential CT Scans: Summary of ISBI 2018 Challenge. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3748-3761. [PMID: 34264825 PMCID: PMC9531053 DOI: 10.1109/tmi.2021.3097665] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Lung cancer is by far the leading cause of cancer death in the US. Recent studies have demonstrated the effectiveness of screening using low dose CT (LDCT) in reducing lung cancer related mortality. While lung nodules are detected with a high rate of sensitivity, this exam has a low specificity rate and it is still difficult to separate benign and malignant lesions. The ISBI 2018 Lung Nodule Malignancy Prediction Challenge, developed by a team from the Quantitative Imaging Network of the National Cancer Institute, was focused on the prediction of lung nodule malignancy from two sequential LDCT screening exams using automated (non-manual) algorithms. We curated a cohort of 100 subjects who participated in the National Lung Screening Trial and had established pathological diagnoses. Data from 30 subjects were randomly selected for training and the remainder were used for testing. Participants were evaluated based on the area under the receiver operating characteristic curve (AUC) of nodule-wise malignancy scores generated by their algorithms on the test set. The challenge had 17 participants, with 11 teams submitting reports with method descriptions, as mandated by the challenge rules. Participants used quantitative methods, reporting test AUCs ranging from 0.698 to 0.913. The top five contestants used deep learning approaches, reporting an AUC between 0.87 and 0.91. The teams' predictors did not differ significantly from each other or from a volume change estimate (p = .05 with Bonferroni-Holm correction).
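The nodule-wise AUC used to rank challenge entries is equivalent to the Mann-Whitney U statistic: the probability that a randomly chosen malignant nodule scores higher than a randomly chosen benign one, with ties counted half. A small self-contained sketch (the function name is illustrative):

```python
def auc_from_scores(scores, labels):
    """ROC AUC via the rank (Mann-Whitney) formulation.
    `labels` are 1 for malignant, 0 for benign."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count pos-over-neg "wins"; a tie contributes half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise form makes the metric's meaning concrete: an AUC of 0.91 says a malignant nodule outranks a benign one 91% of the time, regardless of any score threshold.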
Affiliation(s)
- Sandy Napel
- Dept. of Radiology, School of Medicine, Stanford University (SU), CA
- Gustavo Perez
- Biomedical computer vision lab (BCV), Universidad de los Andes, Colombia
- Pablo Arbelaez
- Biomedical computer vision lab (BCV), Universidad de los Andes, Colombia
- Alireza Mehrtash
- Robotics and Control Laboratory (RCL), Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC
- Surgical Planning Laboratory (SPL), Radiology Department, Brigham and Women’s Hospital, Boston, MA, 02130
- Tina Kapur
- Surgical Planning Laboratory (SPL), Radiology Department, Brigham and Women’s Hospital, Boston, MA, 02130
- Ehwa Yang
- Sungkyunkwan University School of Medicine, Seoul 06351, Korea
- Jung Won Moon
- Human Medical Imaging & Intervention Center, Seoul 06524, Korea
- Gabriel Bernardino
- Centre Suisse d’Électronique et de Microtechnique, Neuchâtel, Switzerland
- M. Mehdi Farhangi
- Medical Imaging Laboratory, University of Louisville, Louisville, KY, USA
- Computer Engineering and Computer Science, University of Louisville
- Amir A. Amini
- Medical Imaging Laboratory, University of Louisville, Louisville, KY, USA
- Electrical and Computer Engineering Department, University of Louisville, Louisville, KY, USA
- Xue Feng
- Spingbok Inc
- Department of Biomedical Engineering, University of Virginia, Charlottesville
- Benjamin Veasey
- Medical Imaging Laboratory, University of Louisville, Louisville, KY, USA
- Electrical and Computer Engineering Department, University of Louisville, Louisville, KY, USA
- Wiem Safta
- Computer Engineering and Computer Science, University of Louisville
- Hichem Frigui
- Computer Engineering and Computer Science, University of Louisville
- Joseph Enguehard
- Department of Radiology, Boston Children’s Hospital, and Harvard Medical School
- Ali Gholipour
- Department of Radiology, Boston Children’s Hospital, and Harvard Medical School
- Laura Alexandra Daza
- Department of Biomedical Engineering, Universidad de los Andes, Bogota, Colombia
- Paul Pinsky
- Division of Cancer Prevention, National Cancer Institute (NCI), Washington DC
- Keyvan Farahani
- Center for Biomedical Informatics and Information Technology, National Cancer Institute (NCI), Washington DC
22
Gürsoy Çoruh A, Yenigün B, Uzun Ç, Kahya Y, Büyükceran EU, Elhan A, Orhan K, Kayı Cangır A. A comparison of the fusion model of deep learning neural networks with human observation for lung nodule detection and classification. Br J Radiol 2021; 94:20210222. [PMID: 34111976 PMCID: PMC8248221 DOI: 10.1259/bjr.20210222] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2021] [Revised: 04/21/2021] [Accepted: 04/26/2021] [Indexed: 12/25/2022] Open
Abstract
OBJECTIVES To compare the diagnostic performance of a newly developed artificial intelligence (AI) algorithm derived from the fusion of convolutional neural networks (CNNs) versus human observers in the estimation of malignancy risk in pulmonary nodules. METHODS The study population consists of 158 nodules from 158 patients. All nodules (81 benign and 77 malignant) were determined to be malignant or benign by a radiologist based on pathologic assessment and/or follow-up imaging. Two radiologists and an AI platform analyzed the nodules based on the Lung-RADS classification. The two observers also noted the size, location, and morphologic features of the nodules. An intraclass correlation coefficient was calculated for both observers and the AI; ROC curve analysis was performed to determine diagnostic performances. RESULTS Nodule size, presence of spiculation, and presence of fat were significantly different between the malignant and benign nodules (p < 0.001, for all three). Eighteen (11.3%) nodules were not detected and analyzed by the AI. Observer 1, observer 2, and the AI had an AUC of 0.917 ± 0.023, 0.870 ± 0.033, and 0.790 ± 0.037 in the ROC analysis of malignancy probability, respectively. The observers were in almost perfect agreement for localization, nodule size, and Lung-RADS classification [κ (95% CI)=0.984 (0.961-1.000), 0.978 (0.970-0.984), and 0.924 (0.878-0.970), respectively]. CONCLUSION The performance of the fusion AI algorithm in estimating the risk of malignancy was slightly lower than the performance of the observers. Fusion AI algorithms might be applied in an assisting role, especially for inexperienced radiologists. ADVANCES IN KNOWLEDGE In this study, we proposed a fusion model using four state-of-the-art object detectors for lung nodule detection and discrimination. The fusion of deep learning neural networks might be used in a supportive role for radiologists when interpreting lung nodule discrimination.
Affiliation(s)
- Bülent Yenigün
- Department of Thoracic Surgery, School of Medicine, Ankara University, Ankara, Turkey
- Çağlar Uzun
- Department of Radiology, School of Medicine, Ankara University, Ankara, Turkey
- Yusuf Kahya
- Department of Thoracic Surgery, School of Medicine, Ankara University, Ankara, Turkey
- Atilla Elhan
- Department of Biostatistics, School of Medicine, Ankara University, Ankara, Turkey
- Kaan Orhan
- Dentomaxillofacial Radiology, Ankara University, Faculty of Dentistry and Ankara University Medical Design Application and Research Center, Ankara, Turkey
- Ayten Kayı Cangır
- Department of Thoracic Surgery, School of Medicine, Ankara University, Ankara, Turkey
23
Morozov SP, Gombolevskiy VA, Elizarov AB, Gusev MA, Novik VP, Prokudaylo SB, Bardin AS, Popov EV, Ledikhova NV, Chernina VY, Blokhin IA, Nikolaev AE, Reshetnikov RV, Vladzymyrskyy AV, Kulberg NS. A simplified cluster model and a tool adapted for collaborative labeling of lung cancer CT scans. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 206:106111. [PMID: 33957377 DOI: 10.1016/j.cmpb.2021.106111] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/01/2020] [Accepted: 04/07/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Lung cancer is the most common type of cancer with a high mortality rate. Early detection using medical imaging is critically important for the long-term survival of the patients. Computer-aided diagnosis (CAD) tools can potentially reduce the number of incorrect interpretations of medical image data by radiologists. Datasets with adequate sample size, annotation, and truth are the dominant factors in developing and training effective CAD algorithms. The objective of this study was to produce a practical approach and a tool for the creation of medical image datasets. METHODS The proposed model uses the modified maximum transverse diameter approach to mark a putative lung nodule. The modification involves the possibility to use a set of overlapping spheres of appropriate size to approximate the shape of the nodule. The algorithm embedded in the model also groups the marks made by different readers for the same lesion. We used the data of 536 randomly selected patients of Moscow outpatient clinics to create a dataset of standard-dose chest computed tomography (CT) scans utilizing the double-reading approach with arbitration. Six volunteer radiologists independently produced a report for each scan using the proposed model with the main focus on the detection of lesions with sizes ranging from 3 to 30 mm. After this, an arbitrator reviewed their marks and annotations. RESULTS The maximum transverse diameter approach outperformed the alternative methods (3D box, ellipsoid, and complete outline construction) in a study of 10,000 computer-generated tumor models of different shapes in terms of accuracy and speed of nodule shape approximation. The markup and annotation of the CTLungCa-500 dataset revealed 72 studies containing no lung nodules. The remaining 464 CT scans contained 3151 lesions marked by at least one radiologist: 56%, 14%, and 29% of the lesions were malignant, benign, and non-nodular, respectively. 
2887 lesions had the target size of 3-30 mm. Only 70 nodules were uniformly identified by all six readers. Increasing the number of independent readers interpreting the CT scans increased detection accuracy but decreased inter-reader agreement. The dataset markup process took three working weeks. CONCLUSIONS The developed cluster model simplifies the collaborative and crowdsourced creation of image repositories and makes it time-efficient. Our proof-of-concept dataset provides a valuable source of annotated medical imaging data for training CAD algorithms aimed at early detection of lung nodules. The tool and the dataset are publicly available at https://github.com/Center-of-Diagnostics-and-Telemedicine/FAnTom.git and https://mosmed.ai/en/datasets/ct_lungcancer_500/, respectively.
Affiliation(s)
- S P Morozov, V A Gombolevskiy, A B Elizarov, V P Novik, S B Prokudaylo, A S Bardin, E V Popov, N V Ledikhova, V Y Chernina, I A Blokhin, A E Nikolaev, A V Vladzymyrskyy: Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia
- M A Gusev: Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia; Federal State Budgetary Educational Institution of Higher Education "Moscow Polytechnic University", Tverskaya str., 11, Moscow, 125993, Russia
- R V Reshetnikov: Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia; Institute of Molecular Medicine, Sechenov First Moscow State Medical University, Trubetskaya str. 8-2, Moscow, 119991, Russia
- N S Kulberg: Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka str., 24, Moscow, 127051, Russia; Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, Vavilova str., 44/2, Moscow, 119333, Russia
24
Schreuder A, Scholten ET, van Ginneken B, Jacobs C. Artificial intelligence for detection and characterization of pulmonary nodules in lung cancer CT screening: ready for practice? Transl Lung Cancer Res 2021; 10:2378-2388. [PMID: 34164285 PMCID: PMC8182724 DOI: 10.21037/tlcr-2020-lcs-06] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
Lung cancer computed tomography (CT) screening trials using low-dose CT have repeatedly demonstrated a reduction in the number of lung cancer deaths in the screening group compared to a control group. With various countries currently considering the implementation of lung cancer screening, recurring discussion points are, among others, the potentially high false positive rates, cost-effectiveness, and the availability of radiologists for scan interpretation. Artificial intelligence (AI) has the potential to increase the efficiency of lung cancer screening. We discuss the performance levels of AI algorithms for various tasks related to the interpretation of lung screening CT scans, how they compare to human experts, and how AI and humans may complement each other. We discuss how AI may be used in the lung cancer CT screening workflow according to the current evidence and describe the additional research that will be required before AI can take a more prominent role in the analysis of lung screening CT scans.
Affiliation(s)
- Anton Schreuder: Department of Radiology, Nuclear Medicine, and Anatomy, Radboudumc, Nijmegen, The Netherlands
- Ernst T Scholten: Department of Radiology, Nuclear Medicine, and Anatomy, Radboudumc, Nijmegen, The Netherlands
- Bram van Ginneken: Department of Radiology, Nuclear Medicine, and Anatomy, Radboudumc, Nijmegen, The Netherlands; Fraunhofer MEVIS, Bremen, Germany
- Colin Jacobs: Department of Radiology, Nuclear Medicine, and Anatomy, Radboudumc, Nijmegen, The Netherlands
25
Choi W, Nadeem S, Alam SR, Deasy JO, Tannenbaum A, Lu W. Reproducible and Interpretable Spiculation Quantification for Lung Cancer Screening. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 200:105839. [PMID: 33221055 PMCID: PMC7920914 DOI: 10.1016/j.cmpb.2020.105839] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/28/2020] [Accepted: 11/08/2020] [Indexed: 05/28/2023]
Abstract
Spiculations, spikes on the surface of pulmonary nodules, are important predictors of lung cancer malignancy. In this study, we proposed an interpretable and parameter-free technique to quantify spiculation using an area distortion metric obtained by conformal (angle-preserving) spherical parameterization. We exploited the insight that, for an angle-preserving spherical mapping of a given nodule, the corresponding negative area distortion precisely characterizes the spiculations on that nodule. We introduced novel spiculation scores based on the area distortion metric and spiculation measures. We also semi-automatically segmented the lung nodule (for reproducibility) as well as vessel and wall attachments to differentiate real spiculations from lobulation and attachment. A simple pathological malignancy prediction model was also introduced. We used the publicly available LIDC-IDRI dataset's pathologist (strong-label) and radiologist (weak-label) ratings to train and test radiomics models containing this feature, and then externally validated the models. We achieved AUC = 0.80 and 0.76, respectively, with the models trained on the 811 weakly labeled LIDC cases and tested on the 72 strongly labeled LIDC and 73 LUNGx cases; the previous best model for LUNGx had AUC = 0.68. The number-of-spiculations feature was found to be highly correlated (Spearman's rank correlation coefficient ρ = 0.44) with the radiologists' spiculation score. In summary, we developed a reproducible, interpretable, and parameter-free technique for quantifying spiculations on nodules. The spiculation quantification measures were then applied in a radiomics framework for pathological malignancy prediction with reproducible semi-automatic segmentation of the nodule. Using our interpretable features (size, attachment, spiculation, lobulation), we achieved higher performance than previous models. In the future, we will exhaustively test our model for lung cancer screening in the clinic.
Affiliation(s)
- Wookjin Choi: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065, USA; Department of Engineering and Computer Science, Virginia State University, 1 Hayden St, Petersburg, VA 23806, USA
- Saad Nadeem: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065, USA
- Sadegh R Alam: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065, USA
- Joseph O Deasy: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065, USA
- Allen Tannenbaum: Departments of Computer Science and Applied Mathematics & Statistics, Stony Brook University, Stony Brook, NY 11790, USA
- Wei Lu: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065, USA
26
A novel technology to integrate imaging and clinical markers for non-invasive diagnosis of lung cancer. Sci Rep 2021; 11:4597. [PMID: 33633213 PMCID: PMC7907202 DOI: 10.1038/s41598-021-83907-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2020] [Accepted: 02/09/2021] [Indexed: 12/17/2022] Open
Abstract
This study presents a non-invasive, automated, clinical diagnostic system for early diagnosis of lung cancer that integrates imaging data from a single computed tomography (CT) scan and breath biomarkers obtained from a single exhaled breath to quickly and accurately classify lung nodules. CT imaging and breath volatile organic compound (VOC) data were collected from 47 patients. Spherical harmonics-based shape features to quantify the shape complexity of the pulmonary nodules, a 7th-order Markov-Gibbs random field based appearance model to describe the spatial non-homogeneities in the pulmonary nodule, and volumetric features (size) of pulmonary nodules were calculated from CT images. 27 VOCs in exhaled breath were captured by a micro-reactor approach and quantified using mass spectrometry. CT and breath markers were input into a deep-learning autoencoder classifier with leave-one-subject-out cross-validation for nodule classification. To mitigate the limitation of a small sample size and validate the methodology for individual markers, retrospective CT scans from 467 patients with 727 pulmonary nodules, and breath samples from 504 patients, were analyzed. The CAD system achieved 97.8% accuracy, 97.3% sensitivity, 100% specificity, and 99.1% area under the curve in classifying pulmonary nodules.
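The leave-one-subject-out cross-validation used above is simple to sketch. The toy classifier below is a hypothetical 1-D nearest-centroid stand-in for the paper's deep-learning autoencoder, with made-up feature values; only the hold-out loop reflects the evaluation protocol described:

```python
def leave_one_subject_out(features, labels, fit, predict):
    """Each subject is held out once; the model is fit on the remaining
    subjects and the held-out prediction is recorded."""
    preds = []
    for i in range(len(features)):
        train_x = features[:i] + features[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        model = fit(train_x, train_y)
        preds.append(predict(model, features[i]))
    return preds

# Toy nearest-centroid classifier on a single synthetic feature
# (illustration only, not the paper's model).
def fit(xs, ys):
    mal = [x for x, y in zip(xs, ys) if y == 1]
    ben = [x for x, y in zip(xs, ys) if y == 0]
    return (sum(mal) / len(mal), sum(ben) / len(ben))

def predict(model, x):
    c_mal, c_ben = model
    return 1 if abs(x - c_mal) < abs(x - c_ben) else 0

xs = [0.1, 0.2, 0.15, 0.9, 0.8, 0.85]  # hypothetical feature values
ys = [0, 0, 0, 1, 1, 1]                # 0 = benign, 1 = malignant
preds = leave_one_subject_out(xs, ys, fit, predict)
acc = sum(p == y for p, y in zip(preds, ys)) / len(ys)
print(acc)  # → 1.0 on this cleanly separable toy data
```

With only 47 patients, this protocol uses every subject for testing exactly once while never training on the tested subject.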
27
Javaheri T, Homayounfar M, Amoozgar Z, Reiazi R, Homayounieh F, Abbas E, Laali A, Radmard AR, Gharib MH, Mousavi SAJ, Ghaemi O, Babaei R, Mobin HK, Hosseinzadeh M, Jahanban-Esfahlan R, Seidi K, Kalra MK, Zhang G, Chitkushev LT, Haibe-Kains B, Malekzadeh R, Rawassizadeh R. CovidCTNet: an open-source deep learning approach to diagnose covid-19 using small cohort of CT images. NPJ Digit Med 2021; 4:29. [PMID: 33603193 PMCID: PMC7893172 DOI: 10.1038/s41746-021-00399-3] [Citation(s) in RCA: 38] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2020] [Accepted: 12/10/2020] [Indexed: 12/21/2022] Open
Abstract
Coronavirus disease 2019 (Covid-19) is highly contagious with limited treatment options. Early and accurate diagnosis of Covid-19 is crucial in reducing the spread of the disease and its accompanied mortality. Currently, detection by reverse transcriptase-polymerase chain reaction (RT-PCR) is the gold standard of outpatient and inpatient detection of Covid-19. RT-PCR is a rapid method; however, its accuracy in detection is only ~70-75%. Another approved strategy is computed tomography (CT) imaging. CT imaging has a much higher sensitivity of ~80-98% but a similar accuracy of ~70%. To enhance the accuracy of CT imaging detection, we developed an open-source framework, CovidCTNet, composed of a set of deep learning algorithms that accurately differentiates Covid-19 from community-acquired pneumonia (CAP) and other lung diseases. CovidCTNet increases the accuracy of CT imaging detection to 95% compared to radiologists (70%). CovidCTNet is designed to work with heterogeneous and small sample sizes independent of the CT imaging hardware. To facilitate the detection of Covid-19 globally and assist radiologists and physicians in the screening process, we are releasing all algorithms and model parameter details as open-source. Open-source sharing of CovidCTNet enables developers to rapidly improve and optimize services while preserving user privacy and data ownership.
Affiliation(s)
- Tahereh Javaheri: Health Informatics Lab, Metropolitan College, Boston University, Boston, USA
- Morteza Homayounfar: Department of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran
- Zohreh Amoozgar: Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, USA
- Reza Reiazi: Princess Margaret Cancer Centre, University of Toronto, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada; Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Fatemeh Homayounieh: Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, USA
- Engy Abbas: Joint Department of Medical Imaging, University of Toronto, Toronto, Canada
- Azadeh Laali: Department of Infectious Diseases, Firoozgar Hospital, Iran University of Medical Sciences, Tehran, Iran
- Amir Reza Radmard: Department of Radiology, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Mohammad Hadi Gharib: Department of Radiology and Golestan Rheumatology Research Center, Golestan University of Medical Sciences, Gorgan, Iran
- Omid Ghaemi: Department of Radiology, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Rosa Babaei: Department of Radiology, Iran University of Medical Sciences, Tehran, Iran
- Hadi Karimi Mobin: Department of Radiology, Iran University of Medical Sciences, Tehran, Iran
- Mehdi Hosseinzadeh: Institute of Research and Development, Duy Tan University, Da Nang, Vietnam; Health Management and Economics Research Center, Iran University of Medical Sciences, Tehran, Iran
- Rana Jahanban-Esfahlan: Department of Medical Biotechnology, School of Advanced Medical Sciences, Tabriz University of Medical Sciences, Tabriz, Iran
- Khaled Seidi: Department of Medical Biotechnology, School of Advanced Medical Sciences, Tabriz University of Medical Sciences, Tabriz, Iran
- Mannudeep K Kalra: Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, USA
- Guanglan Zhang: Health Informatics Lab, Metropolitan College, Boston University, Boston, USA; Department of Computer Science, Metropolitan College, Boston University, Boston, USA
- L T Chitkushev: Health Informatics Lab, Metropolitan College, Boston University, Boston, USA; Department of Computer Science, Metropolitan College, Boston University, Boston, USA
- Benjamin Haibe-Kains: Princess Margaret Cancer Centre, University of Toronto, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada; Department of Computer Science, University of Toronto, Toronto, ON, Canada; Ontario Institute for Cancer Research, Toronto, ON, Canada; Vector Institute for Artificial Intelligence, Toronto, ON, Canada
- Reza Malekzadeh: Digestive Disease Research Center, Tehran University of Medical Sciences, Tehran, Iran
- Reza Rawassizadeh: Health Informatics Lab, Metropolitan College, Boston University, Boston, USA; Department of Computer Science, Metropolitan College, Boston University, Boston, USA
28
Islam MM, Karray F, Alhajj R, Zeng J. A Review on Deep Learning Techniques for the Diagnosis of Novel Coronavirus (COVID-19). IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2021; 9:30551-30572. [PMID: 34976571 PMCID: PMC8675557 DOI: 10.1109/access.2021.3058537] [Citation(s) in RCA: 114] [Impact Index Per Article: 38.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Accepted: 02/06/2021] [Indexed: 05/03/2023]
Abstract
The novel coronavirus (COVID-19) outbreak has raised a calamitous situation all over the world and has become one of the most acute and severe ailments of the past hundred years. The prevalence rate of COVID-19 is rapidly rising every day throughout the globe. Although no vaccines for this pandemic have been discovered yet, deep learning techniques have proved to be a powerful tool in the arsenal used by clinicians for the automatic diagnosis of COVID-19. This paper aims to overview the recently developed systems based on deep learning techniques using different medical imaging modalities such as computed tomography (CT) and X-ray. This review specifically discusses the systems developed for COVID-19 diagnosis using deep learning techniques and provides insights into the well-known datasets used to train these networks. It also highlights the data partitioning techniques and various performance measures developed by researchers in this field. A taxonomy is drawn to categorize the recent works for proper insight. Finally, we conclude by addressing the challenges associated with the use of deep learning methods for COVID-19 detection and probable future trends in this research area. The aim of this paper is to facilitate experts (medical or otherwise) and technicians in understanding how deep learning techniques are used in this regard and how they can potentially be further utilized to combat the outbreak of COVID-19.
Affiliation(s)
- Md. Milon Islam: Centre for Pattern Analysis and Machine Intelligence, Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Fakhri Karray: Centre for Pattern Analysis and Machine Intelligence, Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Reda Alhajj: Department of Computer Science, University of Calgary, Calgary, AB T2N 1N4, Canada
- Jia Zeng: Institute for Personalized Cancer Therapy, MD Anderson Cancer Center, Houston, TX 77030, USA
29
Halder A, Chatterjee S, Dey D, Kole S, Munshi S. An adaptive morphology based segmentation technique for lung nodule detection in thoracic CT image. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 197:105720. [PMID: 32877818 DOI: 10.1016/j.cmpb.2020.105720] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/25/2020] [Accepted: 08/19/2020] [Indexed: 05/13/2023]
Abstract
Lung cancer is one of the most life-threatening cancers, most commonly indicated by the presence of nodules in the lung. Doctors and radiological experts use High-Resolution Computed Tomography (HRCT) images for nodule detection and further decision making from visual inspection. Manual detection of lung nodules is a time-consuming process. Therefore, computer-aided detection (CADe) systems have been developed for accurate nodule detection and segmentation. CADe-based systems assist radiologists to detect lung nodules with greater confidence in less time, and have a significant impact on the accurate, uniform, and early-stage diagnosis of lung cancer. In this research work, an adaptive morphology-based segmentation technique (AMST) has been introduced by designing an adaptive morphological filter for improved segmentation of the lung nodule region. The adaptive morphological filter detects candidate nodule regions by employing an adaptive structuring element (ASE) and at the same time improves nodule detection accuracy by reducing false positives (FPs) from the Computed Tomography (CT) slices. The detected nodule candidate regions are then processed for feature extraction. In this study, morphological, texture, and intensity-based features have been used with a support vector machine (SVM) classifier for lung nodule detection. The performance of the proposed framework has been evaluated using 10-fold cross-validation on the Lung Image Database Consortium-Image Database Resource Initiative (LIDC/IDRI) dataset and on a private dataset collected from a consultant radiologist. The proposed automated computer-aided detection system achieved overall classification performance of 94.88% sensitivity, 93.45% specificity, and 94.27% detection accuracy with 1.8 FPs/scan on the LIDC/IDRI dataset, and 91.43% sensitivity, 90.45% specificity, and 92.83% accuracy with 3.2 FPs/scan on the private dataset. The results show that the proposed CADe system outperforms other state-of-the-art methods for automatic nodule detection from HRCT images.
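Morphological opening (erosion followed by dilation) is the basic building block behind such morphology-based filters. The sketch below uses a fixed 3x3 structuring element on a binary grid for illustration; it is not the paper's adaptive structuring element (ASE):

```python
def erode(img, se):
    """A pixel survives erosion only if every structuring-element offset
    lands on a foreground pixel inside the image."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = 1
            for dy, dx in se:
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w and img[ny][nx]):
                    out[y][x] = 0
                    break
    return out

def dilate(img, se):
    """A pixel becomes foreground if any structuring-element offset
    touches a foreground pixel."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in se:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and img[ny][nx]:
                    out[y][x] = 1
                    break
    return out

def opening(img, se):
    """Erosion then dilation: removes structures smaller than the SE
    (e.g. specks, thin vessels) while roughly preserving larger blobs."""
    return dilate(erode(img, se), se)

SE = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]  # 3x3 square

# A 3x3 blob plus an isolated single pixel at row 1, column 4:
# opening keeps the blob and removes the speck.
img = [
    [1, 1, 1, 0, 0],
    [1, 1, 1, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
opened = opening(img, SE)
print(opened[1])  # → [1, 1, 1, 0, 0]
```

An adaptive filter such as the paper's AMST varies the structuring element with local image content rather than fixing its shape and size.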
Affiliation(s)
- Amitava Halder: Computer Science and Engineering Department, Supreme Knowledge Foundation Group of Institutions, Hooghly 712139, India
- Debangshu Dey: Electrical Engineering Department, Jadavpur University, Kolkata 700032, India
- Surajit Kole: Theism Ultrasound Centre, 14 B Dumdum Rd., Kolkata 700030, India
- Sugata Munshi: Electrical Engineering Department, Jadavpur University, Kolkata 700032, India
30
Yu KH, Lee TLM, Yen MH, Kou SC, Rosen B, Chiang JH, Kohane IS. Reproducible Machine Learning Methods for Lung Cancer Detection Using Computed Tomography Images: Algorithm Development and Validation. J Med Internet Res 2020; 22:e16709. [PMID: 32755895 PMCID: PMC7439139 DOI: 10.2196/16709] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2019] [Revised: 05/25/2020] [Accepted: 06/11/2020] [Indexed: 12/12/2022] Open
Abstract
BACKGROUND Chest computed tomography (CT) is crucial for the detection of lung cancer, and many automated CT evaluation methods have been proposed. Due to the divergent software dependencies of the reported approaches, the developed methods are rarely compared or reproduced. OBJECTIVE The goal of the research was to generate reproducible machine learning modules for lung cancer detection and compare the approaches and performances of the award-winning algorithms developed in the Kaggle Data Science Bowl. METHODS We obtained the source codes of all award-winning solutions of the Kaggle Data Science Bowl Challenge, where participants developed automated CT evaluation methods to detect lung cancer (training set n=1397, public test set n=198, final test set n=506). The performance of the algorithms was evaluated by the log-loss function, and the Spearman correlation coefficient of the performance in the public and final test sets was computed. RESULTS Most solutions implemented distinct image preprocessing, segmentation, and classification modules. Variants of U-Net, VGGNet, and residual net were commonly used in nodule segmentation, and transfer learning was used in most of the classification algorithms. Substantial performance variations in the public and final test sets were observed (Spearman correlation coefficient = .39 among the top 10 teams). To ensure the reproducibility of results, we generated a Docker container for each of the top solutions. CONCLUSIONS We compared the award-winning algorithms for lung cancer detection and generated reproducible Docker images for the top solutions. Although convolutional neural networks achieved decent accuracy, there is plenty of room for improvement regarding model generalizability.
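The rank-stability analysis above boils down to Spearman's rank correlation between each team's performance in the public and final test sets. A self-contained sketch with hypothetical log-loss scores (not the actual leaderboard values):

```python
def ranks(values):
    """Rank values (1 = smallest), averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical log-loss of 5 teams on public vs. final test sets;
# lower is better, so stable rankings mean high rho.
public = [0.40, 0.42, 0.45, 0.50, 0.55]
final = [0.48, 0.43, 0.60, 0.47, 0.52]
print(spearman(public, final))  # → 0.3
```

A rho near 0.39, as reported for the top 10 teams, indicates substantial rank shuffling between the two leaderboards, i.e. limited generalizability of public-set performance.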
Affiliation(s)
- Kun-Hsing Yu: Department of Biomedical Informatics, Harvard Medical School, Boston, MA, United States; Department of Statistics, Harvard University, Cambridge, MA, United States; Department of Pathology, Brigham and Women's Hospital, Boston, MA, United States
- Ming-Hsuan Yen: Graduate Program of Multimedia Systems and Intelligent Computing, National Cheng Kung University and Academia Sinica, Tainan, Taiwan; Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
- S C Kou: Department of Statistics, Harvard University, Cambridge, MA, United States
- Bruce Rosen: Department of Radiology, Athinoula A Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, United States; Division of Health Sciences and Technology, Harvard-Massachusetts Institute of Technology, Boston, MA, United States
- Jung-Hsien Chiang: Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
- Isaac S Kohane: Department of Biomedical Informatics, Harvard Medical School, Boston, MA, United States; Division of Health Sciences and Technology, Harvard-Massachusetts Institute of Technology, Boston, MA, United States
31
Massion PP, Antic S, Ather S, Arteta C, Brabec J, Chen H, Declerck J, Dufek D, Hickes W, Kadir T, Kunst J, Landman BA, Munden RF, Novotny P, Peschl H, Pickup LC, Santos C, Smith GT, Talwar A, Gleeson F. Assessing the Accuracy of a Deep Learning Method to Risk Stratify Indeterminate Pulmonary Nodules. Am J Respir Crit Care Med 2020; 202:241-249. [PMID: 32326730 PMCID: PMC7365375 DOI: 10.1164/rccm.201903-0505oc] [Citation(s) in RCA: 94] [Impact Index Per Article: 23.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Accepted: 04/21/2020] [Indexed: 12/11/2022] Open
Abstract
Rationale: The management of indeterminate pulmonary nodules (IPNs) remains challenging, resulting in invasive procedures and delays in diagnosis and treatment. Strategies to decrease the rate of unnecessary invasive procedures and optimize surveillance regimens are needed. Objectives: To develop and validate a deep learning method to improve the management of IPNs. Methods: A Lung Cancer Prediction Convolutional Neural Network model was trained using computed tomography images of IPNs from the National Lung Screening Trial, internally validated, and externally tested on cohorts from two academic institutions. Measurements and Main Results: The areas under the receiver operating characteristic curve in the external validation cohorts were 83.5% (95% confidence interval [CI], 75.4-90.7%) and 91.9% (95% CI, 88.7-94.7%), compared with 78.1% (95% CI, 68.7-86.4%) and 81.9% (95% CI, 76.1-87.1%), respectively, for a commonly used clinical risk model for incidental nodules. Using 5% and 65% malignancy thresholds to define low- and high-risk categories, the overall net reclassifications in the validation cohorts for cancers and benign nodules compared with the Mayo model were 0.34 (Vanderbilt) and 0.30 (Oxford) as a rule-in test, and 0.33 (Vanderbilt) and 0.58 (Oxford) as a rule-out test. Compared with traditional risk prediction models, the Lung Cancer Prediction Convolutional Neural Network was associated with improved accuracy in predicting the likelihood of disease at each threshold of management and in our external validation cohorts. Conclusions: This study demonstrates that this deep learning algorithm can correctly reclassify IPNs into low- or high-risk categories in more than a third of cancers and benign nodules when compared with conventional risk models, potentially reducing the number of unnecessary invasive procedures and delays in diagnosis.
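Net reclassification at fixed thresholds can be tallied directly: count nodules moved to a higher risk category minus those moved lower, within one outcome group. The sketch below uses hypothetical probabilities and the abstract's 5%/65% thresholds; it is a simplified per-group tally for illustration, not the study's full statistical procedure:

```python
def risk_category(p, low=0.05, high=0.65):
    """Map a malignancy probability to low (0), intermediate (1),
    or high (2) risk using the given thresholds."""
    return 0 if p < low else (1 if p < high else 2)

def net_reclassification(old_probs, new_probs, low=0.05, high=0.65):
    """Fraction of cases moved to a higher category minus the fraction
    moved lower. For cancers, upward moves are improvements; for benign
    nodules the interpretation is reversed."""
    up = down = 0
    for old, new in zip(old_probs, new_probs):
        a = risk_category(old, low, high)
        b = risk_category(new, low, high)
        if b > a:
            up += 1
        elif b < a:
            down += 1
    return (up - down) / len(old_probs)

# Hypothetical: 4 cancers re-scored by a new model vs. an old one.
old = [0.10, 0.50, 0.60, 0.70]
new = [0.70, 0.66, 0.04, 0.80]
print(net_reclassification(old, new))  # 2 up, 1 down → 0.25
```

A positive value for cancers (more moved up than down) and a positive value for benign nodules under the reversed sign convention both indicate better risk stratification.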
Affiliation(s)
- Pierre P. Massion: Cancer Early Detection and Prevention Initiative, Vanderbilt Ingram Cancer Center, Division of Allergy, Pulmonary and Critical Care Medicine; Pulmonary and Critical Care Section, Medical Service, Veterans Affairs
- Sanja Antic: Cancer Early Detection and Prevention Initiative, Vanderbilt Ingram Cancer Center, Division of Allergy, Pulmonary and Critical Care Medicine
- Sarim Ather: Oxford University Hospitals NHS Foundation Trust, Oxford, United Kingdom
- Jan Brabec: Faculty of Medicine, Masaryk University, Brno, Czech Republic
- David Dufek: Faculty of Medicine, Masaryk University, Brno, Czech Republic
- William Hickes: Oxford University Hospitals NHS Foundation Trust, Oxford, United Kingdom
- Jonas Kunst: Faculty of Medicine, Masaryk University, Brno, Czech Republic
- Bennett A. Landman: Department of Electrical Engineering, Vanderbilt University, Nashville, Tennessee
- Reginald F. Munden: Department of Radiology, Wake Forest Baptist Health, Winston Salem, North Carolina
- Heiko Peschl: Oxford University Hospitals NHS Foundation Trust, Oxford, United Kingdom
- Gary T. Smith: Department of Radiology, Vanderbilt University School of Medicine, Nashville, Tennessee; Department of Radiology, Tennessee Valley Healthcare System, Nashville, Tennessee
- Ambika Talwar: Oxford University Hospitals NHS Foundation Trust, Oxford, United Kingdom
- Fergus Gleeson: Oxford University Hospitals NHS Foundation Trust, Oxford, United Kingdom
Collapse
32
Abstract
The 2010s saw demonstration of the power of lung cancer screening to reduce mortality. However, with implementation of lung cancer screening comes the challenge of diagnosing millions of lung nodules every year. When compared to other cancers with widespread screening strategies (breast, colorectal, cervical, prostate, and skin), obtaining a lung nodule tissue biopsy to confirm a positive screening test remains associated with higher morbidity and cost. Therefore, non-invasive diagnostic biomarkers may have a unique opportunity in lung cancer to greatly improve the management of patients at risk. This review covers recent advances in the field of liquid biomarkers and computed tomographic imaging features, with special attention to new methods for combining biomarkers as well as the use of artificial intelligence for the discrimination of benign from malignant nodules.
Affiliation(s)
- Michael N Kammer: Department of Chemistry, Vanderbilt University, Nashville, TN, USA; Division of Allergy, Pulmonary, and Critical Care Medicine, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
- Pierre P Massion: Division of Allergy, Pulmonary, and Critical Care Medicine, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA; Cancer Early Detection and Prevention Initiative, Vanderbilt Ingram Cancer Center, Nashville, TN, USA; Medical Service, Tennessee Valley Healthcare Systems, Nashville Campus, Nashville, TN, USA
33
Nakai H, Nishio M, Yamashita R, Ono A, Nakao KK, Fujimoto K, Togashi K. Quantitative and Qualitative Evaluation of Convolutional Neural Networks with a Deeper U-Net for Sparse-View Computed Tomography Reconstruction. Acad Radiol 2020; 27:563-574. [PMID: 31281082 DOI: 10.1016/j.acra.2019.05.016] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2019] [Revised: 05/23/2019] [Accepted: 05/25/2019] [Indexed: 12/16/2022]
Abstract
RATIONALE AND OBJECTIVES To evaluate the utility of a convolutional neural network (CNN) with an increased number of contracting and expanding paths of U-net for sparse-view CT reconstruction. MATERIALS AND METHODS This study used 60 anonymized chest CT cases from a public database called "The Cancer Imaging Archive". Eight thousand images from 40 cases were used for training. Eight hundred images and 80 images from another 20 cases were used for quantitative and qualitative evaluation, respectively. Sparse-view CT images subsampled by a factor of 20 were simulated, and two CNNs were trained to create denoised images from the sparse-view CT. A CNN based on U-net with residual learning with four contracting and expanding paths (the preceding CNN) was compared with another CNN with eight contracting and expanding paths (the proposed CNN) both quantitatively (peak signal to noise ratio, structural similarity index) and qualitatively (the scores given by two radiologists for anatomical visibility, artifact and noise, and overall image quality) using the Wilcoxon signed-rank test. Nodule and emphysema appearance were also evaluated qualitatively. RESULTS The proposed CNN was significantly better than the preceding CNN both quantitatively and qualitatively (overall image quality interquartile range, 3.0-3.5 versus 1.0-1.0 reported from the preceding CNN; p < 0.001). However, only 2 of 22 cases used for emphysematous evaluation (2 CNNs × 11 cases with emphysema) had an average score of ≥ 2 (on a 3-point scale). CONCLUSION Increasing contracting and expanding paths may be useful for sparse-view CT reconstruction with CNN. However, poor reproducibility of emphysema appearance should also be noted.
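Peak signal-to-noise ratio, one of the quantitative metrics named above, is a simple function of the mean squared error between a reference and a reconstructed image. A minimal sketch on tiny made-up 8-bit images, not the study's pipeline:

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-sized grayscale images,
    given as nested lists of pixel values."""
    flat_ref = [p for row in reference for p in row]
    flat_test = [p for row in test for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_ref, flat_test)) / len(flat_ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Illustrative 2x2 reference image and a slightly noisy reconstruction
ref = [[100, 120], [130, 140]]
noisy = [[102, 118], [131, 139]]
print(round(psnr(ref, noisy), 1))  # 44.2
```

Higher PSNR means the reconstruction is closer to the reference; denoising CNNs for sparse-view CT are typically compared on exactly this kind of metric.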
34
Evolving the pulmonary nodules diagnosis from classical approaches to deep learning-aided decision support: three decades' development course and future prospect. J Cancer Res Clin Oncol 2019; 146:153-185. [PMID: 31786740 DOI: 10.1007/s00432-019-03098-5] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2019] [Accepted: 11/25/2019] [Indexed: 02/06/2023]
Abstract
PURPOSE Lung cancer is the commonest cause of cancer deaths worldwide, and its mortality can be reduced significantly by early diagnosis and screening. Since the 1960s, driven by the pressing need to accurately and effectively interpret the massive volume of chest images generated daily, computer-assisted diagnosis of pulmonary nodules has opened up new opportunities to relax the limitations imposed by physicians' subjectivity, experience and fatigue. Fair access to reliable and affordable computer-assisted diagnosis will also help fight the inequalities in incidence and mortality between populations. Significant and remarkable advances have been achieved since the 1980s, and consistent endeavors have been exerted to deal with the grand challenges of accurately detecting pulmonary nodules with high sensitivity at a low false-positive rate, and of precisely differentiating between benign and malignant nodules. There has, however, been no comprehensive examination of the techniques' development as pulmonary nodule diagnosis has evolved from classical approaches to machine learning-assisted decision support. The main goal of this investigation is to provide a comprehensive state-of-the-art review of the computer-assisted nodule detection and benign-malignant classification techniques developed over three decades, which have evolved from the complicated ad hoc analysis pipelines of conventional approaches to simplified, seamlessly integrated deep learning techniques. This review also identifies challenges and highlights opportunities for future work in learning models, learning algorithms and enhancement schemes for bridging the current state to future prospects and satisfying future demand. CONCLUSION This is the first literature review of the past 30 years' development in computer-assisted diagnosis of lung nodules. The challenges identified and the research opportunities highlighted in this survey are significant for bridging the current state to future prospects and satisfying future demand. Multifaceted driving forces and multidisciplinary research will bring computer-assisted diagnosis of pulmonary nodules into mainstream clinical medicine, raise the state of the art of clinical applications, and benefit both physicians and patients. We firmly hold the vision that fair access to reliable, faithful and affordable computer-assisted diagnosis for early cancer diagnosis would fight the inequalities in incidence and mortality between populations and save more lives.
35
Lung Nodule: Imaging Features and Evaluation in the Age of Machine Learning. CURRENT PULMONOLOGY REPORTS 2019. [DOI: 10.1007/s13665-019-00229-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
36
Uthoff J, Stephens MJ, Newell JD, Hoffman EA, Larson J, Koehn N, De Stefano FA, Lusk CM, Wenzlaff AS, Watza D, Neslund-Dudas C, Carr LL, Lynch DA, Schwartz AG, Sieren JC. Machine learning approach for distinguishing malignant and benign lung nodules utilizing standardized perinodular parenchymal features from CT. Med Phys 2019; 46:3207-3216. [PMID: 31087332 DOI: 10.1002/mp.13592] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2018] [Revised: 04/25/2019] [Accepted: 05/07/2019] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Computed tomography (CT) is an effective method for detecting and characterizing lung nodules in vivo. With the growing use of chest CT, the detection frequency of lung nodules is increasing. Noninvasive methods to distinguish malignant from benign nodules have the potential to decrease the clinical burden, risk, and cost involved in follow-up procedures on the large number of false-positive lesions detected. This study examined the benefit of including perinodular parenchymal features in machine learning (ML) tools for pulmonary nodule assessment. METHODS Lung nodule cases with a pathology-confirmed diagnosis (74 malignant, 289 benign) were used to extract quantitative imaging characteristics from CT scans of the nodule and perinodular parenchymal tissue. An ML tool development pipeline was employed using k-medoids clustering and information theory to determine efficient predictor sets for different amounts of parenchyma inclusion and to build an artificial neural network classifier. The resulting ML tool was validated using an independent cohort (50 malignant, 50 benign). RESULTS The inclusion of parenchymal imaging features improved the performance of the ML tool over exclusively nodular features (P < 0.01). The best-performing ML tool included features derived from nodule diameter-based surrounding parenchyma tissue quartile bands. We demonstrate similarly high performance on the independent validation cohort (AUC-ROC = 0.965). A comparison using the independent validation cohort with the Fleischner pulmonary nodule follow-up guidelines demonstrated a theoretical reduction in recommended follow-up imaging and procedures. CONCLUSIONS Radiomic features extracted from the parenchyma surrounding lung nodules contain valid signals with spatial relevance for the task of lung cancer risk classification. Through standardization of feature extraction regions in the parenchyma, a validation performance of 100% sensitivity and 96% specificity was achieved.
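Sensitivity and specificity, the headline validation metrics above, are computed directly from the confusion counts of a binary classifier. A self-contained sketch on an illustrative toy cohort (not the study's data):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    Labels: 1 = malignant, 0 = benign."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative cohort: 4 malignant and 4 benign nodules, one false positive
truth = [1, 1, 1, 1, 0, 0, 0, 0]
preds = [1, 1, 1, 1, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(truth, preds)
print(sens, spec)  # 1.0 0.75
```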
Affiliation(s)
- Johanna Uthoff: Department of Biomedical Engineering, University of Iowa, Iowa City, IA 52240, USA; Department of Radiology, University of Iowa, Iowa City, IA 52242, USA
- Matthew J Stephens: Department of Radiology, University of Cincinnati, Cincinnati, OH 45267, USA
- John D Newell: Department of Biomedical Engineering, University of Iowa, Iowa City, IA 52240, USA; Department of Radiology, University of Iowa, Iowa City, IA 52242, USA
- Eric A Hoffman: Department of Biomedical Engineering, University of Iowa, Iowa City, IA 52240, USA; Department of Radiology, University of Iowa, Iowa City, IA 52242, USA
- Jared Larson: Department of Radiology, University of Iowa, Iowa City, IA 52242, USA
- Nicholas Koehn: Department of Radiology, University of Iowa, Iowa City, IA 52242, USA
- Chrissy M Lusk: Karmanos Cancer Institute, Wayne State University, Detroit, MI 48201, USA
- Angela S Wenzlaff: Karmanos Cancer Institute, Wayne State University, Detroit, MI 48201, USA
- Donovan Watza: Karmanos Cancer Institute, Wayne State University, Detroit, MI 48201, USA
- Laurie L Carr: Department of Medicine, National Jewish Health, Denver, CO 80206, USA
- David A Lynch: Department of Radiology, National Jewish Health, Denver, CO 80206, USA
- Ann G Schwartz: Karmanos Cancer Institute, Wayne State University, Detroit, MI 48201, USA
- Jessica C Sieren: Department of Biomedical Engineering, University of Iowa, Iowa City, IA 52240, USA; Department of Radiology, University of Iowa, Iowa City, IA 52242, USA

37
Rattan R, Kataria T, Banerjee S, Goyal S, Gupta D, Pandita A, Bisht S, Narang K, Mishra SR. Artificial intelligence in oncology, its scope and future prospects with specific reference to radiation oncology. BJR Open 2019; 1:20180031. [PMID: 33178922 PMCID: PMC7592433 DOI: 10.1259/bjro.20180031] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2018] [Revised: 04/01/2019] [Accepted: 04/12/2019] [Indexed: 01/02/2023] Open
Abstract
OBJECTIVE Artificial intelligence (AI) seems to be bridging the gap between the acquisition of data and its meaningful interpretation. These approaches have shown outstanding capabilities, outperforming most classification and regression methods to date, and can automatically learn the most suitable data representation for the task at hand and present it for better correlation. This article tries to sensitize practising radiation oncologists to where the potential role of AI lies and what more can be achieved with it. METHODS AND MATERIALS Contemporary literature was searched and sorted, and an attempt at writing a comprehensive non-systematic review was made. RESULTS The article addresses various areas in oncology, especially in the field of radiation oncology, where AI-based work has been done. Whether in screening modalities, diagnosis or prognostic assays, AI has produced more accurately defined results and survival predictions for patients. Various steps and protocols in radiation oncology now use AI-based methods, such as planning, segmentation and delivery of radiation. The benefits of AI across all platforms of the health sector may lead to more refined and personalized medicine in the near future. CONCLUSION AI, through machine learning and artificial neural networks, has come up with faster and more accurate solutions to the problems faced by oncologists. The uses of AI are likely to increase exponentially. However, given concerns regarding demographic discrepancies in relation to patients, disease and their natural history, and reports of manipulation of AI, the ultimate responsibility will rest with the treating physicians.
Affiliation(s)
- Rajit Rattan: Division of Radiation Oncology, Medanta - The Medicity, Gurgaon, Haryana, India
- Tejinder Kataria: Division of Radiation Oncology, Medanta - The Medicity, Gurgaon, Haryana, India
- Susovan Banerjee: Division of Radiation Oncology, Medanta - The Medicity, Gurgaon, Haryana, India
- Shikha Goyal: Division of Radiation Oncology, Medanta - The Medicity, Gurgaon, Haryana, India
- Deepak Gupta: Division of Radiation Oncology, Medanta - The Medicity, Gurgaon, Haryana, India
- Akshi Pandita: Department of Dermatology, P. N. Behl Skin Institute, New Delhi, India
- Shyam Bisht: Division of Radiation Oncology, Medanta - The Medicity, Gurgaon, Haryana, India; Department of Dermatology, P. N. Behl Skin Institute, New Delhi, India
- Kushal Narang: Division of Radiation Oncology, Medanta - The Medicity, Gurgaon, Haryana, India

38
Xie Y, Xia Y, Zhang J, Song Y, Feng D, Fulham M, Cai W. Knowledge-based Collaborative Deep Learning for Benign-Malignant Lung Nodule Classification on Chest CT. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:991-1004. [PMID: 30334786 DOI: 10.1109/tmi.2018.2876510] [Citation(s) in RCA: 180] [Impact Index Per Article: 36.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
The accurate identification of malignant lung nodules on chest CT is critical for the early detection of lung cancer, which also offers patients the best chance of cure. Deep learning methods have recently been successfully introduced to computer vision problems, although substantial challenges remain in the detection of malignant nodules due to the lack of large training data sets. In this paper, we propose a multi-view knowledge-based collaborative (MV-KBC) deep model to separate malignant from benign nodules using limited chest CT data. Our model learns 3-D lung nodule characteristics by decomposing a 3-D nodule into nine fixed views. For each view, we construct a knowledge-based collaborative (KBC) submodel, where three types of image patches are designed to fine-tune three pre-trained ResNet-50 networks that characterize the nodules' overall appearance, voxel, and shape heterogeneity, respectively. We jointly use the nine KBC submodels to classify lung nodules with an adaptive weighting scheme learned during the error back propagation, which enables the MV-KBC model to be trained in an end-to-end manner. The penalty loss function is used for better reduction of the false negative rate with a minimal effect on the overall performance of the MV-KBC model. We tested our method on the benchmark LIDC-IDRI data set and compared it to the five state-of-the-art classification approaches. Our results show that the MV-KBC model achieved an accuracy of 91.60% for lung nodule classification with an AUC of 95.70%. These results are markedly superior to the state-of-the-art approaches.
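The adaptive weighting of the nine view-level submodels can be loosely sketched as a softmax-weighted ensemble of per-view malignancy scores. The scores and weights below are illustrative stand-ins, not trained MV-KBC parameters; in the real model the weights are learned during error backpropagation:

```python
import math

def combine_views(view_scores, view_logit_weights):
    """Combine per-view malignancy probabilities using softmax-normalized
    weights, loosely mimicking an adaptively weighted multi-view ensemble."""
    exps = [math.exp(w) for w in view_logit_weights]
    total = sum(exps)
    weights = [e / total for e in exps]  # softmax: weights sum to 1
    return sum(w * s for w, s in zip(weights, view_scores))

# Nine illustrative per-view probabilities for one nodule
scores = [0.9, 0.8, 0.85, 0.7, 0.95, 0.6, 0.75, 0.88, 0.82]
logits = [0.0] * 9  # equal logits reduce the ensemble to a simple average
print(round(combine_views(scores, logits), 3))  # 0.806
```

Learning the logits end-to-end lets the model emphasize views that are more informative for a given nodule's appearance.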
39
Horst C, Nair A, Janes SM. Lessons on managing pulmonary nodules from NELSON: we have come a long way. Thorax 2019; 74:427-429. [PMID: 30842256 DOI: 10.1136/thoraxjnl-2018-212783] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/28/2019] [Indexed: 11/03/2022]
Affiliation(s)
- Carolyn Horst: Lungs for Living Research Centre, UCL Respiratory, University College London, London, United Kingdom
- Arjun Nair: Department of Radiology, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Sam M Janes: Lungs for Living Research Centre, UCL Respiratory, University College London, London, United Kingdom

40
Wang X, Mao K, Wang L, Yang P, Lu D, He P. An Appraisal of Lung Nodules Automatic Classification Algorithms for CT Images. SENSORS (BASEL, SWITZERLAND) 2019; 19:E194. [PMID: 30621101 PMCID: PMC6338921 DOI: 10.3390/s19010194] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/16/2018] [Revised: 12/28/2018] [Accepted: 12/31/2018] [Indexed: 12/23/2022]
Abstract
Lung cancer is one of the deadliest diseases in the world, representing about 26% of all cancer deaths in 2017. The five-year cure rate is only 18% despite great progress in recent diagnosis and treatment. Before diagnosis, lung nodule classification is a key step, especially since automatic classification can help clinicians by providing a valuable opinion. Modern computer vision and machine learning technologies allow very fast and reliable CT image classification. This research area has attracted intense interest owing to its efficiency and potential labor savings. The paper aims to draw a systematic review of the state of the art of automatic classification of lung nodules. This research paper covers published works selected from the Web of Science, IEEEXplore, and DBLP databases up to June 2018. Each paper is critically reviewed based on objective, methodology, research dataset, and performance evaluation. Mainstream algorithms are surveyed and generic structures are summarized. Our work reveals that lung nodule classification based on deep learning has become dominant for its excellent performance. It is concluded that the consistency of the research objective and the integration of data deserve more attention. Moreover, collaborative work among developers, clinicians, and other parties should be strengthened.
Affiliation(s)
- Xinqi Wang: School of Software, Northeastern University, Shenyang 110004, China
- Keming Mao: School of Software, Northeastern University, Shenyang 110004, China
- Lizhe Wang: Norman Bethune Health Science Center of Jilin University, No. 2699 Qianjin Street, Changchun 130012, China
- Peiyi Yang: School of Software, Northeastern University, Shenyang 110004, China
- Duo Lu: School of Software, Northeastern University, Shenyang 110004, China
- Ping He: School of Computer Science and Engineering, Northeastern University, Shenyang 110004, China

41
Zhang B, Wang H, Ma X, Zhai H, Liao Y, Wu Y, Chen N, Zhang S. Statistical survey of open source medical image databases on the Internet. Digital Medicine 2019. [DOI: 10.4103/digm.digm_1_19] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
42
Armato SG, Huisman H, Drukker K, Hadjiiski L, Kirby JS, Petrick N, Redmond G, Giger ML, Cha K, Mamonov A, Kalpathy-Cramer J, Farahani K. PROSTATEx Challenges for computerized classification of prostate lesions from multiparametric magnetic resonance images. J Med Imaging (Bellingham) 2018; 5:044501. [PMID: 30840739 DOI: 10.1117/1.jmi.5.4.044501] [Citation(s) in RCA: 66] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Accepted: 10/10/2018] [Indexed: 12/18/2022] Open
Abstract
Grand challenges stimulate advances within the medical imaging research community; within a competitive yet friendly environment, they allow for a direct comparison of algorithms through a well-defined, centralized infrastructure. The tasks of the two-part PROSTATEx Challenges (the PROSTATEx Challenge and the PROSTATEx-2 Challenge) are (1) the computerized classification of clinically significant prostate lesions and (2) the computerized determination of Gleason Grade Group in prostate cancer, both based on multiparametric magnetic resonance images. The challenges incorporate well-vetted cases for training and testing, a centralized performance assessment process to evaluate results, and an established infrastructure for case dissemination, communication, and result submission. In the PROSTATEx Challenge, 32 groups applied their computerized methods (71 methods total) to 208 prostate lesions in the test set. The area under the receiver operating characteristic curve for these methods in the task of differentiating between lesions that are and are not clinically significant ranged from 0.45 to 0.87; statistically significant differences in performance among the top-performing methods, however, were not observed. In the PROSTATEx-2 Challenge, 21 groups applied their computerized methods (43 methods total) to 70 prostate lesions in the test set. When compared with the reference standard, the quadratic-weighted kappa values for these methods in the task of assigning a five-point Gleason Grade Group to each lesion ranged from -0.24 to 0.27; superiority to random guessing could be established for only two methods. When approached with a sense of commitment and scientific rigor, challenges foster interest in the designated task and encourage innovation in the field.
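The quadratic-weighted kappa used to score the Gleason Grade Group task penalizes disagreements by the square of their ordinal distance. A minimal pure-Python sketch on made-up ratings, not challenge data:

```python
def quadratic_weighted_kappa(rater_a, rater_b, n_categories):
    """Cohen's kappa with quadratic weights for ordinal categories 0..n-1."""
    n = len(rater_a)
    # Observed joint distribution of the two raters' labels
    observed = [[0.0] * n_categories for _ in range(n_categories)]
    for a, b in zip(rater_a, rater_b):
        observed[a][b] += 1.0 / n
    # Marginal histograms, used for the chance-agreement expectation
    hist_a = [rater_a.count(k) / n for k in range(n_categories)]
    hist_b = [rater_b.count(k) / n for k in range(n_categories)]
    num = den = 0.0
    for i in range(n_categories):
        for j in range(n_categories):
            w = (i - j) ** 2 / (n_categories - 1) ** 2  # quadratic penalty
            num += w * observed[i][j]
            den += w * hist_a[i] * hist_b[j]
    return 1.0 - num / den

# Perfect agreement on five ordinal grades gives kappa = 1
print(quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4], 5))  # 1.0
```

Kappa near 0 means agreement no better than chance, which is why values of -0.24 to 0.27 indicate that most methods barely beat random guessing.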
Affiliation(s)
- Samuel G Armato: The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Henkjan Huisman: Radboud University Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen, The Netherlands
- Karen Drukker: The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Lubomir Hadjiiski: University of Michigan, Department of Radiology, Ann Arbor, Michigan, United States
- Justin S Kirby: Frederick National Laboratory for Cancer Research, Cancer Imaging Program, Frederick, Maryland, United States
- Nicholas Petrick: U.S. Food and Drug Administration, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
- George Redmond: National Cancer Institute, Cancer Imaging Program, Division of Cancer Treatment and Diagnosis, Bethesda, Maryland, United States
- Maryellen L Giger: The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Kenny Cha: University of Michigan, Department of Radiology, Ann Arbor, Michigan, United States; U.S. Food and Drug Administration, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
- Artem Mamonov: MGH/Harvard Medical School, Boston, Massachusetts, United States
- Keyvan Farahani: National Cancer Institute, Cancer Imaging Program, Division of Cancer Treatment and Diagnosis, Bethesda, Maryland, United States

43
Kadir T, Gleeson F. Lung cancer prediction using machine learning and advanced imaging techniques. Transl Lung Cancer Res 2018; 7:304-312. [PMID: 30050768 DOI: 10.21037/tlcr.2018.05.15] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
Machine learning based lung cancer prediction models have been proposed to assist clinicians in managing incidental or screen detected indeterminate pulmonary nodules. Such systems may be able to reduce variability in nodule classification, improve decision making and ultimately reduce the number of benign nodules that are needlessly followed or worked-up. In this article, we provide an overview of the main lung cancer prediction approaches proposed to date and highlight some of their relative strengths and weaknesses. We discuss some of the challenges in the development and validation of such techniques and outline the path to clinical adoption.
Affiliation(s)
- Fergus Gleeson: Department of Radiology, Oxford University Hospitals NHS Foundation Trust, Oxford, UK

44
Nishio M, Nishizawa M, Sugiyama O, Kojima R, Yakami M, Kuroda T, Togashi K. Computer-aided diagnosis of lung nodule using gradient tree boosting and Bayesian optimization. PLoS One 2018; 13:e0195875. [PMID: 29672639 PMCID: PMC5908232 DOI: 10.1371/journal.pone.0195875] [Citation(s) in RCA: 46] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2017] [Accepted: 03/31/2018] [Indexed: 12/23/2022] Open
Abstract
We aimed to evaluate a computer-aided diagnosis (CADx) system for lung nodule classification focussing on (i) usefulness of the conventional CADx system (hand-crafted imaging feature + machine learning algorithm), (ii) comparison between support vector machine (SVM) and gradient tree boosting (XGBoost) as machine learning algorithms, and (iii) effectiveness of parameter optimization using Bayesian optimization and random search. Data on 99 lung nodules (62 lung cancers and 37 benign lung nodules) were included from public databases of CT images. A variant of the local binary pattern was used for calculating a feature vector. SVM or XGBoost was trained using the feature vector and its corresponding label. Tree Parzen Estimator (TPE) was used as Bayesian optimization for parameters of SVM and XGBoost. Random search was done for comparison with TPE. Leave-one-out cross-validation was used for optimizing and evaluating the performance of our CADx system. Performance was evaluated using area under the curve (AUC) of receiver operating characteristic analysis. AUC was calculated 10 times, and its average was obtained. The best averaged AUC of SVM and XGBoost was 0.850 and 0.896, respectively; both were obtained using TPE. XGBoost was generally superior to SVM. Optimal parameters for achieving high AUC were obtained with fewer numbers of trials when using TPE, compared with random search. Bayesian optimization of SVM and XGBoost parameters was more efficient than random search. Based on observer study, AUC values of two board-certified radiologists were 0.898 and 0.822. The results show that diagnostic accuracy of our CADx system was comparable to that of radiologists with respect to classifying lung nodules.
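The AUC used throughout this comparison has a useful probabilistic reading: it is the probability that a randomly chosen malignant case receives a higher classifier score than a randomly chosen benign one (the Mann-Whitney formulation). A stdlib-only sketch on toy scores, not the paper's data:

```python
def auc(scores_pos, scores_neg):
    """AUC as the normalized Mann-Whitney U statistic:
    P(score_pos > score_neg), counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy classifier outputs for malignant vs. benign nodules
malignant = [0.9, 0.8, 0.7, 0.4]
benign = [0.6, 0.3, 0.2, 0.1]
print(auc(malignant, benign))  # 0.9375
```

An AUC of 0.5 corresponds to random guessing and 1.0 to perfect ranking, which frames the 0.850 (SVM) vs. 0.896 (XGBoost) comparison above.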
Affiliation(s)
- Mizuho Nishio: Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Shogoin, Sakyo-ku, Kyoto, Japan; Preemptive Medicine and Lifestyle Disease Research Center, Kyoto University Hospital, Shogoin, Sakyo-ku, Kyoto, Japan
- Mitsuo Nishizawa: Department of Radiology, Osaka Medical College, Takatsuki, Osaka, Japan
- Osamu Sugiyama: Preemptive Medicine and Lifestyle Disease Research Center, Kyoto University Hospital, Shogoin, Sakyo-ku, Kyoto, Japan
- Ryosuke Kojima: Department of Biomedical Data Intelligence, Kyoto University Graduate School of Medicine, Sakyo-ku, Kyoto, Japan
- Masahiro Yakami: Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Shogoin, Sakyo-ku, Kyoto, Japan; Preemptive Medicine and Lifestyle Disease Research Center, Kyoto University Hospital, Shogoin, Sakyo-ku, Kyoto, Japan
- Tomohiro Kuroda: Division of Medical Information Technology and Administrative Planning, Kyoto University Hospital, Shogoin, Sakyo-ku, Kyoto, Japan
- Kaori Togashi: Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Shogoin, Sakyo-ku, Kyoto, Japan

45
Choi W, Oh JH, Riyahi S, Liu C, Jiang F, Chen W, White C, Rimner A, Mechalakos JG, Deasy JO, Lu W. Radiomics analysis of pulmonary nodules in low-dose CT for early detection of lung cancer. Med Phys 2018; 45:1537-1549. [PMID: 29457229 PMCID: PMC5903960 DOI: 10.1002/mp.12820] [Citation(s) in RCA: 88] [Impact Index Per Article: 14.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2017] [Revised: 02/05/2018] [Accepted: 02/07/2018] [Indexed: 01/13/2023] Open
Abstract
PURPOSE To develop a radiomics prediction model to improve pulmonary nodule (PN) classification in low-dose CT, and to compare the model with the American College of Radiology (ACR) Lung CT Screening Reporting and Data System (Lung-RADS) for early detection of lung cancer. METHODS We examined a set of 72 PNs (31 benign and 41 malignant) from the Lung Image Database Consortium image collection (LIDC-IDRI). One hundred three CT radiomic features were extracted from each PN. Before the model building process, distinctive features were identified using a hierarchical clustering method. We then constructed a prediction model by using a support vector machine (SVM) classifier coupled with a least absolute shrinkage and selection operator (LASSO). A tenfold cross-validation (CV) was repeated ten times (10 × 10-fold CV) to evaluate the accuracy of the SVM-LASSO model. Finally, the best model from the 10 × 10-fold CV was further evaluated using 20 × 5- and 50 × 2-fold CVs. RESULTS The best SVM-LASSO model consisted of only two features: the bounding box anterior-posterior dimension (BB_AP) and the standard deviation of inverse difference moment (SD_IDM). The BB_AP measured the extension of a PN in the anterior-posterior direction and was highly correlated (r = 0.94) with the PN size. The SD_IDM was a texture feature that measured the directional variation of the local homogeneity feature IDM. Univariate analysis showed that both features were statistically significant and discriminative (P = 0.00013 and 0.000038, respectively). PNs with larger BB_AP or smaller SD_IDM were more likely malignant. The 10 × 10-fold CV of the best SVM model using the two features achieved an accuracy of 84.6% and an AUC of 0.89. By comparison, Lung-RADS achieved an accuracy of 72.2% and an AUC of 0.77 using four features (size, type, calcification, and spiculation). The prediction improvement of SVM-LASSO compared with Lung-RADS was statistically significant (McNemar's test P = 0.026). Lung-RADS misclassified 19 cases because it was mainly based on PN size, whereas the SVM-LASSO model correctly classified 10 of these cases by combining a size (BB_AP) feature and a texture (SD_IDM) feature. The performance of the SVM-LASSO model was stable when leaving more patients out with five- and twofold CVs (accuracy 84.1% and 81.6%, respectively). CONCLUSION We developed an SVM-LASSO model to predict malignancy of PNs with two CT radiomic features. We demonstrated that the model achieved an accuracy of 84.6%, which was 12.4% higher than Lung-RADS.
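The SVM-LASSO pipeline with repeated 10-fold cross-validation described above can be sketched in scikit-learn. This is a minimal illustration on synthetic stand-in data, not the authors' implementation: the data, the LASSO alpha, and the choice to cap selection at two features (mirroring the paper's final two-feature model) are all assumptions for demonstration.

```python
# Sketch: LASSO-based feature selection feeding an SVM classifier,
# evaluated with 10 x 10-fold stratified cross-validation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for 72 nodules x 103 radiomic features
X, y = make_classification(n_samples=72, n_features=103, n_informative=5,
                           random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    # Keep the two features with the largest |LASSO coefficient|
    # (illustrative; the paper's model also ended with two features)
    ("lasso", SelectFromModel(Lasso(alpha=0.01),
                              threshold=-np.inf, max_features=2)),
    ("svm", SVC(kernel="rbf")),
])

# 10-fold CV repeated 10 times, as in the paper's evaluation scheme
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="accuracy")
print(f"mean accuracy over {len(scores)} folds: {scores.mean():.3f}")
```

Wrapping selection and classification in one `Pipeline` ensures the LASSO step is refit inside every fold, avoiding the selection bias that arises when features are chosen on the full dataset before cross-validation.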
Affiliation(s)
- Wookjin Choi, Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Jung Hun Oh, Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Sadegh Riyahi, Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Chia-Ju Liu, Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Feng Jiang, Department of Pathology, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Wengen Chen, Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Charles White, Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Andreas Rimner, Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- James G. Mechalakos, Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Joseph O. Deasy, Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Wei Lu, Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
46
Liu S, Xie Y, Jirapatnakul A, Reeves AP. Pulmonary nodule classification in lung cancer screening with three-dimensional convolutional neural networks. J Med Imaging (Bellingham) 2017; 4:041308. [PMID: 29181428 PMCID: PMC5685809 DOI: 10.1117/1.jmi.4.4.041308] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2017] [Accepted: 10/23/2017] [Indexed: 12/17/2022] Open
Abstract
A three-dimensional (3-D) convolutional neural network (CNN) trained from scratch is presented for the classification of pulmonary nodule malignancy from low-dose chest CT scans. Recent approval of lung cancer screening in the United States provides motivation for determining the likelihood of malignancy of pulmonary nodules from the initial CT scan finding to minimize the number of follow-up actions. Classifier ensembles of different combinations of the 3-D CNN and traditional machine learning models based on handcrafted 3-D image features are also explored. The dataset consisting of 326 nodules is constructed with balanced size and class distribution with the malignancy status pathologically confirmed. The results show that both the 3-D CNN single model and the ensemble models with 3-D CNN outperform the respective counterparts constructed using only traditional models. Moreover, complementary information can be learned by the 3-D CNN and the conventional models, which together are combined to construct an ensemble model with statistically superior performance compared with the single traditional model. The performance of the 3-D CNN model demonstrates the potential for improving the lung cancer screening follow-up protocol, which currently mainly depends on the nodule size.
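The ensemble idea in the abstract above, combining a neural network with traditional models trained on handcrafted features, can be sketched as a soft-voting classifier. This is an illustrative stand-in, not the authors' code: an MLP substitutes for the 3-D CNN, a random forest for the handcrafted-feature models, and the synthetic data and hyperparameters are assumptions.

```python
# Sketch: soft-voting ensemble of a neural network and a
# handcrafted-feature model, echoing the paper's classifier ensembles.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for 326 nodules with balanced classes
X, y = make_classification(n_samples=326, n_features=64, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

ensemble = VotingClassifier(
    estimators=[
        # Stand-in for the 3-D CNN branch
        ("net", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                              random_state=1)),
        # Stand-in for the traditional model on handcrafted features
        ("rf", RandomForestClassifier(random_state=1)),
    ],
    voting="soft",  # average predicted class probabilities
)
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
print(f"ensemble test accuracy: {acc:.3f}")
```

Soft voting averages the two models' predicted probabilities, so complementary errors can cancel; this is the mechanism by which the paper's CNN-plus-traditional ensembles outperform either model alone.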
Affiliation(s)
- Shuang Liu, Cornell University, School of Electrical and Computer Engineering, Ithaca, New York, United States
- Yiting Xie, Cornell University, School of Electrical and Computer Engineering, Ithaca, New York, United States
- Artit Jirapatnakul, Icahn School of Medicine at Mount Sinai, Department of Radiology, New York, United States
- Anthony P. Reeves, Cornell University, School of Electrical and Computer Engineering, Ithaca, New York, United States
47
Nordstrom RJ. Special Section Guest Editorial: Quantitative Imaging and the Pioneering Efforts of Laurence P. Clarke. J Med Imaging (Bellingham) 2017; 5:011001. [PMID: 28924577 DOI: 10.1117/1.jmi.5.1.011001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023] Open
Abstract
This guest editorial introduces the Special Section honoring Dr. Laurence P. Clarke.
Affiliation(s)
- Robert J Nordstrom, National Cancer Institute, Chief, Image Guided Interventions, Cancer Imaging Program, Division of Cancer Treatment and Diagnosis
|
48
|
Armato SG, Drukker K, Li F, Hadjiiski L, Tourassi GD, Engelmann RM, Giger ML, Redmond G, Farahani K, Kirby JS, Petrick NA. Letter to the Editor: Use of Publicly Available Image Resources. Acad Radiol 2017; 24:916-917. [PMID: 28506513 DOI: 10.1016/j.acra.2017.03.015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2017] [Accepted: 03/16/2017] [Indexed: 10/19/2022]