1
Chowdary S, Purushotaman SB. An Improved Archimedes Optimization-aided Multi-scale Deep Learning Segmentation with dilated ensemble CNN classification for detecting lung cancer using CT images. Network (Bristol, England) 2024:1-39. [PMID: 38975771 DOI: 10.1080/0954898x.2024.2373127]
Abstract
Early detection of lung cancer is necessary to prevent deaths caused by the disease, but identifying cancer in the lungs from Computed Tomography (CT) scans with existing deep learning algorithms does not always yield accurate results. A novel adaptive deep learning framework with heuristic improvement is therefore developed. The proposed framework comprises three stages: (a) image acquisition, (b) lung nodule segmentation, and (c) lung cancer classification. Raw CT images are gathered from standard data sources. Nodule segmentation is then performed by an Adaptive Multi-Scale Dilated Trans-Unet3+. To increase segmentation accuracy, the parameters of this model are optimized by the proposed Modified Transfer Operator-based Archimedes Optimization (MTO-AO). Finally, the segmented images are classified by Advanced Dilated Ensemble Convolutional Neural Networks (ADECNN), constructed from Inception, ResNet, and MobileNet, whose hyperparameters are also tuned by MTO-AO. The final result is estimated from the three networks by high ranking-based classification. Performance is evaluated with multiple measures and compared against different approaches. The findings demonstrate the system's efficiency in detecting cancer, helping patients receive appropriate treatment.
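The abstract does not specify how the "high ranking-based classification" combines the three backbone networks; as a rough illustration of merging per-network outputs, here is a plain majority-vote sketch. The function name, labels, and tie-breaking rule are all assumptions, not the authors' method:

```python
from collections import Counter

def ensemble_vote(predictions):
    """Majority vote across the three backbone classifiers (Inception,
    ResNet, MobileNet). Ties break by first-seen label -- an assumption,
    since the abstract's 'high ranking-based' rule is not specified."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-network labels for one CT scan
assert ensemble_vote(["malignant", "malignant", "benign"]) == "malignant"
```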
Affiliation(s)
- Shalini Chowdary
- ECE, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamil Nadu, India

2
Yun C, Tang F, Gao Z, Wang W, Bai F, Miller JD, Liu H, Lee Y, Lou Q. Construction of Risk Prediction Model of Type 2 Diabetic Kidney Disease Based on Deep Learning. Diabetes Metab J 2024; 48:771-779. [PMID: 38685670 PMCID: PMC11307115 DOI: 10.4093/dmj.2023.0033]
Abstract
BACKGROUND This study aimed to develop a diabetic kidney disease (DKD) prediction model using a long short-term memory (LSTM) neural network and evaluate its performance using accuracy, precision, recall, and the area under the curve (AUC) of the receiver operating characteristic (ROC) curve. METHODS The study identified DKD risk factors through literature review and a physician focus group, and collected 7 years of data from 6,040 type 2 diabetes mellitus patients based on these risk factors. PyTorch was used to build the LSTM neural network, with 70% of the data used for training and the remaining 30% for testing. Three models were established to examine the impact of glycosylated hemoglobin (HbA1c), systolic blood pressure (SBP), and pulse pressure (PP) variabilities on the model's performance. RESULTS The developed model achieved an accuracy of 83% and an AUC of 0.83. When the risk factor of HbA1c variability, SBP variability, or PP variability was removed one by one, the accuracy of each model was significantly lower than that of the optimal model, at 78% (P<0.001), 79% (P<0.001), and 81% (P<0.001), respectively. The AUC of the ROC was also significantly lower for each model, with values of 0.72 (P<0.001), 0.75 (P<0.001), and 0.77 (P<0.05). CONCLUSION The developed DKD risk prediction model using an LSTM neural network demonstrated a high accuracy and AUC value. When HbA1c, SBP, and PP variabilities were added to the model as featured characteristics, the model's performance was greatly improved.
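The variability features described here (HbA1c, SBP, PP) are typically derived from repeated per-patient measurements; one common choice is the coefficient of variation. A minimal stdlib sketch under that assumption (the abstract does not state which variability metric the authors used):

```python
import statistics

def coefficient_of_variation(values):
    """Dispersion of repeated measurements, e.g. yearly HbA1c readings.
    CV = stdev / mean; a larger CV means less stable control."""
    return statistics.stdev(values) / statistics.fmean(values)

# Seven yearly HbA1c (%) readings for two hypothetical patients
stable   = [6.8, 7.0, 6.9, 7.1, 7.0, 6.9, 7.0]
variable = [6.5, 8.9, 7.2, 9.4, 6.8, 8.1, 7.5]

assert coefficient_of_variation(variable) > coefficient_of_variation(stable)
```

Per-patient values like these would then be appended to the feature vector fed into the LSTM alongside the raw longitudinal measurements.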
Affiliation(s)
- Chuan Yun
- Department of Endocrinology, The First Affiliated Hospital of Hainan Medical University, Haikou, China
- Fangli Tang
- International School of Nursing, Hainan Medical University, Haikou, China
- Zhenxiu Gao
- School of International Education, Nanjing Medical University, Nanjing, China
- Wenjun Wang
- Department of Endocrinology, The First Affiliated Hospital of Hainan Medical University, Haikou, China
- Fang Bai
- Nursing Department 531, The First Affiliated Hospital of Hainan Medical University, Haikou, China
- Joshua D. Miller
- Department of Medicine, Division of Endocrinology & Metabolism, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, USA
- Huanhuan Liu
- Department of Endocrinology, Hainan General Hospital, Haikou, China
- Qingqing Lou
- The First Affiliated Hospital of Hainan Medical University, Hainan Clinical Research Center for Metabolic Disease, Haikou, China

3
Sun L, Zhang M, Lu Y, Zhu W, Yi Y, Yan F. Nodule-CLIP: Lung nodule classification based on multi-modal contrastive learning. Comput Biol Med 2024; 175:108505. [PMID: 38688129 DOI: 10.1016/j.compbiomed.2024.108505]
Abstract
The latest developments in deep learning have demonstrated the importance of CT medical imaging for the classification of pulmonary nodules. However, challenges remain in fully leveraging the relevant medical annotations of pulmonary nodules and distinguishing between the benign and malignant labels of adjacent nodules. This paper therefore proposes the Nodule-CLIP model, which deeply mines the potential relationships among CT images, the complex attributes of lung nodules, and their benign or malignant status through a contrastive learning method, and uses these similarities and differences to optimize the image feature extraction network and improve its ability to distinguish similar lung nodules. First, we segment the 3D lung nodule information with a U-Net to reduce the interference caused by the background and focus on the lung nodule images. Second, image features, class features, and complex attribute features are aligned by contrastive learning and the loss function in Nodule-CLIP to optimize lung nodule representations and improve classification ability. A series of testing and ablation experiments were conducted on the public LIDC-IDRI dataset; the final benign/malignant classification accuracy was 90.6% and the recall was 92.81%. The experimental results show the advantages of this method in lung nodule classification as well as its interpretability.
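Nodule-CLIP's alignment of image, class, and attribute features follows the general CLIP recipe; below is a self-contained sketch of the symmetric contrastive (InfoNCE-style) loss such methods typically use, in plain Python. The temperature value and toy two-dimensional embeddings are illustrative assumptions, not the paper's settings:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def clip_loss(img_embs, txt_embs, temperature=0.07):
    """Symmetric InfoNCE: matched image/attribute pairs (same index)
    should score higher than all mismatched pairs, in both directions."""
    n = len(img_embs)
    sims = [[cosine(i, t) / temperature for t in txt_embs] for i in img_embs]
    loss = 0.0
    for k in range(n):
        row = sims[k]                         # image k vs. all attribute texts
        col = [sims[j][k] for j in range(n)]  # attribute k vs. all images
        for logits in (row, col):
            log_z = math.log(sum(math.exp(x) for x in logits))
            loss += -(logits[k] - log_z)      # cross-entropy on the match
    return loss / (2 * n)

aligned    = clip_loss([[1, 0], [0, 1]], [[1, 0], [0, 1]])
misaligned = clip_loss([[1, 0], [0, 1]], [[0, 1], [1, 0]])
assert aligned < misaligned  # matched pairs yield a lower loss
```

Minimizing a loss of this shape pulls matched image/attribute embeddings together and pushes mismatched ones apart, which is the "similarities and differences" signal the abstract refers to.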
Affiliation(s)
- Lijing Sun
- College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing, 211800, Jiangsu, China
- Mengyi Zhang
- College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing, 211800, Jiangsu, China
- Yu Lu
- College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing, 211800, Jiangsu, China
- Wenjun Zhu
- College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing, 211800, Jiangsu, China
- Yang Yi
- College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing, 211800, Jiangsu, China
- Fei Yan
- Jiangsu Institute of Cancer Research & The Affiliated Cancer Hospital of Nanjing Medical University, Jiangsu Cancer Hospital, Nanjing, 210009, Jiangsu, China

4
Pan X, Wang P, Jia S, Wang Y, Liu Y, Zhang Y, Jiang C. Multi-contrast learning-guided lightweight few-shot learning scheme for predicting breast cancer molecular subtypes. Med Biol Eng Comput 2024; 62:1601-1613. [PMID: 38316663 DOI: 10.1007/s11517-024-03031-0]
Abstract
Invasive gene expression profiling studies have identified prognostically significant breast cancer subtypes: normal-like, luminal, HER-2 enriched, and basal-like, defined in large part by human epidermal growth factor receptor 2 (HER-2), progesterone receptor (PR), and estrogen receptor (ER). Although dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is widely employed in the screening and therapy of breast cancer, noninvasively predicting breast cancer molecular subtypes remains challenging because the task operates in extremely low-data regimes. In this paper, a novel few-shot learning scheme combining a lightweight contrastive convolutional neural network (LC-CNN) and a multi-contrast learning strategy (MCLS) is developed to predict the molecular subtype of breast cancer from DCE-MRI. MCLS is designed to construct One-vs-Rest and One-vs-One classification tasks, which address the inter-class similarity among the normal-like, luminal, HER-2 enriched, and basal-like subtypes. Extensive experiments demonstrate the superiority of the proposed scheme over state-of-the-art methods. Furthermore, the scheme achieves competitive results on few samples by jointly using LC-CNN and MCLS to excavate the contrastive correlations of pairs of DCE-MRI images.
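The One-vs-Rest and One-vs-One task families that MCLS builds over the four subtypes can be enumerated directly; a small sketch (the tuple representation is an assumption — the paper realizes these as training tasks, not literal data structures):

```python
from itertools import combinations

subtypes = ["normal-like", "luminal", "HER-2 enriched", "basal-like"]

# One-vs-Rest: one binary task per subtype (that subtype vs. all others)
ovr_tasks = [(s, [t for t in subtypes if t != s]) for s in subtypes]

# One-vs-One: one binary task per unordered pair of subtypes
ovo_tasks = list(combinations(subtypes, 2))

assert len(ovr_tasks) == 4   # 4 OvR tasks
assert len(ovo_tasks) == 6   # C(4, 2) = 6 OvO tasks
```

Training on both families gives the model coarse (one class vs. everything) and fine (pairwise) discrimination signals, which is how MCLS targets inter-class similarity.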
Affiliation(s)
- Xiang Pan
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
- Cancer Center, Faculty of Health Sciences, University of Macau, Macau, SAR, China
- Pei Wang
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
- Shunyuan Jia
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
- Yihang Wang
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
- Yuan Liu
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
- Yan Zhang
- Department of Oncology, Wuxi Maternal and Child Health Care Hospital, Jiangnan University, Wuxi, China
- Chunjuan Jiang
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai, China

5
Shyamala Bharathi P, Shalini C. Advanced hybrid attention-based deep learning network with heuristic algorithm for adaptive CT and PET image fusion in lung cancer detection. Med Eng Phys 2024; 126:104138. [PMID: 38621836 DOI: 10.1016/j.medengphy.2024.104138]
Abstract
Lung cancer is one of the deadliest diseases in the world, and detecting it early can save a patient's life. Although Computed Tomography (CT) is among the best imaging tools in the medical sector, clinicians find it challenging to interpret and detect cancer from CT scan data. Positron Emission Tomography (PET) imaging is one of the most effective ways to diagnose certain malignancies such as lung tumours. Many diagnostic models have been implemented for various diseases, and early identification of lung cancer is very important for predicting its severity in cancer patients. To explore an effective model, an image fusion-based detection model for lung cancer is proposed using an improved heuristic algorithm with a deep learning model. First, PET and CT images are gathered from the internet. These two sets of images are then fused for further processing by an Adaptive Dilated Convolutional Neural Network (AD-CNN), whose hyperparameters are tuned by the Modified Initial Velocity-based Capuchin Search Algorithm (MIV-CapSA). Subsequently, the abnormal regions are segmented using TransUnet3+. Finally, the segmented images are fed into a Hybrid Attention-based Deep Network (HADN) model built from MobileNet and ShuffleNet. The effectiveness of the novel detection model is analyzed using various metrics and compared with traditional approaches. The outcomes show that it aids early detection, helping patients receive effective treatment.
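Before the learned AD-CNN fusion, it may help to picture the simplest possible PET/CT fusion: a pixel-wise weighted average over registered slices. The fixed equal weighting below is an illustrative assumption; the paper's fusion is learned, not a fixed blend:

```python
def fuse(pet, ct, alpha=0.5):
    """Pixel-wise weighted fusion of registered PET and CT slices.
    alpha weights the PET contribution; 0.5 is an arbitrary choice here."""
    return [[alpha * p + (1 - alpha) * c for p, c in zip(pet_row, ct_row)]
            for pet_row, ct_row in zip(pet, ct)]

# Toy 2x2 registered slices with intensities normalized to [0, 1]
pet = [[0.8, 0.2], [0.1, 0.9]]
ct  = [[0.4, 0.6], [0.3, 0.5]]
fused = fuse(pet, ct)
assert abs(fused[0][0] - 0.6) < 1e-9  # 0.5*0.8 + 0.5*0.4
```

A learned fusion network effectively replaces the scalar `alpha` with spatially varying, data-dependent weights.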
Affiliation(s)
- P Shyamala Bharathi
- Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India
- C Shalini
- Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India

6
UrRehman Z, Qiang Y, Wang L, Shi Y, Yang Q, Khattak SU, Aftab R, Zhao J. Effective lung nodule detection using deep CNN with dual attention mechanisms. Sci Rep 2024; 14:3934. [PMID: 38365831 PMCID: PMC10873370 DOI: 10.1038/s41598-024-51833-x]
Abstract
Novel methods are required to enhance lung cancer detection, as lung cancer has overtaken other cancers to become the leading cause of cancer-related mortality. Radiologists have long-standing methods for locating lung nodules in patients with lung cancer, such as computed tomography (CT) scans, but they must manually review a large number of CT images, which makes the process time-consuming and prone to human error. Computer-aided diagnosis (CAD) systems, built on cutting-edge deep learning architectures, have been created to help radiologists overcome these difficulties and to improve the efficiency and accuracy of lung nodule diagnosis. In this study, a bespoke convolutional neural network (CNN) with a dual attention mechanism was created, specifically crafted to concentrate on the most important elements in lung nodule images. The CNN model extracts informative features from the images, while the attention module incorporates both channel attention and spatial attention mechanisms to selectively highlight significant features. After the attention module, global average pooling is applied to summarize the spatial information. To evaluate the performance of the proposed model, extensive experiments were conducted using a benchmark lung nodule dataset. The results demonstrate that our model surpasses recent models and achieves state-of-the-art accuracy in lung nodule detection and classification tasks.
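The channel-then-spatial attention flow described here can be caricatured without any learned parameters: gate each channel by its global average, gate each position by its cross-channel mean, then summarize with global average pooling. A parameter-free stdlib sketch of that data flow (real attention modules learn these gates with convolutions and pooling branches; this only shows the shapes and ordering):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(fmap):
    """Weight each channel by a gate computed from its global average."""
    gates = [sigmoid(sum(map(sum, ch)) / (len(ch) * len(ch[0]))) for ch in fmap]
    return [[[v * g for v in row] for row in ch] for ch, g in zip(fmap, gates)]

def spatial_attention(fmap):
    """Weight each spatial position by a gate from its cross-channel mean."""
    c, h, w = len(fmap), len(fmap[0]), len(fmap[0][0])
    gate = [[sigmoid(sum(fmap[k][i][j] for k in range(c)) / c)
             for j in range(w)] for i in range(h)]
    return [[[ch[i][j] * gate[i][j] for j in range(w)] for i in range(h)]
            for ch in fmap]

def global_average_pool(fmap):
    return [sum(map(sum, ch)) / (len(ch) * len(ch[0])) for ch in fmap]

# Two 2x2 channels of a toy nodule feature map
x = [[[1.0, 2.0], [3.0, 4.0]],
     [[0.0, 1.0], [1.0, 0.0]]]
features = global_average_pool(spatial_attention(channel_attention(x)))
assert len(features) == 2  # one summary value per channel
```

Because every gate lies in (0, 1), both stages can only suppress features, which is the "selectively highlight" behavior the abstract describes.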
Affiliation(s)
- Zia UrRehman
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, China
- Yan Qiang
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, China
- School of Software, North University of China, Taiyuan, China
- Long Wang
- Jinzhong College of Information, Jinzhong, China
- Yiwei Shi
- NHC Key Laboratory of Pneumoconiosis, Shanxi Key Laboratory of Respiratory Diseases, Department of Pulmonary and Critical Care Medicine, The First Hospital of Shanxi Medical University, Taiyuan, Shanxi, China
- Saeed Ullah Khattak
- Centre of Biotechnology and Microbiology, University of Peshawar, Peshawar, 25120, Pakistan
- Rukhma Aftab
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, China
- Juanjuan Zhao
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, China
- Jinzhong College of Information, Jinzhong, China

7
Zhong R, Gao T, Li J, Li Z, Tian X, Zhang C, Lin X, Wang Y, Gao L, Hu K. The global research of artificial intelligence in lung cancer: a 20-year bibliometric analysis. Front Oncol 2024; 14:1346010. [PMID: 38371616 PMCID: PMC10869611 DOI: 10.3389/fonc.2024.1346010]
Abstract
Background Lung cancer (LC) has the second-highest incidence and the highest mortality of all cancers worldwide. Early screening and precise treatment of LC have been research hotspots in this field. Artificial intelligence (AI) technology has advantages in many aspects of LC and is widely used for early diagnosis, differential classification, treatment, and prognosis prediction. Objective This study aims to analyze and visualize the research history, current status, current hotspots, and development trends of artificial intelligence in the field of lung cancer using bibliometric methods, and to predict future research directions and cutting-edge hotspots. Results A total of 2,931 articles published between 2003 and 2023 were included, contributed by 15,848 authors from 92 countries/regions. Among them, China (40%, 1,173 papers), the USA (24.8%, 727 papers), and India (10.2%, 299 papers) made outstanding contributions, together accounting for 75% of the total publications. The primary research institutions were Shanghai Jiao Tong University (n=66), the Chinese Academy of Sciences (n=63), and Harvard Medical School (n=52). Professor Qian Wei (n=20) from Northeastern University in China ranked first among the top 10 authors, while Armato SG (n=458 citations) was the most co-cited author. Frontiers in Oncology (121 publications; IF 2022, 4.7; Q2) was the most published journal, while Radiology (3,003 citations; IF 2022, 19.7; Q1) was the most co-cited journal. Different countries and institutions should further strengthen cooperation with each other. The most common keywords were lung cancer, classification, cancer, machine learning, and deep learning. The most cited paper was Nicolas Coudray et al., 2018, Nat Med (1,196 total citations). Conclusions Research related to AI in lung cancer has significant application prospects, and the number of scholars dedicated to AI-related research on lung cancer is continually growing.
It is foreseeable that non-invasive diagnosis and precise minimally invasive treatment through deep learning and machine learning will remain a central focus in the future. Simultaneously, there is a need to enhance collaboration not only among various countries and institutions but also between high-quality medical and industrial entities.
Affiliation(s)
- Ruikang Zhong
- Beijing University of Chinese Medicine, Beijing, China
- Tangke Gao
- Beijing University of Chinese Medicine, Beijing, China
- Jinghua Li
- Beijing University of Chinese Medicine, Beijing, China
- Zexing Li
- Beijing University of Chinese Medicine, Beijing, China
- Xue Tian
- Guang'an Men Hospital, China Academy of Chinese Medical Sciences, Beijing, China
- Chi Zhang
- Beijing University of Chinese Medicine, Beijing, China
- Ximing Lin
- Beijing University of Chinese Medicine, Beijing, China
- Yuehui Wang
- Beijing University of Chinese Medicine, Beijing, China
- Lei Gao
- Dongfang Hospital, Beijing University of Chinese Medicine, Beijing, China
- Kaiwen Hu
- Dongfang Hospital, Beijing University of Chinese Medicine, Beijing, China

8
Wu R, Liang C, Zhang J, Tan Q, Huang H. Multi-kernel driven 3D convolutional neural network for automated detection of lung nodules in chest CT scans. Biomed Opt Express 2024; 15:1195-1218. [PMID: 38404310 PMCID: PMC10890889 DOI: 10.1364/boe.504875]
Abstract
The accurate position detection of lung nodules is crucial in early chest computed tomography (CT)-based lung cancer screening, which helps to improve the survival rate of patients. Deep learning methodologies have shown impressive feature extraction ability in the CT image analysis task, but it is still a challenge to develop a robust nodule detection model due to the salient morphological heterogeneity of nodules and complex surrounding environment. In this study, a multi-kernel driven 3D convolutional neural network (MK-3DCNN) is proposed for computerized nodule detection in CT scans. In the MK-3DCNN, a residual learning-based encoder-decoder architecture is introduced to employ the multi-layer features of the deep model. Considering the various nodule sizes and shapes, a multi-kernel joint learning block is developed to capture 3D multi-scale spatial information of nodule CT images, and this is conducive to improving nodule detection performance. Furthermore, a multi-mode mixed pooling strategy is designed to replace the conventional single-mode pooling manner, and it reasonably integrates the max pooling, average pooling, and center cropping pooling operations to obtain more comprehensive nodule descriptions from complicated CT images. Experimental results on the public dataset LUNA16 illustrate that the proposed MK-3DCNN method achieves more competitive nodule detection performance compared to some state-of-the-art algorithms. The results on our constructed clinical dataset CQUCH-LND indicate that the MK-3DCNN has a good prospect in clinical practice.
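The multi-mode mixed pooling idea, blending max, average, and center-cropping pooling over the same window, can be sketched directly. The equal blend weights below are an assumption (the paper may learn or tune them), and the sketch is 2D for brevity where the paper works in 3D:

```python
def max_pool(window):
    return max(v for row in window for v in row)

def avg_pool(window):
    flat = [v for row in window for v in row]
    return sum(flat) / len(flat)

def center_crop_pool(window):
    """Keep the center value of the window (the 'center cropping' mode)."""
    return window[len(window) // 2][len(window[0]) // 2]

def mixed_pool(window, weights=(1/3, 1/3, 1/3)):
    """Blend the three pooling modes into one summary value."""
    wm, wa, wc = weights
    return (wm * max_pool(window)
            + wa * avg_pool(window)
            + wc * center_crop_pool(window))

win = [[1.0, 5.0, 2.0],
       [0.0, 3.0, 1.0],
       [2.0, 0.0, 4.0]]
assert max_pool(win) == 5.0
assert abs(avg_pool(win) - 2.0) < 1e-9
assert center_crop_pool(win) == 3.0
```

The motivation stated in the abstract maps directly onto the three modes: max pooling keeps the strongest response, average pooling keeps context, and center cropping preserves the nodule-centered value.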
Affiliation(s)
- Ruoyu Wu
- Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
- Changyu Liang
- Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, China
- Jiuquan Zhang
- Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, China
- QiJuan Tan
- Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, China
- Hong Huang
- Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China

9
Thanoon MA, Zulkifley MA, Mohd Zainuri MAA, Abdani SR. A Review of Deep Learning Techniques for Lung Cancer Screening and Diagnosis Based on CT Images. Diagnostics (Basel) 2023; 13:2617. [PMID: 37627876 PMCID: PMC10453592 DOI: 10.3390/diagnostics13162617]
Abstract
One of the most common and deadly diseases in the world is lung cancer. Only early identification of lung cancer can increase a patient's probability of survival. A frequently used modality for the screening and diagnosis of lung cancer is computed tomography (CT) imaging, which provides a detailed scan of the lung. In line with the advancement of computer-assisted systems, deep learning techniques have been extensively explored to help in interpreting the CT images for lung cancer identification. Hence, the goal of this review is to provide a detailed review of the deep learning techniques that were developed for screening and diagnosing lung cancer. This review covers an overview of deep learning (DL) techniques, the suggested DL techniques for lung cancer applications, and the novelties of the reviewed methods. This review focuses on two main methodologies of deep learning in screening and diagnosing lung cancer, which are classification and segmentation methodologies. The advantages and shortcomings of current deep learning models will also be discussed. The resultant analysis demonstrates that there is a significant potential for deep learning methods to provide precise and effective computer-assisted lung cancer screening and diagnosis using CT scans. At the end of this review, a list of potential future works regarding improving the application of deep learning is provided to spearhead the advancement of computer-assisted lung cancer diagnosis systems.
Affiliation(s)
- Mohammad A. Thanoon
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, University Kebangsaan Malaysia, Bangi 43600, Malaysia
- System and Control Engineering Department, College of Electronics Engineering, Ninevah University, Mosul 41002, Iraq
- Mohd Asyraf Zulkifley
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, University Kebangsaan Malaysia, Bangi 43600, Malaysia
- Muhammad Ammirrul Atiqi Mohd Zainuri
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, University Kebangsaan Malaysia, Bangi 43600, Malaysia
- Siti Raihanah Abdani
- School of Computing Sciences, College of Computing, Informatics and Media, Universiti Teknologi MARA, Shah Alam 40450, Malaysia

10
Bishnoi V, Goel N. Tensor-RT-Based Transfer Learning Model for Lung Cancer Classification. J Digit Imaging 2023; 36:1364-1375. [PMID: 37059889 PMCID: PMC10407002 DOI: 10.1007/s10278-023-00822-z]
Abstract
Cancer is a leading cause of death across the globe, and lung cancer carries the highest mortality rate. Early diagnosis through computed tomography (CT) scan imaging helps to identify the stages of lung cancer. Several deep learning-based classification methods have been employed to develop automatic systems for the diagnosis and detection of CT lung slices. However, diagnosis based on nodule detection is a challenging task, as it requires manual annotation of nodule regions, and these computer-aided systems have not yet achieved the desired performance in real-time lung cancer classification. In the present paper, a high-speed, real-time transfer learning-based framework is proposed for classifying CT lung cancer slices into benign and malignant. The proposed framework comprises three modules: (i) pre-processing and segmentation of lung images using K-means clustering based on cosine distance and morphological operations; (ii) tuning and regularization of the proposed model, named the weighted VGG deep network (WVDN); and (iii) model inference in NVIDIA TensorRT during post-processing for deployment in real-time applications. In this study, two pre-trained CNN models were evaluated and compared with the proposed model. All models were trained on 19,419 CT lung slices obtained from the publicly available Lung Image Database Consortium and Image Database Resource Initiative dataset. The proposed model achieved the best classification metrics: an accuracy of 0.932; precision, recall, and F1 score of 0.93; and a Cohen's kappa score of 0.85. A statistical evaluation of the classification parameters achieved a p-value <0.0001 for the proposed model. The quantitative and statistical results validate the improved performance of the proposed model compared to state-of-the-art methods. The proposed framework is based on complete CT slices rather than marked annotations and may help improve clinical diagnosis.
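The K-means-with-cosine-distance step in module (i) can be sketched in plain Python. The naive first-k initialization and toy 2-D "pixel feature" vectors are illustrative assumptions; production code would randomize initialization and run on real intensity features:

```python
import math

def cosine_dist(u, v):
    """1 - cosine similarity; small when vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return 1.0 - dot / (nu * nv)

def kmeans_cosine(points, k, iters=20):
    centers = [list(p) for p in points[:k]]  # naive init (assumption)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest center
            idx = min(range(k), key=lambda i: cosine_dist(p, centers[i]))
            clusters[idx].append(p)
        centers = [  # recompute centers as cluster means
            [sum(dim) / len(cl) for dim in zip(*cl)] if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Toy 2-D feature vectors in two directions (e.g., lung vs. background)
pts = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]]
centers, clusters = kmeans_cosine(pts, k=2)
assert sorted(len(c) for c in clusters) == [2, 2]
```

With cosine distance the clustering groups pixels by feature direction rather than magnitude, after which morphological operations would clean up the resulting lung mask.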
Affiliation(s)
- Vidhi Bishnoi
- Indira Gandhi Delhi Technical University for Women, Delhi, India
- Nidhi Goel
- Indira Gandhi Delhi Technical University for Women, Delhi, India

11
Ahmed I, Chehri A, Jeon G, Piccialli F. Automated Pulmonary Nodule Classification and Detection Using Deep Learning Architectures. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:2445-2456. [PMID: 35853048 DOI: 10.1109/tcbb.2022.3192139]
Abstract
Recent advancements in biomedical imaging technologies have created tremendous opportunities for the health care sector and the biomedical community. However, collecting, measuring, and analyzing large volumes of health-related data such as images is a laborious and time-consuming job for medical experts. Artificial intelligence applications (including machine and deep learning systems) therefore help in the early diagnosis of various contagious/cancerous diseases such as lung cancer. As lung or pulmonary cancer may have no apparent initial symptoms, it is essential to develop and promote Computer Aided Detection (CAD) systems that can support medical experts in classifying and detecting lung nodules at early stages. In this article, we address the problem of lung cancer diagnosis by classifying and detecting pulmonary nodules, i.e., benign and malignant, in CT images. To achieve this objective, an automated deep learning-based system is introduced for classifying and detecting lung nodules. In addition, we use novel state-of-the-art detection architectures, including Faster-RCNN, YOLOv3, and SSD, for detection purposes. All deep learning models are evaluated using the publicly available LIDC-IDRI benchmark dataset. The experimental outcomes reveal that the False Positive Rate (FPR) is reduced and accuracy is enhanced.
12
Chang S, Gao Y, Pomeroy MJ, Bai T, Zhang H, Lu S, Pickhardt PJ, Gupta A, Reiter MJ, Gould ES, Liang Z. Exploring Dual-Energy CT Spectral Information for Machine Learning-Driven Lesion Diagnosis in Pre-Log Domain. IEEE Trans Med Imaging 2023; 42:1835-1845. [PMID: 37022248 PMCID: PMC10238622 DOI: 10.1109/tmi.2023.3240847]
Abstract
In this study, we propose a computer-aided diagnosis (CADx) framework for dual-energy spectral CT (DECT), called CADxDE, which operates directly on the transmission data in the pre-log domain to explore spectral information for lesion diagnosis. CADxDE includes material identification and machine learning (ML) based CADx. Benefiting from DECT's capability of performing virtual monoenergetic imaging with the identified materials, the responses of different tissue types (e.g., muscle, water, and fat) in lesions at each energy can be explored by ML for CADx. Without losing essential factors in the DECT scan, a pre-log domain model-based iterative reconstruction is adopted to obtain decomposed material images, which are then used to generate virtual monoenergetic images (VMIs) at n selected energies. While these VMIs share the same anatomy, their contrast distribution patterns contain rich information along the n energies for tissue characterization. Thus, a corresponding ML-based CADx is developed to exploit the energy-enhanced tissue features for differentiating malignant from benign lesions. Specifically, an original image-driven multi-channel three-dimensional convolutional neural network (CNN) and extracted lesion feature-based ML CADx methods are developed to show the feasibility of CADxDE. Results from three pathologically proven clinical datasets showed 4.01% to 14.25% higher AUC (area under the receiver operating characteristic curve) scores than both the conventional DECT data (high and low energy spectra separately) and conventional CT data. The mean gain of >9.13% in AUC scores indicates that the energy spectral-enhanced tissue features from CADxDE have great potential to improve lesion diagnosis performance.
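VMI synthesis from decomposed material maps is, per energy, a weighted sum of the material images by their energy-dependent attenuation coefficients. A toy sketch of that linear combination (the coefficient values and the two-material basis are illustrative placeholders, not measured data or the paper's basis):

```python
# VMI(E)[pixel] = sum over materials of mu_material(E) * material_fraction[pixel]
MU = {  # hypothetical linear attenuation coefficients (1/cm) at two energies
    "water": {40: 0.27, 70: 0.19},
    "fat":   {40: 0.22, 70: 0.17},
}

def synthesize_vmi(material_maps, energy_kev):
    """Combine per-pixel material fractions into one monoenergetic image."""
    n_pixels = len(next(iter(material_maps.values())))
    return [
        sum(MU[m][energy_kev] * material_maps[m][p] for m in material_maps)
        for p in range(n_pixels)
    ]

# Three-pixel toy decomposition: fractions of water and fat per pixel
maps = {"water": [1.0, 0.5, 0.0], "fat": [0.0, 0.5, 1.0]}
vmi40 = synthesize_vmi(maps, 40)
vmi70 = synthesize_vmi(maps, 70)
assert len(vmi40) == 3
assert vmi40[0] > vmi70[0]  # attenuation drops at the higher energy
```

Stacking such VMIs at n energies gives the multi-channel input whose per-energy contrast patterns the CNN exploits.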
Collapse
Affiliation(s)
- Shaojie Chang
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
| | - Yongfeng Gao
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
| | - Marc J. Pomeroy
- Departments of Radiology and Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
| | - Ti Bai
- Department of Radiation Oncology, University of Texas Southwestern Medical Centre, Dallas, TX 75390, USA
| | - Hao Zhang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, NY 10065, USA
| | - Siming Lu
- Departments of Radiology and Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
| | - Perry J. Pickhardt
- Department of Radiology, School of Medicine, University of Wisconsin, Madison, WI 53792, USA
| | - Amit Gupta
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
| | - Michael J. Reiter
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
| | - Elaine S. Gould
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
| | - Zhengrong Liang
- Departments of Radiology and Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
| |
Collapse
|
13
|
Chen Y, Hou X, Yang Y, Ge Q, Zhou Y, Nie S. A Novel Deep Learning Model Based on Multi-Scale and Multi-View for Detection of Pulmonary Nodules. J Digit Imaging 2023; 36:688-699. [PMID: 36544067 PMCID: PMC10039158 DOI: 10.1007/s10278-022-00749-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Revised: 11/03/2022] [Accepted: 12/02/2022] [Indexed: 12/24/2022] Open
Abstract
Lung cancer manifests as pulmonary nodules in its early stage, so early and accurate detection of these nodules is crucial for improving patient survival rates. We propose a novel two-stage model for lung nodule detection. In the candidate-nodule detection stage, a deep learning model based on 3D context information roughly segments the preprocessed image and obtains candidate nodules. In this model, 3D image blocks are fed into the constructed network, which learns the contextual information between the slices within each 3D block. The parameter count of our model is equivalent to that of a 2D convolutional neural network (CNN), yet the model can effectively learn the 3D context information of the nodules. In the false-positive reduction stage, we propose a multi-scale shared convolutional structure model. Our detection model incurs no significant increase in parameters or computation in either the multi-scale or the multi-view stage. The proposed model was evaluated on 888 computed tomography (CT) scans from the LIDC-IDRI dataset and achieved a competition performance metric (CPM) score of 0.957, with an average detection sensitivity of 0.971 at 1.0 FP per scan. Furthermore, an average detection sensitivity of 0.933 at 1.0 FP per scan was achieved on data from Shanghai Pulmonary Hospital. Our model exhibited higher detection sensitivity, a lower false-positive rate, and better generalization than current lung nodule detection methods, and its lower parameter count and computational complexity open more possibilities for clinical application.
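The CPM score cited above is conventionally the mean sensitivity at seven false-positive-per-scan operating points (1/8 through 8, as in the LUNA16 challenge). A minimal sketch with made-up sensitivities, not the paper's measurements:

```python
def competition_performance_metric(sens_at_fp):
    """CPM: mean sensitivity at 1/8, 1/4, 1/2, 1, 2, 4 and 8 FPs per scan."""
    fp_points = [0.125, 0.25, 0.5, 1, 2, 4, 8]
    return sum(sens_at_fp[fp] for fp in fp_points) / len(fp_points)

# Hypothetical sensitivities at each operating point
sens = {0.125: 0.90, 0.25: 0.93, 0.5: 0.95, 1: 0.96, 2: 0.97, 4: 0.98, 8: 0.99}
cpm = competition_performance_metric(sens)
```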
Collapse
Affiliation(s)
- Yang Chen
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
| | - Xuewen Hou
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
| | - Yifeng Yang
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
| | - Qianqian Ge
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
| | - Yan Zhou
- Department of Radiology, School of Medicine, Renji Hospital, Shanghai Jiao Tong University, Shanghai, 200127, China.
| | - Shengdong Nie
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China.
| |
Collapse
|
14
|
Nandipati BL, Devarakonda N. Effective lung cancer diagnosis using multi-focus fusion of CT and PET images with deep learning strategies. THE IMAGING SCIENCE JOURNAL 2023. [DOI: 10.1080/13682199.2023.2183313] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/01/2023]
|
15
|
Sebastian AE, Dua D. Lung Nodule Detection via Optimized Convolutional Neural Network: Impact of Improved Moth Flame Algorithm. SENSING AND IMAGING 2023; 24:11. [PMID: 36936054 PMCID: PMC10009866 DOI: 10.1007/s11220-022-00406-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/12/2021] [Revised: 09/30/2022] [Accepted: 11/02/2022] [Indexed: 06/18/2023]
Abstract
Lung cancer is a high-risk disease that affects people all over the world, and lung nodules are the most common sign of early lung cancer. Since early identification of lung cancer can considerably improve a patient's chances of survival, an accurate and efficient nodule detection system is essential. Automatic lung nodule recognition reduces radiologists' workload as well as the risk of misdiagnosis and missed diagnosis. Hence, this article develops a new lung nodule detection model with four stages: image pre-processing, segmentation, feature extraction, and classification. Pre-processing is the first step, in which the input image is subjected to a series of operations. The Otsu thresholding model is then used to segment the pre-processed images. In the third stage, Local Binary Pattern (LBP) features are extracted, which are then classified via an optimized Convolutional Neural Network (CNN). Here, the activation function and convolutional layer count of the CNN are optimally tuned via a proposed algorithm known as Improved Moth Flame Optimization (IMFO). Finally, the merit of the scheme is validated through analysis of several performance measures. In particular, the accuracy of the proposed work is 6.85%, 2.91%, 1.75%, 0.73%, 1.83%, and 4.05% superior to the extant SVM, KNN, CNN, MFO, WTEEB and GWO + FRVM methods, respectively.
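The Otsu segmentation step named above chooses the gray-level threshold that maximizes between-class variance over the image histogram. A minimal, self-contained sketch on a toy 8-level histogram (not the paper's implementation):

```python
def otsu_threshold(hist):
    """Return the threshold t maximizing between-class variance.

    hist: list where hist[g] is the count of pixels with gray level g;
    pixels with level <= t are assigned to the background class.
    """
    total = sum(hist)
    total_moment = sum(g * h for g, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = m0 = 0.0  # running weight and first moment of the background class
    for t in range(len(hist) - 1):
        w0 += hist[t]
        m0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = m0 / w0, (total_moment - m0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy histogram: dark peak at levels 0-2, bright peak at 6-7
t = otsu_threshold([8, 10, 6, 0, 0, 1, 9, 6])
```

The threshold lands in the valley between the two peaks, which is what makes Otsu's method a natural fit for separating nodule-candidate foreground from background.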
Collapse
Affiliation(s)
| | - Disha Dua
- Indira Gandhi Delhi Technical University for Women, Delhi, Delhi, India
| |
Collapse
|
16
|
Saleem MA, Thien Le N, Asdornwised W, Chaitusaney S, Javeed A, Benjapolakul W. Sooty Tern Optimization Algorithm-Based Deep Learning Model for Diagnosing NSCLC Tumours. SENSORS (BASEL, SWITZERLAND) 2023; 23:2147. [PMID: 36850744 PMCID: PMC9959990 DOI: 10.3390/s23042147] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Revised: 02/05/2023] [Accepted: 02/08/2023] [Indexed: 06/18/2023]
Abstract
Lung cancer is one of the most common causes of cancer deaths in the modern world. Screening of lung nodules is essential for early recognition to facilitate treatment that improves the rate of patient rehabilitation. Although several research works have been conducted in this domain, increased accuracy in lung cancer detection remains vital for sustaining patient survival, and classical systems fail to segment cancer cells of different sizes accurately and reliably. This paper proposes a sooty tern optimization algorithm-based deep learning (DL) model for diagnosing non-small cell lung cancer (NSCLC) tumours with increased accuracy. The diagnostic model adopts the Otsu segmentation method to isolate the lung nodules. The sooty tern optimization algorithm (SHOA) is then adopted for partitioning the cancer nodules by selecting the best characteristics, which aids in improving diagnostic accuracy. A local binary pattern (LBP) is further utilized for retrieving appropriate features from the lung nodules. In addition, CNN- and GRU-based classifiers identify whether the lung nodules are malignant or non-malignant based on the features retrieved during the diagnosing process. The experimental results of this SHOA-optimized DNN model achieved an accuracy of 98.32%, better than the baseline schemes used for comparison.
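The LBP feature-retrieval step above encodes each pixel by thresholding its 8 neighbours against the centre value, giving an 8-bit texture code. A minimal sketch for a single 3x3 patch (the bit ordering here is one common convention, not necessarily the paper's):

```python
def lbp_code(patch):
    """8-neighbour local binary pattern code for the centre of a 3x3 patch."""
    c = patch[1][1]
    # Neighbours taken clockwise from the top-left corner; bit i is set
    # when neighbour i is >= the centre value.
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum((1 << i) for i, n in enumerate(neighbours) if n >= c)

code = lbp_code([[6, 5, 2],
                 [7, 6, 1],
                 [9, 8, 7]])
```

A histogram of these codes over a nodule region is what typically serves as the texture feature vector fed to the downstream classifiers.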
Collapse
Affiliation(s)
- Muhammad Asim Saleem
- Center of Excellence in Artificial Intelligence, Machine Learning and Smart Grid Technology, Department of Electrical Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok 10330, Thailand
| | - Ngoc Thien Le
- Center of Excellence in Artificial Intelligence, Machine Learning and Smart Grid Technology, Department of Electrical Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok 10330, Thailand
| | - Widhyakorn Asdornwised
- Center of Excellence in Artificial Intelligence, Machine Learning and Smart Grid Technology, Department of Electrical Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok 10330, Thailand
| | - Surachai Chaitusaney
- Center of Excellence in Artificial Intelligence, Machine Learning and Smart Grid Technology, Department of Electrical Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok 10330, Thailand
| | - Ashir Javeed
- Aging Research Center, Karolinska Institutet, 171 65 Stockholm, Sweden
| | - Watit Benjapolakul
- Center of Excellence in Artificial Intelligence, Machine Learning and Smart Grid Technology, Department of Electrical Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok 10330, Thailand
| |
Collapse
|
17
|
Wang T, Li Z, Yu H, Duan C, Feng W, Chang L, Yu J, Liu F, Gao J, Zang Y, Luo Z, Liu H, Zhang Y, Zhou X. Prediction of microvascular invasion in hepatocellular carcinoma based on preoperative Gd-EOB-DTPA-enhanced MRI: Comparison of predictive performance among 2D, 2D-expansion and 3D deep learning models. Front Oncol 2023; 13:987781. [PMID: 36816963 PMCID: PMC9936232 DOI: 10.3389/fonc.2023.987781] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Accepted: 01/20/2023] [Indexed: 02/05/2023] Open
Abstract
Purpose To evaluate and compare the predictive performance of different deep learning models using gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid (Gd-EOB-DTPA)-enhanced MRI in predicting microvascular invasion (MVI) in hepatocellular carcinoma. Methods The data of 233 patients with pathologically confirmed hepatocellular carcinoma (HCC) treated at our hospital from June 2016 to June 2021 were retrospectively analyzed. Three deep learning models were constructed based on three different methods of delineating the region of interest (ROI) using the Darwin Scientific Research Platform (Beijing Yizhun Intelligent Technology Co., Ltd., China). Manual segmentation of the ROI was performed on T1-weighted axial hepatobiliary-phase images. The samples were divided into a training set (N=163) and a validation set (N=70) at a ratio of 7:3. The receiver operating characteristic (ROC) curve was used to evaluate the predictive performance of the three models, and their sensitivity, specificity and accuracy were assessed. Results Among the 233 HCC patients, 109 were pathologically MVI positive (91 men and 18 women; average age 58.20 ± 10.17 years) and 124 were MVI negative (93 men and 31 women; average age 58.26 ± 10.20 years). Among the three deep learning models, the 2D-expansion-DL model and the 3D-DL model showed relatively good performance, with AUC values of 0.70 (P=0.003; 95% CI 0.57-0.82) and 0.72 (P<0.001; 95% CI 0.60-0.84), respectively. For the 2D-expansion-DL model, the accuracy, sensitivity and specificity were 0.7143, 0.739 and 0.688, respectively; for the 3D-DL model, they were 0.6714, 0.800 and 0.575. Compared with the 3D-DL model (based on 3D-ResNet), the 2D-DL model is smaller in scale and runs faster: the frames per second (FPS) for the 2D-DL model is 244.7566, much higher than that of the 3D-DL model (73.3374).
Conclusion The deep learning model based on Gd-EOB-DTPA-enhanced MRI could preoperatively evaluate MVI in HCC. Considering that the predictive performance of the 2D-expansion-DL model was almost the same as that of the 3D-DL model and the former is relatively easy to implement, we prefer the 2D-expansion-DL model for practical research.
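The accuracy, sensitivity and specificity figures reported above all derive from a binary confusion matrix. A minimal, self-contained sketch on toy MVI labels (not the study's data):

```python
def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity and specificity from binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {"accuracy": (tp + tn) / len(y_true),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp)}

# Toy MVI-positive/negative labels and hypothetical model predictions
m = diagnostic_metrics([1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
                       [1, 1, 1, 0, 0, 0, 0, 0, 1, 1])
```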
Collapse
Affiliation(s)
- Tao Wang
- Department of Radiology, Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
| | - Zhen Li
- School of Medical Imaging, Weifang Medical University, Weifang, Shandong, China
| | - Haiyang Yu
- Department of Radiology, Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
| | - Chongfeng Duan
- Department of Radiology, Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
| | - Weihua Feng
- Department of Radiology, Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
| | | | - Jing Yu
- Yizhun Medical AI Co., Ltd, Beijing, China
| | - Fang Liu
- Department of Radiology, Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
| | - Juan Gao
- Department of Cardiology, Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
| | - Yichen Zang
- Department of Ultrasound, Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
| | - Ziwei Luo
- Department of Radiology, Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
| | - Hao Liu
- Yizhun Medical AI Co., Ltd, Beijing, China
| | - Yu Zhang
- Department of Radiology, Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
| | - Xiaoming Zhou
- Department of Radiology, Affiliated Hospital of Qingdao University, Qingdao, Shandong, China. *Correspondence: Xiaoming Zhou
| |
Collapse
|
18
|
Ryalat MH, Dorgham O, Tedmori S, Al-Rahamneh Z, Al-Najdawi N, Mirjalili S. Harris hawks optimization for COVID-19 diagnosis based on multi-threshold image segmentation. Neural Comput Appl 2023; 35:6855-6873. [PMID: 36471798 PMCID: PMC9714421 DOI: 10.1007/s00521-022-08078-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2022] [Accepted: 11/22/2022] [Indexed: 12/04/2022]
Abstract
Digital image processing techniques and algorithms have become a great tool to support medical experts in identifying, studying, and diagnosing certain diseases. Image segmentation methods are among the most widely used techniques in this area, simplifying image representation and analysis. During the last few decades, many approaches have been proposed for image segmentation, among which multilevel thresholding methods have shown better results than most other methods. Traditional statistical approaches such as the Otsu and Kapur methods are the standard benchmark algorithms for automatic image thresholding. Such algorithms provide optimal results, yet they suffer from high computational costs when multilevel thresholding is required, which can be framed as an optimization problem. In this work, the Harris hawks optimization technique is combined with Otsu's method to effectively reduce the required computational cost while maintaining optimal outcomes. The proposed approach is tested on publicly available imaging datasets, including chest images with clinical and genomic correlates representing a rural COVID-19-positive (COVID-19-AR) population. According to various performance measures, the proposed approach achieves a substantial decrease in computational cost and time to converge while maintaining a level of quality highly competitive with the Otsu method for the same threshold values.
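The computational burden that motivates metaheuristics such as Harris hawks optimization can be made concrete: an exhaustive multilevel-thresholding search must score every combination of k thresholds over the gray-level range, and the number of candidate threshold sets grows combinatorially with k. A short illustration for 8-bit images:

```python
from math import comb

# Number of candidate threshold sets for k thresholds over 256 gray levels.
# Exhaustive search scores every set, so the cost explodes with k, which is
# why population-based optimizers are used to search this space instead.
levels = 256
search_space = {k: comb(levels - 1, k) for k in (1, 2, 3, 4, 5)}
```

Even at k=3 the exhaustive search already faces millions of candidates, so evaluating Otsu's between-class variance only at positions proposed by the optimizer cuts the cost dramatically.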
Collapse
Affiliation(s)
- Mohammad Hashem Ryalat
- Prince Abdullah Bin Ghazi Faculty of Information and Communication Technology, Al-Balqa Applied University, Al-Salt, 19117, Jordan
| | - Osama Dorgham
- Prince Abdullah Bin Ghazi Faculty of Information and Communication Technology, Al-Balqa Applied University, Al-Salt, 19117, Jordan; School of Information Technology, Skyline University College, Sharjah, United Arab Emirates
| | - Sara Tedmori
- King Hussein School of Computing Sciences, Princess Sumaya University for Technology, Amman, 11941, Jordan
| | - Zainab Al-Rahamneh
- Prince Abdullah Bin Ghazi Faculty of Information and Communication Technology, Al-Balqa Applied University, Al-Salt, 19117, Jordan
| | - Nijad Al-Najdawi
- Prince Abdullah Bin Ghazi Faculty of Information and Communication Technology, Al-Balqa Applied University, Al-Salt, 19117, Jordan
| | - Seyedali Mirjalili
- Centre for Artificial Intelligence Research and Optimisation, Torrens University, Adelaide, SA 5000, Australia; Yonsei Frontier Lab, Yonsei University, Seoul, South Korea
| |
Collapse
|
19
|
Liu J, Cao L, Akin O, Tian Y. Robust and accurate pulmonary nodule detection with self-supervised feature learning on domain adaptation. FRONTIERS IN RADIOLOGY 2022; 2:1041518. [PMID: 37492669 PMCID: PMC10365286 DOI: 10.3389/fradi.2022.1041518] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/11/2022] [Accepted: 11/28/2022] [Indexed: 07/27/2023]
Abstract
Medical imaging data annotation is expensive and time-consuming. Supervised deep learning approaches may encounter overfitting when trained with limited medical data, which further affects the robustness of computer-aided diagnosis (CAD) on CT scans collected by various scanner vendors. Additionally, the high false-positive rate of automatic lung nodule detection methods prevents their application in daily clinical routine diagnosis. To tackle these issues, we first introduce a novel self-learning schema that trains a pre-trained model by learning rich feature representations from large-scale unlabeled data without extra annotation, which guarantees consistent detection performance over novel datasets. Then, a 3D feature pyramid network (3DFPN) is proposed for high-sensitivity nodule detection by extracting multi-scale features, where the weights of the backbone network are initialized from the pre-trained model and then fine-tuned in a supervised manner. Further, a High Sensitivity and Specificity (HS2) network is proposed to reduce false positives by tracking the appearance changes among continuous CT slices on Location History Images (LHI) for the detected nodule candidates. The proposed method's performance and robustness are evaluated on several publicly available datasets, including LUNA16, SPIE-AAPM, LungTIME, and HMS. Our proposed detector achieves the state-of-the-art result of 90.6% sensitivity at 1/8 false positives per scan on the LUNA16 dataset. The framework's generalizability has also been evaluated on three additional datasets (SPIE-AAPM, LungTIME, and HMS) captured by different types of CT scanners.
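An operating point such as "sensitivity at 1/8 false positives per scan" is read off a FROC-style sweep: detections are ranked by confidence, and the sensitivity is taken at the lowest score threshold whose false-positive count stays within the per-scan budget. A minimal sketch on a hypothetical pool of detections (not the paper's evaluation code):

```python
def sensitivity_at_fp_rate(detections, n_true, n_scans, max_fp_per_scan):
    """Sensitivity at the score threshold whose FP rate stays within budget.

    detections: (confidence score, is_true_positive) pairs pooled over scans.
    """
    tp = fp = 0
    best_sens = 0.0
    for score, is_tp in sorted(detections, key=lambda d: -d[0]):
        if is_tp:
            tp += 1
        else:
            fp += 1
        if fp / n_scans <= max_fp_per_scan:
            best_sens = tp / n_true
    return best_sens

# Toy pool of 6 detections over 8 scans with 5 true nodules,
# evaluated at the 1/8 FP-per-scan operating point.
sens = sensitivity_at_fp_rate(
    [(0.99, True), (0.95, True), (0.90, False),
     (0.85, True), (0.80, False), (0.70, True)],
    n_true=5, n_scans=8, max_fp_per_scan=0.125)
```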
Collapse
Affiliation(s)
- Jingya Liu
- The City College of New York, New York, NY, USA
| | | | - Oguz Akin
- Memorial Sloan Kettering Cancer Center, New York, NY, USA
| | - Yingli Tian
- The City College of New York, New York, NY, USA
| |
Collapse
|
20
|
Bilal A, Shafiq M, Fang F, Waqar M, Ullah I, Ghadi YY, Long H, Zeng R. IGWO-IVNet3: DL-Based Automatic Diagnosis of Lung Nodules Using an Improved Gray Wolf Optimization and InceptionNet-V3. SENSORS (BASEL, SWITZERLAND) 2022; 22:9603. [PMID: 36559970 PMCID: PMC9786099 DOI: 10.3390/s22249603] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/14/2022] [Revised: 11/28/2022] [Accepted: 12/06/2022] [Indexed: 06/17/2023]
Abstract
Artificial intelligence plays an essential role in diagnosing lung cancer, which is notoriously difficult to diagnose until it has progressed to a late stage, making it a leading cause of cancer-related mortality; if not treated early, it is fatal. Initial diagnosis of malignant nodules is often made using chest radiography (X-ray) and computed tomography (CT) scans; nevertheless, the possibility of benign nodules leads to wrong choices, since benign and malignant nodules appear very similar in their early phases. Additionally, radiologists have a hard time viewing and categorizing lung abnormalities, so lung cancer screenings are often performed with the aid of computer-aided diagnostic technologies. Computer scientists have presented many methods for identifying lung cancer in recent years, but low-quality images compromise the segmentation process, rendering traditional lung cancer prediction algorithms inaccurate. This article proposes a highly effective strategy for identifying and categorizing lung cancer. Noise in the images is reduced using a weighted filter, and the improved Gray Wolf Optimization method is applied before segmentation with watershed modification and dilation operations. We used InceptionNet-V3 to classify lung cancer into three groups, and it performed well compared to prior studies: 98.96% accuracy, 94.74% specificity, and 100% sensitivity.
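The dilation operation mentioned in the segmentation pipeline grows a binary mask by one structuring-element radius, closing small gaps around a candidate region. A minimal sketch with a 3x3 square structuring element on a toy mask (an illustrative stand-in for the paper's morphological step):

```python
def dilate(mask):
    """Binary dilation with a 3x3 square structuring element."""
    rows, cols = len(mask), len(mask[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # A pixel turns on if any pixel in its 3x3 neighbourhood is on.
            out[r][c] = int(any(
                mask[rr][cc]
                for rr in range(max(0, r - 1), min(rows, r + 2))
                for cc in range(max(0, c - 1), min(cols, c + 2))))
    return out

grown = dilate([[0, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 0]])
```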
Collapse
Affiliation(s)
- Anas Bilal
- College of Information Science and Technology, Hainan Normal University, Haikou 571158, China
| | - Muhammad Shafiq
- School of Information Engineering, Qujing Normal University, Qujing 655011, China
| | - Fang Fang
- College of Information Engineering, Hainan Vocational University of Science and Technology, Haikou 571126, China
| | - Muhammad Waqar
- Department of Computer Science, COMSATS University, Islamabad 45550, Pakistan
| | - Inam Ullah
- BK21 Chungbuk Information Technology Education and Research Center, Chungbuk National University, Cheongju-si 28644, Republic of Korea
| | - Yazeed Yasin Ghadi
- Department of Computer Science, Al Ain University, Abu Dhabi 64141, United Arab Emirates
| | - Haixia Long
- College of Information Science and Technology, Hainan Normal University, Haikou 571158, China
| | - Rao Zeng
- College of Information Science and Technology, Hainan Normal University, Haikou 571158, China
| |
Collapse
|
21
|
Hussain MA, Gogoi L. Performance analyses of five neural network classifiers on nodule classification in lung CT images using WEKA: a comparative study. Phys Eng Sci Med 2022; 45:1193-1204. [PMID: 36315381 DOI: 10.1007/s13246-022-01187-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 10/08/2022] [Indexed: 11/06/2022]
Abstract
In this report, we present our work on performance analyses of five different neural network classifiers, viz. MLP, DL4JMLP, logistic regression, SGD and the simple logistic classifier, for lung nodule detection using the WEKA interface. To the best of our knowledge, this report demonstrates the first use of WEKA for comparative performance analyses of neural network classifiers in identifying lung nodules from lung CT images. A total of 624 handcrafted features from 52 lung CT images collected randomly from the Lung Image Database Consortium (LIDC) were fed into WEKA to evaluate the performance of the classifiers under four different categories of computation. Performance was observed in terms of 11 important parameters, viz. accuracy, kappa statistic, root mean squared error, TPR, FPR, precision, sensitivity, F-measure, MCC, ROC area and PRC area. Results show 86.53%, 77.77%, 55.55%, 94.44% and 88.88% accuracy, as well as ROC areas of 0.91, 0.86, 0.68, 0.91 and 0.93, for the MLP, DL4JMLP, logistic, SGD and simple logistic classifiers, respectively, at tenfold cross-validation with 66% of the dataset used for training and 34% for testing and validation. The SGD classifier was found to be the best performing, followed by the simple logistic classifier.
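Among the metrics listed above, the kappa statistic is the least self-explanatory: it is chance-corrected agreement, (p_o - p_e) / (1 - p_e), where p_o is observed accuracy and p_e the agreement expected from the marginal label frequencies. A minimal sketch on toy labels (not the report's data):

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: agreement corrected for chance agreement."""
    n = len(y_true)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n  # observed agreement
    labels = set(y_true) | set(y_pred)
    # Expected agreement from the marginal frequencies of each label
    pe = sum((sum(t == l for t in y_true) / n) * (sum(p == l for p in y_pred) / n)
             for l in labels)
    return (po - pe) / (1 - pe)

kappa = cohens_kappa([1, 1, 1, 0, 0, 0, 0, 0],
                     [1, 1, 0, 0, 0, 0, 0, 1])
```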
Collapse
Affiliation(s)
- Md Anwar Hussain
- Department of Electronics and Communication Engineering, North Eastern Regional Institute of Science and Technology, Nirjuli, 791109, India
| | - Lakshipriya Gogoi
- Department of Electronics and Communication Engineering, North Eastern Regional Institute of Science and Technology, Nirjuli, 791109, India.
| |
Collapse
|
22
|
Yousefzadeh M, Hasanpour M, Zolghadri M, Salimi F, Yektaeian Vaziri A, Mahmoudi Aqeel Abadi A, Jafari R, Esfahanian P, Nazem-Zadeh MR. Deep learning framework for prediction of infection severity of COVID-19. Front Med (Lausanne) 2022; 9:940960. [PMID: 36059818 PMCID: PMC9428758 DOI: 10.3389/fmed.2022.940960] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2022] [Accepted: 07/15/2022] [Indexed: 11/13/2022] Open
Abstract
With the onset of the COVID-19 pandemic, quantifying the condition of positively diagnosed patients is of paramount importance. Chest CT scans can be used to measure the severity of a lung infection and isolate the involvement sites in order to increase awareness of a patient's disease progression. In this work, we developed a deep learning framework for lung infection severity prediction. To this end, we collected a dataset of 232 chest CT scans, incorporated two public datasets with an additional 59 scans for model training, and used two external test sets with 21 scans for evaluation. On an input chest Computed Tomography (CT) scan, our framework performs, in parallel, lung lobe segmentation using a pre-trained model and infection segmentation using three distinct trained SE-ResNet18-based U-Net models, one for each of the axial, coronal, and sagittal views. Given the lobe and infection segmentation masks, we calculate the infection severity percentage in each lobe and classify that percentage into 6 categories of infection severity score using a k-nearest neighbors (k-NN) model. The lobe segmentation model achieved a Dice Similarity Score (DSC) in the range of [0.918, 0.981] for the different lung lobes, and our infection segmentation models attained DSC scores of 0.7254 and 0.7105 on our two test sets, respectively. Two resident radiologists assigned the same infection segmentation tasks obtained DSC scores of 0.7281 and 0.6693 on the two test sets. Finally, performance on the infection severity score over the entire test datasets was calculated: the framework achieved a Mean Absolute Error (MAE) of 0.505 ± 0.029, while the resident radiologists' was 0.571 ± 0.039.
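The Dice Similarity Score used throughout the evaluation above reduces to an overlap ratio, 2|A∩B| / (|A|+|B|). A minimal sketch on flat binary masks (toy values, not the study's segmentations):

```python
def dice(mask_a, mask_b):
    """Dice similarity between two flattened binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2 * inter / (sum(mask_a) + sum(mask_b))

# Hypothetical predicted vs. reference infection masks
pred = [1, 1, 1, 0, 0, 1]
ref  = [1, 1, 0, 0, 1, 1]
dsc = dice(pred, ref)
```

The per-lobe severity percentage that feeds the k-NN classifier is the analogous ratio of infected voxels to lobe voxels within each lobe mask.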
Collapse
Affiliation(s)
- Mehdi Yousefzadeh
- Department of Physics, Shahid Beheshti University, Tehran, Iran
- School of Computer Science, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
| | - Masoud Hasanpour
- Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
| | - Mozhdeh Zolghadri
- Department of Medical Physics, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
| | - Fatemeh Salimi
- Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
| | - Ava Yektaeian Vaziri
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
| | - Abolfazl Mahmoudi Aqeel Abadi
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
| | - Ramezan Jafari
- Department of Radiology, Health Research Center, Baqiyatallah University of Medical Sciences, Tehran, Iran
| | - Parsa Esfahanian
- School of Computer Science, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
| | - Mohammad-Reza Nazem-Zadeh
- Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- *Correspondence: Mohammad-Reza Nazem-Zadeh
| |
Collapse
|
23
|
Chen X, Lei Y, Su J, Yang H, Ni W, Yu J, Gu Y, Mao Y. A Review of Artificial Intelligence in Cerebrovascular Disease Imaging: Applications and Challenges. Curr Neuropharmacol 2022; 20:1359-1382. [PMID: 34749621 PMCID: PMC9881077 DOI: 10.2174/1570159x19666211108141446] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Revised: 09/07/2021] [Accepted: 10/10/2021] [Indexed: 01/10/2023] Open
Abstract
BACKGROUND A variety of emerging medical imaging technologies based on artificial intelligence have been widely applied to many diseases, but they remain of limited use in the cerebrovascular field even though these diseases can lead to catastrophic consequences. OBJECTIVE This work aims to discuss the current challenges and future directions of artificial intelligence technology in cerebrovascular diseases by reviewing the existing literature on applications in computer-aided detection, prediction and treatment of cerebrovascular diseases. METHODS Based on artificial intelligence applications in four representative cerebrovascular diseases, namely intracranial aneurysm, arteriovenous malformation, arteriosclerosis and moyamoya disease, this paper systematically reviews studies published between 2006 and 2021 in five databases: National Center for Biotechnology Information, Elsevier Science Direct, IEEE Xplore Digital Library, Web of Science and Springer Link. Three refinement steps were further conducted after identifying the relevant literature from these databases. RESULTS Regarding research topics, most of the included publications involved computer-aided detection and prediction of aneurysms, while studies on arteriovenous malformation, arteriosclerosis and moyamoya disease showed an upward trend in recent years. Both conventional machine learning and deep learning algorithms were utilized in these publications, with machine learning techniques accounting for the larger proportion. CONCLUSION Algorithms related to artificial intelligence, especially deep learning, are promising tools for medical imaging analysis and will enhance the performance of computer-aided detection, prediction and treatment of cerebrovascular diseases.
Collapse
Affiliation(s)
- Xi Chen
- School of Information Science and Technology, Fudan University, Shanghai, China. These authors contributed equally to this work
| | - Yu Lei
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China. These authors contributed equally to this work
| | - Jiabin Su
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
| | - Heng Yang
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
| | - Wei Ni
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
| | - Jinhua Yu
- School of Information Science and Technology, Fudan University, Shanghai, China; ,Address correspondence to these authors at the School of Information Science and Technology, Fudan University, Shanghai 200433, China; Tel: +86 021 65643202; Fax: +86 021 65643202; E-mail: Department of Neurosurgery, Huashan Hospital of Fudan University, Shanghai 200040, China; Tel: +86 021 52889999; Fax: +86 021 62489191; E-mail:
| | - Yuxiang Gu
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China,Address correspondence to these authors at the School of Information Science and Technology, Fudan University, Shanghai 200433, China; Tel: +86 021 65643202; Fax: +86 021 65643202; E-mail: Department of Neurosurgery, Huashan Hospital of Fudan University, Shanghai 200040, China; Tel: +86 021 52889999; Fax: +86 021 62489191; E-mail:
| | - Ying Mao
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
| |
Collapse
|
24
|
Benign-malignant classification of pulmonary nodule with deep feature optimization framework. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103701] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
|
25
|
Meng Q, Li B, Gao P, Liu W, Zhou P, Ding J, Zhang J, Ge H. Development and Validation of a Risk Stratification Model of Pulmonary Ground-Glass Nodules Based on Complementary Lung-RADS 1.1 and Deep Learning Scores. Front Public Health 2022; 10:891306. [PMID: 35677762 PMCID: PMC9168898 DOI: 10.3389/fpubh.2022.891306] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Accepted: 04/29/2022] [Indexed: 11/13/2022] Open
Abstract
Purpose To assess the value of novel deep learning (DL) scores combined with the complementary lung imaging reporting and data system 1.1 (cLung-RADS 1.1) in managing the risk stratification of ground-glass nodules (GGNs), and thereby improve the efficiency of lung cancer (LC) screening in China. Materials and Methods Overall, 506 patients with 561 GGNs on routine computed tomography images, obtained between January 2017 and March 2021, were enrolled in this single-center, retrospective Chinese study. The cLung-RADS 1.1 had been previously validated, and the DL algorithms were based on a multi-stage, three-dimensional DL-based convolutional neural network. The DL-based cLung-RADS 1.1 model was created by combining the DL risk scores with the cLung-RADS 1.1 category. The recall rate, precision, accuracy, per-class F1 score, weighted average F1 score (F1weighted), Matthews correlation coefficient (MCC), and area under the curve (AUC) were used to evaluate the performance of the DL-based cLung-RADS 1.1 model. Results After long-term follow-up, 95.72% (537/561) of the GGNs in our study were neoplastic lesions. Compared with the cLung-RADS 1.1 model or the DL model alone, the DL-based cLung-RADS 1.1 model achieved excellent performance, with F1 scores of 95.96% and 95.58%, F1weighted values of 97.49% and 96.62%, accuracies of 92.38% and 91.77%, and MCCs of 32.43% and 37.15% in the training and validation tests, respectively. The combined model achieved the best AUCs of 0.753 (0.526–0.980) and 0.734 (0.585–0.884) for the training and validation tests, respectively. Conclusion The DL-based cLung-RADS 1.1 model shows the best performance in risk stratification management of GGNs, which demonstrates substantial promise for developing a more effective personalized lung neoplasm management paradigm for LC screening in China.
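The abstract above quantifies performance with per-class F1, weighted F1, MCC and accuracy. As a reference for how such binary-classification metrics are derived from confusion-matrix counts, here is a minimal Python sketch (the counts are hypothetical illustration values, not study data, and this is not the authors' code):

```python
import math

def binary_metrics(tp, fp, fn, tn):
    """Precision, recall (sensitivity), F1, MCC and accuracy from a 2x2 confusion matrix."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, mcc, accuracy

# hypothetical counts, NOT values from the study
p, r, f1, mcc, acc = binary_metrics(tp=90, fp=10, fn=10, tn=90)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f} MCC={mcc:.2f} acc={acc:.2f}")
```

Note that F1 ignores true negatives while MCC uses all four cells, which is why the two can diverge sharply on imbalanced data such as the 95.72% neoplastic cohort above.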
Collapse
Affiliation(s)
- Qingcheng Meng
- Department of Radiology, The Affiliated Cancer Hospital of Zhengzhou University, Zhengzhou, China
| | - Bing Li
- Department of Radiotherapy, The Affiliated Cancer Hospital of Zhengzhou University, Zhengzhou, China
| | - Pengrui Gao
- Department of Radiology, The Affiliated Cancer Hospital of Zhengzhou University, Zhengzhou, China
| | - Wentao Liu
- Department of Radiology, The Affiliated Cancer Hospital of Zhengzhou University, Zhengzhou, China
| | - Peijin Zhou
- Department of Radiology, The People's Hospital of Nanzhao Country, Nanyang, China
| | - Jia Ding
- Yizhun Medical AI Co. Ltd, Beijing, China
| | | | - Hong Ge
- Department of Radiotherapy, The Affiliated Cancer Hospital of Zhengzhou University, Zhengzhou, China
| |
Collapse
|
26
|
Painuli D, Bhardwaj S, Köse U. Recent advancement in cancer diagnosis using machine learning and deep learning techniques: A comprehensive review. Comput Biol Med 2022; 146:105580. [PMID: 35551012 DOI: 10.1016/j.compbiomed.2022.105580] [Citation(s) in RCA: 38] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2022] [Revised: 04/14/2022] [Accepted: 04/30/2022] [Indexed: 02/07/2023]
Abstract
As the second leading cause of mortality worldwide, cancer has been identified as a perilous disease for human beings, and advanced-stage diagnosis may do little to safeguard patients from death. Thus, efforts to provide a sustainable architecture with proven cancer-prevention estimates and provision for early diagnosis of cancer are the need of the hour. The advent of machine learning methods enriched the cancer diagnosis area with their efficiency and lower error rate compared with humans. A significant revolution has been witnessed in the development of machine learning and deep learning assisted systems for the segmentation and classification of various cancers during the past decade. This paper reviews the detection of various cancer types from different data modalities using machine learning and deep learning-based methods, along with the feature extraction techniques and benchmark datasets used in studies from the past six years. The focus of this study is to review, analyse, classify, and address recent developments in the detection and diagnosis of six types of cancer, i.e., breast, lung, liver, skin, brain and pancreatic cancer, using machine learning and deep learning techniques. State-of-the-art techniques are clustered into groups, and their results are examined through key performance indicators such as accuracy, area under the curve, precision, sensitivity and Dice score on benchmark datasets; the review concludes with future research challenges.
Collapse
Affiliation(s)
- Deepak Painuli
- Department of Computer Science and Engineering, Gurukula Kangri Vishwavidyalaya, Haridwar, India.
| | - Suyash Bhardwaj
- Department of Computer Science and Engineering, Gurukula Kangri Vishwavidyalaya, Haridwar, India
| | - Utku Köse
- Department of Computer Engineering, Suleyman Demirel University, Isparta, Turkey
| |
Collapse
|
27
|
Huang H, Wu R, Li Y, Peng C. Self-Supervised Transfer Learning Based on Domain Adaptation for Benign-Malignant Lung Nodule Classification on Thoracic CT. IEEE J Biomed Health Inform 2022; 26:3860-3871. [PMID: 35503850 DOI: 10.1109/jbhi.2022.3171851] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Spatial heterogeneity is an important indicator of the malignancy of lung nodules in lung cancer diagnosis. Compared with 2D nodule CT images, 3D volumes containing the entire nodule hold richer discriminative information. However, for deep learning methods driven by massive data, effectively capturing the 3D discriminative features of nodules from limited labeled samples is a challenging task. Unlike previous models that applied transfer learning in a 2D pattern or trained 3D models from scratch, we develop a self-supervised transfer learning based on domain adaptation (SSTL-DA) 3D CNN framework for benign-malignant lung nodule classification. First, a data pre-processing strategy termed adaptive slice selection (ASS) is developed to eliminate redundant noise in the input samples with lung nodules. Then, a self-supervised learning network is constructed to learn robust image representations from CT images. Finally, a transfer learning method based on domain adaptation is designed to obtain discriminative features for classification. The proposed SSTL-DA method has been assessed on the LIDC-IDRI benchmark dataset, where it obtains an accuracy of 91.07% and an AUC of 95.84%. These results demonstrate that the SSTL-DA model achieves quite a competitive classification performance compared with state-of-the-art approaches.
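The SSTL-DA paper designs its own domain-adaptation stage, which is not reproduced here. As a generic illustration of the underlying idea of aligning feature statistics between a source and a target domain, the following NumPy sketch implements classic CORAL-style correlation alignment (a different, well-known technique used purely for illustration; `coral_align` and the synthetic features are our own assumptions, not the authors' method):

```python
import numpy as np

def coral_align(Xs, Xt, eps=1e-5):
    """Whiten source features, then re-color them with target second-order statistics."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])

    def sqrtm(C, inv=False):
        # symmetric PSD matrix square root via eigendecomposition
        w, V = np.linalg.eigh(C)
        w = np.maximum(w, eps)
        d = 1.0 / np.sqrt(w) if inv else np.sqrt(w)
        return (V * d) @ V.T

    # center source, whiten with Cs^{-1/2}, re-color with Ct^{1/2}, shift to target mean
    return (Xs - Xs.mean(axis=0)) @ sqrtm(Cs, inv=True) @ sqrtm(Ct) + Xt.mean(axis=0)

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(500, 4))   # synthetic "source domain" features
Xt = rng.normal(3.0, 2.0, size=(500, 4))   # synthetic "target domain" features
Xa = coral_align(Xs, Xt)                   # source features aligned to target statistics
```

After alignment, the transformed source features share the target domain's mean and covariance, which is the minimal statistical sense in which a downstream classifier sees "one domain".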
Collapse
|
28
|
Nam S, Kim D, Jung W, Zhu Y. Understanding the Research Landscape of Deep Learning in Biomedical Science: Scientometric Analysis. J Med Internet Res 2022; 24:e28114. [PMID: 35451980 PMCID: PMC9077503 DOI: 10.2196/28114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Revised: 05/30/2021] [Accepted: 02/20/2022] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Advances in biomedical research using deep learning techniques have generated a large volume of related literature. However, there is a lack of scientometric studies that provide a bird's-eye view of them. This absence has led to a partial and fragmented understanding of the field and its progress. OBJECTIVE This study aimed to gain a quantitative and qualitative understanding of the scientific domain by analyzing diverse bibliographic entities that represent the research landscape from multiple perspectives and levels of granularity. METHODS We searched and retrieved 978 deep learning studies in biomedicine from the PubMed database. A scientometric analysis was performed by analyzing the metadata, content of influential works, and cited references. RESULTS In the process, we identified the current leading fields, major research topics and techniques, knowledge diffusion, and research collaboration. There was a predominant focus on applying deep learning, especially convolutional neural networks, to radiology and medical imaging, whereas a few studies focused on protein or genome analysis. Radiology and medical imaging also appeared to be the most significant knowledge sources and an important field in knowledge diffusion, followed by computer science and electrical engineering. A coauthorship analysis revealed various collaborations among engineering-oriented and biomedicine-oriented clusters of disciplines. CONCLUSIONS This study investigated the landscape of deep learning research in biomedicine and confirmed its interdisciplinary nature. Although it has been successful, we believe that there is a need for diverse applications in certain areas to further boost the contributions of deep learning in addressing biomedical research problems. We expect the results of this study to help researchers and communities better align their present and future work.
Collapse
Affiliation(s)
- Seojin Nam
- Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea
| | - Donghun Kim
- Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea
| | - Woojin Jung
- Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea
| | - Yongjun Zhu
- Department of Library and Information Science, Yonsei University, Seoul, Republic of Korea
| |
Collapse
|
29
|
Min Y, Hu L, Wei L, Nie S. Computer-aided detection of pulmonary nodules based on convolutional neural networks: a review. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac568e] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Accepted: 02/18/2022] [Indexed: 02/08/2023]
Abstract
Computer-aided detection (CADe) technology has been proven to increase the detection rate of pulmonary nodules, which has important clinical significance for the early diagnosis of lung cancer. In this study, we systematically review the latest techniques in pulmonary nodule CADe based on deep learning models with convolutional neural networks in computed tomography images. First, brief descriptions and popular architectures of convolutional neural networks are introduced. Second, several common public databases and evaluation metrics are briefly described. Third, state-of-the-art approaches with excellent performance are selected. Subsequently, we combine the clinical diagnostic process and the traditional four steps of pulmonary nodule CADe into two stages, namely, data preprocessing and image analysis. Further, the major optimizations of deep learning models and algorithms are highlighted according to the progressive evaluation effect of each method, and some clinical evidence is added. Finally, the various methods are summarized and compared, and the innovative or valuable contributions of each method are expected to guide future research directions. The analysis shows that deep learning-based methods have significantly transformed the detection of pulmonary nodules and that the design of these methods can be inspired by clinical imaging diagnostic procedures. Moreover, focusing on the image analysis stage yields improved returns; in particular, optimal results can be achieved by optimizing the candidate nodule generation and false positive reduction steps. End-to-end methods, with greater operating speed and lower computational consumption, are superior to other methods in CADe of pulmonary nodules.
Collapse
|
30
|
Ye Q, Gao Y, Ding W, Niu Z, Wang C, Jiang Y, Wang M, Fang EF, Menpes-Smith W, Xia J, Yang G. Robust weakly supervised learning for COVID-19 recognition using multi-center CT images. Appl Soft Comput 2022; 116:108291. [PMID: 34934410 PMCID: PMC8667427 DOI: 10.1016/j.asoc.2021.108291] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 10/18/2021] [Accepted: 12/06/2021] [Indexed: 12/20/2022]
Abstract
The world is currently experiencing an ongoing pandemic of an infectious disease named coronavirus disease 2019 (COVID-19), which is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Computed Tomography (CT) plays an important role in assessing the severity of the infection and can also be used to identify symptomatic and asymptomatic COVID-19 carriers. With a surge in the cumulative number of COVID-19 patients, radiologists are increasingly stressed to examine CT scans manually. An automated 3D CT scan recognition tool is therefore in high demand, since manual analysis is time-consuming for radiologists and their fatigue can cause possible misjudgment. However, due to the varying technical specifications of CT scanners located in different hospitals, the appearance of CT images can differ significantly, leading to the failure of many automated image recognition approaches. The multi-domain shift problem in multi-center, multi-scanner studies is therefore nontrivial: it is crucial for dependable recognition and critical for reproducible and objective diagnosis and prognosis. In this paper, we propose a COVID-19 CT scan recognition model, the coronavirus information fusion and diagnosis network (CIFD-Net), that can efficiently handle the multi-domain shift problem via a new robust weakly supervised learning paradigm. Our model resolves the problem of differing appearance in CT scan images reliably and efficiently while attaining higher accuracy than other state-of-the-art methods.
Collapse
Affiliation(s)
- Qinghao Ye
- Hangzhou Ocean's Smart Boya Co., Ltd, China
- University of California, San Diego, La Jolla, CA, USA
| | - Yuan Gao
- Institute of Biomedical Engineering, University of Oxford, UK
- Aladdin Healthcare Technologies Ltd, UK
| | | | | | - Chengjia Wang
- BHF Center for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
| | - Yinghui Jiang
- Hangzhou Ocean's Smart Boya Co., Ltd, China
- Mind Rank Ltd, China
| | - Minhao Wang
- Hangzhou Ocean's Smart Boya Co., Ltd, China
- Mind Rank Ltd, China
| | - Evandro Fei Fang
- Department of Clinical Molecular Biology, University of Oslo, Norway
| | | | - Jun Xia
- Radiology Department, Shenzhen Second People's Hospital, Shenzhen, China
| | - Guang Yang
- Royal Brompton Hospital, London, UK
- National Heart and Lung Institute, Imperial College London, London, UK
| |
Collapse
|
31
|
Efficient tumor volume measurement and segmentation approach for CT image based on twin support vector machines. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06769-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
32
|
Shetty MV, D J, Tunga S. Optimized Deformable Model-based Segmentation and Deep Learning for Lung Cancer Classification. THE JOURNAL OF MEDICAL INVESTIGATION 2022; 69:244-255. [PMID: 36244776 DOI: 10.2152/jmi.69.244] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Lung cancer is one of the most life-threatening diseases and causes many deaths worldwide. Early detection and treatment are necessary to save lives. It is very difficult for doctors to interpret and identify diseases using imaging modalities alone, so computer-aided diagnosis can assist doctors in the accurate early detection of cancer. In the proposed work, optimized deformable models and deep learning techniques are applied for the detection and classification of lung cancer. The method involves pre-processing, lung lobe segmentation, lung cancer segmentation, data augmentation and lung cancer classification. Median filtering is used for pre-processing, and Bayesian fuzzy clustering is applied to segment the lung lobes. Lung cancer segmentation is carried out using a Water Cycle Sea Lion Optimization (WSLnO)-based deformable model. Data augmentation is used to enlarge the set of segmented regions in order to improve classification. Lung cancer classification is performed effectively using a Shepard Convolutional Neural Network (ShCNN) trained by the WSLnO algorithm. The proposed WSLnO algorithm is designed by incorporating the Water Cycle Algorithm (WCA) and the Sea Lion Optimization (SLnO) algorithm. The performance of the proposed technique is analyzed with various performance metrics, attaining accuracy, sensitivity, specificity and average segmentation accuracy of 0.9303, 0.9123, 0.9133 and 0.9091, respectively. J. Med. Invest. 69 : 244-255, August, 2022.
Collapse
Affiliation(s)
- Mamtha V Shetty
- Department of Electronics & Instrumentation Engineering, JSS Academy of Technical Education, Bengaluru, VTU, India
| | - Jayadevappa D
- Department of Electronics & Instrumentation Engineering, JSS Academy of Technical Education, Bengaluru, VTU, India
| | - Satish Tunga
- Dept. of Electronics & Telecommunication Engineering, M S Ramaiah Institute of Technology, Bengaluru, VTU, India
| |
Collapse
|
33
|
Cui X, Zheng S, Heuvelmans MA, Du Y, Sidorenkov G, Fan S, Li Y, Xie Y, Zhu Z, Dorrius MD, Zhao Y, Veldhuis RNJ, de Bock GH, Oudkerk M, van Ooijen PMA, Vliegenthart R, Ye Z. Performance of a deep learning-based lung nodule detection system as an alternative reader in a Chinese lung cancer screening program. Eur J Radiol 2021; 146:110068. [PMID: 34871936 DOI: 10.1016/j.ejrad.2021.110068] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2021] [Revised: 10/03/2021] [Accepted: 11/22/2021] [Indexed: 11/03/2022]
Abstract
OBJECTIVE To evaluate the performance of a deep learning-based computer-aided detection (DL-CAD) system in a Chinese low-dose CT (LDCT) lung cancer screening program. MATERIALS AND METHODS One-hundred-and-eighty individuals with a lung nodule on their baseline LDCT lung cancer screening scan were randomly mixed with screenees without nodules in a 1:1 ratio (total: 360 individuals). All scans were assessed by double reading and subsequently processed by an academic DL-CAD system. The findings of double reading and the DL-CAD system were then evaluated by two senior radiologists to derive the reference standard. The detection performance was evaluated by the Free Response Operating Characteristic curve, sensitivity and false-positive (FP) rate. The senior radiologists categorized nodules according to nodule diameter, type (solid, part-solid, non-solid) and Lung-RADS. RESULTS The reference standard consisted of 262 nodules ≥ 4 mm in 196 individuals; 359 findings were considered false positives. The DL-CAD system achieved a sensitivity of 90.1% with 1.0 FP/scan for detection of lung nodules regardless of size or type, whereas double reading had a sensitivity of 76.0% with 0.04 FP/scan (P = 0.001). The sensitivity for detection of nodules ≥ 4 - ≤ 6 mm was significantly higher with DL-CAD than with double reading (86.3% vs. 58.9% respectively; P = 0.001). Sixty-three nodules were only identified by the DL-CAD system, and 27 nodules only found by double reading. The DL-CAD system reached similar performance compared to double reading in Lung-RADS 3 (94.3% vs. 90.0%, P = 0.549) and Lung-RADS 4 nodules (100.0% vs. 97.0%, P = 1.000), but showed a higher sensitivity in Lung-RADS 2 (86.2% vs. 65.4%, P < 0.001). CONCLUSIONS The DL-CAD system can accurately detect pulmonary nodules on LDCT, with an acceptable false-positive rate of 1 nodule per scan and has higher detection performance than double reading. 
This DL-CAD system may assist radiologists in nodule detection in LDCT lung cancer screening.
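The sensitivity and false-positive figures quoted in this abstract follow from simple count ratios; a minimal sketch of that arithmetic (262 reference nodules, 359 false-positive findings and 360 scans are taken from the abstract, while the detected-nodule count of 236 is inferred from the quoted 90.1% sensitivity and is therefore illustrative, not raw study data):

```python
def detection_summary(n_detected, n_reference, n_false_pos, n_scans):
    """Sensitivity = detected / reference nodules; FP rate = false positives / scans."""
    return n_detected / n_reference, n_false_pos / n_scans

# 236 / 262 ≈ 0.901, matching the quoted 90.1% sensitivity;
# 359 / 360 ≈ 1.0 FP per scan, matching the quoted FP rate.
sens, fp_rate = detection_summary(n_detected=236, n_reference=262,
                                  n_false_pos=359, n_scans=360)
print(f"sensitivity={sens:.1%}, FP/scan={fp_rate:.2f}")
```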
Collapse
Affiliation(s)
- Xiaonan Cui
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China; University of Groningen, University Medical Center Groningen, Department of Radiology, Groningen, the Netherlands
| | - Sunyi Zheng
- Westlake University, Artificial Intelligence and Biomedical Image Analysis Lab, School of Engineering, Hangzhou, People's Republic of China; Institute of Advanced Technology, Westlake Institute for Advanced Study, Hangzhou, People's Republic of China; University of Groningen, University Medical Center Groningen, Department of Radiation Oncology, Groningen, the Netherlands
| | - Marjolein A Heuvelmans
- University of Groningen, University Medical Center Groningen, Department of Epidemiology, Groningen, the Netherlands
| | - Yihui Du
- University of Groningen, University Medical Center Groningen, Department of Epidemiology, Groningen, the Netherlands
| | - Grigory Sidorenkov
- University of Groningen, University Medical Center Groningen, Department of Epidemiology, Groningen, the Netherlands
| | - Shuxuan Fan
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
| | - Yanju Li
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
| | - Yongsheng Xie
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
| | - Zhongyuan Zhu
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
| | - Monique D Dorrius
- University of Groningen, University Medical Center Groningen, Department of Radiology, Groningen, the Netherlands
| | - Yingru Zhao
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
| | - Raymond N J Veldhuis
- University of Twente, Faculty of Electrical Engineering Mathematics and Computer Science, the Netherlands
| | - Geertruida H de Bock
- University of Groningen, University Medical Center Groningen, Department of Epidemiology, Groningen, the Netherlands
| | - Matthijs Oudkerk
- University of Groningen, Faculty of Medical Sciences, the Netherlands
| | - Peter M A van Ooijen
- University of Groningen, University Medical Center Groningen, Department of Radiation Oncology, Groningen, the Netherlands; University of Groningen, University Medical Center Groningen, Machine Learning Lab, Data Science Center in Health, Groningen, the Netherlands
| | - Rozemarijn Vliegenthart
- University of Groningen, University Medical Center Groningen, Department of Radiology, Groningen, the Netherlands
| | - Zhaoxiang Ye
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China.
| |
Collapse
|
34
|
Kavithaa G, Balakrishnan P, Yuvaraj SA. Lung Cancer Detection and Improving Accuracy Using Linear Subspace Image Classification Algorithm. Interdiscip Sci 2021; 13:779-786. [PMID: 34351570 DOI: 10.1007/s12539-021-00468-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2020] [Revised: 07/15/2021] [Accepted: 07/23/2021] [Indexed: 06/13/2023]
Abstract
The ability to identify lung cancer at an early stage is critical, because it can help patients live longer. However, predicting the affected area while diagnosing cancer is a huge challenge. An intelligent computer-aided diagnostic system can be utilized to detect and diagnose lung cancer by detecting the damaged region. The suggested Linear Subspace Image Classification Algorithm (LSICA) approach classifies images in a linear subspace. This methodology is used to accurately identify the damaged region, and it involves three steps: image enhancement, segmentation, and classification. The spatial image clustering technique is used to quickly segment and identify the impacted area in the image. LSICA is utilized to determine the accuracy value of the affected region for classification purposes. Therefore, a lung cancer detection system with classification-dependent image processing is used for lung cancer CT imaging. Therefore, a new method to overcome these deficiencies of the process for detection using LSICA is proposed in this work on lung cancer. MATLAB has been used in all programs. A proposed system designed to easily identify the affected region with help of the classification technique to enhance and get more accurate results.
Collapse
Affiliation(s)
- G Kavithaa
- Department of Electronics and Communication Engineering, Government College of Engineering, Salem, Tamilnadu, India.
| | - P Balakrishnan
- Malla Reddy Engineering College for Women (Autonomous), Hyderabad, 500100, India
| | - S A Yuvaraj
- Department of ECE, GRT Institute of Engineering and Technology, Tiruttani, Tamilnadu, India
| |
Collapse
|
35
|
Morelli R, Clissa L, Amici R, Cerri M, Hitrec T, Luppi M, Rinaldi L, Squarcio F, Zoccoli A. Automating cell counting in fluorescent microscopy through deep learning with c-ResUnet. Sci Rep 2021; 11:22920. [PMID: 34824294 PMCID: PMC8617067 DOI: 10.1038/s41598-021-01929-5] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2021] [Accepted: 11/03/2021] [Indexed: 02/06/2023] Open
Abstract
Counting cells in fluorescent microscopy is a tedious, time-consuming task that researchers have to accomplish to assess the effects of different experimental conditions on biological structures of interest. Although such objects are generally easy to identify, the process of manually annotating cells is sometimes subject to fatigue errors and suffers from arbitrariness due to the operator’s interpretation of borderline cases. We propose a Deep Learning approach that exploits a fully-convolutional network in a binary segmentation fashion to localize the objects of interest. Counts are then retrieved as the number of detected items. Specifically, we introduce a Unet-like architecture, cell ResUnet (c-ResUnet), and compare its performance against 3 similar architectures. In addition, we evaluate through ablation studies the impact of two design choices: (i) artifacts oversampling, and (ii) weight maps that penalize errors on cell boundaries increasingly with overcrowding. In summary, the c-ResUnet outperforms the competitors with respect to both detection and counting metrics (respectively, $F_1$ score = 0.81 and MAE = 3.09). Also, the introduction of weight maps contributes to enhanced performance, especially in the presence of clumping cells, artifacts and confounding biological structures. Posterior qualitative assessment by domain experts corroborates these results, suggesting human-level performance, inasmuch as even erroneous predictions seem to fall within the limits of operator interpretation. Finally, we release the pre-trained model and the annotated dataset to foster research in this and related fields.
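The abstract notes that counts are retrieved as the number of detected items in a binary segmentation. A minimal pure-Python stand-in for that final step is counting 4-connected foreground blobs in a mask (the c-ResUnet itself, which produces the mask, is not reproduced here; the toy mask below is our own illustration):

```python
from collections import deque

def count_objects(mask):
    """mask: 2D list of 0/1 values; returns the number of 4-connected foreground blobs."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1                     # new blob found; flood-fill it
                seen[y][x] = True
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 1, 0, 1]]
print(count_objects(mask))  # three separate 4-connected blobs
```

In practice, libraries such as `scipy.ndimage.label` perform this labeling step efficiently on real segmentation outputs.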
Collapse
Affiliation(s)
- Roberto Morelli
- National Institute for Nuclear Physics, Bologna, Italy. .,Department of Physics and Astronomy, University of Bologna, Bologna, Italy.
| | - Luca Clissa
- National Institute for Nuclear Physics, Bologna, Italy.,Department of Physics and Astronomy, University of Bologna, Bologna, Italy
| | - Roberto Amici
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Matteo Cerri
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Timna Hitrec
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Marco Luppi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Lorenzo Rinaldi
- National Institute for Nuclear Physics, Bologna, Italy.,Department of Physics and Astronomy, University of Bologna, Bologna, Italy
| | - Fabio Squarcio
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Antonio Zoccoli
- National Institute for Nuclear Physics, Bologna, Italy.,Department of Physics and Astronomy, University of Bologna, Bologna, Italy
| |
Collapse
|
36
|
Yousefirizi F, Decazes P, Amyar A, Ruan S, Saboury B, Rahmim A. AI-Based Detection, Classification and Prediction/Prognosis in Medical Imaging: Towards Radiophenomics. PET Clin 2021; 17:183-212. [PMID: 34809866 DOI: 10.1016/j.cpet.2021.09.010] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Artificial intelligence (AI) techniques have significant potential to enable effective, robust, and automated image phenotyping including the identification of subtle patterns. AI-based detection searches the image space to find the regions of interest based on patterns and features. There is a spectrum of tumor histologies from benign to malignant that can be identified by AI-based classification approaches using image features. The extraction of minable information from images gives way to the field of "radiomics" and can be explored via explicit (handcrafted/engineered) and deep radiomics frameworks. Radiomics analysis has the potential to be used as a noninvasive technique for the accurate characterization of tumors to improve diagnosis and treatment monitoring. This work reviews AI-based techniques, with a special focus on oncological PET and PET/CT imaging, for different detection, classification, and prediction/prognosis tasks. We also discuss needed efforts to enable the translation of AI techniques to routine clinical workflows, and potential improvements and complementary techniques such as the use of natural language processing on electronic health records and neuro-symbolic AI techniques.
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Pierre Decazes
- Department of Nuclear Medicine, Henri Becquerel Centre, Rue d'Amiens - CS 11516 - 76038 Rouen Cedex 1, France; QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Amine Amyar
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France; General Electric Healthcare, Buc, France
- Su Ruan
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, USA; Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada; Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada; Department of Physics, University of British Columbia, Vancouver, British Columbia, Canada
|
37
|
CAD system for lung nodule detection using deep learning with CNN. Med Biol Eng Comput 2021; 60:221-228. [PMID: 34811644 DOI: 10.1007/s11517-021-02462-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Accepted: 09/29/2021] [Indexed: 10/19/2022]
Abstract
The early detection of pulmonary nodules using computer-aided diagnosis (CAD) systems is essential in reducing the mortality rates of lung cancer. In this paper, we propose a new deep learning approach to improve the classification accuracy of pulmonary nodules in computed tomography (CT) images. Our proposed CNN-5CL approach uses an 11-layer convolutional neural network (with 5 convolutional layers) for automatic feature extraction and classification. The proposed method is evaluated using LIDC/IDRI images. The proposed method is implemented on the Python platform, and the performance is evaluated with metrics such as accuracy, sensitivity, specificity, and receiver operating characteristics (ROC). The results show that the proposed method achieves accuracy, sensitivity, specificity, and area under the ROC curve (AUC) of 98.88%, 99.62%, 93.73%, and 0.928, respectively. The proposed approach outperforms various other methods, such as Naïve Bayes, K-nearest neighbor, support vector machine, and adaptive neuro-fuzzy inference system methods, as well as other deep learning-based approaches.
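The accuracy, sensitivity, and specificity reported here follow directly from the confusion matrix; a minimal, generic sketch (our own helper, not the paper's implementation):

```python
def binary_metrics(tp, fn, tn, fp):
    """Standard classification metrics from confusion-matrix counts:
    tp/fn/tn/fp = true positives, false negatives, true negatives, false positives."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate (recall)
        "specificity": tn / (tn + fp),   # true-negative rate
    }
```

For instance, 90 TP, 10 FN, 80 TN, and 20 FP give accuracy 0.85, sensitivity 0.90, and specificity 0.80.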
Collapse
|
38
|
Liu SC, Lai J, Huang JY, Cho CF, Lee PH, Lu MH, Yeh CC, Yu J, Lin WC. Predicting microvascular invasion in hepatocellular carcinoma: a deep learning model validated across hospitals. Cancer Imaging 2021; 21:56. [PMID: 34627393 PMCID: PMC8501676 DOI: 10.1186/s40644-021-00425-3] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Accepted: 09/22/2021] [Indexed: 12/11/2022] Open
Abstract
BACKGROUND The accuracy of preoperative estimation of microvascular invasion (MVI) in hepatocellular carcinoma (HCC) by clinical observers is low. Most recent studies constructed MVI predictive models utilizing radiological and/or radiomics features extracted from computed tomography (CT) images. These methods, however, rely heavily on human experience and require manual tumor contouring. We developed a deep learning-based framework for preoperative MVI prediction that uses arterial-phase (AP) CT images with simple tumor labeling and without the need for manual feature extraction. The model was further validated on CT images originally scanned at multiple different hospitals. METHODS AP CT images were acquired for 309 patients from China Medical University Hospital (CMUH). Images of 164 patients, who underwent CT scanning at 54 different hospitals but were referred to CMUH, were also collected. Deep learning (ResNet-18) and machine learning (support vector machine) models were constructed with AP images and/or patients' clinical factors (CFs), and their performance was compared systematically. All models were independently evaluated on two patient cohorts: a validation set (within CMUH) and an external set (other hospitals). Subsequently, the explainability of the best model was visualized using gradient-weighted class activation mapping (Grad-CAM). RESULTS The ResNet-18 model built with AP images and patients' clinical factors was superior to the other models, achieving the highest AUC of 0.845. When evaluated on the external set, the model produced an AUC of 0.777, approaching its performance on the validation set. Model interpretation with Grad-CAM revealed that MVI-relevant imaging features on CT images were captured and learned by the ResNet-18 model. CONCLUSIONS This framework provides evidence of the generalizability and robustness of ResNet-18 in predicting MVI using AP CT images scanned at multiple different hospitals. Attention heatmaps obtained from model explainability further confirmed that ResNet-18 focused on imaging features on CT overlapping with the conditions used by radiologists to estimate MVI clinically.
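Grad-CAM, as used here for model interpretation, weights each convolutional feature map by the spatial average of its gradient and keeps only the positive contributions. A dependency-free sketch of that core step (our own toy implementation on nested lists, not the authors' code):

```python
def grad_cam(feature_maps, gradients):
    """Grad-CAM core step: weight each feature map by the spatial average
    of its gradient, sum over channels, then apply ReLU.

    feature_maps, gradients: lists of K maps, each an HxW list of lists
    (last-conv-layer activations and d(class score)/d(activation)).
    """
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    # Channel weights: global average pooling of the gradients.
    weights = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    heatmap = [[0.0] * w for _ in range(h)]
    for k, fmap in enumerate(feature_maps):
        for i in range(h):
            for j in range(w):
                heatmap[i][j] += weights[k] * fmap[i][j]
    # ReLU: keep only features with a positive influence on the class score.
    return [[max(0.0, v) for v in row] for row in heatmap]
```

In a real pipeline the heatmap would then be upsampled to the input resolution and overlaid on the CT slice.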
Affiliation(s)
- Shu-Cheng Liu
- AI Innovation Center, China Medical University Hospital, Taichung, Taiwan
- Jesyin Lai
- AI Innovation Center, China Medical University Hospital, Taichung, Taiwan
- Jhao-Yu Huang
- AI Innovation Center, China Medical University Hospital, Taichung, Taiwan
- Chia-Fong Cho
- AI Innovation Center, China Medical University Hospital, Taichung, Taiwan
- Pei Hua Lee
- Department of Medical Imaging, China Medical University Hospital, Taichung, Taiwan
- Min-Hsuan Lu
- AI Innovation Center, China Medical University Hospital, Taichung, Taiwan
- Chun-Chieh Yeh
- Department of Surgery, Organ Transplantation Center, China Medical University Hospital, Taichung, Taiwan; Department of Medicine, School of Medicine, China Medical University, Taichung, Taiwan; Department of Surgery, Asia University Hospital, Taichung, Taiwan, 41354
- Jiaxin Yu
- AI Innovation Center, China Medical University Hospital, Taichung, Taiwan
- Wei-Ching Lin
- AI Innovation Center, China Medical University Hospital, Taichung, Taiwan; Department of Medical Imaging, China Medical University Hospital, Taichung, Taiwan; Department of Biomedical Imaging and Radiological Science, School of Medicine, China Medical University, Taichung, Taiwan
|
39
|
Wu M, Chai Z, Qian G, Lin H, Wang Q, Wang L, Chen H. Development and Evaluation of a Deep Learning Algorithm for Rib Segmentation and Fracture Detection from Multicenter Chest CT Images. Radiol Artif Intell 2021; 3:e200248. [PMID: 34617026 DOI: 10.1148/ryai.2021200248] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2020] [Revised: 06/07/2021] [Accepted: 06/29/2020] [Indexed: 12/12/2022]
Abstract
Purpose To evaluate the performance of a deep learning-based algorithm for automatic detection and labeling of rib fractures from multicenter chest CT images. Materials and Methods This retrospective study included 10,943 patients (mean age, 55 years; 6418 men) from six hospitals (January 1, 2017 to December 30, 2019), consisting of patients with and without rib fractures who underwent CT. The patients were separated into one training set (n = 2425), two lesion-level test sets (n = 362 and 105), and one examination-level test set (n = 8051). Free-response receiver operating characteristic (FROC) score (mean sensitivity at seven different false-positive rates), precision, sensitivity, and F1 score were used as metrics to assess rib fracture detection performance. Area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were employed to evaluate the classification accuracy. The mean Dice coefficient and accuracy were used to assess the performance of rib labeling. Results In the detection of rib fractures, the model showed an FROC score of 84.3% on test set 1. For test set 2, the algorithm achieved a detection performance (precision, 82.2%; sensitivity, 84.9%; F1 score, 83.3%) comparable to that of three radiologists (precision, 81.7%, 98.0%, 92.0%; sensitivity, 91.2%, 78.6%, 69.2%; F1 score, 86.1%, 87.2%, 78.9%). When the radiologists used the algorithm, their mean sensitivity improved (from 79.7% to 89.2%), while precision remained similar (from 90.6% to 88.4%). Furthermore, the model achieved an AUC of 0.93 (95% CI: 0.91, 0.94), sensitivity of 87.9% (95% CI: 83.7%, 91.4%), and specificity of 85.3% (95% CI: 74.6%, 89.8%) on test set 3. On a subset of test set 1, the model achieved a Dice score of 0.827 with an accuracy of 96.0% for rib segmentation. Conclusion The developed deep learning algorithm was capable of detecting rib fractures, as well as corresponding anatomic locations, on CT images. Keywords: CT, Ribs. © RSNA, 2021.
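The FROC summary score described in the Methods is simply the mean sensitivity over the predefined false-positive operating points (seven in this study); a one-line sketch (our own helper, not the study's code):

```python
def froc_score(sensitivities_at_fp_rates):
    """FROC summary score: mean sensitivity over the predefined
    false-positive rates per scan (seven operating points in this study)."""
    return sum(sensitivities_at_fp_rates) / len(sensitivities_at_fp_rates)
```

Each entry is the detection sensitivity measured at one allowed false-positive rate per examination.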
Affiliation(s)
- Mingxiang Wu, Zhizhong Chai, Guangwu Qian, Huangjing Lin, Qiong Wang, Liansheng Wang, Hao Chen
- Department of Radiology, Shenzhen People's Hospital, Luohu, China (M.W.); AI Research Laboratory, Imsight Technology, Nanshan, China (Z.C., H.L.); Peng Cheng Laboratory, Nanshan, China (G.Q.); Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China (Q.W.); Department of Computer Science, School of Informatics, Xiamen University, Xiamen, China (L.W.); and Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong (H.C.)
|
40
|
|
41
|
|
42
|
Shen T, Hou R, Ye X, Li X, Xiong J, Zhang Q, Zhang C, Cai X, Yu W, Zhao J, Fu X. Predicting Malignancy and Invasiveness of Pulmonary Subsolid Nodules on CT Images Using Deep Learning. Front Oncol 2021; 11:700158. [PMID: 34381723 PMCID: PMC8351466 DOI: 10.3389/fonc.2021.700158] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2021] [Accepted: 07/08/2021] [Indexed: 12/28/2022] Open
Abstract
Background To develop and validate a deep learning-based model on CT images for malignancy and invasiveness prediction of pulmonary subsolid nodules (SSNs). Materials and Methods This study retrospectively collected patients with pulmonary SSNs treated by surgery in our hospital from 2012 to 2018. Postoperative pathology was used as the diagnostic reference standard. Three-dimensional convolutional neural network (3D CNN) models were constructed using preoperative CT images to predict the malignancy and invasiveness of SSNs. An observer study involving two thoracic radiologists was then conducted for comparison with the CNN model. The diagnostic power of the models was evaluated with receiver operating characteristic (ROC) curve analysis. Results A total of 2,614 patients were finally included and randomly divided for training (60.9%), validation (19.1%), and testing (20%). For benign versus malignant classification, the best 3D CNN model achieved a satisfactory AUC of 0.913 (95% CI: 0.885-0.940), sensitivity of 86.1%, and specificity of 83.8% at the optimal decision point, which outperformed all observer readers' performance (AUC: 0.846±0.031). For pre-invasive versus invasive classification of malignant SSNs, the 3D CNN also achieved a satisfactory AUC of 0.908 (95% CI: 0.877-0.939), sensitivity of 87.4%, and specificity of 80.8%. Conclusion The deep learning model showed its potential to accurately identify the malignancy and invasiveness of SSNs and thus can help surgeons make treatment decisions.
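The AUC values reported from the ROC analysis can be understood through the rank interpretation of AUC: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A brute-force sketch (illustrative only; quadratic in the cohort sizes, not how large studies compute it):

```python
def auc(scores_pos, scores_neg):
    """AUC via its rank interpretation: fraction of (positive, negative)
    pairs where the positive case gets the higher score; ties count half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

A perfectly separating model scores 1.0; a model no better than chance scores about 0.5.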
Affiliation(s)
- Tianle Shen
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Runping Hou
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China; School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiaodan Ye
- Department of Radiology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Xiaoyang Li
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Junfeng Xiong
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Qin Zhang
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Chenchen Zhang
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Xuwei Cai
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Wen Yu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiaolong Fu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
|
43
|
Menopausal Women's Health Care Method Based on Computer Nursing Diagnosis Intelligent System. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:4963361. [PMID: 34367537 PMCID: PMC8346312 DOI: 10.1155/2021/4963361] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Accepted: 06/26/2021] [Indexed: 11/17/2022]
Abstract
Taking into account the current feature-extraction speed and recognition performance of intelligent diagnosis of menopausal women's health care behavior, this paper proposes using a cross-layer convolutional neural network to extract behavior features autonomously and a support vector machine multiclass classifier to classify the behavior. Compared with feature images extracted by traditional methods, the behavioral features extracted here are specific to the individual menopausal woman and carry better semantic information, with enhanced descriptive ability in both the time and space domains. Using Matlab and the database established in this paper, the feature-extraction time, test classification time, and final recognition accuracy were compared against ordinary convolutional neural networks, showing that the cross-layer CNN-SVM model maintains feature-extraction speed. This demonstrates that the method can be applied to a behavioral intelligent diagnosis system for nursing menopausal women and has good practical value. The paper also designs an intelligent monitoring system for a home care bed that automatically detects the bed's posture and can change it either under operator control or automatically according to preset settings. The system also monitors the physical condition of the person being cared for, detecting heart rate, blood oxygen, and other physiological indicators, and provides a remote diagnosis function that lets nursing staff remotely view the current state of the nursing bed and the person's physical condition. In testing, the system worked stably, improved the automation and safety of nursing-bed control, and enriched the functions of the nursing bed.
Collapse
|
44
|
Jingxin L, Mengchao Z, Yuchen L, Jinglei C, Yutong Z, Zhong Z, Lihui Z. COVID-19 lesion detection and segmentation-A deep learning method. Methods 2021; 202:62-69. [PMID: 34237453 PMCID: PMC8256684 DOI: 10.1016/j.ymeth.2021.07.001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2021] [Revised: 04/24/2021] [Accepted: 07/02/2021] [Indexed: 12/31/2022] Open
Abstract
PURPOSE In this paper, we utilized deep learning methods to screen for positive COVID-19 cases in chest CT. Our primary goal is to supply rapid and precise assistance for disease surveillance on the medical imaging side. MATERIALS AND METHODS Based on deep learning, we combined semantic segmentation and object detection methods to study the lesion presentation of COVID-19. We put forward a novel end-to-end model which takes advantage of spatio-temporal features. Furthermore, a segmentation model attached to a fully connected CRF was designed for a more effective ROI input. RESULTS Our method showed better performance across different metrics than the comparison models. Moreover, our strategy showed strong robustness on the augmented testing samples. CONCLUSION The comprehensive fusion of spatio-temporal correlations can exploit more valuable features for locating target regions, and this mechanism is well suited to detecting tiny lesions. Although it remains in discrete form, feature extraction in the temporal dimension improves the precision of the final prediction.
Affiliation(s)
- Liu Jingxin
- Department of Radiology, China-Japan Union Hospital, Jilin University, Changchun, China
- Zhang Mengchao
- Department of Radiology, China-Japan Union Hospital, Jilin University, Changchun, China
- Liu Yuchen
- School of Medical Information, Changchun University of Chinese Medicine, Changchun, China
- Cui Jinglei
- Medical Imaging Engineering Technology R&D Center of Jilin Province, Changchun, China
- Zhong Yutong
- Electronic Information Engineering College, Changchun University of Science and Technology, Changchun, China
- Zhang Zhong
- R&D Department, WX Medical Technology Co., Shenyang, China
- Zu Lihui
- Department of Radiology, China-Japan Union Hospital, Jilin University, Changchun, China
|
45
|
Yu H, Yang LT, Zhang Q, Armstrong D, Deen MJ. Convolutional neural networks for medical image analysis: State-of-the-art, comparisons, improvement and perspectives. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.04.157] [Citation(s) in RCA: 46] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
|
46
|
Kim S, Lee P, Oh KT, Byun MS, Yi D, Lee JH, Kim YK, Ye BS, Yun MJ, Lee DY, Jeong Y. Deep learning-based amyloid PET positivity classification model in the Alzheimer's disease continuum by using 2-[ 18F]FDG PET. EJNMMI Res 2021; 11:56. [PMID: 34114091 PMCID: PMC8192639 DOI: 10.1186/s13550-021-00798-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Accepted: 06/02/2021] [Indexed: 12/14/2022] Open
Abstract
BACKGROUND Considering the limited accessibility of amyloid positron emission tomography (PET) in patients with dementia, we proposed a deep learning (DL)-based amyloid PET positivity classification model from PET images with 2-deoxy-2-[fluorine-18]fluoro-D-glucose (2-[18F]FDG). METHODS We used 2-[18F]FDG PET datasets from the Alzheimer's Disease Neuroimaging Initiative and the Korean Brain Aging Study for the Early diagnosis and prediction of Alzheimer's disease for model development. Moreover, we used an independent dataset from another hospital. A 2.5-D deep learning architecture was constructed using 291 submodules and three axes images as the input. We conducted a voxel-wise analysis to assess the regions with substantial differences in glucose metabolism between the amyloid PET-positive and PET-negative participants. This facilitated an understanding of the deep model's classification. In addition, we compared these regions with the classification probability from the submodules. RESULTS There were 686 out of 1433 (47.9%) and 50 out of 100 (50%) amyloid PET-positive participants in the training and internal validation datasets and the external validation dataset, respectively. With 50 iterations of model training and validation, the model achieved an AUC of 0.811 (95% confidence interval (CI): 0.803-0.819) and 0.798 (95% CI: 0.789-0.807) on the internal and external validation datasets, respectively. The area under the curve (AUC) was 0.860 when tested with the model with the highest value (0.864) on the external validation dataset. Moreover, it had 75.0% accuracy, 76.0% sensitivity, 74.0% specificity, and a 75.0% F1-score. We found an overlap between the regions within the default mode network, thus generating high classification values. CONCLUSION The proposed model, based on 2-[18F]FDG PET imaging data and a DL framework, might successfully classify amyloid PET positivity in clinical practice without performing amyloid PET, which has limited accessibility.
Affiliation(s)
- Suhong Kim
- Graduate School of Medical Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Korea Advanced Institute of Science and Technology (KAIST), KI for Health Science Technology, Daejeon, Republic of Korea
- Peter Lee
- Korea Advanced Institute of Science and Technology (KAIST), KI for Health Science Technology, Daejeon, Republic of Korea
- Kyeong Taek Oh
- Department of Medical Engineering, Yonsei University College of Medicine, Seoul, Republic of Korea
- Min Soo Byun
- Department of Neuropsychiatry, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Dahyun Yi
- Institute of Human Behavioral Medicine, Medical Research Center, Seoul National University, Seoul, Republic of Korea
- Jun Ho Lee
- Department of Neuropsychiatry, National Center for Mental Health, Seoul, Republic of Korea
- Yu Kyeong Kim
- Department of Nuclear Medicine, SMG-SNU Boramae Medical Center, Seoul, Republic of Korea
- Byoung Seok Ye
- Department of Neurology, Yonsei University College of Medicine, Seoul, Republic of Korea
- Mi Jin Yun
- Department of Nuclear Medicine, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Dong Young Lee
- Department of Neuropsychiatry, National Center for Mental Health, Seoul, Republic of Korea
- Department of Psychiatry, Seoul National University College of Medicine, 101 Daehak-ro, Joungno-gu, Seoul, 03080, Republic of Korea
- Department of Neuropsychiatry, Seoul National University Hospital, Seoul, Republic of Korea
- Yong Jeong
- Graduate School of Medical Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea
- Korea Advanced Institute of Science and Technology (KAIST), KI for Health Science Technology, Daejeon, Republic of Korea
|
47
|
Srinivasulu A, Ramanjaneyulu K, Neelaveni R, Karanam SR, Majji S, Jothilingam M, Patnala TR. Advanced lung cancer prediction based on blockchain material using extended CNN. APPLIED NANOSCIENCE 2021. [DOI: 10.1007/s13204-021-01897-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
48
|
Peters AA, Decasper A, Munz J, Klaus J, Loebelenz LI, Hoffner MKM, Hourscht C, Heverhagen JT, Christe A, Ebner L. Performance of an AI based CAD system in solid lung nodule detection on chest phantom radiographs compared to radiology residents and fellow radiologists. J Thorac Dis 2021; 13:2728-2737. [PMID: 34164165 PMCID: PMC8182550 DOI: 10.21037/jtd-20-3522] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
Abstract
Background Despite the decreasing relevance of chest radiography in lung cancer screening, chest radiography is still frequently applied to assess for lung nodules. The aim of the current study was to determine the accuracy of a commercial AI-based CAD system for the detection of artificial lung nodules on chest radiograph phantoms and to compare the performance to that of radiologists in training. Methods Sixty-one anthropomorphic lung phantoms were equipped with 140 randomly deployed artificial lung nodules (5, 8, 10, 12 mm). A random generator chose nodule size and distribution before a two-plane chest X-ray (CXR) of each phantom was performed. Seven blinded radiologists in training (2 fellows, 5 residents) with 2 to 5 years of experience in chest imaging read the CXRs on a PACS workstation independently. Results of the software were recorded separately. The McNemar test was used to compare each radiologist's results to the AI-computer-aided-diagnostic (CAD) software in a per-nodule and a per-phantom approach, and Fleiss' kappa was applied for inter-rater and intra-observer agreements. Results Five out of seven readers showed a significantly higher accuracy than the AI algorithm. The pooled accuracies of the radiologists in a nodule-based and a phantom-based approach were 0.59 and 0.82, respectively, whereas the AI-CAD showed accuracies of 0.47 and 0.67, respectively. The radiologists' average sensitivity for 10 and 12 mm nodules was 0.80 and dropped to 0.66 for 8 mm (P=0.04) and 0.14 for 5 mm nodules (P<0.001). The radiologists and the algorithm both demonstrated a significantly higher sensitivity for peripheral compared to central nodules (0.66 vs. 0.48; P=0.004 and 0.64 vs. 0.094; P=0.025, respectively). Inter-rater agreements were moderate among the radiologists and between radiologists and the AI-CAD software (K'=0.58±0.13 and 0.51±0.1). Intra-observer agreement was calculated for two readers and was almost perfect for the phantom-based approach (K'=0.85±0.05; K'=0.80±0.02) and substantial to almost perfect for the nodule-based approach (K'=0.83±0.02; K'=0.78±0.02). Conclusions The AI-based CAD system as a primary reader performs worse than radiologists at lung nodule detection in chest phantoms. Chest radiography has reasonable accuracy in lung nodule detection if read by a radiologist alone and may be further optimized by an AI-based CAD system as a second reader.
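The McNemar test used to compare each radiologist with the AI-CAD software depends only on the discordant pairs; a sketch of the test statistic with continuity correction (our own helper, which in practice would be compared against a chi-square threshold with one degree of freedom):

```python
def mcnemar_statistic(b, c):
    """McNemar chi-square with continuity correction from discordant pairs:
    b = cases reader 1 classified correctly and reader 2 did not,
    c = the reverse. Large values suggest a real accuracy difference."""
    if b + c == 0:
        return 0.0  # no discordant pairs: no evidence of a difference
    return (abs(b - c) - 1) ** 2 / (b + c)
```

With b = 10 and c = 2, the statistic is 49/12 ≈ 4.08, which exceeds the conventional 3.84 cutoff at P = 0.05.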
Affiliation(s)
- Alan A Peters
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Amanda Decasper
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Jaro Munz
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Jeremias Klaus
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Laura I Loebelenz
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Maximilian Korbinian Michael Hoffner
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Cynthia Hourscht
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Johannes T Heverhagen
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Department of BioMedical Research, Experimental Radiology, University of Bern, Bern, Switzerland; Department of Radiology, The Ohio State University, Columbus, OH, USA
- Andreas Christe
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Lukas Ebner
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
|
49
|
Li S, Liu D. Automated classification of solitary pulmonary nodules using convolutional neural network based on transfer learning strategy. J MECH MED BIOL 2021. [DOI: 10.1142/s0219519421400029] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
This study aimed to propose an effective malignant solitary pulmonary nodule classification method based on an improved Faster R-CNN and a transfer learning strategy. In practice, existing solitary pulmonary nodule classification methods divide lung cancer images into only two categories: normal and cancerous. This study proposed a deep convolutional neural network to classify computed tomography (CT) images of lung cancer into four categories: lung adenocarcinoma, lung squamous cell carcinoma, metastatic lung cancer, and normal. Some high-resolution lung CT images have challenging characteristics, such as a large number of high-density continuous features, small lung nodule targets, and complex image backgrounds. In this study, a CT image sub-block preprocessing strategy was used to extract and enhance nodule features and alleviate these problems. The experimental results showed that the proposed system was effective in resolving the high false-positive rate and long classification time of the original Faster R-CNN detection method. Meanwhile, the transfer learning strategy was used to improve classification efficiency and avoid the overfitting caused by the small number of labeled samples in lung cancer datasets. The classification results were integrated using a majority vote algorithm. On the lung CT images, the proposed method had an average detection accuracy of 89.7% and reduced the misdiagnosis rate sufficiently to meet clinical needs.
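The majority-vote integration mentioned above can be sketched in a few lines; the class labels and votes below are illustrative, not taken from the paper:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions for one CT image by majority
    vote; ties are broken by input order (the earlier prediction wins).

    predictions: non-empty list of class labels, one per model/sub-block.
    """
    counts = Counter(predictions)
    top = max(counts.values())
    for label in predictions:  # preserve input order among tied classes
        if counts[label] == top:
            return label

# Three sub-block/model outputs for one nodule image:
votes = ["adenocarcinoma", "squamous", "adenocarcinoma"]
result = majority_vote(votes)  # "adenocarcinoma"
```

The tie-breaking rule here is one arbitrary choice; a real ensemble would more likely break ties by the models' confidence scores.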
Collapse
Affiliation(s)
- Shiwei Li
- Department of Data Science and Technology, Heilongjiang University, Harbin, Heilongjiang 150080, P. R. China
| | - Dandan Liu
- Department of Oncology, Heilongjiang Province Hospital, Harbin, Heilongjiang 150036, P. R. China
| |
Collapse
|
50
|
Salient detection network for lung nodule detection in 3D Thoracic MRI Images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102404] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
|