1. Liu Z, Yuan Y, Zhang C, Zhu Q, Xu X, Yuan M, Tan W. Hierarchical classification of early microscopic lung nodule based on cascade network. Health Inf Sci Syst 2024; 12:13. PMID: 38404714; PMCID: PMC10891040; DOI: 10.1007/s13755-024-00273-y.
Abstract
Purpose Early-stage lung cancer is typically characterized clinically by isolated lung nodules. Thousands of cases are examined each year, and each case usually contains numerous lung CT slices. Detecting and classifying early microscopic lung nodules is demanding because of their small size and limited characterization capability, so a lung nodule classification model that performs well and is sensitive to microscopic nodules is needed. Methods This paper uses the ResNet34 network as the base classification model and proposes a new cascade classification method that sorts lung nodules into six classes instead of the traditional two or four. It can effectively distinguish six nodule types, including ground-glass and solid nodules, benign and malignant nodules, and nodules with predominantly ground-glass or predominantly solid components. Results The traditional multi-class method and the proposed cascade method were tested on real lung nodule data collected in the clinic. The cascade classification method achieved an accuracy of 80.04%, outperforming the conventional multi-class approach. Conclusions Unlike existing methods that categorize only the benign or malignant nature of lung nodules, the presented approach classifies nodules into six categories more accurately, providing a rapid, precise, and dependable six-class classification method with better categorization accuracy than traditional multi-class classification.
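The cascade idea, decomposing one six-way decision into a sequence of simpler stages, can be sketched as below. This is an illustrative sketch only: the paper's actual stage ordering, class definitions, and trained ResNet34 backbones are not reproduced here, and the stage functions are hypothetical placeholders.

```python
def cascade_classify(nodule, texture_stage, malignancy_stage, component_stage):
    """Route a nodule through a cascade of simple stages instead of one
    flat six-way classifier.

    texture_stage(nodule)    -> 'ground-glass' | 'solid' | 'part-solid'
    malignancy_stage(nodule) -> 'benign' | 'malignant'
    component_stage(nodule)  -> 'ground-glass' | 'solid' (dominant part)
    """
    texture = texture_stage(nodule)
    if texture == "part-solid":
        # Mixed nodules are split by their dominant component.
        return f"part-solid, predominantly {component_stage(nodule)}"
    # Pure ground-glass and solid nodules are split by malignancy.
    return f"{texture}, {malignancy_stage(nodule)}"
```

In the paper each stage would be a trained network; plain functions stand in here so the routing logic itself is visible.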
Affiliation(s)
- Ziang Liu
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, 110189, China
  - College of Computer Science and Engineering, Northeastern University, Shenyang, 110189, China
- Ye Yuan
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, 110189, China
  - College of Computer Science and Engineering, Northeastern University, Shenyang, 110189, China
- Cui Zhang
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, 110189, China
  - College of Computer Science and Engineering, Northeastern University, Shenyang, 110189, China
- Quan Zhu
  - Department of Thoracic Surgery, The First Affiliated Hospital of Nanjing Medical University, Nanjing, 210029, China
- Xinfeng Xu
  - Department of Thoracic Surgery, The First Affiliated Hospital of Nanjing Medical University, Nanjing, 210029, China
- Mei Yuan
  - Department of Thoracic Surgery, The First Affiliated Hospital of Nanjing Medical University, Nanjing, 210029, China
- Wenjun Tan
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, 110189, China
  - College of Computer Science and Engineering, Northeastern University, Shenyang, 110189, China
2. Gao C, Wu L, Wu W, Huang Y, Wang X, Sun Z, Xu M, Gao C. Deep learning in pulmonary nodule detection and segmentation: a systematic review. Eur Radiol 2024. PMID: 38985185; DOI: 10.1007/s00330-024-10907-0.
Abstract
OBJECTIVES The accurate detection and precise segmentation of lung nodules on computed tomography are key prerequisites for early diagnosis and appropriate treatment of lung cancer. This study was designed to compare deep learning-based detection and segmentation methods for pulmonary nodules and to address methodological gaps and biases in the existing literature. METHODS This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, searching PubMed, Embase, Web of Science Core Collection, and the Cochrane Library databases up to May 10, 2023. The Quality Assessment of Diagnostic Accuracy Studies 2 criteria were used to assess the risk of bias, adjusted with the Checklist for Artificial Intelligence in Medical Imaging. Model performance, data sources, and task-focus information were extracted and analyzed. RESULTS After screening, nine studies met the inclusion criteria. These studies were published between 2019 and 2023 and predominantly used public datasets, with the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) and Lung Nodule Analysis 2016 (LUNA16) datasets being the most common. The studies focused on detection, segmentation, and related tasks, primarily using convolutional neural networks for model development. Performance evaluation covered multiple metrics, including sensitivity and the Dice coefficient. CONCLUSIONS This study highlights the potential power of deep learning in lung nodule detection and segmentation. It underscores the importance of standardized data processing, code and data sharing, the value of external test datasets, and the need to balance model complexity and efficiency in future research. CLINICAL RELEVANCE STATEMENT Deep learning demonstrates significant promise in autonomously detecting and segmenting pulmonary nodules. Future research should address methodological shortcomings and variability to enhance its clinical utility. KEY POINTS Deep learning shows potential in the detection and segmentation of pulmonary nodules. There are methodological gaps and biases in the existing literature. Factors such as external validation and transparency affect clinical application.
Affiliation(s)
- Chuan Gao
  - The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
  - The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Linyu Wu
  - The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
  - The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Wei Wu
  - The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
  - The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Yichao Huang
  - The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
  - The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Xinyue Wang
  - The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
  - The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Zhichao Sun
  - The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
  - The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Maosheng Xu
  - The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
  - The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Chen Gao
  - The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
  - The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
3. Li S, Xie J, Liu J, Wu Y, Wang Z, Cao Z, Wen D, Zhang X, Wang B, Yang Y, Lu L, Dong X. Prognostic Value of a Combined Nomogram Model Integrating 3-Dimensional Deep Learning and Radiomics for Head and Neck Cancer. J Comput Assist Tomogr 2024; 48:498-507. PMID: 38438336; DOI: 10.1097/rct.0000000000001584.
Abstract
OBJECTIVE Preoperative prediction of the overall survival (OS) status of patients with head and neck cancer (HNC) is of significant value for individualized treatment and prognosis. This study aims to evaluate the impact of adding 3D deep learning features to radiomics models for predicting 5-year OS status. METHODS Two hundred twenty cases from The Cancer Imaging Archive public dataset were included; 2212 radiomics features and 304 deep features were extracted from each case. Features were selected by univariate analysis and the least absolute shrinkage and selection operator (LASSO), then grouped into a radiomics model containing a positron emission tomography/computed tomography (PET/CT) radiomics feature score, a deep model containing a deep feature score, and a combined model containing the PET/CT radiomics feature score plus the 3D deep feature score. A TumorStage model was also constructed from the initial patient tumor-node-metastasis stage for comparison with the combined model. A nomogram was constructed to analyze the influence of deep features on model performance. Performance was evaluated with the 10-fold cross-validated average area under the receiver operating characteristic curve and calibration curves, and Shapley Additive exPlanations (SHAP) analysis was developed for interpretation. RESULTS The TumorStage model, radiomics model, deep model, and combined model achieved areas under the receiver operating characteristic curve of 0.604, 0.851, 0.840, and 0.895 on the training set and 0.571, 0.849, 0.832, and 0.900 on the test set. The combined model predicted the 5-year OS status of HNC patients better than the radiomics and deep models, provided a favorable fit in calibration curves, and was clinically useful in decision curve analysis. The SHAP summary and force plots visually interpreted the influence of deep and radiomics features on the model results. CONCLUSIONS In predicting 5-year OS status in patients with HNC, 3D deep features provided richer information for the combined model, which outperformed the radiomics and deep models.
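The two-step selection this abstract describes (univariate screening followed by LASSO) and subsequent discrimination scoring can be sketched on synthetic data. Everything below is an illustration under assumptions: the feature counts, the correlation threshold, the regularization strength, and the tiny coordinate-descent LASSO are arbitrary stand-ins, not the paper's settings or implementation.

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=200):
    """Minimal coordinate-descent LASSO (no intercept; illustrative only)."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]          # partial residual
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - alpha * n, 0.0) / col_sq[j]
    return w

rng = np.random.default_rng(0)
n, p = 200, 30
X = rng.normal(size=(n, p))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(float)

# Step 1: univariate screening by absolute correlation with the label.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
kept = np.where(corr > 0.1)[0]

# Step 2: LASSO keeps the features with nonzero coefficients.
w = lasso_cd(X[:, kept], y - y.mean(), alpha=0.05)
selected = kept[w != 0]

# Score the linear predictor with a rank-based (Mann-Whitney) AUC.
score = X[:, selected] @ w[w != 0]
order = np.argsort(score)
ranks = np.empty(n)
ranks[order] = np.arange(1, n + 1)
n_pos = int(y.sum())
auc = (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * (n - n_pos))
```

The informative feature (index 0) survives both filters on this synthetic data, which is the behavior the pipeline is meant to show.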
Affiliation(s)
- Jiayi Xie
  - Department of Automation, Tsinghua University, Beijing, China
- Zhongxiao Wang
  - Hebei International Research Center for Medical-Engineering
- Zhendong Cao
  - Department of Radiology, The Affiliated Hospital of Chengde Medical University, Chengde, Hebei
- Dong Wen
  - Institute of Artificial Intelligence, University of Science and Technology Beijing
- Xiaolei Zhang
  - Hebei International Research Center for Medical-Engineering
- Yifan Yang
  - Faculty of Environment and Life, Beijing University of Technology, Beijing, China
- Lijun Lu
  - School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou
4. Lin CY, Guo SM, Lien JJJ, Tsai TY, Liu YS, Lai CH, Hsu IL, Chang CC, Tseng YL. Development of a modified 3D region proposal network for lung nodule detection in computed tomography scans: a secondary analysis of lung nodule datasets. Cancer Imaging 2024; 24:40. PMID: 38509635; PMCID: PMC10953193; DOI: 10.1186/s40644-024-00683-x.
Abstract
BACKGROUND Low-dose computed tomography (LDCT) has been shown to be useful in early lung cancer detection. This study aimed to develop a novel deep learning model for detecting pulmonary nodules on chest LDCT images. METHODS In this secondary analysis, three lung nodule datasets, Lung Nodule Analysis 2016 (LUNA16), Lung Nodule Received Operation (LNOP), and Lung Nodule in Health Examination (LNHE), were used to train and test deep learning models. The 3D region proposal network (RPN) was modified through a series of pruning experiments for better predictive performance. Each modified deep learning model was evaluated based on sensitivity and the competition performance metric (CPM). The performance of the modified 3D RPN trained on the three datasets was further evaluated by 10-fold cross validation, and temporal validation was conducted to assess its reliability for detecting lung nodules. RESULTS The pruning experiments indicated that the modified 3D RPN, composed of a Cross Stage Partial ResNeXt (CSP-ResNeXt) module, a feature pyramid network (FPN), the nearest anchor method, and post-processing masking, had the best predictive performance, with a CPM of 92.2%. The modified 3D RPN trained on the LUNA16 dataset had the highest CPM (90.1%), followed by the LNOP dataset (CPM: 74.1%) and the LNHE dataset (CPM: 70.2%). When the modified 3D RPN was trained and tested on the same dataset, sensitivities were 94.6%, 84.8%, and 79.7% for LUNA16, LNOP, and LNHE, respectively. In temporal validation, the modified 3D RPN achieved a CPM of 71.6% and a sensitivity of 85.7% on the LNOP test set, and a CPM of 71.7% and a sensitivity of 83.5% on the LNHE test set.
CONCLUSION A modified 3D RPN for detecting lung nodules on LDCT scans was designed and validated; it may serve as a computer-aided diagnosis system to facilitate lung nodule detection and lung cancer diagnosis.
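The competition performance metric (CPM) reported here is, per the LUNA16 convention, the sensitivity averaged over seven false-positive-per-scan operating points on the FROC curve. A minimal sketch of computing it from scored candidate detections (the inputs below are synthetic, not the paper's data):

```python
import numpy as np

# LUNA16 operating points: allowed false positives per scan.
FP_PER_SCAN = (0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0)

def froc_cpm(scores, is_tp, n_nodules, n_scans):
    """Average sensitivity over the seven FROC operating points.

    scores : confidence score of each candidate detection
    is_tp  : whether each candidate hits a true nodule
    """
    is_tp = np.asarray(is_tp, dtype=bool)
    order = np.argsort(np.asarray(scores))[::-1]   # sweep threshold downward
    tps = np.cumsum(is_tp[order])                  # hits as threshold drops
    fps = np.cumsum(~is_tp[order])
    sens = tps / n_nodules
    fp_rate = fps / n_scans
    # Best sensitivity reachable with at most r false positives per scan.
    pts = [sens[fp_rate <= r].max() if (fp_rate <= r).any() else 0.0
           for r in FP_PER_SCAN]
    return float(np.mean(pts))
```

With four candidates, two true nodules, and four scans, the metric interpolates between the low- and high-false-positive regimes exactly as the full LUNA16 evaluation would on the same points.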
Affiliation(s)
- Chia-Ying Lin
  - Department of Medical Imaging, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, No.1, University Road, 701, Tainan City, Taiwan
- Shu-Mei Guo
  - Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
- Jenn-Jier James Lien
  - Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
- Tzung-Yi Tsai
  - Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
- Yi-Sheng Liu
  - Department of Medical Imaging, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, No.1, University Road, 701, Tainan City, Taiwan
- Chao-Han Lai
  - Department of Surgery, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan, Taiwan
- I-Lin Hsu
  - Department of Surgery, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan, Taiwan
- Chao-Chun Chang
  - Division of Thoracic Surgery, Department of Surgery, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan, Taiwan
- Yau-Lin Tseng
  - Division of Thoracic Surgery, Department of Surgery, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan, Taiwan
5. Abdulahi AT, Ogundokun RO, Adenike AR, Shah MA, Ahmed YK. PulmoNet: a novel deep learning based pulmonary diseases detection model. BMC Med Imaging 2024; 24:51. PMID: 38418987; PMCID: PMC10903074; DOI: 10.1186/s12880-024-01227-2.
Abstract
Pulmonary diseases are pathological conditions that affect respiratory tissues and organs, impairing gas exchange during inhalation and exhalation. They range from mild and self-limiting conditions, such as the common cold and catarrh, to life-threatening ones, such as viral pneumonia (VP), bacterial pneumonia (BP), and tuberculosis, as well as severe acute respiratory syndromes such as coronavirus disease 2019 (COVID-19). The cost of diagnosing and treating pulmonary infections is high, especially in developing countries, and since radiography images (X-ray and computed tomography (CT) scans) have proven beneficial in detecting various pulmonary infections, many machine learning (ML) models and image processing procedures have been used to identify them. Timely and accurate detection can be lifesaving, especially during a pandemic. This paper therefore proposes a deep convolutional neural network (DCNN)-based image detection model, optimized with image augmentation techniques, to detect three pulmonary diseases (COVID-19, bacterial pneumonia, and viral pneumonia). A dataset containing four classes (healthy (10,325), COVID-19 (3,749), BP (883), and VP (1,478)) was used as training/testing data for the proposed model, whose performance indicates high potential in detecting the three classes of pulmonary diseases. The model recorded average detection accuracies of 94%, 95.4%, 99.4%, and 98.3%, and a training/detection time of about 60/50 s, demonstrating the proficiency of the approach compared with traditional texture-descriptor techniques for pulmonary disease recognition from X-ray and CT scan images. This study thus introduces an innovative deep convolutional neural network model to enhance the detection of pulmonary diseases such as COVID-19 and pneumonia using radiography; notable for its accuracy and efficiency, it promises significant advances in medical diagnostics, particularly in developing countries, due to its potential to surpass traditional diagnostic methods.
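Image augmentation of the kind this abstract mentions can be sketched with simple label-preserving geometric transforms. The specific transforms below (flips and 90-degree rotations) are generic examples, not necessarily the ones used in the paper.

```python
import numpy as np

def augment(img, rng):
    """Return a randomly flipped/rotated copy of a 2-D image array.

    Each transform preserves the class label, so the effective training
    set grows without collecting new annotated images.
    """
    if rng.random() < 0.5:
        img = img[:, ::-1]                               # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]                               # vertical flip
    img = np.rot90(img, k=int(rng.integers(0, 4)))       # random 90-degree turn
    return np.ascontiguousarray(img)
```

For square inputs the output shape is unchanged and the pixel values are merely rearranged, which is what makes these transforms safe defaults.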
Affiliation(s)
- AbdulRahman Tosho Abdulahi
  - Department of Computer Science, Institute of Information and Communication Technology, Kwara State Polytechnic, Ilorin, Nigeria
- Roseline Oluwaseun Ogundokun
  - Department of Multimedia Engineering, Kaunas University of Technology, Kaunas, Lithuania
  - Department of Computer Science, Landmark University Omu Aran, Omu Aran, Nigeria
- Ajiboye Raimot Adenike
  - Department of Statistics, Institute of Applied Sciences, Kwara State Polytechnic, Ilorin, Nigeria
- Mohd Asif Shah
  - Department of Economics, Kebri Dehar University, Kebri Dehar, 250, Somali, Ethiopia
  - Centre of Research Impact and Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, 140401, India
  - Chitkara Centre for Research and Development, Chitkara University, Baddi, Himachal Pradesh, 174103, India
- Yusuf Kola Ahmed
  - Department of Biomedical Engineering, University of Ilorin, Ilorin, Nigeria
  - Department of Occupational Therapy, University of Alberta, Edmonton, Canada
6. Estler A, Hauser TK, Mengel A, Brunnée M, Zerweck L, Richter V, Zuena M, Schuhholz M, Ernemann U, Gohla G. Deep Learning Accelerated Image Reconstruction of Fluid-Attenuated Inversion Recovery Sequence in Brain Imaging: Reduction of Acquisition Time and Improvement of Image Quality. Acad Radiol 2024; 31:180-186. PMID: 37280126; DOI: 10.1016/j.acra.2023.05.010.
Abstract
RATIONALE AND OBJECTIVES Fluid-attenuated inversion recovery (FLAIR) imaging plays an increasingly significant role in the detection of brain metastases, with a concomitant increase in the number of magnetic resonance imaging (MRI) examinations. The purpose of this study was therefore to investigate the impact on image quality and diagnostic confidence of an innovative deep learning-based accelerated FLAIR (FLAIR-DLR) sequence of the brain compared with conventional (standard) FLAIR (FLAIR-S) imaging. MATERIALS AND METHODS Seventy consecutive patients with staging cerebral MRIs were retrospectively enrolled in this single-center study. FLAIR-DLR used the same MRI acquisition parameters as the FLAIR-S sequence, except for a higher parallel-imaging acceleration factor (4 instead of 2), which shortened the acquisition time from 2:40 minutes to 1:39 minutes (-38%). Two specialized neuroradiologists evaluated the imaging datasets on a Likert scale from 1 to 4, with 4 the best score, for sharpness, lesion demarcation, artifacts, overall image quality, and diagnostic confidence. Reader image preference and interreader agreement were also assessed. RESULTS The average patient age was 63 ± 11 years. FLAIR-DLR exhibited significantly less image noise than FLAIR-S, with P-values of <.001 and <.05, respectively. Image sharpness and lesion demarcation were rated higher for FLAIR-DLR, with a median score of 4 compared to a median score of 3 for FLAIR-S (P < .001 for both readers). In terms of overall image quality, FLAIR-DLR was rated superior to FLAIR-S, with a median score of 4 vs 3 (P < .001 for both readers). Both readers preferred FLAIR-DLR in 68/70 cases. CONCLUSION Deep learning-accelerated FLAIR brain imaging is feasible, reducing examination time by 38% compared with standard FLAIR imaging while improving image quality, noise, and lesion demarcation.
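Interreader agreement on categorical Likert ratings of this kind is commonly summarized with Cohen's kappa; the abstract does not state which statistic the authors used, so the sketch below is a generic illustration rather than their method.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two readers' ratings."""
    n = len(r1)
    # Observed agreement: fraction of cases both readers scored identically.
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # Expected agreement by chance, from each reader's marginal rating counts.
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1.keys() | c2.keys()) / (n * n)
    return (po - pe) / (1 - pe)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is why it is preferred over raw percent agreement for rating scales.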
Affiliation(s)
- Arne Estler
  - Diagnostic and Interventional Neuroradiology, Department of Radiology, University Hospital Tuebingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Baden-Württemberg, Germany (A.E., T.-K.H., L.Z., V.R., M.Z., U.E., G.G.)
- Till-Karsten Hauser
  - Diagnostic and Interventional Neuroradiology, Department of Radiology, University Hospital Tuebingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Baden-Württemberg, Germany (A.E., T.-K.H., L.Z., V.R., M.Z., U.E., G.G.)
- Annerose Mengel
  - Department of Neurology & Stroke, Eberhard-Karls University of Tübingen, Tuebingen, Germany (A.M.)
- Merle Brunnée
  - Department of Neuroradiology, Neurological University Clinic, Heidelberg University Hospital, Heidelberg, Germany (M.B.)
- Leonie Zerweck
  - Diagnostic and Interventional Neuroradiology, Department of Radiology, University Hospital Tuebingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Baden-Württemberg, Germany (A.E., T.-K.H., L.Z., V.R., M.Z., U.E., G.G.)
- Vivien Richter
  - Diagnostic and Interventional Neuroradiology, Department of Radiology, University Hospital Tuebingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Baden-Württemberg, Germany (A.E., T.-K.H., L.Z., V.R., M.Z., U.E., G.G.)
- Mario Zuena
  - Diagnostic and Interventional Neuroradiology, Department of Radiology, University Hospital Tuebingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Baden-Württemberg, Germany (A.E., T.-K.H., L.Z., V.R., M.Z., U.E., G.G.)
- Martin Schuhholz
  - Faculty of Medicine, University of Tuebingen, Tübingen, Germany (M.S.)
- Ulrike Ernemann
  - Diagnostic and Interventional Neuroradiology, Department of Radiology, University Hospital Tuebingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Baden-Württemberg, Germany (A.E., T.-K.H., L.Z., V.R., M.Z., U.E., G.G.)
- Georg Gohla
  - Diagnostic and Interventional Neuroradiology, Department of Radiology, University Hospital Tuebingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Baden-Württemberg, Germany (A.E., T.-K.H., L.Z., V.R., M.Z., U.E., G.G.)
7. Rahman H, Khan AR, Sadiq T, Farooqi AH, Khan IU, Lim WH. A Systematic Literature Review of 3D Deep Learning Techniques in Computed Tomography Reconstruction. Tomography 2023; 9:2158-2189. PMID: 38133073; PMCID: PMC10748093; DOI: 10.3390/tomography9060169.
Abstract
Computed tomography (CT) is used in a wide range of medical imaging diagnoses. However, the reconstruction of CT images from raw projection data is inherently complex and subject to artifacts and noise, which compromise image quality and accuracy. Deep learning developments have the potential to address these challenges and improve CT reconstruction. Our research aim is therefore to determine which techniques are used for 3D deep learning in CT reconstruction and to identify the accessible training and validation datasets. The search was performed on five databases, and after careful assessment of each record against the objective and scope of the study, 60 research articles were selected for this review. The review revealed that convolutional neural networks (CNNs), 3D convolutional neural networks (3D CNNs), and deep learning reconstruction (DLR) were the most suitable deep learning algorithms for CT reconstruction. Additionally, two major datasets appropriate for training and developing deep learning systems were identified: 2016 NIH-AAPM-Mayo and MSCT. These datasets are important resources for creating and assessing CT reconstruction models. According to the results, 3D deep learning may increase the effectiveness of CT image reconstruction, boost image quality, and lower radiation exposure, making CT reconstruction more precise and efficient and improving patient outcomes, diagnostic accuracy, and healthcare system productivity.
Affiliation(s)
- Hameedur Rahman
  - Department of Computer Games Development, Faculty of Computing & AI, Air University, E9, Islamabad 44000, Pakistan
- Abdur Rehman Khan
  - Department of Creative Technologies, Faculty of Computing & AI, Air University, E9, Islamabad 44000, Pakistan
- Touseef Sadiq
  - Centre for Artificial Intelligence Research, Department of Information and Communication Technology, University of Agder, Jon Lilletuns vei 9, 4879 Grimstad, Norway
- Ashfaq Hussain Farooqi
  - Department of Computer Science, Faculty of Computing AI, Air University, Islamabad 44000, Pakistan
- Inam Ullah Khan
  - Department of Electronic Engineering, School of Engineering & Applied Sciences (SEAS), Isra University, Islamabad Campus, Islamabad 44000, Pakistan
- Wei Hong Lim
  - Faculty of Engineering, Technology and Built Environment, UCSI University, Kuala Lumpur 56000, Malaysia
8. Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. Wiley Interdiscip Rev Data Min Knowl Discov 2023; 13:e1510. PMID: 38249785; PMCID: PMC10796150; DOI: 10.1002/widm.1510.
Abstract
Over the last decade, deep learning (DL) has driven a paradigm shift in computer vision and image recognition, creating widespread opportunities for using artificial intelligence in research as well as industrial applications. DL has been extensively studied in medical imaging applications, including those related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world population. Pulmonary imaging has been widely investigated to improve our understanding of disease etiologies, early diagnosis, and assessment of disease progression and clinical outcomes. DL has been broadly applied to pulmonary image processing challenges including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases, the roles of imaging in translational and clinical pulmonary research, and applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Affiliation(s)
- Punam K Saha
  - Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA, 52242
9. Siddiqui EA, Chaurasia V, Shandilya M. Classification of lung cancer computed tomography images using a 3-dimensional deep convolutional neural network with multi-layer filter. J Cancer Res Clin Oncol 2023; 149:11279-11294. PMID: 37368121; DOI: 10.1007/s00432-023-04992-9.
Abstract
Lung cancer creates pulmonary nodules in the patient's lung, which may be diagnosed early using computer-aided diagnostics. A novel automated pulmonary nodule diagnosis technique using three-dimensional deep convolutional neural networks and multi-layered filters is presented in this paper. Volumetric computed tomography images are employed for the suggested automated diagnosis. The proposed approach generates three-dimensional feature layers, which retain the links between adjacent slices of the computed tomography images. The use of several activation functions at different levels of the proposed network results in richer feature extraction and efficient classification. The suggested approach divides lung volumetric computed tomography images into malignant and benign categories. Its performance is evaluated using three commonly used datasets in the domain: LUNA16, LIDC-IDRI, and TCIA. The proposed method outperforms the state of the art in terms of accuracy, sensitivity, specificity, F1 score, false-positive rate, false-negative rate, and error rate.
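The idea of assigning a different activation function to each network level can be illustrated with a toy forward pass. This is a numpy stand-in under assumptions: the paper's actual 3D convolutional architecture and its specific choice of activations are not reproduced here.

```python
import numpy as np

# A per-layer activation table; the specific functions are generic examples.
ACTIVATIONS = {
    "relu":       lambda x: np.maximum(x, 0.0),
    "leaky_relu": lambda x: np.where(x > 0, x, 0.01 * x),
    "sigmoid":    lambda x: 1.0 / (1.0 + np.exp(-x)),
}

def forward(x, weights, act_names):
    """Apply each layer's weight matrix followed by that layer's own
    activation, so different network depths use different nonlinearities."""
    for W, name in zip(weights, act_names):
        x = ACTIVATIONS[name](x @ W)
    return x
```

Swapping entries in `act_names` changes the nonlinearity at one depth without touching the rest of the network, which is the design freedom the abstract refers to.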
Affiliation(s)
- Madhu Shandilya
  - Maulana Azad National Institute of Technology, Bhopal, 462003, India
10. Shao J, Feng J, Li J, Liang S, Li W, Wang C. Novel tools for early diagnosis and precision treatment based on artificial intelligence. Chin Med J Pulm Crit Care Med 2023; 1:148-160. PMID: 39171128; PMCID: PMC11332840; DOI: 10.1016/j.pccm.2023.05.001.
Abstract
Lung cancer has the highest mortality rate among all cancers in the world. Hence, early diagnosis and personalized treatment plans are crucial to improving its 5-year survival rate. Chest computed tomography (CT) serves as an essential tool for lung cancer screening, and pathology images are the gold standard for lung cancer diagnosis. However, medical image evaluation relies on manual labor and suffers from missed diagnosis or misdiagnosis, and physician heterogeneity. The rapid development of artificial intelligence (AI) has brought novel opportunities for medical task processing, demonstrating the potential for clinical application in lung cancer diagnosis and treatment. AI technologies, including machine learning and deep learning, have been deployed extensively for lung nodule detection, benign and malignant classification, and subtype identification based on CT images. Furthermore, AI plays a role in the non-invasive prediction of genetic mutations and molecular status to provide the optimal treatment regimen, and can be applied to the assessment of therapeutic efficacy and prognosis of lung cancer patients, enabling precision medicine to become a reality. Meanwhile, histology-based AI models assist pathologists in typing, molecular characterization, and prognosis prediction to enhance the efficiency of diagnosis and treatment. However, the leap to extensive clinical application still faces various challenges, such as data sharing, standardized label acquisition, clinical application regulation, and multimodal integration. Nevertheless, AI holds promising potential in the field of lung cancer to improve cancer care.
Affiliation(s)
- Jun Shao
- Department of Pulmonary and Critical Care Medicine, Med-X Center for Manufacturing, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Jiaming Feng
- West China School of Medicine, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Jingwei Li
- Department of Pulmonary and Critical Care Medicine, Med-X Center for Manufacturing, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Shufan Liang
- Department of Pulmonary and Critical Care Medicine, Med-X Center for Manufacturing, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Weimin Li
- Department of Pulmonary and Critical Care Medicine, Med-X Center for Manufacturing, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Chengdi Wang
- Department of Pulmonary and Critical Care Medicine, Med-X Center for Manufacturing, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China

11
Javed MA, Bin Liaqat H, Meraj T, Alotaibi A, Alshammari M. Identification and Classification of Lungs Focal Opacity Using CNN Segmentation and Optimal Feature Selection. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2023; 2023:6357252. [PMID: 37538561 PMCID: PMC10396675 DOI: 10.1155/2023/6357252] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/14/2022] [Revised: 09/07/2022] [Accepted: 09/26/2022] [Indexed: 08/05/2023]
Abstract
Lung cancer is one of the deadliest cancers worldwide, with a high mortality rate compared to other cancers. A lung cancer patient's survival probability in the late stages is very low; however, if the disease is detected early, the survival rate can be improved. Diagnosing lung cancer early is a complicated task because of the visual similarity of lung nodules to the trachea, vessels, and other surrounding tissues, which leads to misclassification of lung nodules. Therefore, correct identification and classification of nodules is required. Previous studies have used noisy features, which compromises their results. To address this problem, a predictive model has been proposed to accurately detect and classify lung nodules. In the proposed framework, semantic segmentation is first performed to identify the nodules in images from the Lung Image Database Consortium (LIDC) dataset. After segmentation of the nodules, optimal features for classification are extracted, including histograms of oriented gradients (HOG), local binary patterns (LBP), and geometric features. The results show that support vector machines identified the nodules better than the other classifiers, achieving the highest accuracy of 97.8% with a sensitivity of 100%, specificity of 93%, and false-positive rate of 6.7%.
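As a rough illustration of one of the texture descriptors above, a minimal 8-neighbour local binary pattern (LBP) computation might look like the following. The neighbour ordering and the toy patch are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def lbp_code(patch: np.ndarray) -> int:
    """8-neighbour LBP code for the centre pixel of a 3x3 patch.

    Each neighbour >= centre contributes one bit, read clockwise
    starting from the top-left corner.
    """
    c = patch[1, 1]
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (y, x) in enumerate(coords):
        if patch[y, x] >= c:
            code |= 1 << bit
    return code

def lbp_image(img: np.ndarray) -> np.ndarray:
    """LBP code for every interior pixel of a grayscale image."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = lbp_code(img[y:y+3, x:x+3])
    return out

patch = np.array([[5, 9, 1],
                  [3, 4, 7],
                  [2, 6, 8]])
print(lbp_code(patch))  # 59
```

In practice a histogram of these codes over a nodule region would serve as the texture feature vector fed to the SVM.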
Affiliation(s)
- Hannan Bin Liaqat
- Department of Information Technology, Division of Science and Technology University of Education, Township Campus Lahore, Lahore, Pakistan
- Talha Meraj
- Department of Computer Science, COMSATS University Islamabad—Wah Campus, Wah Cantt, Rawalpindi 47040, Pakistan
- Aziz Alotaibi
- Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Majid Alshammari
- Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia

12
Iqbal S, Qureshi AN, Li J, Choudhry IA, Mahmood T. Dynamic learning for imbalanced data in learning chest X-ray and CT images. Heliyon 2023; 9:e16807. [PMID: 37313141 PMCID: PMC10258426 DOI: 10.1016/j.heliyon.2023.e16807] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2023] [Revised: 05/26/2023] [Accepted: 05/29/2023] [Indexed: 06/15/2023] Open
Abstract
Massive annotated datasets are necessary for deep learning networks. When a topic is being researched for the first time, as in the case of a viral epidemic, handling it with limited annotated datasets can be difficult. Additionally, the datasets in this situation are quite imbalanced, with few findings from significant instances of the novel illness. We offer a technique that allows a class-balancing algorithm to learn and detect lung disease signs from chest X-ray and CT images. Deep learning techniques are used to train and evaluate images, enabling the extraction of basic visual attributes. The training objects' characteristics, instances, categories, and relative data modeling are all represented probabilistically. A minority category can be identified in the classification process by using an imbalance-based sample analyzer. To address the imbalance problem, learning samples from the minority class are examined. A Support Vector Machine (SVM) is used to categorize images in clustering. Physicians and medical professionals can use the CNN model to validate their initial assessments of malignant and benign categorization. The proposed class-imbalance technique (3-Phase Dynamic Learning, 3PDL) and parallel CNN model (Hybrid Feature Fusion, HFF) for multiple modalities achieve a high F1 score of 96.83 and a precision of 96.87; this outstanding accuracy and generalization suggest that the approach may be utilized to create a pathologist's assistive tool.
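The paper's 3PDL balancing scheme is not specified here in reproducible detail; as a generic illustration of the class-rebalancing idea it builds on, random oversampling of the minority class can be sketched as follows (function and variable names are my own):

```python
import numpy as np

def oversample_minority(X, y, rng=None):
    """Randomly duplicate minority-class samples until classes are balanced.

    A generic rebalancing baseline, not the paper's 3PDL algorithm.
    """
    rng = rng or np.random.default_rng(0)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    X_parts, y_parts = [X], [y]
    for cls, count in zip(classes, counts):
        if count < target:
            idx = np.flatnonzero(y == cls)
            extra = rng.choice(idx, size=target - count, replace=True)
            X_parts.append(X[extra])
            y_parts.append(y[extra])
    return np.concatenate(X_parts), np.concatenate(y_parts)

X = np.arange(10, dtype=float).reshape(5, 2)   # 5 samples, 2 features
y = np.array([0, 0, 0, 0, 1])                  # class 1 is the minority
Xb, yb = oversample_minority(X, y)
print(np.bincount(yb))  # balanced: [4 4]
```

More sophisticated schemes (SMOTE, class-weighted losses, dynamic curricula) replace the simple duplication step but keep the same goal: preventing the majority class from dominating training.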
Affiliation(s)
- Saeed Iqbal
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Pakistan
- Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Pakistan
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China
- Beijing Engineering Research Center for IoT Software and Systems, 100124, China
- Imran Arshad Choudhry
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Pakistan
- Tariq Mahmood
- Faculty of Information Sciences, University of Education, Vehari Campus, Vehari, 61100, Pakistan
- Artificial Intelligence and Data Analytics (AIDA) Lab, College of Computer & Information Sciences (CCIS), Prince Sultan University, Riyadh, 11586, Kingdom of Saudi Arabia

13
Kumar S, Choudhary S, Jain A, Singh K, Ahmadian A, Bajuri MY. Brain Tumor Classification Using Deep Neural Network and Transfer Learning. Brain Topogr 2023; 36:305-318. [PMID: 37061591 DOI: 10.1007/s10548-023-00953-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Accepted: 03/01/2023] [Indexed: 04/17/2023]
Abstract
In the field of medical imaging, the classification of brain tumors based on histopathological analysis is a laborious, traditional approach. To address this issue, the use of deep learning techniques, specifically Convolutional Neural Networks (CNNs), has become a popular trend in research and development. Our proposed solution is a novel Convolutional Neural Network that leverages transfer learning to classify brain tumors in MRI images as benign or malignant with high accuracy. We evaluated the performance of our proposed model against several existing pre-trained networks, including ResNet, AlexNet, U-Net, and VGG-16. Our results showed a significant improvement in prediction accuracy, precision, recall, and F1-score compared to the existing methods. Our proposed method achieved benign and malignant classification accuracies of 99.30% and 98.40%, respectively, using an improved ResNet-50. Our proposed system enhances image fusion quality and has the potential to aid in more accurate diagnoses.
Affiliation(s)
- Sandeep Kumar
- Department of Electronics & Communication, Sreyas Institute of Engineering and Technology, Hyderabad, India
- Shilpa Choudhary
- Department of Computer Science and Engineering, Neil Gogte Institute of Technology, Hyderabad, India
- Arpit Jain
- Faculty of Engineering & Computing Sciences, Teerthanker Mahaveer University, Moradabad, India
- Karan Singh
- School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India
- Ali Ahmadian
- Decisions Lab, Mediterranea University of Reggio Calabria, Reggio Calabria, Italy
- Department of Computer Science and Mathematics, Lebanese American University, Beirut, Lebanon
- Mohd Yazid Bajuri
- Department of Orthopaedics and Traumatology, Faculty of Medicine, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia

14
Sebastian AE, Dua D. Lung Nodule Detection via Optimized Convolutional Neural Network: Impact of Improved Moth Flame Algorithm. SENSING AND IMAGING 2023; 24:11. [PMID: 36936054 PMCID: PMC10009866 DOI: 10.1007/s11220-022-00406-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/12/2021] [Revised: 09/30/2022] [Accepted: 11/02/2022] [Indexed: 06/18/2023]
Abstract
Lung cancer is a high-risk disease that affects people all over the world, and lung nodules are the most common sign of early lung cancer. Since early identification of lung cancer can considerably improve a patient's chances of survival, an accurate and efficient nodule detection system is essential. Automatic lung nodule recognition reduces radiologists' workload, as well as the risk of misdiagnosis and missed diagnoses. Hence, this article develops a new lung nodule detection model with four stages: image pre-processing, segmentation, feature extraction, and classification. Pre-processing is the first step, in which the input image is subjected to a series of operations. Then, the Otsu thresholding model is used to segment the pre-processed images. In the third stage, LBP features are extracted and then classified via an optimized Convolutional Neural Network (CNN), in which the activation function and convolutional layer count are optimally tuned via a proposed algorithm known as Improved Moth Flame Optimization (IMFO). Finally, the benefit of the scheme is validated by analysis in terms of certain measures. In particular, the accuracy of the proposed work is 6.85%, 2.91%, 1.75%, 0.73%, 1.83%, and 4.05% superior to the extant SVM, KNN, CNN, MFO, WTEEB, and GWO + FRVM methods, respectively.
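The Otsu thresholding step used for segmentation above can be sketched with a textbook implementation: pick the gray level that maximises the between-class variance of the image histogram. The toy image below is an illustrative stand-in for a CT slice, not the paper's data.

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Return the 8-bit threshold maximising between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal toy image: dark background (~20) and a bright "nodule" (~200).
img = np.full((10, 10), 20, dtype=np.uint8)
img[3:7, 3:7] = 200
t = otsu_threshold(img)
mask = img >= t
print(t, mask.sum())
```

On this bimodal image the threshold lands between the two modes and the mask isolates the 16 bright pixels.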
Affiliation(s)
- Disha Dua
- Indira Gandhi Delhi Technical University for Women, Delhi, India

15
Hussain Ali Y, Chinnaperumal S, Marappan R, Raju SK, Sadiq AT, Farhan AK, Srinivasan P. Multi-Layered Non-Local Bayes Model for Lung Cancer Early Diagnosis Prediction with the Internet of Medical Things. Bioengineering (Basel) 2023; 10:bioengineering10020138. [PMID: 36829633 PMCID: PMC9952033 DOI: 10.3390/bioengineering10020138] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Revised: 01/07/2023] [Accepted: 01/11/2023] [Indexed: 01/22/2023] Open
Abstract
The Internet of Things (IoT) has been influential in predicting major diseases in current practice. The deep learning (DL) technique is vital in monitoring and controlling the functioning of the healthcare system and ensuring an effective decision-making process. In this study, we aimed to develop a framework implementing the IoT and DL to identify lung cancer. The accurate and efficient prediction of disease is a challenging task. The proposed model deploys a DL process with a multi-layered non-local Bayes (NL Bayes) model to manage the process of early diagnosis. The Internet of Medical Things (IoMT) could be useful in determining factors that could enable the effective sorting of quality values through the use of sensors and image processing techniques. We studied the proposed model by analyzing its results with regard to specific attributes such as accuracy, quality, and system process efficiency. In this study, we aimed to overcome problems in the existing process through a practical computational comparison. The proposed model provided a low error rate (2%, 5%) and an increase in the number of instance values. The experimental results led us to conclude that the proposed model can make predictions based on images with high sensitivity and better precision values compared with other methods. The proposed model achieved the expected accuracy (81%, 95%), the expected specificity (80%, 98%), and the expected sensitivity (80%, 99%). This model is adequate for real-time health monitoring systems in the prediction of lung cancer and can enable effective decision-making with the use of DL techniques.
Affiliation(s)
- Yossra Hussain Ali
- Department of Computer Sciences, University of Technology, Baghdad 10066, Iraq
- Seelammal Chinnaperumal
- Department of Computer Science and Engineering, Solamalai College of Engineering, Madurai 625020, India
- Raja Marappan
- School of Computing, Sastra Deemed University, Thanjavur 613401, India
- Sekar Kidambi Raju
- School of Computing, Sastra Deemed University, Thanjavur 613401, India
- Ahmed T. Sadiq
- Department of Computer Sciences, University of Technology, Baghdad 10066, Iraq
- Alaa K. Farhan
- Department of Computer Sciences, University of Technology, Baghdad 10066, Iraq

16
Gassenmaier S, Warm V, Nickel D, Weiland E, Herrmann J, Almansour H, Wessling D, Afat S. Thin-Slice Prostate MRI Enabled by Deep Learning Image Reconstruction. Cancers (Basel) 2023; 15:578. [PMID: 36765539 PMCID: PMC9913660 DOI: 10.3390/cancers15030578] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Revised: 01/08/2023] [Accepted: 01/13/2023] [Indexed: 01/20/2023] Open
Abstract
OBJECTIVES Thin-slice prostate MRI might be beneficial for prostate cancer diagnostics. However, prolongation of acquisition time is a major drawback of thin-slice imaging. Therefore, the purpose of this study was to investigate the impact of a thin-slice deep learning accelerated T2-weighted (w) TSE imaging sequence (T2DLR) of the prostate as compared to conventional T2w TSE imaging (T2S). MATERIALS AND METHODS Thirty patients were included in this prospective study at one university center after obtaining written informed consent. T2S (3 mm slice thickness) was acquired first in three orthogonal planes, followed by thin-slice T2DLR (2 mm slice thickness) in the axial plane. Acquisition time of axial conventional T2S was 4:12 min compared to 4:37 min for T2DLR. Imaging datasets were evaluated by two radiologists using a Likert scale ranging from 1 to 4 (4 being the best) for the following parameters: sharpness, lesion detectability, artifacts, overall image quality, and diagnostic confidence. Furthermore, preference for T2S versus T2DLR was evaluated. RESULTS The mean patient age was 68 ± 8 years. Sharpness of images and lesion detectability were rated better in T2DLR, with a median of 4 versus a median of 3 in T2S (p < 0.001 for both readers). Image noise was evaluated to be significantly worse in T2DLR as compared to T2S (p < 0.001 and p = 0.021, respectively). Overall image quality was also evaluated to be superior in T2DLR versus T2S, with a median of 4 versus 3 (p < 0.001 for both readers). Both readers chose T2DLR in 29 cases as their preference. CONCLUSIONS Thin-slice T2DLR of the prostate provides a significant improvement of image quality without significant prolongation of acquisition time.
Affiliation(s)
- Sebastian Gassenmaier
- Department of Diagnostic and Interventional Radiology, Eberhard Karls University Tuebingen, Hoppe-Seyler-Strasse 3, 72076 Tuebingen, Germany
- Verena Warm
- Institute for Pathology and Neuropathology, University Hospital of Tuebingen, Eberhard Karls University Tuebingen, 72076 Tuebingen, Germany
- Dominik Nickel
- MR Applications Predevelopment, Siemens Healthcare GmbH, Allee am Roethelheimpark 2, 91052 Erlangen, Germany
- Elisabeth Weiland
- MR Applications Predevelopment, Siemens Healthcare GmbH, Allee am Roethelheimpark 2, 91052 Erlangen, Germany
- Judith Herrmann
- Department of Diagnostic and Interventional Radiology, Eberhard Karls University Tuebingen, Hoppe-Seyler-Strasse 3, 72076 Tuebingen, Germany
- Haidara Almansour
- Department of Diagnostic and Interventional Radiology, Eberhard Karls University Tuebingen, Hoppe-Seyler-Strasse 3, 72076 Tuebingen, Germany
- Daniel Wessling
- Department of Diagnostic and Interventional Radiology, Eberhard Karls University Tuebingen, Hoppe-Seyler-Strasse 3, 72076 Tuebingen, Germany
- Saif Afat
- Department of Diagnostic and Interventional Radiology, Eberhard Karls University Tuebingen, Hoppe-Seyler-Strasse 3, 72076 Tuebingen, Germany

17
Krishnapriya S, Karuna Y. Pre-trained deep learning models for brain MRI image classification. Front Hum Neurosci 2023; 17:1150120. [PMID: 37151901 PMCID: PMC10157370 DOI: 10.3389/fnhum.2023.1150120] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2023] [Accepted: 03/06/2023] [Indexed: 05/09/2023] Open
Abstract
Brain tumors are serious conditions caused by uncontrolled and abnormal cell division. Tumors can have devastating implications if not accurately and promptly detected. Magnetic resonance imaging (MRI) is one of the methods frequently used to detect brain tumors owing to its excellent resolution. In the past few decades, substantial research has been conducted in the field of classifying brain images, ranging from traditional methods to deep-learning techniques such as convolutional neural networks (CNNs). To accomplish classification, machine-learning methods require manually created features. In contrast, a CNN achieves classification by extracting visual features from unprocessed images. The size of the training dataset has a significant impact on the features that a CNN extracts, and a CNN tends to overfit when the dataset is small. Deep CNNs (DCNNs) with transfer learning have therefore been developed. The aim of this work was to investigate the brain MR image categorization potential of the pre-trained DCNN models VGG-19, VGG-16, ResNet50, and Inception V3 using data augmentation and transfer learning techniques. Validation on the test set using accuracy, recall, precision, and F1 score showed that the pre-trained VGG-19 model with transfer learning exhibited the best performance. In addition, these methods offer end-to-end classification of raw images without the need for manual attribute extraction.
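The transfer-learning recipe above (freeze a pre-trained backbone, train only a small classification head on the target task) can be sketched in plain numpy. Here a fixed random projection stands in for the frozen DCNN backbone; the synthetic data, shapes, and learning rate are all illustrative assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a frozen pre-trained backbone: a fixed projection that
# is never updated during training.
W_backbone = rng.normal(size=(64, 8))
def frozen_features(x):             # x: (n, 64) flattened "images"
    return np.tanh(x @ W_backbone)  # (n, 8) features

# Tiny synthetic binary target task.
X = rng.normal(size=(200, 64))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train only the logistic-regression head with gradient descent.
F = frozen_features(X)
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(F @ w + b)))   # sigmoid head
    grad_w = F.T @ (p - y) / len(y)
    grad_b = (p - y).mean()
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

acc = (((1 / (1 + np.exp(-(F @ w + b)))) > 0.5) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

In a real pipeline the backbone would be VGG-19 or ResNet50 weights from ImageNet and the head a new dense layer, but the division of labor is the same: backbone fixed, head trained.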
18
Mridha MF, Prodeep AR, Hoque ASMM, Islam MR, Lima AA, Kabir MM, Hamid MA, Watanobe Y. A Comprehensive Survey on the Progress, Process, and Challenges of Lung Cancer Detection and Classification. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:5905230. [PMID: 36569180 PMCID: PMC9788902 DOI: 10.1155/2022/5905230] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Revised: 10/17/2022] [Accepted: 11/09/2022] [Indexed: 12/23/2022]
Abstract
Lung cancer is the leading cause of cancer deaths worldwide, and the death rate is increasing steadily. The chances of recovering from lung cancer improve when it is detected early. However, because the number of radiologists is limited and they have been working overtime, the growth in image data makes it hard for them to evaluate the images accurately. As a result, many researchers have developed automated ways to predict the growth of cancer cells from medical imaging quickly and accurately. Much previous work addressed computer-aided detection (CADe) and computer-aided diagnosis (CADx) in computed tomography (CT), magnetic resonance imaging (MRI), and X-ray, with the goal of effective detection and segmentation of pulmonary nodules, as well as classifying nodules as malignant or benign. Still, no comprehensive review that includes all aspects of lung cancer has been done. In this paper, every aspect of lung cancer detection is discussed in detail, including datasets, image preprocessing, segmentation methods, optimal feature extraction and selection methods, evaluation metrics, and classifiers. Finally, the study looks into several lung cancer-related issues with possible solutions.
Affiliation(s)
- M. F. Mridha
- Department of Computer Science and Engineering, American International University Bangladesh, Dhaka 1229, Bangladesh
- Akibur Rahman Prodeep
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- A. S. M. Morshedul Hoque
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Aklima Akter Lima
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Muhammad Mohsin Kabir
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Abdul Hamid
- Department of Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Yutaka Watanobe
- Department of Computer Science and Engineering, University of Aizu, Aizuwakamatsu 965-8580, Japan

19
Wang H, Tang N, Zhang C, Hao Y, Meng X, Li J. Practice toward standardized performance testing of computer-aided detection algorithms for pulmonary nodule. Front Public Health 2022; 10:1071673. [PMID: 36568775 PMCID: PMC9768365 DOI: 10.3389/fpubh.2022.1071673] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2022] [Accepted: 11/21/2022] [Indexed: 12/12/2022] Open
Abstract
This study aimed to implement practice toward a standardized protocol for testing the performance of computer-aided detection (CAD) algorithms for pulmonary nodules. A test dataset was established according to a standardized procedure, including data collection, curation, and annotation. Six types of pulmonary nodules were manually annotated as the reference standard. Three specific rules to match algorithm output with the reference standard were applied and compared. These rules were: (1) "center hit" [whether the center of the algorithm-highlighted region of interest (ROI) hit the ROI of the reference standard]; (2) "center distance" (whether the distance between the algorithm-highlighted ROI center and the reference standard center was below a certain threshold); (3) "area overlap" (whether the overlap between the algorithm-highlighted ROI and the reference standard was above a certain threshold). Performance metrics were calculated and the results were compared among ten algorithms under test (AUTs). The test set currently consists of CT sequences from 593 patients. Under the "center hit" rule, the average recall rate, average precision, and average F1 score of the ten algorithms under test were 54.68, 38.19, and 42.39%, respectively. Correspondingly, the results under the "center distance" rule were 55.43, 38.69, and 42.96%, and the results under the "area overlap" rule were 40.35, 27.75, and 31.13%. Among the six types of pulmonary nodules, the AUTs showed the highest miss rate for pure ground-glass nodules, with an average of 59.32%, followed by pleural nodules and solid nodules, with averages of 49.80 and 42.21%, respectively. The algorithm testing results changed with the specific matching method adopted in the testing process. The AUTs showed uneven performance on different types of pulmonary nodules. This centralized testing protocol supports the comparison between algorithms with similar intended use, and helps evaluate algorithm performance.
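The three matching rules can be made concrete with axis-aligned 3D bounding boxes. The box encoding and thresholds below are illustrative assumptions; the protocol itself does not prescribe these exact data structures.

```python
import numpy as np

def center(box):
    """Box encoded as (z0, y0, x0, z1, y1, x1); returns its centre point."""
    b = np.asarray(box, dtype=float)
    return (b[:3] + b[3:]) / 2

def center_hit(alg_box, ref_box):
    """Rule 1: centre of the algorithm ROI falls inside the reference ROI."""
    c = center(alg_box)
    r = np.asarray(ref_box, dtype=float)
    return bool(np.all(c >= r[:3]) and np.all(c <= r[3:]))

def center_distance(alg_box, ref_box, threshold):
    """Rule 2: distance between the two ROI centres is below a threshold."""
    return float(np.linalg.norm(center(alg_box) - center(ref_box))) < threshold

def area_overlap(alg_box, ref_box, threshold):
    """Rule 3: intersection-over-union of the two ROIs exceeds a threshold."""
    a, r = np.asarray(alg_box, float), np.asarray(ref_box, float)
    lo = np.maximum(a[:3], r[:3])
    hi = np.minimum(a[3:], r[3:])
    inter = np.prod(np.clip(hi - lo, 0, None))
    union = np.prod(a[3:] - a[:3]) + np.prod(r[3:] - r[:3]) - inter
    return inter / union > threshold

ref = (0, 0, 0, 10, 10, 10)     # reference-standard nodule box
alg = (2, 2, 2, 12, 12, 12)     # algorithm-highlighted box, offset by 2 voxels
print(center_hit(alg, ref), center_distance(alg, ref, 5.0),
      area_overlap(alg, ref, 0.3))  # True True True
```

Note how the same detection can count as a hit or a miss depending on the rule and threshold, which is exactly why the reported recall and precision vary across the three rules.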
Affiliation(s)
- Hao Wang
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Na Tang
- School of Bioengineering, Chongqing University, Chongqing, China
- Chao Zhang
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Ye Hao
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Xiangfeng Meng
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Jiage Li
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China

20
Multi-instance learning based on spatial continuous category representation for case-level meningioma grading in MRI images. APPL INTELL 2022. [DOI: 10.1007/s10489-022-04114-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
21
Usman M, Zia T, Tariq A. Analyzing Transfer Learning of Vision Transformers for Interpreting Chest Radiography. J Digit Imaging 2022; 35:1445-1462. [PMID: 35819537 PMCID: PMC9274969 DOI: 10.1007/s10278-022-00666-z] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2021] [Revised: 05/28/2022] [Accepted: 06/03/2022] [Indexed: 12/01/2022] Open
Abstract
Limited availability of medical imaging datasets is a vital limitation when using "data-hungry" deep learning to gain performance improvements. To deal with this issue, transfer learning has become a de facto standard, where a convolutional neural network (CNN) pre-trained, typically on natural images (e.g., ImageNet), is fine-tuned on medical images. Meanwhile, pre-trained transformers, which are self-attention-based models, have become the de facto standard in natural language processing (NLP) and the state of the art in image classification due to their powerful transfer-learning abilities. Inspired by the success of transformers in NLP and image classification, large-scale transformers (such as the vision transformer) are trained on natural images. Based on these recent developments, this research explores the efficacy of pre-trained natural-image transformers for medical images. Specifically, we analyze a pre-trained vision transformer on the CheXpert and pediatric pneumonia datasets. We use standard CNN models, including VGGNet and ResNet, as baselines. By examining the acquired representations and results, we discover that transfer learning from the pre-trained vision transformer shows improved results compared to pre-trained CNNs, which demonstrates a greater transfer ability of transformers in medical imaging.
Affiliation(s)
- Mohammad Usman
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Tehseen Zia
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Medical Imaging and Diagnostic Center, National Center for Artificial Intelligence, Islamabad, Pakistan
- Ali Tariq
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan

22
Niu C, Wang G. Unsupervised contrastive learning based transformer for lung nodule detection. Phys Med Biol 2022; 67:10.1088/1361-6560/ac92ba. [PMID: 36113445 PMCID: PMC10040209 DOI: 10.1088/1361-6560/ac92ba] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Accepted: 09/16/2022] [Indexed: 11/12/2022]
Abstract
Objective. Early detection of lung nodules with computed tomography (CT) is critical for the longer survival of lung cancer patients and better quality of life. Computer-aided detection/diagnosis (CAD) is proven valuable as a second or concurrent reader in this context. However, accurate detection of lung nodules remains a challenge for such CAD systems and even radiologists due to not only the variability in size, location, and appearance of lung nodules but also the complexity of lung structures. This leads to a high false-positive rate with CAD, compromising its clinical efficacy. Approach. Motivated by recent computer vision techniques, here we present a self-supervised region-based 3D transformer model to identify lung nodules among a set of candidate regions. Specifically, a 3D vision transformer is developed that divides a CT volume into a sequence of non-overlapping cubes, extracts embedding features from each cube with an embedding layer, and analyzes all embedding features with a self-attention mechanism for the prediction. To effectively train the transformer model on a relatively small dataset, a region-based contrastive learning method is used to boost performance by pre-training the 3D transformer with public CT images. Results. Our experiments show that the proposed method can significantly improve the performance of lung nodule screening in comparison with commonly used 3D convolutional neural networks. Significance. This study demonstrates a promising direction for improving the performance of current CAD systems for lung nodule detection.
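The cube-sequence step (dividing a CT volume into non-overlapping cubes before the embedding layer) can be sketched with a reshape/transpose. The volume shape and cube size here are toy values, not the model's actual configuration.

```python
import numpy as np

def volume_to_cubes(vol: np.ndarray, cube: int) -> np.ndarray:
    """Split a CT volume into a sequence of non-overlapping cubes.

    Returns an (n_cubes, cube**3) array: one flattened cube per row,
    the raw token inputs a transformer embedding layer would consume.
    """
    d, h, w = vol.shape
    assert d % cube == 0 and h % cube == 0 and w % cube == 0
    v = vol.reshape(d // cube, cube, h // cube, cube, w // cube, cube)
    v = v.transpose(0, 2, 4, 1, 3, 5)   # group the three cube axes together
    return v.reshape(-1, cube ** 3)     # one row per cube

vol = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
tokens = volume_to_cubes(vol, 2)
print(tokens.shape)  # (8, 8): eight 2x2x2 cubes, flattened
```

Each row would then be linearly projected to an embedding vector, after which self-attention operates across the cube sequence exactly as across word tokens in NLP.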
Affiliation(s)
- Chuang Niu
- Biomedical Imaging Center, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, New York, United States of America
- Ge Wang
- Biomedical Imaging Center, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, New York, United States of America

23
Priya KV, Peter JD. A federated approach for detecting the chest diseases using DenseNet for multi-label classification. COMPLEX INTELL SYST 2022. [DOI: 10.1007/s40747-021-00474-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
Multi-label disease classification algorithms help to predict various chronic diseases at an early stage. Diverse deep neural networks are applied to multi-label classification problems to foresee multiple mutually non-exclusive classes or diseases. We propose a federated approach for detecting chest diseases using DenseNets for better accuracy in the prediction of various diseases. Chest X-ray images from the Kaggle repository are used as the dataset in the proposed model. This new model is tested with both a sample and the full chest X-ray dataset, and it outperforms existing models in terms of various evaluation metrics. We adopted a transfer learning approach with a pre-trained network to improve performance; for this, we integrated DenseNet121 into our framework. DenseNets have a few focal points as they help to overcome vanishing-gradient issues, boost feature propagation and reuse, and reduce the number of parameters. Furthermore, Grad-CAMs are used as a visualization method to highlight the affected parts on the chest X-ray. Henceforth, the proposed architecture will help predict various diseases from a single chest X-ray and furthermore guide doctors and specialists in taking timely decisions.
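The multi-label setting this abstract relies on (mutually non-exclusive disease classes) implies one independent sigmoid decision per class rather than a single softmax over classes; a minimal numpy sketch, where the label names and decision threshold are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# illustrative subset of chest X-ray finding labels (not the paper's label set)
LABELS = ["atelectasis", "effusion", "infiltration", "pneumonia"]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_labels(logits, threshold: float = 0.5) -> list:
    """Multi-label prediction: each class is thresholded independently,
    so one image may carry zero, one, or several disease labels."""
    probs = sigmoid(np.asarray(logits, dtype=float))
    return [lab for lab, p in zip(LABELS, probs) if p >= threshold]

print(predict_labels([2.0, -1.5, 0.3, -3.0]))  # ['atelectasis', 'infiltration']
```

Training such a head uses a per-class binary cross-entropy loss, which is what lets the classes stay mutually non-exclusive.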
24
Neural architecture search for pneumonia diagnosis from chest X-rays. Sci Rep 2022; 12:11309. [PMID: 35788644 PMCID: PMC9252574 DOI: 10.1038/s41598-022-15341-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2021] [Accepted: 06/22/2022] [Indexed: 11/25/2022] Open
Abstract
Pneumonia is one of the diseases that cause the most fatalities worldwide, especially in children. Recently, pneumonia-caused deaths have increased dramatically due to the novel coronavirus global pandemic. Chest X-ray (CXR) images are one of the most readily available and common imaging modalities for the detection and identification of pneumonia. However, the detection of pneumonia from chest radiography is a difficult task even for experienced radiologists. Artificial Intelligence (AI) based systems have great potential in assisting in quick and accurate diagnosis of pneumonia from chest X-rays. The aim of this study is to develop a Neural Architecture Search (NAS) method to find the best convolutional architecture capable of detecting pneumonia from chest X-rays. We propose a Learning by Teaching framework inspired by the teaching-driven learning methodology of humans, and conduct experiments on a pneumonia chest X-ray dataset with over 5000 images. Our proposed method yields an area under the ROC curve (AUC) of 97.6% for pneumonia detection, which improves upon previous NAS methods by 5.1% (absolute).
25
Chen X, Wang X, Zhang K, Fung KM, Thai TC, Moore K, Mannel RS, Liu H, Zheng B, Qiu Y. Recent advances and clinical applications of deep learning in medical image analysis. Med Image Anal 2022; 79:102444. [PMID: 35472844 PMCID: PMC9156578 DOI: 10.1016/j.media.2022.102444] [Citation(s) in RCA: 186] [Impact Index Per Article: 93.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Revised: 03/09/2022] [Accepted: 04/01/2022] [Indexed: 02/07/2023]
Abstract
Deep learning has received extensive research interest in developing new medical image processing algorithms, and deep learning based models have been remarkably successful in a variety of medical imaging tasks to support disease detection and diagnosis. Despite this success, further improvement of deep learning models in medical image analysis is largely bottlenecked by the lack of large, well-annotated datasets. In the past five years, many studies have focused on addressing this challenge. In this paper, we reviewed and summarized these recent studies to provide a comprehensive overview of applying deep learning methods to various medical image analysis tasks. In particular, we emphasize the latest progress and contributions of state-of-the-art unsupervised and semi-supervised deep learning in medical image analysis, which are summarized by application scenario, including classification, segmentation, detection, and image registration. We also discuss major technical challenges and suggest possible solutions for future research efforts.
Affiliation(s)
- Xuxin Chen
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Ximin Wang
- School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Ke Zhang
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Kar-Ming Fung
- Department of Pathology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Theresa C Thai
- Department of Radiology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Kathleen Moore
- Department of Obstetrics and Gynecology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Robert S Mannel
- Department of Obstetrics and Gynecology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Hong Liu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Yuchen Qiu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA

26
Tomassini S, Falcionelli N, Sernani P, Burattini L, Dragoni AF. Lung nodule diagnosis and cancer histology classification from computed tomography data by convolutional neural networks: A survey. Comput Biol Med 2022; 146:105691. [PMID: 35691714 DOI: 10.1016/j.compbiomed.2022.105691] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2022] [Revised: 05/26/2022] [Accepted: 05/31/2022] [Indexed: 11/30/2022]
Abstract
Lung cancer is among the deadliest cancers. Besides lung nodule classification and diagnosis, developing non-invasive systems to classify lung cancer histological types/subtypes may help clinicians make targeted treatment decisions in a timely manner, having a positive impact on patients' comfort and survival rate. As convolutional neural networks have proven responsible for the significant improvement of accuracy in lung cancer diagnosis, with this survey we intend to: show the contribution of convolutional neural networks not only in identifying malignant lung nodules but also in classifying lung cancer histological types/subtypes directly from computed tomography data; point out the strengths and weaknesses of slice-based and scan-based approaches employing convolutional neural networks; and highlight the challenges and prospective solutions for successfully applying convolutional neural networks to such classification tasks. To this aim, we conducted a comprehensive analysis of relevant Scopus-indexed studies involved in lung nodule diagnosis and cancer histology classification up to January 2022, dividing the investigation into convolutional neural network-based approaches fed with planar or volumetric computed tomography data. Although the application of convolutional neural networks to lung nodule diagnosis and cancer histology classification is a valid strategy, some challenges remain, mainly the lack of publicly accessible annotated data, together with the lack of reproducibility and clinical interpretability. We believe that this survey will be helpful for future studies involved in lung nodule diagnosis and cancer histology classification prior to lung biopsy by means of convolutional neural networks.
Affiliation(s)
- Selene Tomassini
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy
- Nicola Falcionelli
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy
- Paolo Sernani
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy
- Laura Burattini
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy
- Aldo Franco Dragoni
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy

27
Tiwari P, Pant B, Elarabawy MM, Abd-Elnaby M, Mohd N, Dhiman G, Sharma S. CNN Based Multiclass Brain Tumor Detection Using Medical Imaging. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:1830010. [PMID: 35774437 PMCID: PMC9239800 DOI: 10.1155/2022/1830010] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Revised: 05/23/2022] [Accepted: 05/30/2022] [Indexed: 02/08/2023]
Abstract
Brain tumors are the 10th leading cause of death and are common among both adults and children. On the basis of texture, region, and shape, there exist various types of tumors, each with very low chances of survival. Wrong classification can lead to severe consequences. As a result, tumors have to be properly divided into many classes or grades, which is where multiclass classification comes into play. Magnetic resonance imaging (MRI) pictures are the most acceptable method for representing the human brain for identifying various tumors. Recent developments in image classification technology have made great strides, and the most popular approach, considered best in this area, is the CNN; therefore, a CNN is used for the brain tumor classification problem in this paper. The proposed model was able to classify brain images into four different classes, namely no tumor (indicating the given MRI of the brain does not contain a tumor), glioma, meningioma, and pituitary tumor. This model produces an accuracy of 99%.
Affiliation(s)
- Pallavi Tiwari
- Department of Computer Science and Engineering, Graphic Era Deemed to be University, Dehradun, India
- Bhaskar Pant
- Department of Computer Science and Engineering, Graphic Era Deemed to be University, Dehradun, India
- Mahmoud M. Elarabawy
- Department of Mathematics, Faculty of Science, Suez Canal University, Ismailia 41522, Egypt
- Mohammed Abd-Elnaby
- Department of Computer Engineering, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Noor Mohd
- Department of Computer Science and Engineering, Graphic Era Deemed to be University, Dehradun, India
- Gaurav Dhiman
- Department of Computer Science and Engineering, Graphic Era Deemed to be University, Dehradun, India

28
Ramana K, Kumar MR, Sreenivasulu K, Gadekallu TR, Bhatia S, Agarwal P, Idrees SM. Early Prediction of Lung Cancers Using Deep Saliency Capsule and Pre-Trained Deep Learning Frameworks. Front Oncol 2022; 12:886739. [PMID: 35785184 PMCID: PMC9247339 DOI: 10.3389/fonc.2022.886739] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Accepted: 05/13/2022] [Indexed: 12/12/2022] Open
Abstract
Lung cancer is the cellular fission of abnormal cells inside the lungs that leads to 72% of total deaths worldwide. Lung cancer is also recognized as one of the leading causes of mortality, with a chance of survival of only 19%. Tumors can be diagnosed using a variety of procedures, including X-rays, CT scans, biopsies, and PET-CT scans. Among these techniques, the computed tomography (CT) scan is considered one of the most powerful tools for an early diagnosis of lung cancers. Recently, machine and deep learning algorithms have gained momentum, and this aids in building strong diagnosis and prediction systems using CT scan images. But achieving the best performance in diagnosis still remains on the darker side of the research. To solve this problem, this paper proposes novel saliency-based capsule networks for better segmentation and employs optimized pre-trained transfer learning for the better prediction of lung cancers from the input CT images. The integration of capsule-based saliency segmentation eventually reduces the risk of computational complexity and overfitting. Additionally, hyperparameters of the pretrained networks are tuned by the whale optimization algorithm to improve the prediction accuracy at the expense of complexity. Extensive experimentation was carried out using the LUNA-16 and LIDC lung image datasets, and various performance metrics such as accuracy, precision, recall, specificity, and F1-score were evaluated and analyzed. Experimental results demonstrate that the proposed framework achieved a peak performance of 98.5% accuracy, 99.0% precision, 98.8% recall, and 99.1% F1-score and outperformed the DenseNet, AlexNet, ResNet-50, ResNet-100, VGG-16, and Inception models.
Affiliation(s)
- Kadiyala Ramana
- Department of Information Technology (IT), Chaitanya Bharathi Institute of Technology, Hyderabad, India
- Madapuri Rudra Kumar
- Department of Computer Science and Engineering (CSE), G. Pullaiah College of Engineering and Technology, Kurnool, India
- K. Sreenivasulu
- Department of Computer Science and Engineering (CSE), G. Pullaiah College of Engineering and Technology, Kurnool, India
- Surbhi Bhatia
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al Hasa, Saudi Arabia
- Parul Agarwal
- Department of Computer Science and Engineering (CSE), Jamia Hamdard, India
- Sheikh Mohammad Idrees
- Department of Computer Science, Institutt for datateknologi og informatikk (IDI), Norwegian University of Science and Technology, Gjøvik, Norway
- *Correspondence: Sheikh Mohammad Idrees

29
LMA-Net: A lesion morphology aware network for medical image segmentation towards breast tumors. Comput Biol Med 2022; 147:105685. [DOI: 10.1016/j.compbiomed.2022.105685] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2022] [Revised: 04/20/2022] [Accepted: 05/30/2022] [Indexed: 11/17/2022]
30
Lakshmi MJ, Nagaraja Rao S. Brain tumor magnetic resonance image classification: a deep learning approach. Soft comput 2022. [DOI: 10.1007/s00500-022-07163-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
31
SegChaNet: A Novel Model for Lung Cancer Segmentation in CT Scans. Appl Bionics Biomech 2022; 2022:1139587. [PMID: 35607427 PMCID: PMC9124150 DOI: 10.1155/2022/1139587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 05/02/2022] [Indexed: 11/17/2022] Open
Abstract
Accurate lung tumor identification is crucial for radiation treatment planning. Due to the low contrast of the lung tumor in computed tomography (CT) images, segmentation of the tumor in CT images is challenging. This paper effectively integrates the U-Net with the channel attention module (CAM) to segment the malignant lung area from the surrounding chest region. The SegChaNet method encodes CT slices of the input lung into feature maps utilizing the trail of encoders. Finally, we explicitly developed a multiscale, dense-feature extraction module to extract multiscale features from the collection of encoded feature maps. We have identified the segmentation map of the lungs by employing the decoders and compared SegChaNet with the state-of-the-art. The model has learned the dense-feature extraction in lung abnormalities, while iterative downsampling followed by iterative upsampling causes the network to remain invariant to the size of the dense abnormality. Experimental results show that the proposed method is accurate and efficient and directly provides explicit lung regions in complex circumstances without postprocessing.
32
Li M, Wu L, Xu G, Duan F, Zhu C. A Robust 3D-Convolutional Neural Network-Based Electroencephalogram Decoding Model for the Intra-Individual Difference. Int J Neural Syst 2022; 32:2250034. [DOI: 10.1142/s0129065722500344] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
33
Zhou J, Xin H. Emerging artificial intelligence methods for fighting lung cancer: a survey. CLINICAL EHEALTH 2022. [DOI: 10.1016/j.ceh.2022.04.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
34
Soni M, Gomathi S, Kumar P, Churi PP, Mohammed MA, Salman AO. Hybridizing Convolutional Neural Network for Classification of Lung Diseases. INTERNATIONAL JOURNAL OF SWARM INTELLIGENCE RESEARCH 2022. [DOI: 10.4018/ijsir.287544] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Pulmonary disease is widespread worldwide; it includes persistent blockage of the lungs, pneumonia, asthma, TB, etc. It is essential to diagnose lung conditions promptly. For this reason, machine learning models were developed. For lung disease prediction, many deep learning technologies, including the CNN and the capsule network, are used. The basic CNN handles rotated, inclined, or otherwise irregularly oriented images poorly. Therefore, by integrating the spatial transformer network (STN) with a CNN, we propose a new hybrid deep learning architecture named STNCNN. The new model is implemented on the NIH chest X-ray image dataset from the Kaggle repository. STNCNN has an accuracy of 69% on the entire dataset, while the accuracy values of vanilla grey, vanilla RGB, and hybrid CNN are 67.8%, 69.5%, and 63.8%, respectively. When the sample dataset is applied, STNCNN takes much less time to train at the cost of slightly less reliable validation. The proposed STNCNN system therefore simplifies the diagnosis of lung disease for both specialists and physicians.
Affiliation(s)
- S. Gomathi
- UK International Qualifications, Ltd., India
- Pankaj Kumar
- Noida Institute of Engineering and Technology, Greater Noida, India

35
Zhang S, Lv B, Zheng X, Li Y, Ge W, Zhang L, Mo F, Qiu J. Dosimetric Study of Deep Learning-Guided ITV Prediction in Cone-beam CT for Lung Stereotactic Body Radiotherapy. Front Public Health 2022; 10:860135. [PMID: 35392465 PMCID: PMC8980420 DOI: 10.3389/fpubh.2022.860135] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Accepted: 02/21/2022] [Indexed: 11/13/2022] Open
Abstract
Purpose The purpose of this study was to evaluate the accuracy of a lung stereotactic body radiotherapy (SBRT) treatment plan with the target of a newly predicted internal target volume (ITVpredict) and the feasibility of its clinical application. ITVpredict was automatically generated by our in-house deep learning model from the cone-beam CT (CBCT) image database. Method A retrospective study of 45 patients who underwent SBRT was involved, and a Mask R-CNN based algorithm helped to predict the internal target volume (ITV) using the CBCT image database. The geometric accuracy of ITVpredict was verified by the Dice Similarity Coefficient (DSC), 3D Motion Range (R3D), Relative Volume Index (RVI), and Hausdorff Distance (HD). PTVpredict was generated from ITVpredict, which was registered and then projected onto free-breath CT (FBCT) images. PTVFBCT was expanded with a margin from the gross tumor volume on free-breath CT images (GTVFBCT). Treatment plans with the targets of the predicted planning target volume on CBCT images (PTVpredict) and the planning target volume on free-breath CT (PTVFBCT) were respectively re-established, and the dosimetric parameters, including the ratio of the volume receiving at least the prescribed dose to the volume of the PTV (R100%), the ratio of the volume receiving at least 50% of the prescribed dose to the volume of the PTV as in the Radiation Therapy Oncology Group (RTOG) 0813 trial (R50%), the Gradient Index (GI), and the maximum dose 2 cm from the PTV (D2cm), were evaluated for Plan4DCT, the plan based on PTVpredict (Planpredict), and the plan based on PTVFBCT (PlanFBCT). Result The geometric results showed a good correlation between ITVpredict and the ITV on 4-dimensional CT (ITV4DCT; DSC = 0.83 ± 0.18). However, the average volume of ITVpredict was 10% less than that of ITV4DCT (p = 0.333).
No significant difference in dose coverage was found in V100% for the ITV, with 99.98 ± 0.04% for ITV4DCT vs. 97.56 ± 4.71% for ITVpredict (p = 0.162). Dosimetry parameters of the PTV, including R100%, R50%, GI, and D2cm, showed no statistically significant difference between the plans (p > 0.05). Conclusion Dosimetric parameters of Planpredict are clinically comparable to those of the original Plan4DCT. This study confirmed that a treatment plan based on the ITVpredict produced by our model could automatically meet clinical requirements. Thus, for patients undergoing lung SBRT, the model has great potential for using CBCT images for ITV contouring in treatment planning.
36
Afat S, Wessling D, Afat C, Nickel D, Arberet S, Herrmann J, Othman AE, Gassenmaier S. Analysis of a Deep Learning-Based Superresolution Algorithm Tailored to Partial Fourier Gradient Echo Sequences of the Abdomen at 1.5 T: Reduction of Breath-Hold Time and Improvement of Image Quality. Invest Radiol 2022; 57:157-162. [PMID: 34510101 DOI: 10.1097/rli.0000000000000825] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The aim of this study was to investigate the feasibility and impact of a novel deep learning superresolution algorithm tailored to partial Fourier acquisitions, retrospectively allowing a theoretical acquisition time reduction, in 1.5 T T1-weighted gradient echo imaging of the abdomen. MATERIALS AND METHODS Fifty consecutive patients who underwent a 1.5 T contrast-enhanced magnetic resonance imaging examination of the abdomen between April and May 2021 were included in this retrospective study. After acquisition of a conventional T1-weighted volumetric interpolated breath-hold examination using Dixon for water-fat separation (VIBEStd), the acquired data were reprocessed including a superresolution algorithm that was optimized for partial Fourier acquisitions (VIBESR). To theoretically accelerate the acquisition process, a more aggressive partial Fourier setting was applied in the VIBESR reconstructions, practically corresponding to a shorter acquisition for the data included in the retrospective reconstruction. Precontrast, dynamic contrast-enhanced, and postcontrast data sets were processed. Image analysis was performed by 2 radiologists independently, in a blinded random order and without access to clinical data, regarding the following criteria using a Likert scale ranging from 1 to 4, with 4 being the best: noise levels, sharpness and contrast of vessels, sharpness and contrast of organs and lymph nodes, overall image quality, diagnostic confidence, and lesion conspicuity. The Wilcoxon signed rank test for paired data was applied to test for significance. RESULTS Mean patient age was 61 ± 14 years. Mean acquisition time for the conventional VIBEStd sequence was 15 ± 1 seconds versus a theoretical 13 ± 1 seconds of acquired data used for the VIBESR reconstruction. Noise levels were evaluated to be better in VIBESR, with a median of 4 (4-4) versus a median of 3 (3-3) in VIBEStd, by both readers (P < 0.001).
Sharpness and contrast of vessels as well as organs and lymph nodes were also evaluated to be superior in VIBESR compared with VIBEStd with a median of 4 (4-4) versus a median of 3 (3-3) (P < 0.001). Diagnostic confidence was also rated superior in VIBESR with a median of 4 (4-4) versus a median of 3.5 (3-4) in VIBEStd by reader 1 and with a median of 4 (4-4) for VIBESR and a median of 4 (4-4) for VIBEStd by reader 2 (both P < 0.001). CONCLUSIONS Image enhancement using deep learning-based superresolution tailored to partial Fourier acquisitions of T1-weighted gradient echo imaging of the abdomen provides improved image quality and diagnostic confidence in combination with more aggressive partial Fourier settings leading to shorter scan time.
Affiliation(s)
- Saif Afat
- From the Departments of Diagnostic and Interventional Radiology
- Daniel Wessling
- From the Departments of Diagnostic and Interventional Radiology
- Carmen Afat
- Internal Medicine I, Eberhard Karls University Tuebingen, Tuebingen
- Dominik Nickel
- MR Applications Predevelopment, Siemens Healthcare GmbH, Erlangen, Germany
- Simon Arberet
- Digital Technology and Innovation, Siemens Healthineers, Princeton, NJ
- Judith Herrmann
- From the Departments of Diagnostic and Interventional Radiology

37
Li Z, Zhang S, Zhang L, Li Y, Zheng X, Fu J, Qiu J. Deep Learning-Based Internal Target Volume (ITV) Prediction Using Cone-Beam CT Images in Lung Stereotactic Body Radiotherapy. Technol Cancer Res Treat 2022; 21:15330338211073380. [PMID: 35188835 PMCID: PMC8864265 DOI: 10.1177/15330338211073380] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Purpose: This study aims to develop a deep learning (DL)-based (Mask R-CNN) method to predict the internal target volume (ITV) in cone-beam computed tomography (CBCT) images for lung stereotactic body radiotherapy (SBRT) patients and to evaluate the prediction accuracy of the model using 4DCT as ground truth. Methods and Materials: This study enrolled 78 phantom cases and 156 patient cases who received SBRT treatment. We used a novel DL model (Mask R-CNN) to identify and delineate the lung tumor ITV in CBCT images. The results of the DL-based method were compared quantitatively with the ground truth (4DCT) using 4 metrics: Dice Similarity Coefficient (DSC), Relative Volume Index (RVI), 3D Motion Range (R3D), and Hausdorff Surface Distance (HD). Paired t-tests were used to determine the differences between the DL-based method and manual contouring. Results: The DSC value for 4DCTMIP versus CBCT is 0.86 ± 0.16 and for 4DCTAVG versus CBCT is 0.83 ± 0.18, indicating a high similarity of tumor delineation in CBCT and 4DCT. The mean Average Precision (mAP), R3D, RVI, and HD values for the phantom evaluation are 0.94 ± 0.04, 1.37 ± 0.36, 0.79 ± 0.02, and 6.79 ± 0.68, respectively. For the patient evaluation, the mAP, R3D, RVI, and HD achieved averaged values of 0.74 ± 0.23, 2.39 ± 1.59, 1.27 ± 0.47, and 17.00 ± 19.89, respectively. These results showed a good correlation between the predicted ITV and the manually contoured ITV. The phantom p-values for RVI, R3D, and HD are 0.75, 0.08, and 0.86, and the patient p-values are 0.53, 0.07, and 0.28, respectively. These p-values for phantom and patient showed no significant difference between the predicted ITV and the physician's manual contouring. Conclusion: The current improved method (Mask R-CNN) yielded a good similarity between the predicted ITV in CBCT and the manual contouring in 4DCT, thus indicating great potential for using CBCT for patient ITV contouring.
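The Dice Similarity Coefficient used as the primary geometric metric in this entry has a compact definition on binary segmentation masks; a minimal numpy sketch (the empty-mask convention is an illustrative choice, not taken from the paper):

```python
import numpy as np

def dice_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement (a convention)
    return 2.0 * np.logical_and(a, b).sum() / denom

# toy 2D masks: predicted contour covers 4 voxels, ground truth 6, overlap 4
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1
truth = np.zeros((4, 4), dtype=int); truth[1:3, 1:4] = 1
print(dice_similarity(pred, truth))  # 2*4 / (4+6) = 0.8
```

The same formula applies voxel-wise to 3D ITV masks; a DSC of 0.83-0.86, as reported here, means the overlapping volume is large relative to the two contours' combined size.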
Affiliation(s)
- Zhen Li
- Fudan University Huadong Hospital, Shanghai, China
- Shanghai Sixth People’s Hospital, Shanghai, China
- Shujun Zhang
- Fudan University Huadong Hospital, Shanghai, China
- Libo Zhang
- Fudan University Huadong Hospital, Shanghai, China
- Ya Li
- Fudan University Huadong Hospital, Shanghai, China
- Jie Fu
- Shanghai Sixth People’s Hospital, Shanghai, China
- Jianjian Qiu
- Fudan University Huadong Hospital, Shanghai, China

38
Suzuki K, Otsuka Y, Nomura Y, Kumamaru KK, Kuwatsuru R, Aoki S. Development and Validation of a Modified Three-Dimensional U-Net Deep-Learning Model for Automated Detection of Lung Nodules on Chest CT Images From the Lung Image Database Consortium and Japanese Datasets. Acad Radiol 2022; 29 Suppl 2:S11-S17. [PMID: 32839096 DOI: 10.1016/j.acra.2020.07.030] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2020] [Revised: 07/13/2020] [Accepted: 07/22/2020] [Indexed: 12/17/2022]
Abstract
RATIONALE AND OBJECTIVES A more accurate lung nodule detection algorithm is needed. We developed a modified three-dimensional (3D) U-net deep-learning model for the automated detection of lung nodules on chest CT images. The purpose of this study was to evaluate the accuracy of the developed modified 3D U-net deep-learning model. MATERIALS AND METHODS In this Health Insurance Portability and Accountability Act-compliant, Institutional Review Board-approved retrospective study, the 3D U-net based deep-learning model was trained using the Lung Image Database Consortium and Image Database Resource Initiative dataset. For internal model validation, we used 89 chest CT scans that were not used for model training. For external model validation, we used 450 chest CT scans taken at an urban university hospital in Japan. Each case included at least one nodule of >5 mm identified by an experienced radiologist. We evaluated model accuracy using the competition performance metric (CPM) (average sensitivity at 1/8, 1/4, 1/2, 1, 2, 4, and 8 false-positives per scan). The 95% confidence interval (CI) was computed by bootstrapping 1000 times. RESULTS In the internal validation, the CPM was 94.7% (95% CI: 89.1%-98.6%). In the external validation, the CPM was 83.3% (95% CI: 79.4%-86.1%). CONCLUSION The modified 3D U-net deep-learning model showed high performance in both internal and external validation.
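The competition performance metric (CPM) reported in this entry is the FROC sensitivity averaged at 1/8, 1/4, 1/2, 1, 2, 4, and 8 false positives per scan; a minimal numpy sketch, assuming sensitivities at the standard rates are read off the measured FROC operating points by linear interpolation (an illustrative choice):

```python
import numpy as np

# the seven standard false-positive rates per scan used by the CPM
CPM_FP_RATES = [0.125, 0.25, 0.5, 1, 2, 4, 8]

def cpm(fp_per_scan: np.ndarray, sensitivity: np.ndarray) -> float:
    """Average FROC sensitivity at the seven standard false-positive rates.

    fp_per_scan / sensitivity are the measured FROC operating points,
    sorted by increasing false positives per scan.
    """
    return float(np.mean(np.interp(CPM_FP_RATES, fp_per_scan, sensitivity)))

# toy FROC curve: sensitivity rises as more false positives are allowed
fp = np.array([0.125, 0.25, 0.5, 1, 2, 4, 8])
sens = np.array([0.70, 0.78, 0.84, 0.88, 0.91, 0.93, 0.94])
print(round(cpm(fp, sens), 4))  # mean of the seven sensitivities = 0.8543
```

Because the CPM compresses a whole FROC curve into one number, it rewards detectors that stay sensitive even at very strict false-positive budgets such as 1/8 per scan.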
Affiliation(s)
- Kazuhiro Suzuki
- Department of Radiology, Juntendo University Faculty of Medicine and Graduate School of Medicine, 3-1-3, Hongo, Bunkyo-ku, Tokyo 113-8431, Japan.
- Yujiro Otsuka
- Department of Radiology, Juntendo University Faculty of Medicine and Graduate School of Medicine, 3-1-3, Hongo, Bunkyo-ku, Tokyo 113-8431, Japan; Plusmann LLC, Tokyo, Japan; Milliman, Inc., Tokyo, Japan
- Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Kanako K Kumamaru
- Department of Radiology, Juntendo University Faculty of Medicine and Graduate School of Medicine, 3-1-3, Hongo, Bunkyo-ku, Tokyo 113-8431, Japan
- Ryohei Kuwatsuru
- Department of Radiology, Juntendo University Faculty of Medicine and Graduate School of Medicine, 3-1-3, Hongo, Bunkyo-ku, Tokyo 113-8431, Japan
- Shigeki Aoki
- Department of Radiology, Juntendo University Faculty of Medicine and Graduate School of Medicine, 3-1-3, Hongo, Bunkyo-ku, Tokyo 113-8431, Japan
39
Mehrotra R, Agrawal R, Ansari MA. Diagnosis of hypercritical chronic pulmonary disorders using dense convolutional network through chest radiography. Multimedia Tools and Applications 2022; 81:7625-7649. [PMID: 35125924] [PMCID: PMC8798313] [DOI: 10.1007/s11042-021-11748-5]
Abstract
Lung-related ailments are prevalent all over the world and majorly include asthma, chronic obstructive pulmonary disease (COPD), tuberculosis, pneumonia, and fibrosis; COVID-19 has now been added to this list. COVID-19 infection poses respiratory complications along with other indications such as cough, high fever, and pneumonia. The WHO has identified lung cancer as a fatal cancer type amongst others, and thus the timely detection of such cancer is pivotal for an individual's health. Since elementary convolutional neural networks have not performed fairly well in identifying atypical image types, we recommend a novel and completely automated framework with a deep learning approach for the recognition and classification of chronic pulmonary disorders (CPD) and COVID-pneumonia using thoracic or chest X-ray (CXR) images. A novel three-step, completely automated approach is presented that first extracts the region of interest from CXR images for preprocessing, then detects infected lung X-rays from normal ones. Thereafter, the infected lung images are further classified into COVID-pneumonia, pneumonia, and other chronic pulmonary disorders (OCPD), which might be utilized in the current scenario to help radiologists substantiate their diagnosis and begin timely treatment of these deadly lung diseases. Finally, the regions in the CXR that are indicative of severe chronic pulmonary disorders such as COVID-19 and pneumonia are highlighted. A detailed investigation of various pivotal parameters based on several experimental outcomes is made here. The presented approach distinguishes normal lung X-rays from infected ones and further classifies the infected images into COVID-pneumonia, pneumonia, and other chronic pulmonary disorders with an utmost accuracy of 96.8%. Several other collective performance measurements validate the superiority of the presented model.
The proposed framework shows effective results in classifying lung images into normal, COVID-pneumonia, pneumonia, and other chronic pulmonary disorders (OCPD). It can be effectively utilized in the current pandemic scenario to help radiologists substantiate their diagnosis and begin timely treatment of these deadly lung diseases.
Affiliation(s)
- Rajat Mehrotra
- Department of Electrical & Electronics Engineering, GL Bajaj Institute of Technology & Management, Gr. Noida, India
- Rajeev Agrawal
- Department of Electronics & Communication Engineering, GL Bajaj Institute of Technology & Management, Gr. Noida, India
- M. A. Ansari
- Department of Electrical Engineering, School of Engineering, Gautam Buddha University, Gr. Noida, India
40
Zhang H, Peng Y, Guo Y. Pulmonary nodules detection based on multi-scale attention networks. Sci Rep 2022; 12:1466. [PMID: 35087078] [PMCID: PMC8795451] [DOI: 10.1038/s41598-022-05372-y]
Abstract
Pulmonary nodules are the main manifestation of early lung cancer, so accurate detection of nodules in CT images is vital for lung cancer diagnosis. A 3D automatic detection system for pulmonary nodules based on multi-scale attention networks is proposed in this paper to exploit the multi-scale features of nodules and avoid network over-fitting. The system consists of two parts: nodule candidate detection (determining the locations of candidate nodules) and false positive reduction (minimizing the number of false positive nodules). Specifically, a 3D multi-scale attention block is designed around a Res2Net structure, a pre-activation operation, and a convolutional quadruplet attention module. It makes full use of the multi-scale information of pulmonary nodules by extracting multi-scale features at a granular level and alleviates over-fitting through pre-activation. A U-Net-like encoder-decoder structure combined with multi-scale attention blocks serves as the backbone network of Faster R-CNN for detection of candidate nodules. A 3D deep convolutional neural network based on multi-scale attention blocks is then designed for false positive reduction. Extensive experiments on the LUNA16 and TianChi competition datasets demonstrate that the proposed approach effectively improves detection sensitivity while controlling the number of false positive nodules, which has clinical application value.
Affiliation(s)
- Hui Zhang
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
- Yanjun Peng
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
- Shandong Province Key Laboratory of Wisdom Mining Information Technology, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
- Yanfei Guo
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
41
Classification of Brain MRI Tumor Images Based on Deep Learning PGGAN Augmentation. Diagnostics (Basel) 2021; 11:2343. [PMID: 34943580] [PMCID: PMC8700152] [DOI: 10.3390/diagnostics11122343]
Abstract
The wide prevalence of brain tumors in all age groups necessitates having the ability to make an early and accurate identification of the tumor type and thus select the most appropriate treatment plans. The application of convolutional neural networks (CNNs) has helped radiologists to more accurately classify the type of brain tumor from magnetic resonance images (MRIs). The learning of CNN suffers from overfitting if a suboptimal number of MRIs are introduced to the system. Recognized as the current best solution to this problem, the augmentation method allows for the optimization of the learning stage and thus maximizes the overall efficiency. The main objective of this study is to examine the efficacy of a new approach to the classification of brain tumor MRIs through the use of a VGG19 features extractor coupled with one of three types of classifiers. A progressive growing generative adversarial network (PGGAN) augmentation model is used to produce ‘realistic’ MRIs of brain tumors and help overcome the shortage of images needed for deep learning. Results indicated the ability of our framework to classify gliomas, meningiomas, and pituitary tumors more accurately than in previous studies with an accuracy of 98.54%. Other performance metrics were also examined.
42
Naik A, Edla DR, Dharavath R. Prediction of Malignancy in Lung Nodules Using Combination of Deep, Fractal, and Gray-Level Co-Occurrence Matrix Features. Big Data 2021; 9:480-498. [PMID: 34191590] [DOI: 10.1089/big.2020.0190]
Abstract
Accurate detection of malignant tumors on lung computed tomography scans is crucial for early diagnosis of lung cancer and hence the faster recovery of patients. Several deep learning methodologies have been proposed for lung tumor detection, especially the convolutional neural network (CNN). However, as a CNN may lose some of the spatial relationships between features, we combine texture features such as fractal features and gray-level co-occurrence matrix (GLCM) features with the CNN features to improve the accuracy of tumor detection. Our framework has two advantages. First, it fuses the advantages of CNN features with hand-crafted features such as fractal and GLCM features to gather spatial information. Second, we reduce the overfitting effect by replacing the softmax layer with a support vector machine classifier. Experiments have shown that fractal and GLCM texture features, when concatenated with deep features extracted from a DenseNet architecture, achieve an accuracy of 95.42%, sensitivity of 97.49%, specificity of 93.97%, and positive predictive value of 95.96%, with an area under the curve of 0.95.
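For reference, a gray-level co-occurrence matrix of the kind fused in this work tabulates how often pairs of pixel intensities co-occur at a fixed offset; Haralick statistics such as contrast and homogeneity are then read off it. A minimal pure-Python sketch on a toy patch (not the authors' implementation; scikit-image's `graycomatrix` provides a production version):

```python
def glcm(img, dr, dc, levels):
    """Normalized co-occurrence matrix P[i][j] for pixel pairs at offset (dr, dc)."""
    p = [[0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                p[img[r][c]][img[r2][c2]] += 1
    total = sum(map(sum, p))
    return [[v / total for v in row] for row in p]

def contrast(p):
    # Weighted by squared gray-level difference: high for sharp transitions.
    return sum(p[i][j] * (i - j) ** 2 for i in range(len(p)) for j in range(len(p)))

def homogeneity(p):
    # High when mass concentrates near the diagonal (smooth texture).
    return sum(p[i][j] / (1 + abs(i - j)) for i in range(len(p)) for j in range(len(p)))

# 4-level toy "nodule patch"; offset (0, 1) pairs each pixel with its right neighbor.
patch = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 2, 2, 2],
         [2, 2, 3, 3]]
P = glcm(patch, 0, 1, levels=4)
print(contrast(P), homogeneity(P))
```

In the paper's pipeline, such statistics (over several offsets and angles) would be concatenated with the deep feature vector before the SVM.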
Affiliation(s)
- Amrita Naik
- Department of Computer Science and Engineering, National Institute of Technology, Ponda, Goa, India
- Damodar Reddy Edla
- Department of Computer Science and Engineering, National Institute of Technology, Ponda, Goa, India
- Ramesh Dharavath
- Department of Computer Science and Engineering, Indian Institute of Technology Dhanbad, Dhanbad, Jharkhand, India
43
Naik A, Edla DR. Lung nodule classification using combination of CNN, second and higher order texture features. Journal of Intelligent & Fuzzy Systems 2021. [DOI: 10.3233/jifs-189847]
Abstract
Lung cancer is the most common cancer throughout the world, and identification of malignant tumors at an early stage is needed for diagnosis and treatment of the patient, thus avoiding progression to a later stage. In recent times, deep learning architectures such as CNNs have shown promising results in effectively identifying malignant tumors in CT scans. In this paper, we combine CNN features with texture features such as Haralick and gray-level run length matrix features to gather the benefits of the high-level and spatial features extracted from lung nodules and improve classification accuracy. These features are classified using an SVM classifier instead of a softmax classifier in order to reduce the overfitting problem. Our model was validated on the LUNA dataset and achieved an accuracy of 93.53%, sensitivity of 86.62%, specificity of 96.55%, and positive predictive value of 94.02%.
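The gray-level run length matrix mentioned here counts runs of consecutive equal-intensity pixels along a direction; higher-order statistics such as short-run emphasis are derived from it. A pure-Python sketch for horizontal runs (illustrative only, not the authors' code):

```python
def glrlm_horizontal(img, levels):
    """R[g][l-1] = number of horizontal runs of gray level g having length l."""
    max_len = max(len(row) for row in img)
    r = [[0] * max_len for _ in range(levels)]
    for row in img:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                r[run_val][run_len - 1] += 1
                run_val, run_len = v, 1
        r[run_val][run_len - 1] += 1  # close the final run of the row
    return r

def short_run_emphasis(r):
    """SRE: large when the texture is dominated by short runs (fine texture)."""
    total = sum(map(sum, r))
    return sum(r[g][l] / (l + 1) ** 2
               for g in range(len(r)) for l in range(len(r[0]))) / total

patch = [[0, 0, 1],
         [2, 2, 2],
         [1, 0, 0]]
R = glrlm_horizontal(patch, levels=3)
print(R, short_run_emphasis(R))
```

A full feature set would repeat this for the vertical and diagonal directions and add long-run emphasis, run percentage, and gray-level non-uniformity.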
Affiliation(s)
- Amrita Naik
- Computer Science and Engineering, National Institute of Technology, Ponda, Goa, India
- Damodar Reddy Edla
- Computer Science and Engineering, National Institute of Technology, Ponda, Goa, India
44
Wu M, Chai Z, Qian G, Lin H, Wang Q, Wang L, Chen H. Development and Evaluation of a Deep Learning Algorithm for Rib Segmentation and Fracture Detection from Multicenter Chest CT Images. Radiol Artif Intell 2021; 3:e200248. [PMID: 34617026] [DOI: 10.1148/ryai.2021200248]
Abstract
Purpose To evaluate the performance of a deep learning-based algorithm for automatic detection and labeling of rib fractures from multicenter chest CT images. Materials and Methods This retrospective study included 10 943 patients (mean age, 55 years; 6418 men) from six hospitals (January 1, 2017 to December 30, 2019), which consisted of patients with and without rib fractures who underwent CT. The patients were separated into one training set (n = 2425), two lesion-level test sets (n = 362 and 105), and one examination-level test set (n = 8051). Free-response receiver operating characteristic (FROC) score (mean sensitivity of seven different false-positive rates), precision, sensitivity, and F1 score were used as metrics to assess rib fracture detection performance. Area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were employed to evaluate the classification accuracy. The mean Dice coefficient and accuracy were used to assess the performance of rib labeling. Results In the detection of rib fractures, the model showed an FROC score of 84.3% on test set 1. For test set 2, the algorithm achieved a detection performance (precision, 82.2%; sensitivity, 84.9%; F1 score, 83.3%) comparable to three radiologists (precision, 81.7%, 98.0%, 92.0%; sensitivity, 91.2%, 78.6%, 69.2%; F1 score, 86.1%, 87.2%, 78.9%). When the radiologists used the algorithm, the mean sensitivity of the three radiologists showed an improvement (from 79.7% to 89.2%), with precision achieving similar performance (from 90.6% to 88.4%). Furthermore, the model achieved an AUC of 0.93 (95% CI: 0.91, 0.94), sensitivity of 87.9% (95% CI: 83.7%, 91.4%), and specificity of 85.3% (95% CI: 74.6%, 89.8%) on test set 3. On a subset of test set 1, the model achieved a Dice score of 0.827 with an accuracy of 96.0% for rib segmentation. 
Conclusion The developed deep learning algorithm was capable of detecting rib fractures, as well as corresponding anatomic locations, on CT images. Keywords: CT, Ribs. © RSNA, 2021.
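The precision, sensitivity, and F1 figures reported in this abstract are related in the standard way: F1 is the harmonic mean of precision and sensitivity. A quick sketch with invented counts (the numbers below are illustrative, not taken from the study):

```python
def detection_metrics(tp, fp, fn):
    """Precision, sensitivity (recall), and their harmonic mean (F1)."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return precision, sensitivity, f1

# Hypothetical counts: 302 true-positive fracture detections,
# 65 false positives, 54 missed fractures.
p, s, f1 = detection_metrics(tp=302, fp=65, fn=54)
print(f"precision={p:.1%} sensitivity={s:.1%} F1={f1:.1%}")
```

This also shows why a reader-plus-algorithm setup can raise sensitivity while precision barely moves: F1 rewards balancing the two rather than maximizing either alone.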
Affiliation(s)
- Mingxiang Wu, Zhizhong Chai, Guangwu Qian, Huangjing Lin, Qiong Wang, Liansheng Wang, Hao Chen
- Department of Radiology, Shenzhen People's Hospital, Luohu, China (M.W.); AI Research Laboratory, Imsight Technology, Nanshan, China (Z.C., H.L.); Peng Cheng Laboratory, Nanshan, China (G.Q.); Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China (Q.W.); Department of Computer Science, School of Informatics, Xiamen University, Xiamen, China (L.W.); and Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong (H.C.)
45

46
Gu Y, Chi J, Liu J, Yang L, Zhang B, Yu D, Zhao Y, Lu X. A survey of computer-aided diagnosis of lung nodules from CT scans using deep learning. Comput Biol Med 2021; 137:104806. [PMID: 34461501] [DOI: 10.1016/j.compbiomed.2021.104806]
Abstract
Lung cancer has one of the highest mortalities of all cancers. According to the National Lung Screening Trial, patients who underwent low-dose computed tomography (CT) scanning once a year for 3 years showed a 20% decline in lung cancer mortality. To further improve the survival rate of lung cancer patients, computer-aided diagnosis (CAD) technology shows great potential. In this paper, we summarize existing CAD approaches applying deep learning to CT scan data for pre-processing, lung segmentation, false positive reduction, lung nodule detection, segmentation, classification and retrieval. Selected papers are drawn from academic journals and conferences up to November 2020. We discuss the development of deep learning, describe several important aspects of lung nodule CAD systems and assess the performance of the selected studies on various datasets, which include LIDC-IDRI, LUNA16, LIDC, DSB2017, NLST, TianChi, and ELCAP. Overall, in the detection studies reviewed, the sensitivity of these techniques is found to range from 61.61% to 98.10%, and the value of the FPs per scan is between 0.125 and 32. In the selected classification studies, the accuracy ranges from 75.01% to 97.58%. The precision of the selected retrieval studies is between 71.43% and 87.29%. Based on performance, deep learning based CAD technologies for detection and classification of pulmonary nodules achieve satisfactory results. However, there are still many challenges and limitations remaining including over-fitting, lack of interpretability and insufficient annotated data. This review helps researchers and radiologists to better understand CAD technology for pulmonary nodule detection, segmentation, classification and retrieval. We summarize the performance of current techniques, consider the challenges, and propose directions for future high-impact research.
Affiliation(s)
- Yu Gu, Jingqian Chi, Jiaqi Liu, Lidong Yang, Baohua Zhang, Dahua Yu, Ying Zhao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Xiaoqi Lu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China; College of Information Engineering, Inner Mongolia University of Technology, Hohhot, 010051, China
47
Generating Virtual Short Tau Inversion Recovery (STIR) Images from T1- and T2-Weighted Images Using a Conditional Generative Adversarial Network in Spine Imaging. Diagnostics (Basel) 2021; 11:1542. [PMID: 34573884] [PMCID: PMC8467788] [DOI: 10.3390/diagnostics11091542]
Abstract
Short tau inversion recovery (STIR) sequences are frequently used in magnetic resonance imaging (MRI) of the spine. However, STIR sequences require a significant amount of scanning time. The purpose of the present study was to generate virtual STIR (vSTIR) images from non-contrast, non-fat-suppressed T1- and T2-weighted images using a conditional generative adversarial network (cGAN). The training dataset comprised 612 studies from 514 patients, and the validation dataset comprised 141 studies from 133 patients. For validation, 100 original STIR and respective vSTIR series were presented to six senior radiologists (blinded for the STIR type) in independent A/B-testing sessions. Additionally, for 141 real or vSTIR sequences, the testers were required to produce a structured report of 15 different findings. In the A/B-test, most testers could not reliably identify the real STIR (mean error of tester 1-6: 41%; 44%; 58%; 48%; 39%; 45%). In the evaluation of the structured reports, vSTIR was equivalent to real STIR in 13 of 15 categories. In the category of the number of STIR hyperintense vertebral bodies (p = 0.08) and in the diagnosis of bone metastases (p = 0.055), the vSTIR was only slightly insignificantly equivalent. By virtually generating STIR images of diagnostic quality from T1- and T2-weighted images using a cGAN, one can shorten examination times and increase throughput.
48
Kaur I, Behl T, Aleya L, Rahman H, Kumar A, Arora S, Bulbul IJ. Artificial intelligence as a fundamental tool in management of infectious diseases and its current implementation in COVID-19 pandemic. Environmental Science and Pollution Research International 2021; 28:40515-40532. [PMID: 34036497] [PMCID: PMC8148397] [DOI: 10.1007/s11356-021-13823-8]
Abstract
The world has never been prepared for global pandemics like COVID-19, which currently poses an immense threat to the public and consistent pressure on global healthcare systems to find optimized tools, equipment, medicines, and techno-driven approaches to retard the spread of infection. The synergized outcome of artificial intelligence paradigms and human-driven control measures has a significant impact on screening, analyzing, predicting, and tracking currently infected individuals, and likely future patients, with precision and accuracy, generating regular international and national data on confirmed, recovered, and death cases, with a current status of 3,820,869 infected patients worldwide. Artificial intelligence is a frontline concept, with time-saving, cost-effective, and productive access to disease management, rendering positive results in physician assistance under high workload conditions, radiology imaging, computational tomography, and database formulation, facilitating the availability of information accessible to researchers all over the globe. The review elaborates the role of Industry 4.0 technology, fast diagnostic procedures, and convolutional neural networks, as aspects of artificial intelligence, in potentiating COVID-19 management and differentiating infection in SARS-CoV-2 positive and negative groups. The review thereby supplements the processes of vaccine development, disease management, diagnosis, patient records, transmission inhibition, social distancing, and future pandemic prediction with the artificial intelligence revolution and smart techno processes, to ensure that the human race wins this battle with COVID-19 and many more combats in the future.
Affiliation(s)
- Ishnoor Kaur
- Chitkara College of Pharmacy, Chitkara University, Chandigarh, Punjab, India
- Tapan Behl
- Chitkara College of Pharmacy, Chitkara University, Chandigarh, Punjab, India
- Lotfi Aleya
- Chrono-Environment Laboratory, UMR CNRS 6249, Bourgogne Franche-Comté University, Besançon, France
- Habibur Rahman
- Department of Global Medical Science, Wonju College of Medicine, Yonsei University, Seoul, South Korea
- Department of Pharmacy, Southeast University, Banani, Dhaka, 1213, Bangladesh
- Arun Kumar
- Chitkara College of Pharmacy, Chitkara University, Chandigarh, Punjab, India
- Sandeep Arora
- Chitkara College of Pharmacy, Chitkara University, Chandigarh, Punjab, India
- Israt Jahan Bulbul
- Department of Pharmacy, Southeast University, Banani, Dhaka, 1213, Bangladesh
49
Yu H, Yang LT, Zhang Q, Armstrong D, Deen MJ. Convolutional neural networks for medical image analysis: State-of-the-art, comparisons, improvement and perspectives. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.04.157]
50
Rich CNN Features for Water-Body Segmentation from Very High Resolution Aerial and Satellite Imagery. Remote Sensing 2021. [DOI: 10.3390/rs13101912]
Abstract
Extracting water-bodies accurately from very high resolution (VHR) remote sensing imagery is a great challenge. The boundaries of a water body are commonly hard to identify due to the complex spectral mixtures caused by aquatic vegetation, distinct lake/river colors, silt near the bank, shadows from surrounding tall plants, and so on. The diversity and semantic information of features need to be increased for better extraction of water-bodies from VHR remote sensing images. In this paper, we address these problems by designing a novel multi-feature extraction and combination module. This module consists of three feature extraction sub-modules based on spatial and channel correlations in feature maps at each scale, which extract complete target information from the local space, the larger space, and the between-channel relationship to achieve a rich feature representation. Simultaneously, to better predict the fine contours of water-bodies, we adopt a multi-scale prediction fusion module. Besides, to solve the semantic inconsistency of feature fusion between the encoding and decoding stages, we apply an encoder-decoder semantic feature fusion module to promote the fusion effect. We carry out extensive experiments on VHR aerial and satellite imagery. The results show that our method achieves state-of-the-art segmentation performance, surpassing classic and recent methods, and is robust in challenging water-body extraction scenarios.
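Segmentation performance of this kind is commonly scored with intersection-over-union (IoU) between the predicted and reference water masks. A minimal sketch on toy binary masks (illustrative only, not the paper's evaluation code):

```python
def iou(pred, truth):
    """Intersection-over-union of two binary masks given as nested lists of 0/1."""
    inter = sum(p & t for prow, trow in zip(pred, truth)
                      for p, t in zip(prow, trow))
    union = sum(p | t for prow, trow in zip(pred, truth)
                      for p, t in zip(prow, trow))
    # Two empty masks agree perfectly by convention.
    return inter / union if union else 1.0

pred  = [[1, 1, 0],
         [1, 0, 0],
         [0, 0, 0]]
truth = [[1, 1, 0],
         [1, 1, 0],
         [0, 0, 0]]
print(iou(pred, truth))  # 3 overlapping pixels / 4 in the union = 0.75
```

Because IoU penalizes both missed water pixels and false water pixels, it is sensitive to exactly the boundary errors (vegetation, silt, shadow) that the abstract highlights.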