1. Jia PF, Li YR, Wang LY, Lu XR, Guo X. Radiomics in esophagogastric junction cancer: A scoping review of current status and advances. Eur J Radiol 2024;177:111577. [PMID: 38905802] [DOI: 10.1016/j.ejrad.2024.111577]
Abstract
PURPOSE: This scoping review aimed to assess the current status of, and recent advances in, radiomics for esophagogastric junction (EGJ) cancer.
METHODS: We systematically searched the PubMed, Embase, and Web of Science databases for radiomics articles on EGJ cancer published between January 18, 2012, and January 15, 2023. Two researchers independently screened the literature and extracted data; study quality was assessed with both the Radiomics Quality Score (RQS) and the METhodological RadiomICs Score (METRICS) tool.
RESULTS: A total of 120 articles were retrieved from the three databases; after screening, six met the inclusion criteria. These studies investigated the role of radiomics in differentiating adenocarcinoma from squamous cell carcinoma, diagnosing T-stage, evaluating HER2 overexpression, predicting response to neoadjuvant therapy, and predicting prognosis in EGJ cancer. The median RQS percentage was 34.7% (range, 22.2%-38.9%); the median METRICS percentage was 71.2% (range, 58.2%-84.9%).
CONCLUSION: Although the RQS and METRICS scores of the included studies differ considerably, the research value of radiomics in EGJ cancer is evident. Future work should pursue more diagnostic, prognostic, and biological-correlation studies in EGJ cancer while placing greater emphasis on standardization and clinical application of radiomics.
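The score percentages above follow directly from the raw checklist totals: the published RQS checklist has a 36-point maximum, so the review's 22.2%-38.9% range corresponds to raw scores of roughly 8 to 14 points. A minimal sketch of the conversion (the helper name is ours, not the review's):

```python
def score_percentage(raw_score: float, max_score: float) -> float:
    """Convert a raw quality-checklist score to a percentage of the maximum."""
    return round(raw_score / max_score * 100, 1)

# RQS totals 36 points; the review's reported range of 22.2%-38.9%
# corresponds to raw scores of roughly 8 to 14 points.
print(score_percentage(8, 36))     # 22.2 (lowest score in the review)
print(score_percentage(12.5, 36))  # 34.7 (median)
print(score_percentage(14, 36))    # 38.9 (highest)
```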
Affiliation(s)
- Ping-Fan Jia
- Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Yu-Ru Li
- Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Lu-Yao Wang
- Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Xiao-Rui Lu
- Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Xing Guo
- Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
2. Fallahpoor M, Chakraborty S, Pradhan B, Faust O, Barua PD, Chegeni H, Acharya R. Deep learning techniques in PET/CT imaging: A comprehensive review from sinogram to image space. Comput Methods Programs Biomed 2024;243:107880. [PMID: 37924769] [DOI: 10.1016/j.cmpb.2023.107880]
Abstract
Positron emission tomography/computed tomography (PET/CT) is increasingly used in oncology, neurology, cardiology, and emerging medical fields. Its success stems from the complementary information that hybrid PET/CT imaging offers, surpassing the capabilities of either modality used in isolation. However, manual image interpretation requires extensive disease-specific knowledge and is a time-consuming part of physicians' daily routines. Deep learning algorithms, much like a practitioner in training, extract knowledge from images to support diagnosis through symptom detection and image enhancement. Existing review papers on PET/CT imaging either include additional modalities or survey many types of AI applications; a comprehensive investigation focused specifically on deep learning applied to PET/CT images has been lacking. This review fills that gap by characterizing the approaches of 99 studies published between 2017 and 2022 that applied deep learning to PET/CT images. We identify the best-performing pre-processing algorithms and the most effective deep learning models reported for PET/CT, and highlight current limitations. Our review underscores the potential of deep learning (DL) in PET/CT imaging, with successful applications in lesion detection, tumor segmentation, and disease classification in both sinogram and image spaces. Common and task-specific pre-processing techniques are also discussed. DL algorithms excel at extracting meaningful features and improve the accuracy and efficiency of diagnosis. However, limitations arise from the scarcity of annotated datasets and from challenges in explainability and uncertainty estimation. Recent DL models, such as attention-based models, generative models, multi-modal models, graph convolutional networks, and transformers, are promising avenues for improving PET/CT studies. Radiomics has also garnered attention for tumor classification and prediction of patient outcomes. Ongoing research is crucial to explore new applications and improve the accuracy of DL models in this rapidly evolving field.
Affiliation(s)
- Maryam Fallahpoor
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Civil and Environmental Engineering, University of Technology Sydney, Ultimo, NSW 2007, Australia
- Subrata Chakraborty
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Civil and Environmental Engineering, University of Technology Sydney, Ultimo, NSW 2007, Australia; School of Science and Technology, Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
- Biswajeet Pradhan
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Civil and Environmental Engineering, University of Technology Sydney, Ultimo, NSW 2007, Australia; Earth Observation Centre, Institute of Climate Change, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
- Oliver Faust
- School of Computing and Information Science, Anglia Ruskin University Cambridge Campus, United Kingdom
- Prabal Datta Barua
- School of Science and Technology, Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia; Faculty of Engineering and Information Technology, University of Technology Sydney, Australia; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Australia
- Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Toowoomba, QLD, Australia
3. Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. Wiley Interdiscip Rev Data Min Knowl Discov 2023;13:e1510. [PMID: 38249785] [PMCID: PMC10796150] [DOI: 10.1002/widm.1510]
Abstract
Over the last decade, deep learning (DL) has driven a paradigm shift in computer vision and image recognition, creating widespread opportunities for applying artificial intelligence in research as well as industrial settings. DL has been extensively studied in medical imaging, including applications related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world's population. Pulmonary imaging has been widely investigated to improve our understanding of disease etiologies, enable early diagnosis, and assess disease progression and clinical outcomes. DL has been broadly applied to pulmonary image processing challenges, including classification, recognition, registration, and segmentation. This paper surveys pulmonary diseases, the roles of imaging in translational and clinical pulmonary research, and applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies, such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Affiliation(s)
- Punam K Saha
- Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242
4. Thanoon MA, Zulkifley MA, Mohd Zainuri MAA, Abdani SR. A Review of Deep Learning Techniques for Lung Cancer Screening and Diagnosis Based on CT Images. Diagnostics (Basel) 2023;13:2617. [PMID: 37627876] [PMCID: PMC10453592] [DOI: 10.3390/diagnostics13162617]
Abstract
Lung cancer is one of the most common and deadly diseases in the world, and only early identification can increase a patient's probability of survival. Computed tomography (CT) imaging, which provides a detailed scan of the lung, is a frequently used modality for lung cancer screening and diagnosis. In line with advances in computer-assisted systems, deep learning techniques have been extensively explored to help interpret CT images for lung cancer identification. The goal of this review is therefore to provide a detailed account of the deep learning techniques developed for screening and diagnosing lung cancer. It covers an overview of deep learning (DL) techniques, the DL techniques proposed for lung cancer applications, and the novelties of the reviewed methods, focusing on the two main methodologies used in this setting: classification and segmentation. The advantages and shortcomings of current deep learning models are also discussed. The resulting analysis demonstrates significant potential for deep learning methods to provide precise and effective computer-assisted lung cancer screening and diagnosis using CT scans. The review closes with a list of potential future directions for improving the application of deep learning, to spearhead the advancement of computer-assisted lung cancer diagnosis systems.
Affiliation(s)
- Mohammad A. Thanoon
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
- System and Control Engineering Department, College of Electronics Engineering, Ninevah University, Mosul 41002, Iraq
- Mohd Asyraf Zulkifley
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
- Muhammad Ammirrul Atiqi Mohd Zainuri
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
- Siti Raihanah Abdani
- School of Computing Sciences, College of Computing, Informatics and Media, Universiti Teknologi MARA, Shah Alam 40450, Malaysia
5. Zhu J, Ye J, Dong L, Ma X, Tang N, Xu P, Jin W, Li R, Yang G, Lai X. Non-invasive prediction of overall survival time for glioblastoma multiforme patients based on multimodal MRI radiomics. Int J Imaging Syst Technol 2023;33:1261-1274. [PMID: 38505467] [PMCID: PMC10946632] [DOI: 10.1002/ima.22869]
Abstract
Glioblastoma multiforme (GBM) is the most common and deadly primary malignant brain tumor. Because GBM is aggressive and shows high biological heterogeneity, overall survival (OS) time is extremely short even with the most aggressive treatment. If OS time could be predicted before surgery, personalized treatment plans could be developed for GBM patients. Magnetic resonance imaging (MRI) is a commonly used diagnostic tool for brain tumors, offering high resolution and good imaging quality. In clinical practice, however, doctors mainly rely on manually segmenting tumor regions in MRI to predict the OS time of GBM patients, which is time-consuming, subjective, and repetitive, limiting the effectiveness of clinical diagnosis and treatment. Accurate automatic segmentation of brain tumor regions in MRI, together with an accurate pre-operative prediction of OS time, is therefore highly desirable for personalized treatment. In this study, we present a multimodal MRI radiomics-based automatic framework for non-invasive prediction of OS time in GBM patients. A modified 3D-UNet model is built to segment tumor subregions in MRI of GBM patients; the radiomic features of the tumor subregions are then extracted, combined with clinical features, and fed into a Support Vector Regression (SVR) model to predict OS time. In our experiments on the BraTS2020, BraTS2019, and BraTS2018 datasets, the framework achieves OS time prediction accuracy competitive with most typical approaches.
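The final stage of the framework described above, regressing OS time on concatenated radiomic and clinical features with an SVR, can be sketched with scikit-learn. The toy data, feature counts, and hyperparameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Toy stand-ins: radiomic features extracted per tumor subregion,
# concatenated with clinical features -- purely synthetic.
radiomic = rng.normal(size=(40, 10))
clinical = rng.normal(size=(40, 2))
X = np.hstack([radiomic, clinical])
y = rng.uniform(100, 500, size=40)  # OS time in days (synthetic)

# Standardize features, then fit an RBF-kernel SVR.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)
pred = model.predict(X[:3])
print(pred.shape)  # (3,)
```

In practice the radiomic features would come from the segmented subregions (e.g., via pyradiomics) and the model would be evaluated on held-out patients.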
Affiliation(s)
- Jingyu Zhu
- Department of Urology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, Hangzhou, China
- Jianming Ye
- First Affiliated Hospital, Gannan Medical University, Ganzhou, China
- Leshui Dong
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
- Xiaofei Ma
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
- Na Tang
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
- Peng Xu
- The Third Affiliated Hospital, Zhejiang Chinese Medical University, Hangzhou, China
- Wei Jin
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
- Ruipeng Li
- Department of Urology, Hangzhou Third People's Hospital, Hangzhou, China
- Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, London, UK
- National Heart and Lung Institute, Imperial College London, London, UK
- Xiaobo Lai
- Department of Urology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, Hangzhou, China
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
6. Ahamed MKU, Islam MM, Uddin MA, Akhter A, Acharjee UK, Paul BK, Moni MA. DTLCx: An Improved ResNet Architecture to Classify Normal and Conventional Pneumonia Cases from COVID-19 Instances with Grad-CAM-Based Superimposed Visualization Utilizing Chest X-ray Images. Diagnostics (Basel) 2023;13:551. [PMID: 36766662] [PMCID: PMC9914155] [DOI: 10.3390/diagnostics13030551]
Abstract
COVID-19 is a severe, contagious respiratory disease that has spread all over the world, with a terrible impact on public health, daily life, and the global economy. Although some developed countries have advanced well in detecting and containing the virus, most developing countries have difficulty detecting COVID-19 cases across large populations. In many countries there is a scarcity of COVID-19 testing kits and other resources due to the rising rate of infections. This deficit of testing resources and the increasing number of daily cases motivated us to develop a deep learning model to aid clinicians and radiologists and to provide timely assistance to patients. In this article, an efficient deep learning-based model for detecting COVID-19 cases from chest X-ray images is proposed and investigated. The proposed model is based on the ResNet50V2 architecture, concatenated with six extra layers to make the model more robust and efficient. Finally, Grad-CAM-based discriminative localization is used to make the detections on radiological images readily interpretable. Two publicly available datasets were gathered from different sources, with class labels for normal, confirmed COVID-19, bacterial pneumonia, and viral pneumonia cases. The proposed model obtained an overall accuracy of 99.51% for the four-class case (COVID-19/normal/bacterial pneumonia/viral pneumonia) on Dataset-2, 96.52% for the three-class case (normal/COVID-19/bacterial pneumonia), and 99.13% for the two-class case (COVID-19/normal) on Dataset-1. This level of accuracy may motivate radiologists to rapidly detect and diagnose COVID-19 cases.
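The Grad-CAM localization mentioned above reduces, at its core, to weighting each convolutional feature map by its spatially pooled gradient, summing, and applying a ReLU. A minimal NumPy sketch of that core computation (the array shapes and data are illustrative, not from the paper; a real implementation would pull feature maps and gradients from the trained network):

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Core Grad-CAM computation; both inputs have shape (K, H, W).

    Each feature map is weighted by its spatially averaged gradient;
    the weighted sum is passed through ReLU and normalized to [0, 1].
    """
    weights = gradients.mean(axis=(1, 2))              # (K,) pooled gradients
    cam = np.tensordot(weights, feature_maps, axes=1)  # (H, W) weighted sum
    cam = np.maximum(cam, 0.0)                         # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize for display
    return cam

# Tiny synthetic example: 4 feature maps of size 8x8.
rng = np.random.default_rng(1)
fmaps = rng.normal(size=(4, 8, 8))
grads = rng.normal(size=(4, 8, 8))
heatmap = grad_cam(fmaps, grads)
print(heatmap.shape)  # (8, 8)
```

The resulting heatmap is upsampled to the input resolution and superimposed on the chest X-ray to show which regions drove the prediction.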
Affiliation(s)
- Md. Khabir Uddin Ahamed
- Department of Computer Science and Engineering, Jagannath University, Dhaka 1100, Bangladesh
- Md Manowarul Islam (corresponding author)
- Department of Computer Science and Engineering, Jagannath University, Dhaka 1100, Bangladesh
- Md. Ashraf Uddin
- Department of Computer Science and Engineering, Jagannath University, Dhaka 1100, Bangladesh
- School of Information Technology, Deakin University, Geelong, VIC 3216, Australia
- Arnisha Akhter
- Department of Computer Science and Engineering, Jagannath University, Dhaka 1100, Bangladesh
- Uzzal Kumar Acharjee
- Department of Computer Science and Engineering, Jagannath University, Dhaka 1100, Bangladesh
- Bikash Kumar Paul
- Department of Information and Communication Technology, Mawlana Bhashani Science and Technology University, Tangail 1902, Bangladesh
- Department of Software Engineering, Daffodil International University, Dhaka 1207, Bangladesh
- Mohammad Ali Moni
- Artificial Intelligence & Data Science, School of Health and Rehabilitation Sciences, Faculty of Health and Behavioural Sciences, The University of Queensland, St. Lucia, QLD 4072, Australia
7. Jeong S, Fischer ML, Breunig H, Marklein AR, Hopkins FM, Biraud SC. Artificial Intelligence Approach for Estimating Dairy Methane Emissions. Environ Sci Technol 2022;56:4849-4858. [PMID: 35363471] [DOI: 10.1021/acs.est.1c08802]
Abstract
California's dairy sector accounts for ∼50% of anthropogenic CH4 emissions in the state's greenhouse gas (GHG) emission inventory. Although the location and herd size of California dairy facilities vary over time, atmospheric inverse modeling studies rely on decade-old facility-scale geospatial information. For the first time, we apply artificial intelligence (AI) to aerial imagery to estimate dairy CH4 emissions from California's San Joaquin Valley (SJV), a region holding ∼90% of the state's dairy population. Using an AI method, we process 316,882 images to estimate facility-scale herd size across the SJV. The AI approach predicts herd sizes that correlate strongly (>95%) with those made by human visual inspection, providing a low-cost alternative to the labor-intensive inventory development process. Using the predicted herd sizes, we estimate the SJV's dairy enteric and manure CH4 emissions for 2018 at 496-763 Gg/yr (mean = 624; 95% confidence). We also apply our AI approach to estimate the CH4 emission reduction achievable through anaerobic digester deployment: we identify 162 large (90th percentile) farms and estimate a reduction potential of 83 Gg CH4/yr from digester adoption at these facilities. The results indicate that our AI approach can also be applied to characterize manure systems (e.g., use of an anaerobic lagoon) and to estimate GHG emissions in other sectors.
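The arithmetic behind such bottom-up estimates is a per-head emission factor multiplied by the AI-predicted herd size, aggregated over facilities. A schematic sketch; the emission factor used below is a hypothetical placeholder, not a value from the study:

```python
def dairy_ch4_gg_per_yr(herd_size: int, kg_ch4_per_cow_yr: float) -> float:
    """Annual enteric + manure CH4 for one facility, in Gg/yr.

    kg_ch4_per_cow_yr is a per-head emission factor; the value used in
    the example below is a hypothetical placeholder, not the study's.
    """
    return herd_size * kg_ch4_per_cow_yr / 1e6  # kg -> Gg

# Example: a 2,000-head facility at an assumed 150 kg CH4/head/yr.
print(dairy_ch4_gg_per_yr(2000, 150.0))  # 0.3 (Gg/yr)
```

Summing this over all AI-detected facilities, with uncertainty on both herd size and the emission factor, yields a range like the study's 496-763 Gg/yr.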
Affiliation(s)
- Seongeun Jeong
- Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, California 94720, United States
- Marc L Fischer
- Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, California 94720, United States
- Hanna Breunig
- Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, California 94720, United States
- Alison R Marklein
- University of California, Riverside, 900 University Avenue, Riverside, California 92521, United States
- Francesca M Hopkins
- University of California, Riverside, 900 University Avenue, Riverside, California 92521, United States
- Sebastien C Biraud
- Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, California 94720, United States
8. Artificial Intelligence Applications on Restaging [18F]FDG PET/CT in Metastatic Colorectal Cancer: A Preliminary Report of Morpho-Functional Radiomics Classification for Prediction of Disease Outcome. Appl Sci (Basel) 2022. [DOI: 10.3390/app12062941]
Abstract
The aim of this study was to investigate textural-feature analysis of [18F]FDG PET/CT images in order to propose radiomics models able to predict early disease progression (PD) and survival outcome in metastatic colorectal cancer (MCC) patients after first adjuvant therapy. To this end, 52 MCC patients who underwent [18F]FDG PET/CT during restaging after first adjuvant therapy were analyzed, with follow-up data recorded for at least 12 months after PET/CT. Radiomics features were extracted from each avid lesion in the PET and low-dose CT images. A hybrid descriptive-inferential method and discriminant analysis (DA) were used for feature selection and predictive model implementation, respectively. The ability of the features to predict PD was assessed per lesion, per patient, and for liver lesions separately; all lesions were also considered together to assess how well the features discriminate liver lesions from other sites. For predicting PD in the whole group of patients, per-lesion analysis of PET features selected only GLZLM_GLNU, while three features were selected from the combined PET/CT data set; combining CT features with PET features yielded better accuracy (AUROC 65.22%). In per-patient analysis, three features were selected from stand-alone PET images and one (HUKurtosis) from the PET/CT data set. Focusing on liver metastases, per-lesion analysis again selected one PET feature (GLZLM_GLNU) from PET images and three features from the PET/CT data set; similarly, per-patient analysis of liver lesions found three PET features and one PET/CT feature (HUKurtosis). For discriminating liver metastases from all other lesions, the best stand-alone PET result was obtained with one feature (SUVbwmin; AUROC 88.91%), and merged PET/CT feature analysis selected two features (AUROC 95.33%). In conclusion, our machine learning model on restaging [18F]FDG PET/CT proved feasible and potentially useful for the predictive evaluation of disease progression in MCC.
9. Laudicella R, Comelli A, Liberini V, Vento A, Stefano A, Spataro A, Crocè L, Baldari S, Bambaci M, Deandreis D, Arico’ D, Ippolito M, Gaeta M, Alongi P, Minutoli F, Burger IA, Baldari S. [68Ga]DOTATOC PET/CT Radiomics to Predict the Response in GEP-NETs Undergoing [177Lu]DOTATOC PRRT: The “Theragnomics” Concept. Cancers (Basel) 2022;14:984. [PMID: 35205733] [PMCID: PMC8870649] [DOI: 10.3390/cancers14040984]
Abstract
Despite impressive results, almost 30% of NETs do not respond to peptide receptor radionuclide therapy (PRRT), and no well-established criteria are available to predict response. We therefore assessed the predictive value of radiomics features from pre-PRRT [68Ga]DOTATOC PET/CT in metastatic GEP-NET. We retrospectively analyzed 324 SSTR-2-positive lesions from 38 metastatic GEP-NET patients (nine G1, 27 G2, and two G3) who underwent restaging [68Ga]DOTATOC PET/CT before complete PRRT with [177Lu]DOTATOC. Clinical, laboratory, and radiological follow-up data were collected for at least six months after the last cycle. Using LifeX, we extracted 65 PET features per lesion; grading, number of PRRT cycles, cumulative activity, and pre- and post-PRRT CgA values were considered as additional clinical features. [68Ga]DOTATOC PET/CT follow-up on the same scanner for each patient determined the disease status of each lesion (progression vs. response, the latter defined as stability, reduction, or disappearance). All PET and clinical features were also correlated with follow-up data in a per-site analysis (liver, lymph nodes, and bone), and for features significantly associated with response, the Δradiomics for each lesion was assessed on follow-up [68Ga]DOTATOC PET/CT performed up to nine months post-PRRT. A statistical pipeline based on point-biserial correlation and logistic regression was used for feature reduction and selection, and discriminant analysis was used to obtain the predictive model, with a k-fold strategy splitting the data into training and validation sets. After reduction and selection, HISTO_Skewness and HISTO_Kurtosis predicted response with areas under the receiver operating characteristic curve (AUC), sensitivities, and specificities of 0.745, 80.6%, 67.2% and 0.722, 61.2%, 75.9%, respectively. A combination of three features (HISTO_Skewness, HISTO_Kurtosis, and grading) did not significantly improve the AUC (0.744). SUVmax, by contrast, could not predict response to PRRT (p = 0.49, AUC 0.523). This preliminary "theragnomics" model proved superior to conventional quantitative parameters for predicting the response of GEP-NET lesions in patients treated with complete [177Lu]DOTATOC PRRT, regardless of lesion site.
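The feature-reduction step pairs each candidate radiomic feature with the binary response label via the point-biserial correlation, which is mathematically identical to the Pearson correlation with the response coded 0/1. A minimal NumPy sketch (not the authors' code; the toy data are illustrative):

```python
import numpy as np

def point_biserial(binary: np.ndarray, x: np.ndarray) -> float:
    """Point-biserial correlation between a 0/1 response and a feature."""
    b = np.asarray(binary, dtype=float)
    x = np.asarray(x, dtype=float)
    m1, m0 = x[b == 1].mean(), x[b == 0].mean()     # group means
    n1, n0, n = (b == 1).sum(), (b == 0).sum(), b.size
    # Population standard deviation, matching the 0/1 Pearson form.
    return (m1 - m0) / x.std() * np.sqrt(n1 * n0 / n**2)

# Sanity check: identical to Pearson correlation with 0/1 coding.
resp = np.array([0, 0, 0, 1, 1, 1])     # toy response (e.g., PD vs. not)
feat = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # toy feature values
r = point_biserial(resp, feat)
print(round(r, 4))  # 0.8783
```

Features whose correlation passes a significance threshold would then feed the logistic-regression and discriminant-analysis stages.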
Affiliation(s)
- Riccardo Laudicella (corresponding author; Tel.: +39-320-032-0150)
- Nuclear Medicine Unit, Department of Biomedical and Dental Sciences and Morpho-Functional Imaging, University of Messina, 98125 Messina, Italy
- Ri.MED Foundation, 90134 Palermo, Italy
- Department of Nuclear Medicine, University Hospital Zürich, University of Zürich, 8091 Zürich, Switzerland
- Nuclear Medicine Unit, Fondazione Istituto G.Giglio, 90015 Cefalù, Italy
- Virginia Liberini
- Nuclear Medicine Unit, Department of Medical Sciences, University of Turin, 10126 Turin, Italy
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100 Cuneo, Italy
- Antonio Vento
- Nuclear Medicine Unit, Department of Biomedical and Dental Sciences and Morpho-Functional Imaging, University of Messina, 98125 Messina, Italy
- Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù, Italy
- Alessandro Spataro
- Nuclear Medicine Unit, Department of Biomedical and Dental Sciences and Morpho-Functional Imaging, University of Messina, 98125 Messina, Italy
- Ludovica Crocè
- Nuclear Medicine Unit, Department of Biomedical and Dental Sciences and Morpho-Functional Imaging, University of Messina, 98125 Messina, Italy
- Sara Baldari
- Nuclear Medicine Department, Cannizzaro Hospital, 95126 Catania, Italy
- Michelangelo Bambaci
- Department of Nuclear Medicine, Humanitas Oncological Centre of Catania, 95125 Catania, Italy
- Desiree Deandreis
- Nuclear Medicine Unit, Department of Medical Sciences, University of Turin, 10126 Turin, Italy
- Demetrio Arico’
- Department of Nuclear Medicine, Humanitas Oncological Centre of Catania, 95125 Catania, Italy
- Massimo Ippolito
- Nuclear Medicine Department, Cannizzaro Hospital, 95126 Catania, Italy
- Michele Gaeta
- Section of Radiological Sciences, Department of Biomedical Sciences and Morphological and Functional Imaging, University of Messina, 98125 Messina, Italy
- Pierpaolo Alongi
- Nuclear Medicine Unit, Fondazione Istituto G.Giglio, 90015 Cefalù, Italy
- Fabio Minutoli
- Nuclear Medicine Unit, Department of Biomedical and Dental Sciences and Morpho-Functional Imaging, University of Messina, 98125 Messina, Italy
- Irene A. Burger
- Department of Nuclear Medicine, University Hospital Zürich, University of Zürich, 8091 Zürich, Switzerland
- Department of Nuclear Medicine, Kantonsspital Baden, 5404 Baden, Switzerland
- Sergio Baldari
- Nuclear Medicine Unit, Department of Biomedical and Dental Sciences and Morpho-Functional Imaging, University of Messina, 98125 Messina, Italy
10. Deep Learning Networks for Automatic Retroperitoneal Sarcoma Segmentation in Computerized Tomography. Appl Sci (Basel) 2022. [DOI: 10.3390/app12031665]
Abstract
Volume estimation of retroperitoneal sarcoma (RPS) is often difficult because of its large dimensions and irregular shape; it therefore usually requires manual segmentation, which is time-consuming and operator-dependent. This study aimed to evaluate two fully automated deep learning networks (ENet and ERFNet) for RPS segmentation. The retrospective study included 20 patients with RPS who underwent abdominal computed tomography (CT); 49 CT examinations with a total of 72 lesions were analyzed. Manual segmentation was performed by two radiologists in consensus, and automatic segmentation was performed using ENet and ERFNet. Differences between manual and automatic segmentation were tested using analysis of variance (ANOVA), and a set of shape-comparison performance indicators was calculated: sensitivity, positive predictive value (PPV), Dice similarity coefficient (DSC), volume overlap error (VOE), and volumetric difference (VD). No significant differences were found between the RPS volumes obtained by manual segmentation and ENet (p = 0.935), manual segmentation and ERFNet (p = 0.544), or ENet and ERFNet (p = 0.119). Sensitivity, PPV, DSC, VOE, and VD were 91.54% vs. 72.21%, 89.85% vs. 87.00%, 90.52% vs. 74.85%, 16.87% vs. 36.85%, and 2.11% vs. -14.80% for ENet and ERFNet, respectively. On a dedicated GPU, ENet took around 15 s per segmentation versus 13 s for ERFNet; on CPU, ENet took around 2 min versus 1 min for ERFNet. The manual approach required approximately one hour per segmentation. In conclusion, fully automatic deep learning networks are reliable methods for RPS volume assessment. ENet performs better than ERFNet for automatic segmentation, though it requires more time.
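The overlap metrics reported above have standard definitions on binary masks; a minimal NumPy sketch of all five (sign conventions for VD vary across papers, so the one below is an assumption):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Overlap metrics between a predicted and a ground-truth binary mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return {
        "sensitivity": inter / truth.sum(),              # recall
        "ppv": inter / pred.sum(),                       # precision
        "dsc": 2 * inter / (pred.sum() + truth.sum()),   # Dice similarity
        "voe": 1 - inter / union,                        # volume overlap error
        "vd": (pred.sum() - truth.sum()) / truth.sum(),  # volumetric difference
    }

# Toy masks: the prediction misses one voxel and adds none.
truth = np.array([[1, 1], [1, 0]])
pred = np.array([[1, 1], [0, 0]])
m = segmentation_metrics(pred, truth)
print(round(m["dsc"], 3))  # 0.8
```

On real data the masks would be 3D CT label volumes, with metrics averaged over lesions as in the study's per-lesion tables.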
|
11
|
Yadav A, Saxena R, Kumar A, Walia TS, Zaguia A, Kamal SMM. FVC-NET: An Automated Diagnosis of Pulmonary Fibrosis Progression Prediction Using Honeycombing and Deep Learning. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:2832400. [PMID: 35103054 PMCID: PMC8799953 DOI: 10.1155/2022/2832400] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/03/2021] [Revised: 11/29/2021] [Accepted: 12/28/2021] [Indexed: 11/17/2022]
Abstract
Pulmonary fibrosis is a severe chronic lung disease that causes irreversible scarring in the tissues of the lungs, resulting in a loss of lung capacity. The patient's forced vital capacity (FVC) is a useful measure for establishing the prognosis of the disease. This paper proposes a deep learning-based FVC-Net architecture to predict disease progression from the patient's computed tomography (CT) scan and metadata. The input to the model combines an image score, generated from the degree of honeycombing identified on segmented lung images, with the patient's metadata. This input is then fed to a 3-layer network to obtain the final output. The performance of the proposed FVC-Net model is compared with various contemporary state-of-the-art deep learning-based models on a cohort from the pulmonary fibrosis progression dataset. The model showed a significant performance improvement over the other models in terms of modified Laplace log-likelihood (-6.64). Finally, the paper concludes with some prospects to be explored in future work.
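The modified Laplace log-likelihood used as the evaluation metric scores each FVC prediction together with a confidence value σ. In the form defined by the OSIC Pulmonary Fibrosis Progression challenge (the 70 mL floor on σ and the 1000 mL error cap are that competition's constants, assumed here to match the paper's usage):

```python
import math

def laplace_log_likelihood(fvc_true, fvc_pred, sigma):
    """Modified Laplace log-likelihood: sigma is floored at 70 mL and the
    absolute error is capped at 1000 mL, so the score is bounded and an
    overconfident (small-sigma) wrong prediction is penalized heavily."""
    sigma_c = max(sigma, 70.0)
    delta = min(abs(fvc_true - fvc_pred), 1000.0)
    return -math.sqrt(2.0) * delta / sigma_c - math.log(math.sqrt(2.0) * sigma_c)
```

Higher (less negative) is better; a perfect prediction at the minimum σ scores about -4.6, which puts the reported -6.64 in context.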
Affiliation(s)
- Anju Yadav
- Manipal University Jaipur, Jaipur, India
- Atef Zaguia
- Department of Computer Science, College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia
|
12
|
Ahamed KU, Islam M, Uddin A, Akhter A, Paul BK, Yousuf MA, Uddin S, Quinn JM, Moni MA. A deep learning approach using effective preprocessing techniques to detect COVID-19 from chest CT-scan and X-ray images. Comput Biol Med 2021; 139:105014. [PMID: 34781234 PMCID: PMC8566098 DOI: 10.1016/j.compbiomed.2021.105014] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Revised: 11/01/2021] [Accepted: 11/01/2021] [Indexed: 12/16/2022]
Abstract
Coronavirus disease-19 (COVID-19) is a severe respiratory viral disease first reported in late 2019 that has spread worldwide. Although some wealthy countries have made significant progress in detecting and containing this disease, most underdeveloped countries are still struggling to identify COVID-19 cases in large populations. With the rising number of COVID-19 cases, there are often insufficient COVID-19 diagnostic kits and related resources in such countries. However, other basic diagnostic resources often do exist, which motivated us to develop deep learning models to assist clinicians and radiologists in providing prompt diagnostic support to patients. In this study, we developed a deep learning-based COVID-19 case detection model trained with a dataset consisting of chest CT scans and X-ray images. A modified ResNet50V2 architecture was employed as the deep learning architecture in the proposed model. The dataset used to train the model was collected from various publicly available sources and included four class labels: confirmed COVID-19, normal controls, and confirmed viral and bacterial pneumonia cases. The aggregated dataset was preprocessed with a sharpening filter before being fed into the proposed model. This model attained an accuracy of 96.452% for four-class cases (COVID-19/Normal/Bacterial pneumonia/Viral pneumonia), 97.242% for three-class cases (COVID-19/Normal/Bacterial pneumonia) and 98.954% for two-class cases (COVID-19/Viral pneumonia) using chest X-ray images. The model achieved a comprehensive accuracy of 99.012% for three-class cases (COVID-19/Normal/Community-acquired pneumonia) and 99.99% for two-class cases (Normal/COVID-19) using chest CT scans. This high accuracy presents a new and potentially important resource to enable radiologists to identify and rapidly diagnose COVID-19 cases with only basic but widely available equipment.
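The preprocessing step above applies a sharpening filter before classification; the exact kernel is not specified in this abstract, so the sketch below assumes the common 3×3 sharpening kernel (centre 5, four neighbours -1):

```python
def sharpen(img):
    """Convolve a 2D grayscale image (nested lists) with a basic 3x3
    sharpening kernel. Border pixels are left unchanged for simplicity.
    The kernel choice is an assumption, not taken from the paper."""
    k = [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(k[i][j] * img[y - 1 + i][x - 1 + j]
                            for i in range(3) for j in range(3))
    return out
```

On a uniform region the filter acts as an identity; at an intensity step it exaggerates the contrast, which is the intended effect before feeding the images to the CNN.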
Affiliation(s)
- Khabir Uddin Ahamed
- Department of Computer Science and Engineering, Jagannath University, Dhaka, Bangladesh
- Manowarul Islam
- Department of Computer Science and Engineering, Jagannath University, Dhaka, Bangladesh (corresponding author)
- Ashraf Uddin
- Department of Computer Science and Engineering, Jagannath University, Dhaka, Bangladesh
- Arnisha Akhter
- Department of Computer Science and Engineering, Jagannath University, Dhaka, Bangladesh
- Bikash Kumar Paul
- Department of Information and Communication Technology, Mawlana Bhashani Science and Technology University, Bangladesh
- Mohammad Abu Yousuf
- Institute of Information Technology, Jahangirnagar University, Dhaka, Bangladesh
- Shahadat Uddin
- Complex Systems Research Group, Faculty of Engineering, The University of Sydney, Darlington, NSW, 2008, Australia
- Julian M.W. Quinn
- Healthy Ageing Theme, Garvan Institute of Medical Research, Darlinghurst, NSW, 2010, Australia
- Mohammad Ali Moni
- Healthy Ageing Theme, Garvan Institute of Medical Research, Darlinghurst, NSW, 2010, Australia; Artificial Intelligence & Digital Health Data Science, School of Health and Rehabilitation Sciences, Faculty of Health and Behavioural Sciences, The University of Queensland, St Lucia, QLD, 4072, Australia (corresponding author)
|
13
|
Liu X, Sun Z, Han C, Cui Y, Huang J, Wang X, Zhang X, Wang X. Development and validation of the 3D U-Net algorithm for segmentation of pelvic lymph nodes on diffusion-weighted images. BMC Med Imaging 2021; 21:170. [PMID: 34774001 PMCID: PMC8590773 DOI: 10.1186/s12880-021-00703-3] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Accepted: 11/08/2021] [Indexed: 12/16/2022] Open
Abstract
Background The 3D U-Net model has been shown to perform well in automatic organ segmentation. The aim of this study was to evaluate the feasibility of the 3D U-Net algorithm for the automated detection and segmentation of lymph nodes (LNs) on pelvic diffusion-weighted imaging (DWI) images. Methods A total of 393 DWI images of patients suspected of having prostate cancer (PCa) between January 2019 and December 2020 were collected for model development. Seventy-seven DWI images from another group of PCa patients imaged between January 2021 and April 2021 were collected for temporal validation. Segmentation performance was assessed using the Dice score, positive predictive value (PPV), true positive rate (TPR), volumetric similarity (VS), Hausdorff distance (HD), average distance (AVD), and Mahalanobis distance (MHD), with manual annotation of pelvic LNs as the reference. The accuracy with which suspicious metastatic LNs (short diameter > 0.8 cm) were detected was evaluated using the area under the curve (AUC) at the patient level, and the precision, recall, and F1-score were determined at the lesion level. The consistency of LN staging on a hold-out test dataset between the model and a radiologist was assessed using Cohen’s kappa coefficient. Results In the testing set used for model development, the Dice score, TPR, PPV, VS, HD, AVD and MHD values for the segmentation of suspicious LNs were 0.85, 0.82, 0.80, 0.86, 2.02 mm, 2.01 mm, and 1.54 mm, respectively. The precision, recall, and F1-score for the detection of suspicious LNs were 0.97, 0.98 and 0.97, respectively. In the temporal validation dataset, the AUC of the model for identifying PCa patients with suspicious LNs was 0.963 (95% CI: 0.892–0.993). High consistency of LN staging (kappa = 0.922) was achieved between the model and an expert radiologist. Conclusion The 3D U-Net algorithm can accurately detect and segment pelvic LNs based on DWI images.
Affiliation(s)
- Xiang Liu
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Zhaonan Sun
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Chao Han
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Yingpu Cui
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Jiahao Huang
- Beijing Smart Tree Medical Technology Co. Ltd., No.24, Huangsi Street, Xicheng District, Beijing, 100011, China
- Xiangpeng Wang
- Beijing Smart Tree Medical Technology Co. Ltd., No.24, Huangsi Street, Xicheng District, Beijing, 100011, China
- Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Xiaoying Wang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
|
14
|
Using Convolutional Encoder Networks to Determine the Optimal Magnetic Resonance Image for the Automatic Segmentation of Multiple Sclerosis. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11188335] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Multiple Sclerosis (MS) is a neuroinflammatory demyelinating disease that affects over 2,000,000 individuals worldwide. It is characterized by white matter lesions that are identified through the segmentation of magnetic resonance images (MRIs). Manual segmentation is very time-intensive because radiologists spend a great amount of time labeling T1-weighted, T2-weighted, and FLAIR MRIs. In response, deep learning models have been created to reduce segmentation time by automatically detecting lesions. These models often use individual MRI sequences as well as combinations, such as FLAIR2, which is the multiplication of the FLAIR and T2 sequences. Unlike many other studies, this one seeks to determine an optimal MRI sequence, saving additional time by removing the need to acquire other sequences. With this consideration in mind, four Convolutional Encoder Networks (CENs) with different network architectures (U-Net, U-Net++, Linknet, and Feature Pyramid Network) were used to ensure that the optimal sequence applies to a wide array of deep learning models. Each model used a pretrained ResNeXt-50 encoder in order to conserve memory and to train faster. Training and testing were performed using two public datasets with 30 and 15 patients, respectively. Fisher’s exact test was used to evaluate statistical significance, and the automatic segmentation times were compiled for the top two models. This work determined that FLAIR is the optimal sequence based on the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU). Using FLAIR, the U-Net++ with the ResNeXt-50 encoder achieved a high DSC of 0.7159.
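The study reports both DSC and IoU; for a single pair of binary masks the two overlap measures are interchangeable via IoU = DSC / (2 - DSC), so either one determines the other (a helper for illustration, not from the paper):

```python
def dsc_to_iou(dsc):
    """Convert a Dice similarity coefficient to Intersection over Union;
    the two are monotonically related for a single mask pair."""
    return dsc / (2.0 - dsc)
```

For example, the reported DSC of 0.7159 corresponds to an IoU of roughly 0.56.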
|
15
|
Stefano A, Comelli A. Customized Efficient Neural Network for COVID-19 Infected Region Identification in CT Images. J Imaging 2021; 7:131. [PMID: 34460767 PMCID: PMC8404925 DOI: 10.3390/jimaging7080131] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Revised: 07/28/2021] [Accepted: 08/01/2021] [Indexed: 12/14/2022] Open
Abstract
BACKGROUND In the field of biomedical imaging, radiomics is a promising approach that aims to provide quantitative features from images. It is highly dependent on accurate identification and delineation of the volume of interest to avoid errors in the implementation of the texture-based prediction model. In this context, we present a customized deep learning approach aimed at the real-time, fully automated identification and segmentation of COVID-19 infected regions in computed tomography images. METHODS In a previous study, we adopted ENET, originally used for image segmentation tasks in self-driving cars, for whole-parenchyma segmentation in patients with idiopathic pulmonary fibrosis, which has several similarities to COVID-19 disease. To automatically identify and segment COVID-19 infected areas, a customized ENET, namely C-ENET, was implemented and its performance compared to the original ENET and some state-of-the-art deep learning architectures. RESULTS The experimental results demonstrate the effectiveness of our approach. Considering the performance obtained in terms of similarity of the segmentation result to the gold standard (Dice similarity coefficient ~75%), our proposed methodology can be used for the identification and delineation of COVID-19 infected areas without any supervision by a radiologist, in order to obtain a volume of interest independent of the user. CONCLUSIONS We demonstrated that the proposed customized deep learning model can be applied to rapidly identify and segment COVID-19 infected regions and subsequently extract useful information for assessing disease severity through radiomics analyses.
Affiliation(s)
- Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù, Italy
|
16
|
Salvaggio G, Comelli A, Portoghese M, Cutaia G, Cannella R, Vernuccio F, Stefano A, Dispensa N, La Tona G, Salvaggio L, Calamia M, Gagliardo C, Lagalla R, Midiri M. Deep Learning Network for Segmentation of the Prostate Gland With Median Lobe Enlargement in T2-weighted MR Images: Comparison With Manual Segmentation Method. Curr Probl Diagn Radiol 2021; 51:328-333. [PMID: 34315623 DOI: 10.1067/j.cpradiol.2021.06.006] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Revised: 04/20/2021] [Accepted: 06/16/2021] [Indexed: 12/16/2022]
Abstract
PURPOSE The aim of this study was to evaluate a fully automated deep learning network named Efficient Neural Network (ENet) for segmentation of the prostate gland with median lobe enlargement, compared to manual segmentation. MATERIALS AND METHODS One hundred three patients with median lobe enlargement on prostate MRI were retrospectively included. The ellipsoid formula, manual segmentation, and automatic segmentation were used for prostate volume estimation on T2-weighted MRI images. ENet, a deep learning network developed for fast inference and high accuracy in augmented reality and automotive scenarios, was used for automatic segmentation. Student's t-test was performed to compare prostate volumes obtained with the ellipsoid formula, manual segmentation, and automated segmentation. To evaluate the similarity or difference relative to manual segmentation, sensitivity, positive predictive value (PPV), Dice similarity coefficient (DSC), volume overlap error (VOE), and volumetric difference (VD) were calculated. RESULTS Differences between the prostate volume obtained from the ellipsoid formula versus manual segmentation and versus automatic segmentation were statistically significant (P = 0.049318 and P = 0.034305, respectively), while no statistical difference was found between volumes obtained from manual versus automatic segmentation (P = 0.438045). The performance of ENet versus manual segmentation was good, providing a sensitivity of 93.51%, a PPV of 87.93%, a DSC of 90.38%, a VOE of 17.32%, and a VD of 6.85%. CONCLUSION The presence of median lobe enlargement may lead to MRI volume overestimation when using the ellipsoid formula, so a segmentation method is recommended. ENet volume estimation showed accuracy similar to that of manual segmentation in evaluating prostate volume.
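The ellipsoid formula referenced above estimates prostate volume from three orthogonal diameters as V = (π/6) × L × W × H; a one-line sketch (with diameters in cm, the result is in mL):

```python
import math

def ellipsoid_volume(length, width, height):
    """Prostate volume by the ellipsoid formula, (pi/6) * L * W * H,
    from three orthogonal diameters."""
    return math.pi / 6.0 * length * width * height
```

A protruding median lobe violates the ellipsoid assumption, which is why the formula overestimates volume relative to segmentation in this cohort.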
Affiliation(s)
- Giuseppe Salvaggio
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
- Albert Comelli
- Ri.Med Foundation, Palermo, Italy; Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy
- Marzia Portoghese
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
- Giuseppe Cutaia
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy; Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialties (PROMISE), University of Palermo, Palermo, Italy
- Roberto Cannella
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
- Federica Vernuccio
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
- Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy
- Nino Dispensa
- Discipline Chirurgiche, Oncologiche e Stomatologiche - Unità operativa di Urologia, Università degli Studi di Palermo, Palermo, Italy
- Giuseppe La Tona
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
- Leonardo Salvaggio
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
- Mauro Calamia
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
- Cesare Gagliardo
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
- Roberto Lagalla
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
- Massimo Midiri
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
|
17
|
Transfer Learning for an Automated Detection System of Fractures in Patients with Maxillofacial Trauma. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11146293] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
An original maxillofacial fracture detection system (MFDS), based on convolutional neural networks and transfer learning, is proposed to detect traumatic fractures in patients. A convolutional neural network pre-trained on non-medical images was re-trained and fine-tuned using computed tomography (CT) scans to produce a model for the classification of future CTs as either “fracture” or “noFracture”. The model was trained on a total of 148 CTs (120 patients labeled with “fracture” and 28 patients labeled with “noFracture”). The validation dataset, used for statistical analysis, comprised 30 patients (5 with “noFracture” and 25 with “fracture”). An additional 30 CT scans, comprising 25 “fracture” and 5 “noFracture” images, were used as the test dataset for final testing. Tests were carried out both by considering single slices and by grouping the slices by patient. A patient was categorized as fractured if two consecutive slices were classified with a fracture probability higher than 0.99. The patient-level results show that the model accuracy in classifying maxillofacial fractures is 80%. Although the MFDS model cannot replace the radiologist's work, it can provide valuable assistive support, reducing the risk of human error, preventing patient harm by minimizing diagnostic delays, and reducing the unnecessary burden of hospitalization.
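The patient-level decision rule quoted above (a patient is categorized as fractured when two consecutive slices score above 0.99) can be written directly:

```python
def patient_has_fracture(slice_probs, threshold=0.99, run_length=2):
    """Flag a patient as fractured when `run_length` consecutive slice
    probabilities all exceed `threshold` (the abstract's 0.99 rule)."""
    consecutive = 0
    for p in slice_probs:
        consecutive = consecutive + 1 if p > threshold else 0
        if consecutive >= run_length:
            return True
    return False
```

Requiring a run of high-probability slices suppresses isolated single-slice false positives.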
|
18
|
Castaldo A, De Lucia DR, Pontillo G, Gatti M, Cocozza S, Ugga L, Cuocolo R. State of the Art in Artificial Intelligence and Radiomics in Hepatocellular Carcinoma. Diagnostics (Basel) 2021; 11:1194. [PMID: 34209197 PMCID: PMC8307071 DOI: 10.3390/diagnostics11071194] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Revised: 06/24/2021] [Accepted: 06/24/2021] [Indexed: 12/12/2022] Open
Abstract
The most common liver malignancy is hepatocellular carcinoma (HCC), which is also associated with high mortality. Often HCC develops in a chronic liver disease setting, and early diagnosis as well as accurate screening of high-risk patients is crucial for appropriate and effective management of these patients. While imaging characteristics of HCC are well-defined in the diagnostic phase, challenging cases still occur, and current prognostic and predictive models are limited in their accuracy. Radiomics and machine learning (ML) offer new tools to address these issues and may lead to scientific breakthroughs with the potential to impact clinical practice and improve patient outcomes. In this review, we will present an overview of these technologies in the setting of HCC imaging across different modalities and a range of applications. These include lesion segmentation, diagnosis, prognostic modeling and prediction of treatment response. Finally, limitations preventing clinical application of radiomics and ML at the present time are discussed, together with necessary future developments to bring the field forward and outside of a purely academic endeavor.
Affiliation(s)
- Anna Castaldo
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy
- Davide Raffaele De Lucia
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy
- Giuseppe Pontillo
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy
- Marco Gatti
- Radiology Unit, Department of Surgical Sciences, University of Turin, 10124 Turin, Italy
- Sirio Cocozza
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy
- Lorenzo Ugga
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy
- Renato Cuocolo
- Department of Clinical Medicine and Surgery, University of Naples “Federico II”, 80131 Naples, Italy
|
19
|
Cuocolo R, Comelli A, Stefano A, Benfante V, Dahiya N, Stanzione A, Castaldo A, De Lucia DR, Yezzi A, Imbriaco M. Deep Learning Whole-Gland and Zonal Prostate Segmentation on a Public MRI Dataset. J Magn Reson Imaging 2021; 54:452-459. [PMID: 33634932 DOI: 10.1002/jmri.27585] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 02/12/2021] [Accepted: 02/16/2021] [Indexed: 01/26/2023] Open
Abstract
BACKGROUND Prostate volume, as determined by magnetic resonance imaging (MRI), is a useful biomarker for distinguishing between benign and malignant pathology and can be used either alone or combined with other parameters such as prostate-specific antigen. PURPOSE This study compared different deep learning methods for whole-gland and zonal prostate segmentation. STUDY TYPE Retrospective. POPULATION A total of 204 patients (train/test = 99/105) from the PROSTATEx public dataset. FIELD STRENGTH/SEQUENCE 3 T, TSE T2-weighted. ASSESSMENT Four operators performed manual segmentation of the whole gland, central zone + anterior stroma + transition zone (TZ), and peripheral zone (PZ). U-net, efficient neural network (ENet), and efficient residual factorized ConvNet (ERFNet) were trained and tuned on the training data through 5-fold cross-validation to segment the whole gland and TZ separately, while automated PZ masks were obtained by subtracting the TZ prediction from the whole-gland prediction. STATISTICAL TESTS Networks were evaluated on the test set using various accuracy metrics, including the Dice similarity coefficient (DSC). Model DSC was compared in both the training and test sets using analysis of variance (ANOVA) and post hoc tests. Parameter number, disk size, and training and inference times determined network computational complexity and were also used to assess model performance differences. P < 0.05 indicated statistical significance. RESULTS The best DSC (P < 0.05) in the test set was achieved by ENet: 91% ± 4% for the whole gland, 87% ± 5% for the TZ, and 71% ± 8% for the PZ. U-net and ERFNet obtained, respectively, 88% ± 6% and 87% ± 6% for the whole gland, 86% ± 7% and 84% ± 7% for the TZ, and 70% ± 8% and 65% ± 8% for the PZ. Training and inference times were lowest for ENet. DATA CONCLUSION Deep learning networks can accurately segment the prostate using T2-weighted images. LEVEL OF EVIDENCE 4. TECHNICAL EFFICACY Stage 2.
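The automated PZ masks described above are the voxel-wise difference of the whole-gland and TZ predictions; on flattened binary masks this is (a sketch, not the authors' code):

```python
def peripheral_zone(whole_gland, transition_zone):
    """PZ mask = whole-gland mask minus TZ mask, voxel-wise, for
    flattened binary masks of equal length."""
    return [bool(wg) and not tz for wg, tz in zip(whole_gland, transition_zone)]
```

This subtraction trick avoids training a third network for the PZ, at the cost of compounding whole-gland and TZ errors in the derived mask.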
Affiliation(s)
- Renato Cuocolo
- Department of Clinical Medicine and Surgery, University of Naples "Federico II", Naples, Italy; Laboratory of Augmented Reality for Health Monitoring (ARHeMLab), Department of Electrical Engineering and Information Technology, University of Naples "Federico II", Naples, Italy
- Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy
- Viviana Benfante
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy
- Navdeep Dahiya
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Arnaldo Stanzione
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples, Italy
- Anna Castaldo
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples, Italy
- Anthony Yezzi
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Massimo Imbriaco
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples, Italy
|
20
|
An Ensemble of Global and Local-Attention Based Convolutional Neural Networks for COVID-19 Diagnosis on Chest X-ray Images. Symmetry (Basel) 2021. [DOI: 10.3390/sym13010113] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022] Open
Abstract
The recent Coronavirus Disease 2019 (COVID-19) pandemic has put a tremendous burden on global health systems. Medical practitioners are under great pressure to reliably screen suspected cases, employing adjunct diagnostic tools alongside standard point-of-care testing methodology. Chest X-rays (CXRs) are emerging as a prospective diagnostic tool that is easy to acquire, low-cost, and carries less cross-contamination risk. Artificial intelligence (AI)-based CXR evaluation has shown great potential for distinguishing COVID-19-induced pneumonia from other associated clinical instances. However, one of the challenges associated with diagnostic imaging-based modeling is incorrect feature attribution, which leads the model to learn misleading disease patterns and make wrong predictions. Here, we demonstrate an effective deep learning-based methodology to mitigate this problem, allowing the classification algorithm to learn from relevant features. The proposed framework consists of an ensemble of convolutional neural network (CNN) models that focus on both global and local pathological features of CXR lung images; the local features are extracted using a multi-instance learning scheme and a local attention mechanism. A series of backbone CNN models using global features, local features, and an ensemble of both was trained on high-quality CXR images of 1311 patients, augmented to achieve symmetry in class distribution, to localize lung pathological features and then classify COVID-19 and other related pneumonias. On an independent test set of 159 patients with confirmed cases, a DenseNet161 architecture outperformed all other models.
Specifically, an ensemble of DenseNet161 models with global and local attention-based features achieved an average balanced accuracy of 91.2%, an average precision of 92.4%, and an F1-score of 91.9% in a multi-label classification framework comprising COVID-19, pneumonia, and control classes. The DenseNet161 ensembles were also found to be statistically significantly different from all other models in a comprehensive statistical analysis. This study demonstrates that the proposed deep learning-based algorithm can accurately identify COVID-19-related pneumonia in CXR images, and differentiate non-COVID-19-associated pneumonia with high specificity, by effectively alleviating the incorrect feature attribution problem and exploiting an enhanced feature descriptor.
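Balanced accuracy, the headline metric above, is the unweighted mean of per-class recall, which keeps a minority class (such as COVID-19) from being drowned out by the majority classes. A sketch:

```python
def balanced_accuracy(y_true, y_pred):
    """Unweighted mean of per-class recall over the classes present in
    y_true (e.g. COVID-19 / pneumonia / control)."""
    recalls = []
    for cls in set(y_true):
        idx = [i for i, y in enumerate(y_true) if y == cls]
        recalls.append(sum(y_pred[i] == cls for i in idx) / len(idx))
    return sum(recalls) / len(recalls)
```

Plain accuracy on the same labels would reward a classifier that ignores the smallest class; balanced accuracy does not.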
|
21
|
Comelli A, Dahiya N, Stefano A, Vernuccio F, Portoghese M, Cutaia G, Bruno A, Salvaggio G, Yezzi A. Deep Learning-Based Methods for Prostate Segmentation in Magnetic Resonance Imaging. APPLIED SCIENCES (BASEL, SWITZERLAND) 2021; 11:782. [PMID: 33680505 PMCID: PMC7932306 DOI: 10.3390/app11020782] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Magnetic resonance imaging (MRI)-based prostate segmentation is an essential task for adaptive radiotherapy and for radiomics studies whose purpose is to identify associations between imaging features and patient outcomes. Because manual delineation is time-consuming, we present three deep learning (DL) approaches, namely UNet, efficient neural network (ENet), and efficient residual factorized ConvNet (ERFNet), aimed at the fully automated, real-time, 3D delineation of the prostate gland on T2-weighted MRI. While UNet is used in many biomedical image delineation applications, ENet and ERFNet are mainly applied in self-driving cars to compensate for limited hardware availability while still achieving accurate segmentation. We apply these models to a limited set of 85 manual prostate segmentations using a k-fold validation strategy and the Tversky loss function, and we compare their results. We find that ENet and UNet are more accurate than ERFNet, with ENet much faster than UNet. Specifically, ENet obtains a Dice similarity coefficient of 90.89% and a segmentation time of about 6 s using central processing unit (CPU) hardware, simulating real clinical conditions where a graphics processing unit (GPU) is not always available. In conclusion, ENet could be efficiently applied for prostate delineation even with small image training datasets, with potential benefit for personalized patient management.
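The Tversky loss mentioned above generalizes the Dice loss by weighting false positives and false negatives asymmetrically, which helps when the foreground (prostate) occupies a small fraction of the volume. A sketch on flattened binary masks (the α = 0.3, β = 0.7 defaults are illustrative, not the paper's values):

```python
def tversky_loss(pred, ref, alpha=0.3, beta=0.7):
    """1 - TP / (TP + alpha*FP + beta*FN); with alpha = beta = 0.5 this
    reduces to the Dice loss, and beta > alpha penalizes missed
    foreground voxels more."""
    tp = sum(1 for p, r in zip(pred, ref) if p and r)
    fp = sum(1 for p, r in zip(pred, ref) if p and not r)
    fn = sum(1 for p, r in zip(pred, ref) if r and not p)
    return 1.0 - tp / (tp + alpha * fp + beta * fn)
```

Raising β pushes the network toward higher sensitivity, a common choice for small structures in class-imbalanced segmentation.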
Affiliation(s)
- Albert Comelli
- Ri.MED Foundation, Via Bandiera, 11, 90133 Palermo, Italy
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù, Italy
- Navdeep Dahiya
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù, Italy
- Federica Vernuccio
- Dipartimento di Biomedicina, Neuroscienze e Diagnostica avanzata (BIND), University of Palermo, 90127 Palermo, Italy
- Marzia Portoghese
- Dipartimento di Biomedicina, Neuroscienze e Diagnostica avanzata (BIND), University of Palermo, 90127 Palermo, Italy
- Giuseppe Cutaia
- Dipartimento di Biomedicina, Neuroscienze e Diagnostica avanzata (BIND), University of Palermo, 90127 Palermo, Italy
- Alberto Bruno
- Dipartimento di Biomedicina, Neuroscienze e Diagnostica avanzata (BIND), University of Palermo, 90127 Palermo, Italy
- Giuseppe Salvaggio
- Dipartimento di Biomedicina, Neuroscienze e Diagnostica avanzata (BIND), University of Palermo, 90127 Palermo, Italy
- Anthony Yezzi
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
|
22
|
Chen WF, Ou HY, Liu KH, Li ZY, Liao CC, Wang SY, Huang W, Cheng YF, Pan CT. In-Series U-Net Network to 3D Tumor Image Reconstruction for Liver Hepatocellular Carcinoma Recognition. Diagnostics (Basel) 2020; 11:E11. [PMID: 33374672 PMCID: PMC7822491 DOI: 10.3390/diagnostics11010011] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2020] [Revised: 12/16/2020] [Accepted: 12/20/2020] [Indexed: 12/27/2022] Open
Abstract
Cancer is a common disease. Quantitative biomarkers extracted from standard-of-care computed tomography (CT) scans can form the basis of a robust clinical decision tool for the diagnosis of hepatocellular carcinoma (HCC). Current clinical methods, however, typically demand considerable time and resources. To improve the current clinical diagnosis and therapeutic procedure, this paper proposes a deep learning-based approach, called Successive Encoder-Decoder (SED), to assist in the automatic interpretation of liver lesion/tumor segmentation in CT images. The SED framework consists of two different encoder-decoder networks connected in series. The first network removes unwanted voxels and organs and extracts the liver location from CT images. The second network uses the results of the first network to further segment the lesions. For practical purposes, the lesions predicted on individual CT slices were extracted and reconstructed as 3D images. Experiments conducted on 4300 CT images and the LiTS dataset demonstrate that the proposed SED method achieved Dice scores of 0.92 for liver segmentation and 0.75 for tumor prediction.
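The in-series idea above, with the first stage isolating the liver and the second segmenting lesions only inside it, can be sketched in miniature. This is a toy stand-in, not the SED networks: simple intensity thresholds replace the two trained encoder-decoders so the cascading logic and the Dice evaluation stay runnable.

```python
def liver_stage(ct_slice, liver_thresh=0.3):
    """Stand-in for the first encoder-decoder: isolate a crude liver region.
    A real SED stage is a trained network; a threshold keeps the sketch runnable."""
    return [[v > liver_thresh for v in row] for row in ct_slice]

def lesion_stage(ct_slice, liver_mask, lesion_thresh=0.8):
    """Stand-in for the second encoder-decoder: segment lesions, restricted
    to voxels the first stage marked as liver."""
    return [[v > lesion_thresh and m for v, m in zip(row, mrow)]
            for row, mrow in zip(ct_slice, liver_mask)]

def dice_score(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary 2D masks."""
    p = [v for row in pred for v in row]
    t = [v for row in truth for v in row]
    inter = sum(1 for a, b in zip(p, t) if a and b)
    return (2.0 * inter + eps) / (sum(p) + sum(t) + eps)

# Hypothetical normalized CT slice: bright voxels stand in for lesion tissue.
ct = [[0.1, 0.4, 0.9],
      [0.2, 0.5, 0.85],
      [0.0, 0.35, 0.2]]
liver = liver_stage(ct)
lesions = lesion_stage(ct, liver)
```

Feeding the first stage's mask into the second is what keeps lesion candidates outside the liver from ever reaching the final segmentation.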
Affiliation(s)
- Wen-Fan Chen
- Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung 80424, Taiwan;
- Hsin-You Ou
- Liver Transplantation Program and Departments of Diagnostic Radiology, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung 833401, Taiwan; (H.-Y.O.); (C.-C.L.)
- Keng-Hao Liu
- Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan; (K.-H.L.); (Z.-Y.L.); (S.-Y.W.); (W.H.)
- Zhi-Yun Li
- Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan; (K.-H.L.); (Z.-Y.L.); (S.-Y.W.); (W.H.)
- Chien-Chang Liao
- Liver Transplantation Program and Departments of Diagnostic Radiology, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung 833401, Taiwan; (H.-Y.O.); (C.-C.L.)
- Shao-Yu Wang
- Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan; (K.-H.L.); (Z.-Y.L.); (S.-Y.W.); (W.H.)
- Wen Huang
- Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan; (K.-H.L.); (Z.-Y.L.); (S.-Y.W.); (W.H.)
- Yu-Fan Cheng
- Liver Transplantation Program and Departments of Diagnostic Radiology, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung 833401, Taiwan; (H.-Y.O.); (C.-C.L.)
- Cheng-Tang Pan
- Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung 80424, Taiwan;
- Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan; (K.-H.L.); (Z.-Y.L.); (S.-Y.W.); (W.H.)
|