1
Hosseini SA, Shiri I, Ghaffarian P, Hajianfar G, Avval AH, Seyfi M, Servaes S, Rosa-Neto P, Zaidi H, Ay MR. The effect of harmonization on the variability of PET radiomic features extracted using various segmentation methods. Ann Nucl Med 2024; 38:493-507. PMID: 38575814; PMCID: PMC11217131; DOI: 10.1007/s12149-024-01923-7.
Abstract
PURPOSE This study aimed to examine the robustness of positron emission tomography (PET) radiomic features extracted via different segmentation methods before and after ComBat harmonization in patients with non-small cell lung cancer (NSCLC). METHODS We included 120 patients (46 with positive recurrence and 74 with negative recurrence) referred for PET scanning as a routine part of their care. All patients had biopsy-proven NSCLC. Nine segmentation methods were applied to each image: manual delineation, K-means (KM), watershed, fuzzy C-means, region growing, local active contour (LAC), and iterative thresholding (IT) with 40, 45, and 50% thresholds. Diverse image discretizations, both without a filter and with different wavelet decompositions, were applied to the PET images. Overall, 6741 radiomic features were extracted from each image (749 radiomic features from each segmented area). Non-parametric empirical Bayes (NPEB) ComBat harmonization was used to harmonize the features. A linear support vector classifier (LinearSVC) with L1 regularization was used for feature selection, and a support vector machine (SVM) classifier with fivefold nested cross-validation (StratifiedKFold with n_splits = 5) was used to predict recurrence in NSCLC patients and to assess the impact of ComBat harmonization on the outcome. RESULTS Of the 749 extracted radiomic features, 206 (27%) and 389 (51%) showed excellent reliability (ICC ≥ 0.90) against segmentation-method variation before and after NPEB ComBat harmonization, respectively. Thirty-nine features demonstrated poor reliability, a number that declined to 10 after ComBat harmonization. The feature sets based on a fixed bin width of 64 (without any filter) and on the wavelet (LLL) decomposition achieved the best robustness against the diverse segmentation techniques, both before and after ComBat harmonization.
The first-order and GLRLM feature families contained the largest number of robust features before harmonization, and the first-order and NGTDM families after it. In terms of predicting recurrence in NSCLC, our findings indicate that ComBat harmonization can significantly enhance machine-learning outcomes, particularly improving the accuracy of watershed segmentation, which initially had fewer reliable features than manual contouring. Following ComBat harmonization, the majority of cases saw a substantial increase in sensitivity and specificity. CONCLUSION Radiomic features are vulnerable to the choice of segmentation method. ComBat harmonization might be considered a solution to overcome the poor reliability of radiomic features.
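The batch-effect removal this abstract relies on can be illustrated with a toy location/scale adjustment. This is a minimal NumPy sketch of the idea behind ComBat, not the study's non-parametric empirical Bayes implementation (which additionally shrinks the per-batch estimates); the function name and data are ours:

```python
import numpy as np

def combat_like_harmonize(X, batch):
    """Toy location/scale harmonization across batches.

    X     : (n_samples, n_features) radiomic feature matrix
    batch : (n_samples,) integer batch label (e.g. segmentation method)

    NOT the empirical-Bayes ComBat of Johnson et al.: it only removes
    per-batch additive (mean) and multiplicative (std) effects, which is
    the core adjustment that ComBat regularizes with Bayesian shrinkage.
    """
    X = np.asarray(X, dtype=float)
    batch = np.asarray(batch)
    Xh = np.empty_like(X)
    grand_mean = X.mean(axis=0)
    grand_std = X.std(axis=0) + 1e-12
    for b in np.unique(batch):
        idx = batch == b
        mu = X[idx].mean(axis=0)
        sd = X[idx].std(axis=0) + 1e-12
        # standardize within batch, then map onto the pooled distribution
        Xh[idx] = (X[idx] - mu) / sd * grand_std + grand_mean
    return Xh
```

After this adjustment, every batch shares the pooled mean and spread, so a feature's value no longer encodes which segmentation method produced it.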
Affiliation(s)
- Seyyed Ali Hosseini
- Translational Neuroimaging Laboratory, The McGill University Research Centre for Studies in Aging, Douglas Hospital, McGill University, Montréal, QC, Canada
- Department of Neurology and Neurosurgery, Faculty of Medicine, McGill University, Montréal, QC, Canada
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Pardis Ghaffarian
- Chronic Respiratory Diseases Research Center, National Research Institute of Tuberculosis and Lung Diseases (NRITLD), Shahid Beheshti University of Medical Sciences, Tehran, Iran
- PET/CT and Cyclotron Center, Masih Daneshvari Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ghasem Hajianfar
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Milad Seyfi
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
- Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Stijn Servaes
- Translational Neuroimaging Laboratory, The McGill University Research Centre for Studies in Aging, Douglas Hospital, McGill University, Montréal, QC, Canada
- Department of Neurology and Neurosurgery, Faculty of Medicine, McGill University, Montréal, QC, Canada
- Pedro Rosa-Neto
- Translational Neuroimaging Laboratory, The McGill University Research Centre for Studies in Aging, Douglas Hospital, McGill University, Montréal, QC, Canada
- Department of Neurology and Neurosurgery, Faculty of Medicine, McGill University, Montréal, QC, Canada
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, 500, Odense, Denmark
- University Research and Innovation Center, Óbuda University, Budapest, Hungary
- Mohammad Reza Ay
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
- Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
2
Hajianfar G, Sabouri M, Salimi Y, Amini M, Bagheri S, Jenabi E, Hekmat S, Maghsudi M, Mansouri Z, Khateri M, Hosein Jamshidi M, Jafari E, Bitarafan Rajabi A, Assadi M, Oveisi M, Shiri I, Zaidi H. Artificial intelligence-based analysis of whole-body bone scintigraphy: The quest for the optimal deep learning algorithm and comparison with human observer performance. Z Med Phys 2024; 34:242-257. PMID: 36932023; PMCID: PMC11156776; DOI: 10.1016/j.zemedi.2023.01.008.
Abstract
PURPOSE Whole-body bone scintigraphy (WBS) is one of the most widely used modalities for diagnosing malignant bone diseases at an early stage. However, the procedure is time-consuming and requires vigour and experience. Moreover, interpretation of WBS scans in the early stages of disease can be challenging because the patterns often resemble normal appearance and are prone to subjective interpretation. To simplify this gruelling, subjective, and error-prone task, we developed deep learning (DL) models to automate two major analyses, namely (i) classification of scans into normal and abnormal and (ii) discrimination between malignant and non-neoplastic bone diseases, and compared their performance with that of human observers. MATERIALS AND METHODS After applying our exclusion criteria to 7188 patients from three different centers, 3772 and 2248 patients were enrolled for the first and second analyses, respectively. Data were split into training and test sets, with a fraction of the training data reserved for validation. Ten different CNN models were applied to single- and dual-view input (posterior and anterior views) modes to find the optimal model for each analysis. In addition, three different methods, namely squeeze-and-excitation (SE), spatial pyramid pooling (SPP), and attention-augmented (AA), were used to aggregate the features for the dual-view input models. Model performance was reported through the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, and specificity, and was compared using the DeLong test applied to the ROC curves. The test dataset was evaluated by three nuclear medicine physicians (NMPs) with different levels of experience to compare the performance of AI against human observers.
RESULTS DenseNet121_AA (DenseNet121 with dual-view input aggregated by AA) and InceptionResNetV2_SPP achieved the highest performance (AUC = 0.72) for the first and second analyses, respectively. On average, InceptionV3 and InceptionResNetV2 as CNN models and dual-view input with AA aggregation performed best in the first analysis, while DenseNet121 and InceptionResNetV2 as CNN models and dual-view input with AA aggregation achieved the best results in the second analysis. Notably, the performance of the AI models was significantly higher than that of the human observers for the first analysis, whereas their performance was comparable in the second analysis, although the AI models assessed the scans in drastically less time. CONCLUSION Using the models designed in this study, a positive step can be taken toward improving and optimizing WBS interpretation. By training DL models with larger and more diverse cohorts, AI could potentially be used to assist physicians in the assessment of WBS images.
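The squeeze-and-excitation (SE) aggregation named in the methods is a standard channel-gating block: global average pooling, a small FC bottleneck, and a sigmoid gate that rescales each channel. A minimal NumPy sketch (with random weights standing in for the learned FC layers; all names are ours, not the paper's code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_gate(features, w1, w2):
    """Minimal squeeze-and-excitation gating over channels.

    features : (C, H, W) feature map, e.g. concatenated anterior/posterior
               view features; w1: (C//r, C) and w2: (C, C//r) FC weights.
    Returns the same map with each channel rescaled by a gate in (0, 1).
    """
    squeeze = features.mean(axis=(1, 2))                   # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # FC -> ReLU -> FC -> sigmoid
    return features * excite[:, None, None]                # channel-wise rescale
```

In a dual-view model, the per-view feature maps would be concatenated along the channel axis before gating, letting the network weight information from each view.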
Affiliation(s)
- Ghasem Hajianfar
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Maziar Sabouri
- Department of Medical Physics, School of Medicine, Iran University of Medical Science, Tehran, Iran; Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Mehdi Amini
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Soroush Bagheri
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Elnaz Jenabi
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Sepideh Hekmat
- Hasheminejad Hospital, Iran University of Medical Sciences, Tehran, Iran
- Mehdi Maghsudi
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Maziar Khateri
- Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Mohammad Hosein Jamshidi
- Department of Medical Imaging and Radiation Sciences, School of Allied Medical Sciences, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
- Esmail Jafari
- The Persian Gulf Nuclear Medicine Research Center, Department of Molecular Imaging and Radionuclide Therapy, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Ahmad Bitarafan Rajabi
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Majid Assadi
- The Persian Gulf Nuclear Medicine Research Center, Department of Molecular Imaging and Radionuclide Therapy, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Mehrdad Oveisi
- Department of Computer Science, University of British Columbia, Vancouver, BC, Canada
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
3
Fraioli F, Albert N, Boellaard R, Galazzo IB, Brendel M, Buvat I, Castellaro M, Cecchin D, Fernandez PA, Guedj E, Hammers A, Kaplar Z, Morbelli S, Papp L, Shi K, Tolboom N, Traub-Weidinger T, Verger A, Van Weehaeghe D, Yakushev I, Barthel H. Perspectives of the European Association of Nuclear Medicine on the role of artificial intelligence (AI) in molecular brain imaging. Eur J Nucl Med Mol Imaging 2024; 51:1007-1011. PMID: 38097746; DOI: 10.1007/s00259-023-06553-1.
Affiliation(s)
- Francesco Fraioli
- Institute of Nuclear Medicine, University College London Hospitals, 5Th Floor UCH, 235 Euston Rd, London, NW1 2BU, UK.
- Nathalie Albert
- Department of Nuclear Medicine, Ludwig-Maximilians-University of Munich, Munich, Germany
- Ronald Boellaard
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location VUmc, Amsterdam, The Netherlands
- Matthias Brendel
- Department of Nuclear Medicine, Ludwig-Maximilians-University of Munich, Munich, Germany
- Irene Buvat
- Institut Curie - Inserm Laboratory of Translational Imaging in Oncology, Paris, France
- Marco Castellaro
- Department of Information Engineering, University-Hospital of Padova, Padua, Italy
- Diego Cecchin
- Nuclear Medicine Unit, Department of Medicine - DIMED, University-Hospital of Padova, Padua, Italy
- Pablo Aguiar Fernandez
- CIMUS, Universidade Santiago de Compostela & Nuclear Medicine Dept, Univ. Hospital IDIS, Santiago de Compostela, Spain
- Eric Guedj
- Département de Médecine Nucléaire, Aix Marseille Univ, APHM, CNRS, Centrale Marseille, Institut Fresnel, Hôpital de La Timone, CERIMED, Marseille, France
- Alexander Hammers
- School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, SE1 7EH, UK
- Zoltan Kaplar
- Institute of Nuclear Medicine, University College London Hospitals, 5th Floor UCH, 235 Euston Rd, London, NW1 2BU, UK
- Silvia Morbelli
- Nuclear Medicine Unit, AOU Città Della Salute E Della Scienza Di Torino, University of Turin, Turin, Italy
- Laszlo Papp
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Kuangyu Shi
- Lab for Artificial Intelligence and Translational Theranostics, Dept. of Nuclear Medicine, University of Bern, Bern, Switzerland
- Nelleke Tolboom
- Department of Radiology and Nuclear Medicine, Utrecht University Medical Center, Utrecht, The Netherlands
- Tatjana Traub-Weidinger
- Division of Nuclear Medicine, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria
- Antoine Verger
- Department of Nuclear Medicine and Nancyclotep Imaging Platform, CHRU Nancy, Université de Lorraine, IADI, INSERM U1254, Nancy, France
- Donatienne Van Weehaeghe
- Department of Radiology and Nuclear Medicine, Ghent University Hospital, C. Heymanslaan 10, 9000, Ghent, Belgium
- Igor Yakushev
- Department of Nuclear Medicine, School of Medicine, Technical University of Munich, Munich, Germany
- Henryk Barthel
- Department of Nuclear Medicine, Leipzig University Medical Centre, Leipzig, Germany
4
Shiri I, Salimi Y, Sirjani N, Razeghi B, Bagherieh S, Pakbin M, Mansouri Z, Hajianfar G, Avval AH, Askari D, Ghasemian M, Sandoughdaran S, Sohrabi A, Sadati E, Livani S, Iranpour P, Kolahi S, Khosravi B, Bijari S, Sayfollahi S, Atashzar MR, Hasanian M, Shahhamzeh A, Teimouri A, Goharpey N, Shirzad-Aski H, Karimi J, Radmard AR, Rezaei-Kalantari K, Oghli MG, Oveisi M, Vafaei Sadr A, Voloshynovskiy S, Zaidi H. Differential privacy preserved federated learning for prognostic modeling in COVID-19 patients using large multi-institutional chest CT dataset. Med Phys 2024. PMID: 38335175; DOI: 10.1002/mp.16964.
Abstract
BACKGROUND Notwithstanding the encouraging results of previous studies on the efficiency of deep learning (DL) in COVID-19 prognostication, clinical adoption of the developed methodologies has remained limited. To overcome this limitation, we set out to predict the prognosis of a large multi-institutional cohort of patients with COVID-19 using a DL-based model. PURPOSE This study aimed to evaluate the performance of deep privacy-preserving federated learning (DPFL) in predicting COVID-19 outcomes using chest CT images. METHODS After applying inclusion and exclusion criteria, 3055 patients from 19 centers, including 1599 alive and 1456 deceased, were enrolled in this study. Data from all centers were split (randomly, with stratification by center and class) into a training/validation set (70%/10%) and a hold-out test set (20%). For the DL model, feature extraction was performed on 2D slices, with averaging at the final layer to construct a 3D representation of each scan. A DenseNet model was used for feature extraction. The model was developed using both centralized and federated learning (FL) approaches; for FL, we employed DPFL. Robustness to membership inference attacks was also evaluated in the FL setting. For model evaluation, different metrics were reported on the hold-out test set, and the models trained in the two scenarios, centralized and FL, were compared using the DeLong test. RESULTS The centralized model achieved an accuracy of 0.76, versus 0.75 for the DPFL model; both achieved a specificity of 0.77. Sensitivity was 0.74 for the centralized model and 0.73 for the DPFL model. Mean AUCs of 0.82 (95% CI: 0.79-0.85) and 0.81 (95% CI: 0.77-0.84) were achieved by the centralized and DPFL models, respectively.
The DeLong test showed no statistically significant difference between the two models (p = 0.98). AUC values for the membership inference attacks fluctuated between 0.49 and 0.51, with a mean of 0.50 ± 0.003 (95% CI for the mean AUC: 0.500-0.501). CONCLUSION The performance of the proposed model was comparable to that of the centralized model while operating on large and heterogeneous multi-institutional datasets. In addition, the model was resistant to inference attacks, preserving the privacy of shared data during training.
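The differential-privacy machinery behind DPFL is not detailed in the abstract; such training typically builds on the Gaussian mechanism's per-sample clip-and-noise step applied to gradients before they leave a client. A minimal sketch under that assumption (function and parameter names are ours, not the paper's code):

```python
import numpy as np

def dp_sanitize_gradients(per_sample_grads, clip_norm, noise_multiplier, rng):
    """DP-SGD-style gradient sanitization (illustrative sketch).

    Each per-sample gradient is clipped to L2 norm <= clip_norm, the clipped
    gradients are summed, and Gaussian noise with standard deviation
    noise_multiplier * clip_norm is added before averaging. Bounding each
    sample's influence plus calibrated noise is what yields the formal
    differential-privacy guarantee.
    """
    n = len(per_sample_grads)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / n
```

With influence bounded this way, an attacker observing model updates gains little information about any single patient, which is consistent with the near-chance (AUC ≈ 0.50) membership-inference results reported above.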
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Nasim Sirjani
- Research and Development Department, Med Fanavarn Plus Co, Karaj, Iran
- Behrooz Razeghi
- Department of Computer Science, University of Geneva, Geneva, Switzerland
- Sara Bagherieh
- School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Masoumeh Pakbin
- Imaging Department, Qom University of Medical Sciences, Qom, Iran
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Ghasem Hajianfar
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Dariush Askari
- Department of Radiology Technology, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mohammadreza Ghasemian
- Department of Radiology, Shahid Beheshti Hospital, Qom University of Medical Sciences, Qom, Iran
- Saleh Sandoughdaran
- Department of Clinical Oncology, Royal Surrey County Hospital, Guildford, UK
- Ahmad Sohrabi
- Radin Makian Azma Mehr Ltd., Radinmehr Veterinary Laboratory, Iran University of Medical Sciences, Gorgan, Iran
- Elham Sadati
- Department of Medical Physics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran, Iran
- Somayeh Livani
- Clinical Research Development Unit (CRDU), Sayad Shirazi Hospital, Golestan University of Medical Sciences, Gorgan, Iran
- Pooya Iranpour
- Medical Imaging Research Center, Department of Radiology, Shiraz University of Medical Sciences, Shiraz, Iran
- Shahriar Kolahi
- Department of Radiology, School of Medicine, Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Bardia Khosravi
- Digestive Diseases Research Center, Digestive Diseases Research Institute, Tehran University of Medical Sciences, Tehran, Iran
- Salar Bijari
- Department of Medical Physics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran, Iran
- Sahar Sayfollahi
- Department of Neurosurgery, Faculty of Medical Sciences, Iran University of Medical Sciences, Tehran, Iran
- Mohammad Reza Atashzar
- Department of Immunology, School of Medicine, Fasa University of Medical Sciences, Fasa, Iran
- Mohammad Hasanian
- Department of Radiology, Arak University of Medical Sciences, Arak, Iran
- Alireza Shahhamzeh
- Clinical Research Development Center, Qom University of Medical Sciences, Qom, Iran
- Arash Teimouri
- Medical Imaging Research Center, Department of Radiology, Shiraz University of Medical Sciences, Shiraz, Iran
- Neda Goharpey
- Department of Radiation Oncology, Shohada-e Tajrish Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Jalal Karimi
- Department of Infectious Disease, School of Medicine, Fasa University of Medical Sciences, Fasa, Iran
- Amir Reza Radmard
- Department of Radiology, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Kiara Rezaei-Kalantari
- Rajaie Cardiovascular, Medical & Research Center, Iran University of Medical Science, Tehran, Iran
- Mehrdad Oveisi
- Department of Computer Science, University of British Columbia, Vancouver, British Columbia, Canada
- Alireza Vafaei Sadr
- Department of Public Health Sciences, College of Medicine, Pennsylvania State University, Hershey, Pennsylvania, USA
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- University Research and Innovation Center, Óbuda University, Budapest, Hungary
5
Shiri I, Razeghi B, Ferdowsi S, Salimi Y, Gündüz D, Teodoro D, Voloshynovskiy S, Zaidi H. PRIMIS: Privacy-preserving medical image sharing via deep sparsifying transform learning with obfuscation. J Biomed Inform 2024; 150:104583. PMID: 38191010; DOI: 10.1016/j.jbi.2024.104583.
Abstract
OBJECTIVE The primary objective of our study is to address the challenge of confidentially sharing medical images across different centers. This is often a critical necessity in both clinical and research environments, yet restrictions typically exist due to privacy concerns. Our aim is to design a privacy-preserving data-sharing mechanism that allows medical images to be stored as encoded and obfuscated representations in the public domain without revealing any useful or recoverable content, while providing authorized users with compact private keys that can be used to reconstruct the corresponding images. METHOD Our approach utilizes a neural auto-encoder. The convolutional filter outputs are passed through sparsifying transformations to produce multiple compact codes, each responsible for reconstructing different attributes of the image. The key privacy-preserving element in this process is obfuscation with specific pseudo-random noise: once applied to the codes, it becomes computationally infeasible for an attacker to guess the correct representation for all the codes, thereby preserving the privacy of the images. RESULTS The proposed framework was implemented and evaluated using chest X-ray images for different medical image analysis tasks, including classification, segmentation, and texture analysis. Additionally, we thoroughly assessed the robustness of our method against various attacks using both supervised and unsupervised algorithms. CONCLUSION This study provides a novel, optimized, and privacy-assured data-sharing mechanism for medical images, enabling secure multi-party sharing. While we have demonstrated its effectiveness with chest X-ray images, the mechanism can be applied to other medical imaging modalities as well.
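The keyed-noise obfuscation described above can be illustrated with a seeded pseudo-random generator: the holder of the key (seed) can regenerate the exact noise realization and subtract it, while anyone else sees only the noisy code. A toy sketch of that single idea, not the paper's sparsifying-transform pipeline (names and the noise scale are ours):

```python
import numpy as np

def obfuscate(code, key_seed, noise_scale=10.0):
    """Add seeded pseudo-random noise to a code vector (toy sketch).

    The key_seed plays the role of the compact private key: without it the
    noise realization cannot be reproduced, so the stored representation
    reveals little about the underlying code.
    """
    rng = np.random.default_rng(key_seed)
    noise = rng.normal(0.0, noise_scale, size=np.shape(code))
    return np.asarray(code, dtype=float) + noise

def deobfuscate(obfuscated_code, key_seed, noise_scale=10.0):
    """Regenerate the same seeded noise and subtract it to recover the code."""
    rng = np.random.default_rng(key_seed)
    noise = rng.normal(0.0, noise_scale, size=np.shape(obfuscated_code))
    return obfuscated_code - noise
```

In the paper's setting this obfuscation is applied to multiple sparse codes at once, so an attacker must guess the correct noise for all of them simultaneously, which is what makes recovery computationally infeasible.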
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Behrooz Razeghi
- Department of Computer Science, University of Geneva, Switzerland; Idiap Research Institute, Switzerland
- Sohrab Ferdowsi
- Department of Radiology and Medical Informatics, University of Geneva, Switzerland
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Deniz Gündüz
- Department of Electrical and Electronic Engineering, Imperial College London, UK
- Douglas Teodoro
- Department of Radiology and Medical Informatics, University of Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, The Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Denmark; University Research and Innovation Center, Óbuda University, Budapest, Hungary
6
Shiri I, Amini M, Yousefirizi F, Vafaei Sadr A, Hajianfar G, Salimi Y, Mansouri Z, Jenabi E, Maghsudi M, Mainta I, Becker M, Rahmim A, Zaidi H. Information fusion for fully automated segmentation of head and neck tumors from PET and CT images. Med Phys 2024; 51:319-333. PMID: 37475591; DOI: 10.1002/mp.16615.
Abstract
BACKGROUND PET/CT images combine anatomic and metabolic data, providing complementary information that can improve clinical task performance. PET image segmentation algorithms that exploit the available multi-modal information are still lacking. PURPOSE Our study aimed to assess the performance of PET and CT image fusion for gross tumor volume (GTV) segmentation of head and neck cancers (HNCs) utilizing conventional, deep learning (DL), and output-level voting-based fusions. METHODS The study is based on a total of 328 histologically confirmed HNCs from six different centers. Images were automatically cropped to a 200 × 200 head-and-neck region box, and CT and PET images were normalized for further processing. Eighteen conventional image-level fusions were implemented. In addition, a modified U2-Net architecture served as the baseline DL fusion model, with three different input-, layer-, and decision-level information fusions. Simultaneous truth and performance level estimation (STAPLE) and majority voting were employed to merge the different segmentation outputs (from PET and from the image-level and network-level fusions), that is, output-level (voting-based) information fusion. The networks were trained in a 2D manner with a batch size of 64. Twenty percent of the dataset, stratified by center (20% in each center), was used for final result reporting. Standard segmentation metrics and conventional PET metrics, such as SUV, were calculated. RESULTS Among the single modalities, PET performed reasonably well, with a Dice score of 0.77 ± 0.09, whereas CT did not perform acceptably, reaching a Dice score of only 0.38 ± 0.22.
Conventional fusion algorithms obtained Dice scores in the range 0.76-0.81, with guided-filter-based context enhancement (GFCE) at the low end, and anisotropic diffusion and Karhunen-Loeve transform fusion (ADF), multi-resolution singular value decomposition (MSVD), and multi-level image decomposition based on latent low-rank representation (MDLatLRR) at the high end. All DL fusion models achieved Dice scores of 0.80. Output-level voting-based models outperformed all other models, achieving a Dice score of 0.84 for Majority_ImgFus, Majority_All, and Majority_Fast. A mean error of almost zero was achieved for all fusions using SUVpeak, SUVmean, and SUVmedian. CONCLUSION PET/CT information fusion adds significant value to segmentation tasks, considerably outperforming PET-only and CT-only methods. Both conventional image-level and DL fusions achieve competitive results, while output-level fusion using majority voting over several algorithms yields statistically significant improvements in the segmentation of HNC.
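The output-level majority voting and the Dice metric reported above are simple to state precisely: a voxel is foreground in the fused mask when more than half of the candidate segmentations mark it, and Dice measures twice the overlap divided by the total foreground. A minimal NumPy sketch (function names are ours):

```python
import numpy as np

def majority_vote(masks):
    """Fuse binary segmentation masks by per-voxel majority voting.

    masks : iterable of equally shaped 0/1 arrays (one per algorithm).
    A voxel is foreground when a strict majority of masks mark it.
    """
    stack = np.stack([np.asarray(m) for m in masks]).astype(int)
    return (stack.sum(axis=0) * 2 > len(stack)).astype(int)

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

STAPLE, the other output-level method named in the abstract, replaces this hard vote with an EM-estimated weighting of each rater's sensitivity and specificity; the simple majority shown here is the Majority_* variant.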
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Mehdi Amini
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada
| | - Alireza Vafaei Sadr
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Department of Public Health Sciences, College of Medicine, The Pennsylvania State University, Hershey, USA
| | - Ghasem Hajianfar
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Elnaz Jenabi
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Mehdi Maghsudi
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
| | - Ismini Mainta
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Minerva Becker
- Service of Radiology, Geneva University Hospital, Geneva, Switzerland
| | - Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada
- Department of Radiology and Physics, University of British Columbia, Vancouver, Canada
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
7
Vafaei Sadr A, Bülow R, von Stillfried S, Schmitz NEJ, Pilva P, Hölscher DL, Ha PP, Schweiker M, Boor P. Operational greenhouse-gas emissions of deep learning in digital pathology: a modelling study. Lancet Digit Health 2024; 6:e58-e69. PMID: 37996339; PMCID: PMC10728828; DOI: 10.1016/s2589-7500(23)00219-4.
Abstract
BACKGROUND Deep learning is a promising way to improve health care. Image-processing medical disciplines, such as pathology, are expected to be transformed by deep learning. The first clinically applicable deep-learning diagnostic support tools are already available in cancer pathology, and their number is increasing. However, data on the environmental sustainability of these tools are scarce. We aimed to conduct an environmental-sustainability analysis of a theoretical implementation of deep learning in patient-care pathology. METHODS For this modelling study, we first assembled and calculated relevant data and parameters of a digital-pathology workflow. Data were breast and prostate specimens from the university clinic at the Institute of Pathology of the Rheinisch-Westfälische Technische Hochschule Aachen (Aachen, Germany), for which commercially available deep learning was already available. Only specimens collected between Jan 1 and Dec 31, 2019 were used, to omit potential biases due to the COVID-19 pandemic. Our final selection was based on 2 representative weeks outside holidays, covering different types of specimens. To calculate carbon dioxide (CO2) or CO2 equivalent (CO2 eq) emissions of deep learning in pathology, we gathered relevant data for exact numbers and sizes of whole-slide images (WSIs), which were generated by scanning histopathology samples of prostate and breast specimens. We also evaluated different data input scenarios (including all slide tiles, only tiles containing tissue, or only tiles containing regions of interest). To convert estimated energy consumption from kWh to CO2 eq, we used the internet protocol address of the computational server and the Electricity Maps database to obtain information on the sources of the local electricity grid (ie, renewable vs non-renewable), and estimated the number of trees and proportion of the local and world's forests needed to sequester the CO2 eq emissions. 
We calculated the computational requirements and CO2 eq emissions of 30 deep-learning models that varied in task and size. The first scenario represented the use of one commercially available deep-learning model for one task in one case (1-task), the second scenario considered two deep-learning models for two tasks per case (2-task), the third scenario represented a future, potentially automated workflow that could handle 7 tasks per case (7-task), and the fourth scenario represented the use of a single potential, large, computer-vision model that could conduct multiple tasks (multitask). We also compared the performance (ie, accuracy) and CO2 eq emissions of different deep-learning models for the classification of renal cell carcinoma on WSIs, also from Rheinisch-Westfälische Technische Hochschule Aachen. We also tested other approaches to reducing CO2 eq emissions, including model pruning and an alternative method for histopathology analysis (pathomics). FINDINGS The pathology database contained 35 552 specimens (237 179 slides), 6420 of which were prostate specimens (10 115 slides) and 11 801 of which were breast specimens (19 763 slides). We selected and subsequently digitised 140 slides from eight breast-cancer cases and 223 slides from five prostate-cancer cases. Applying large deep-learning models on all WSI tiles of prostate and breast pathology cases would result in yearly CO2 eq emissions of 7·65 metric tons (t; 95% CI 7·62-7·68) with the use of a single deep-learning model per case; yearly CO2 eq emissions were up to 100·56 t (100·21-100·99) with the use of seven deep-learning models per case. CO2 eq emissions for different deep-learning model scenarios, data inputs, and deep-learning model sizes for all slides varied from 3·61 t (3·59-3·63) to 2795·30 t (1177·51-6482·13).
For the estimated number of overall pathology cases worldwide, the yearly CO2 eq emissions varied, reaching up to 16 megatons (Mt) of CO2 eq, requiring up to 86 590 km2 (0·22%) of world forest to sequester the CO2 eq emissions. Use of the 7-task scenario and small deep-learning models on slides containing tissue only could substantially reduce CO2 eq emissions worldwide by up to 141 times (0·1 Mt, 95% CI 0·1-0·1). Considering the local environment in Aachen, Germany, the maximum CO2 eq emission from the use of deep learning in digital pathology only would require 32·8% (95% CI 13·8-76·6) of the local forest to sequester the CO2 eq emissions. A single pathomics run on a tissue could provide information that was comparable to or even better than the output of multitask deep-learning models, but with 147 times reduced CO2 eq emissions. INTERPRETATION Our findings suggest that widespread use of deep learning in pathology might have considerable global-warming potential. The medical community, policy decision makers, and the public should be aware of this potential and encourage the use of CO2 eq emissions reduction strategies where possible. FUNDING German Research Foundation, European Research Council, German Federal Ministry of Education and Research, Health, Economic Affairs and Climate Action, and the Innovation Fund of the Federal Joint Committee.
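The core arithmetic behind such estimates — energy consumption converted to CO2-equivalent emissions via the grid's carbon intensity, then to a sequestration footprint — can be sketched as follows. The numbers and the 22 kg CO2/tree/year sequestration rate are illustrative assumptions, not the study's data.

```python
def co2_eq_kg(energy_kwh: float, grid_kg_co2_per_kwh: float) -> float:
    """CO2-equivalent emissions (kg) for a given energy draw,
    scaled by the local grid's carbon intensity."""
    return energy_kwh * grid_kg_co2_per_kwh

def trees_for_sequestration(co2_kg: float, kg_per_tree_year: float = 22.0) -> float:
    """Rough number of trees needed to absorb the emissions over one year
    (22 kg CO2 per tree per year is a ballpark assumption)."""
    return co2_kg / kg_per_tree_year

# Illustrative: 10 000 kWh of GPU inference on a 0.4 kg CO2/kWh grid
emissions = co2_eq_kg(10_000, 0.4)          # 4000.0 kg CO2 eq
trees = trees_for_sequestration(emissions)  # ~182 trees for one year
```

The same two-step conversion (kWh → CO2 eq → forest area) underlies the study's country-level figures; only the measured energy and the grid mix differ.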
Affiliation(s)
- Alireza Vafaei Sadr
- Institute of Pathology, University Hospital Aachen, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany; Department of Public Health Sciences, College of Medicine, Pennsylvania State University, Hershey, PA, USA
| | - Roman Bülow
- Institute of Pathology, University Hospital Aachen, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany
| | - Saskia von Stillfried
- Institute of Pathology, University Hospital Aachen, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany
| | - Nikolas E J Schmitz
- Institute of Pathology, University Hospital Aachen, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany
| | - Pourya Pilva
- Institute of Pathology, University Hospital Aachen, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany
| | - David L Hölscher
- Institute of Pathology, University Hospital Aachen, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany
| | - Peiman Pilehchi Ha
- Healthy Living Spaces Lab, Institute for Occupational, Social and Environmental Medicine, Medical Faculty, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany
| | - Marcel Schweiker
- Healthy Living Spaces Lab, Institute for Occupational, Social and Environmental Medicine, Medical Faculty, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany
| | - Peter Boor
- Institute of Pathology, University Hospital Aachen, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany; Department of Nephrology and Immunology, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany.
8
Shiri I, Salimi Y, Maghsudi M, Jenabi E, Harsini S, Razeghi B, Mostafaei S, Hajianfar G, Sanaat A, Jafari E, Samimi R, Khateri M, Sheikhzadeh P, Geramifar P, Dadgar H, Bitrafan Rajabi A, Assadi M, Bénard F, Vafaei Sadr A, Voloshynovskiy S, Mainta I, Uribe C, Rahmim A, Zaidi H. Differential privacy preserved federated transfer learning for multi-institutional 68Ga-PET image artefact detection and disentanglement. Eur J Nucl Med Mol Imaging 2023; 51:40-53. PMID: 37682303; PMCID: PMC10684636; DOI: 10.1007/s00259-023-06418-7.
Abstract
PURPOSE Image artefacts continue to pose challenges in clinical molecular imaging, resulting in misdiagnoses, additional radiation doses to patients and financial costs. Mismatch and halo artefacts occur frequently in gallium-68 (68Ga)-labelled compounds whole-body PET/CT imaging. Correcting for these artefacts is not straightforward and requires algorithmic developments, given that conventional techniques have failed to address them adequately. In the current study, we employed differential privacy-preserving federated transfer learning (FTL) to manage clinical data sharing and tackle privacy issues for building centre-specific models that detect and correct artefacts present in PET images. METHODS Altogether, 1413 patients with 68Ga prostate-specific membrane antigen (PSMA)/DOTA-TATE (TOC) PET/CT scans from 3 countries, including 8 different centres, were enrolled in this study. CT-based attenuation and scatter correction (CT-ASC) was used in all centres for quantitative PET reconstruction. Prior to model training, an experienced nuclear medicine physician reviewed all images to ensure the use of high-quality, artefact-free PET images (421 patients' images). A deep neural network (modified U2Net) was trained on 80% of the artefact-free PET images to utilize centre-based (CeBa), centralized (CeZe) and the proposed differential privacy FTL frameworks. Quantitative analysis was performed in 20% of the clean data (with no artefacts) in each centre. A panel of two nuclear medicine physicians conducted qualitative assessment of image quality, diagnostic confidence and image artefacts in 128 patients with artefacts (256 images for CT-ASC and FTL-ASC). RESULTS The three approaches investigated in this study for 68Ga-PET imaging (CeBa, CeZe and FTL) resulted in a mean absolute error (MAE) of 0.42 ± 0.21 (CI 95%: 0.38 to 0.47), 0.32 ± 0.23 (CI 95%: 0.27 to 0.37) and 0.28 ± 0.15 (CI 95%: 0.25 to 0.31), respectively. 
Statistical analysis using the Wilcoxon test revealed significant differences between the three approaches, with FTL outperforming CeBa and CeZe (p-value < 0.05) in the clean test set. The qualitative assessment demonstrated that FTL-ASC significantly improved image quality and diagnostic confidence and decreased image artefacts, compared to CT-ASC in 68Ga-PET imaging. In addition, mismatch and halo artefacts were successfully detected and disentangled in the chest, abdomen and pelvic regions in 68Ga-PET imaging. CONCLUSION The proposed approach benefits from using large datasets from multiple centres while preserving patient privacy. Qualitative assessment by nuclear medicine physicians showed that the proposed model correctly addressed two main challenging artefacts in 68Ga-PET imaging. This technique could be integrated in the clinic for 68Ga-PET imaging artefact detection and disentanglement using multicentric heterogeneous datasets.
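The privacy-preserving aggregation idea — no centre shares raw images, and each centre's contribution to the shared model is clipped and noised so it cannot be exactly recovered — can be illustrated with a toy averaging round. This is a stand-in sketch, not the paper's FTL framework (which trains a modified U2Net); the clip value and noise scale are hypothetical.

```python
import numpy as np

def dp_federated_average(client_weights, clip=1.0, noise_std=0.01, seed=0):
    """Toy differential-privacy-flavoured aggregation round: clip each
    centre's weight update, average across centres, and add Gaussian
    noise before redistributing the global update."""
    rng = np.random.default_rng(seed)
    clipped = [np.clip(w, -clip, clip) for w in client_weights]
    avg = np.mean(clipped, axis=0)
    return avg + rng.normal(0.0, noise_std, size=avg.shape)

# Three "centres" contribute weight vectors; the server aggregates privately.
centres = [np.full(4, 0.5), np.full(4, 2.0), np.full(4, -0.5)]
global_update = dp_federated_average(centres)
```

Real deployments tie the clip norm and noise scale to a formal privacy budget; this sketch only shows where those operations sit in the aggregation step.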
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Department of Cardiology, Inselspital, University of Bern, Bern, Switzerland
| | - Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
| | - Mehdi Maghsudi
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
| | - Elnaz Jenabi
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Sara Harsini
- BC Cancer Research Institute, Vancouver, BC, Canada
| | - Behrooz Razeghi
- Department of Computer Science, University of Geneva, Geneva, Switzerland
| | - Shayan Mostafaei
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
- Department of Medical Epidemiology and Biostatistics, Karolinska Institute, Stockholm, Sweden
| | - Ghasem Hajianfar
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
| | - Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
| | - Esmail Jafari
- The Persian Gulf Nuclear Medicine Research Center, Department of Nuclear Medicine, Molecular Imaging, and Theranostics, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
| | - Rezvan Samimi
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
| | - Maziar Khateri
- Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
| | - Peyman Sheikhzadeh
- Department of Nuclear Medicine, Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran
| | - Parham Geramifar
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Habibollah Dadgar
- Cancer Research Center, Razavi Hospital, Imam Reza International University, Mashhad, Iran
| | - Ahmad Bitrafan Rajabi
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Echocardiography Research Center, Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
| | - Majid Assadi
- The Persian Gulf Nuclear Medicine Research Center, Department of Nuclear Medicine, Molecular Imaging, and Theranostics, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
| | - François Bénard
- BC Cancer Research Institute, Vancouver, BC, Canada
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
| | - Alireza Vafaei Sadr
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Department of Public Health Sciences, College of Medicine, The Pennsylvania State University, Hershey, PA, 17033, USA
| | | | - Ismini Mainta
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
| | - Carlos Uribe
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Molecular Imaging and Therapy, BC Cancer, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
| | - Arman Rahmim
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Department of Physics and Astronomy, University of British Columbia, Vancouver, Canada
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland.
- Geneva University Neuro Center, Geneva University, Geneva, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark.
9
Becker M, de Vito C, Dulguerov N, Zaidi H. PET/MR Imaging in Head and Neck Cancer. Magn Reson Imaging Clin N Am 2023; 31:539-564. PMID: 37741640; DOI: 10.1016/j.mric.2023.08.001.
Abstract
Head and neck squamous cell carcinoma (HNSCC) can either be examined with hybrid PET/MR imaging systems or sequentially, using PET/CT and MR imaging. Regardless of the acquisition technique, the superiority of MR imaging compared to CT lies in its potential to interrogate tumor and surrounding tissues with different sequences, including perfusion and diffusion. For this reason, PET/MR imaging is preferable for the detection and assessment of locoregional residual/recurrent HNSCC after therapy. In addition, MR imaging interpretation is facilitated when combined with PET. Nevertheless, distant metastases and distant second primary tumors are detected equally well with PET/MR imaging and PET/CT.
Affiliation(s)
- Minerva Becker
- Diagnostic Department, Division of Radiology, Unit of Head and Neck and Maxillofacial Radiology, Geneva University Hospitals, University of Geneva, Rue Gabrielle-Perret-Gentil 4, Geneva 14 1211, Switzerland.
| | - Claudio de Vito
- Diagnostic Department, Division of Clinical Pathology, Geneva University Hospitals, Rue Gabrielle-Perret-Gentil 4, Geneva 14 1211, Switzerland
| | - Nicolas Dulguerov
- Department of Clinical Neurosciences, Clinic of Otorhinolaryngology, Head and Neck Surgery, Unit of Cervicofacial Surgery, Geneva University Hospitals, Rue Gabrielle-Perret-Gentil 4, Geneva 14 1211, Switzerland
| | - Habib Zaidi
- Diagnostic Department, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospitals, University of Geneva, Rue Gabrielle-Perret-Gentil 4, Geneva 14 1211, Switzerland; Geneva University Neurocenter, University of Geneva, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
10
He J, Zhang Y, Chung M, Wang M, Wang K, Ma Y, Ding X, Li Q, Pu Y. Whole-body tumor segmentation from PET/CT images using a two-stage cascaded neural network with camouflaged object detection mechanisms. Med Phys 2023; 50:6151-6162. PMID: 37134002; DOI: 10.1002/mp.16438.
Abstract
BACKGROUND Whole-body Metabolic Tumor Volume (MTVwb) is an independent prognostic factor for overall survival in lung cancer patients. Automatic segmentation methods have been proposed for MTV calculation. Nevertheless, most existing methods for patients with lung cancer only segment tumors in the thoracic region. PURPOSE In this paper, we present a Two-Stage cascaded neural network integrated with Camouflaged Object Detection mEchanisms (TS-Code-Net) for automatically segmenting tumors from whole-body PET/CT images. METHODS Firstly, tumors are detected from the Maximum Intensity Projection (MIP) images of PET/CT scans, and the tumors' approximate localizations along the z-axis are identified. Secondly, segmentation is performed on the PET/CT slices that contain tumors identified in the first step. Camouflaged object detection mechanisms are utilized to distinguish the tumors from surrounding regions that have similar Standard Uptake Values (SUV) and texture appearance. Finally, the TS-Code-Net is trained by minimizing a total loss that incorporates a segmentation accuracy loss and a class imbalance loss. RESULTS The performance of the TS-Code-Net is tested on a whole-body PET/CT image dataset including 480 Non-Small Cell Lung Cancer (NSCLC) patients with five-fold cross-validation using image segmentation metrics. Our method achieves 0.70, 0.76, and 0.70 for Dice, sensitivity, and precision, respectively, which demonstrates the superiority of the TS-Code-Net over several existing methods related to metastatic lung cancer segmentation from whole-body PET/CT images. CONCLUSIONS The proposed TS-Code-Net is effective for whole-body tumor segmentation of PET/CT images. Codes for TS-Code-Net are available at: https://github.com/zyj19/TS-Code-Net.
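The first stage described above — projecting the volume to MIP images and reading off candidate z positions — reduces to simple axis-wise reductions. A minimal numpy sketch, assuming a (z, y, x) array layout and using a per-voxel SUV threshold as a stand-in for the paper's learned detector:

```python
import numpy as np

def mip_coronal_sagittal(volume: np.ndarray):
    """Maximum-intensity projections of a (z, y, x) PET volume.
    Collapsing y or x yields 2-D images in which hot lesions, and
    their z positions, stand out for a first-stage detector."""
    return volume.max(axis=1), volume.max(axis=2)  # shapes (z, x) and (z, y)

def candidate_slices(volume: np.ndarray, suv_threshold: float):
    """z indices whose axial slice contains any voxel above the
    threshold, i.e. the slices handed to the second-stage segmenter."""
    return np.where((volume > suv_threshold).any(axis=(1, 2)))[0]

vol = np.zeros((6, 8, 8))
vol[3, 4, 4] = 9.0               # one hot "lesion" voxel
zs = candidate_slices(vol, 2.5)  # only slice 3 is forwarded to stage two
```

Restricting the expensive second-stage segmentation to these candidate slices is what makes the cascade cheaper than segmenting every axial slice.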
Affiliation(s)
- Jiangping He
- Department of Electronic Engineering, Lanzhou University of Finance and Economics, Lanzhou, Gansu, China
| | - Yangjie Zhang
- Department of Electronic Engineering, Lanzhou University of Finance and Economics, Lanzhou, Gansu, China
| | - Maggie Chung
- Department of Radiology, University of California, San Francisco, California, USA
| | - Michael Wang
- Department of Pathology, University of California, San Francisco, California, USA
| | - Kun Wang
- Department of Electronic Engineering, Lanzhou University of Finance and Economics, Lanzhou, Gansu, China
| | - Yan Ma
- Department of Electronic Engineering, Lanzhou University of Finance and Economics, Lanzhou, Gansu, China
| | - Xiaoyang Ding
- Department of Electronic Engineering, Lanzhou University of Finance and Economics, Lanzhou, Gansu, China
| | - Qiang Li
- Department of Electronic Engineering, Lanzhou University of Finance and Economics, Lanzhou, Gansu, China
| | - Yonglin Pu
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
11
Zhao S, Wang J, Jin C, Zhang X, Xue C, Zhou R, Zhong Y, Liu Y, He X, Zhou Y, Xu C, Zhang L, Qian W, Zhang H, Zhang X, Tian M. Stacking Ensemble Learning-Based [18F]FDG PET Radiomics for Outcome Prediction in Diffuse Large B-Cell Lymphoma. J Nucl Med 2023; 64:1603-1609. PMID: 37500261; DOI: 10.2967/jnumed.122.265244.
Abstract
This study aimed to develop an analytic approach based on [18F]FDG PET radiomics using stacking ensemble learning to improve the outcome prediction in diffuse large B-cell lymphoma (DLBCL). Methods: In total, 240 DLBCL patients from 2 medical centers were divided into the training set (n = 141), internal testing set (n = 61), and external testing set (n = 38). Radiomics features were extracted from pretreatment [18F]FDG PET scans at the patient level using 4 semiautomatic segmentation methods (SUV threshold of 2.5, SUV threshold of 4.0 [SUV4.0], 41% of SUVmax, and SUV threshold of mean liver uptake [PERCIST]). All extracted features were harmonized with the ComBat method. The intraclass correlation coefficient was used to evaluate the reliability of radiomics features extracted by different segmentation methods. Features from the most reliable segmentation method were selected by Pearson correlation coefficient analysis and the LASSO (least absolute shrinkage and selection operator) algorithm. A stacking ensemble learning approach was applied to build radiomics-only and combined clinical-radiomics models for prediction of 2-y progression-free survival and overall survival based on 4 machine learning classifiers (support vector machine, random forests, gradient boosting decision tree, and adaptive boosting). Confusion matrix, receiver-operating-characteristic curve analysis, and survival analysis were used to evaluate the model performance. Results: Among 4 semiautomatic segmentation methods, SUV4.0 segmentation yielded the highest interobserver reliability, with 830 (66.7%) selected radiomics features. The combined model constructed by the stacking method achieved the best discrimination performance. For progression-free survival prediction in the external testing set, the areas under the receiver-operating-characteristic curve and accuracy of the stacking-based combined model were 0.771 and 0.789, respectively. 
For overall survival prediction, the stacking-based combined model achieved an area under the curve of 0.725 and an accuracy of 0.763 in the external testing set. The combined model also demonstrated a more distinct risk stratification than the International Prognostic Index in all sets (log-rank test, all P < 0.05). Conclusion: The combined model that incorporates [18F]FDG PET radiomics and clinical characteristics based on stacking ensemble learning could enable improved risk stratification in DLBCL.
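The stacking construction the authors describe — several base classifiers whose cross-validated predictions feed a meta-learner — maps directly onto scikit-learn's `StackingClassifier`. A minimal sketch on synthetic data: the feature matrix stands in for selected radiomics features, and the hyperparameters are placeholders, not the paper's tuned settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for harmonized radiomics features + a binary outcome label
X, y = make_classification(n_samples=240, n_features=30, n_informative=8,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

# The four base-learner families named in the abstract
base_learners = [
    ("svm", SVC(probability=True, random_state=0)),
    ("rf", RandomForestClassifier(random_state=0)),
    ("gbdt", GradientBoostingClassifier(random_state=0)),
    ("ada", AdaBoostClassifier(random_state=0)),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(), cv=5)
stack.fit(Xtr, ytr)
test_accuracy = stack.score(Xte, yte)  # accuracy on held-out data
```

`cv=5` makes the meta-learner train on out-of-fold base predictions, which is what prevents the stack from simply memorizing the base learners' training-set outputs.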
Affiliation(s)
- Shuilin Zhao
- Department of Nuclear Medicine and PET Center, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Institute of Nuclear Medicine and Molecular Imaging of Zhejiang University, Hangzhou, China
- Key Laboratory of Medical Molecular Imaging of Zhejiang Province, Hangzhou, China
- Cancer Center, Department of Radiology, Zhejiang Provincial People's Hospital, Affiliated People's Hospital, Hangzhou Medical College, Hangzhou, China
| | - Jing Wang
- Department of Nuclear Medicine and PET Center, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Institute of Nuclear Medicine and Molecular Imaging of Zhejiang University, Hangzhou, China
- Key Laboratory of Medical Molecular Imaging of Zhejiang Province, Hangzhou, China
| | - Chentao Jin
- Department of Nuclear Medicine and PET Center, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Institute of Nuclear Medicine and Molecular Imaging of Zhejiang University, Hangzhou, China
- Key Laboratory of Medical Molecular Imaging of Zhejiang Province, Hangzhou, China
| | - Xiang Zhang
- Department of Nuclear Medicine and PET Center, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Institute of Nuclear Medicine and Molecular Imaging of Zhejiang University, Hangzhou, China
- Key Laboratory of Medical Molecular Imaging of Zhejiang Province, Hangzhou, China
| | - Chenxi Xue
- Department of Nuclear Medicine and PET Center, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Institute of Nuclear Medicine and Molecular Imaging of Zhejiang University, Hangzhou, China
- Key Laboratory of Medical Molecular Imaging of Zhejiang Province, Hangzhou, China
| | - Rui Zhou
- Department of Nuclear Medicine and PET Center, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Institute of Nuclear Medicine and Molecular Imaging of Zhejiang University, Hangzhou, China
- Key Laboratory of Medical Molecular Imaging of Zhejiang Province, Hangzhou, China
| | - Yan Zhong
- Department of Nuclear Medicine and PET Center, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Institute of Nuclear Medicine and Molecular Imaging of Zhejiang University, Hangzhou, China
- Key Laboratory of Medical Molecular Imaging of Zhejiang Province, Hangzhou, China
| | - Yuwei Liu
- Department of Nuclear Medicine and PET Center, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Institute of Nuclear Medicine and Molecular Imaging of Zhejiang University, Hangzhou, China
- Key Laboratory of Medical Molecular Imaging of Zhejiang Province, Hangzhou, China
| | - Xuexin He
- Department of Medical Oncology, Huashan Hospital of Fudan University, Shanghai, China
| | - Youyou Zhou
- Department of Nuclear Medicine and PET Center, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Institute of Nuclear Medicine and Molecular Imaging of Zhejiang University, Hangzhou, China
- Key Laboratory of Medical Molecular Imaging of Zhejiang Province, Hangzhou, China
| | - Caiyun Xu
- Department of Nuclear Medicine, First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Traditional Chinese Medicine), Hangzhou, China
| | - Lixia Zhang
- Department of Nuclear Medicine, First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Traditional Chinese Medicine), Hangzhou, China
| | - Wenbin Qian
- Department of Hematology, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
| | - Hong Zhang
- Department of Nuclear Medicine and PET Center, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China;
- Institute of Nuclear Medicine and Molecular Imaging of Zhejiang University, Hangzhou, China
- Key Laboratory of Medical Molecular Imaging of Zhejiang Province, Hangzhou, China
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, China; and
| | - Xiaohui Zhang
- Department of Nuclear Medicine and PET Center, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Institute of Nuclear Medicine and Molecular Imaging of Zhejiang University, Hangzhou, China
- Key Laboratory of Medical Molecular Imaging of Zhejiang Province, Hangzhou, China
| | - Mei Tian
- Department of Nuclear Medicine and PET Center, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China;
- Institute of Nuclear Medicine and Molecular Imaging of Zhejiang University, Hangzhou, China
- Key Laboratory of Medical Molecular Imaging of Zhejiang Province, Hangzhou, China
- Human Phenome Institute, Fudan University, Shanghai, China
12
Amini M, Pursamimi M, Hajianfar G, Salimi Y, Saberi A, Mehri-Kakavand G, Nazari M, Ghorbani M, Shalbaf A, Shiri I, Zaidi H. Machine learning-based diagnosis and risk classification of coronary artery disease using myocardial perfusion imaging SPECT: A radiomics study. Sci Rep 2023; 13:14920. PMID: 37691039; PMCID: PMC10493219; DOI: 10.1038/s41598-023-42142-w.
Abstract
This study aimed to investigate the diagnostic performance of machine learning-based radiomics analysis to diagnose coronary artery disease status and risk from rest/stress Myocardial Perfusion Imaging (MPI) single-photon emission computed tomography (SPECT). A total of 395 patients suspected of having coronary artery disease who underwent 2-day stress-rest protocol MPI SPECT were enrolled in this study. The left ventricle myocardium, excluding the cardiac cavity, was manually delineated on rest and stress images to define a volume of interest. In addition to clinical features (age, sex, family history, diabetes status, smoking, and ejection fraction), a total of 118 radiomics features were extracted from rest and stress MPI SPECT images to establish different feature sets, including Rest-, Stress-, Delta-, and Combined-radiomics (all together) feature sets. The data were randomly divided into 80% and 20% subsets for training and testing, respectively. The performance of classifiers built from combinations of three feature-selection methods and nine machine learning algorithms was evaluated for two different diagnostic tasks: 1) normal/abnormal (no CAD vs. CAD) classification, and 2) low-risk/high-risk CAD classification. Different metrics, including the area under the ROC curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE), were reported for model evaluation. Overall, models built on the Stress feature set (compared to other feature sets), and models for the second task (compared to task 1 models), showed better performance. The Stress-mRMR-KNN (feature set-feature selection-classifier) combination reached the highest performance for task 1, with AUC, ACC, SEN, and SPE equal to 0.61, 0.63, 0.64, and 0.6, respectively. The Stress-Boruta-GB model achieved the highest performance for task 2, with AUC, ACC, SEN, and SPE of 0.79, 0.76, 0.75, and 0.76, respectively.
Diabetes status (from the clinical features) and dependence count non-uniformity normalized (from the NGLDM family, representing non-uniformity in the region of interest) were the features most frequently selected from the Stress feature set for CAD risk classification. This study revealed promising results for CAD risk classification using machine learning models built on MPI SPECT radiomics. The proposed models could help alleviate the labor-intensive MPI SPECT interpretation process regarding CAD status and can potentially expedite the diagnostic process.
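The naming convention above (feature set-feature selection-classifier, e.g. Stress-mRMR-KNN) describes a standard two-stage pipeline. A sketch with scikit-learn on synthetic data: mRMR and Boruta are not in scikit-learn, so univariate ANOVA selection stands in for the selection step, and the 118-feature matrix is fabricated for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# 118 synthetic "radiomics" features, binary CAD label, 80/20 split as in the study
X, y = make_classification(n_samples=395, n_features=118, n_informative=12,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    SelectKBest(f_classif, k=15),  # stand-in for mRMR/Boruta selection
    KNeighborsClassifier(),        # the "KNN" half of Stress-mRMR-KNN
)
model.fit(Xtr, ytr)
accuracy = model.score(Xte, yte)
```

Fitting selection and classifier inside one pipeline ensures the selector sees only training folds, avoiding the feature-selection leakage that inflates test metrics.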
Affiliation(s)
- Mehdi Amini
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Mohamad Pursamimi
  - Department of Biomedical Engineering and Medical Physics, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ghasem Hajianfar
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Yazdan Salimi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Abdollah Saberi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Ghazal Mehri-Kakavand
  - Department of Medical Physics, School of Medicine, Semnan University of Medical Sciences, Semnan, Iran
- Mostafa Nazari
  - Department of Biomedical Engineering and Medical Physics, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mahdi Ghorbani
  - Department of Biomedical Engineering and Medical Physics, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ahmad Shalbaf
  - Department of Biomedical Engineering and Medical Physics, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Isaac Shiri
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
  - Department of Cardiology, Inselspital, University of Bern, Bern, Switzerland
- Habib Zaidi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
  - University Research and Innovation Center, Obuda University, Budapest, Hungary
  - Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
  - Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
13
Sabouri M, Hajianfar G, Hosseini Z, Amini M, Mohebi M, Ghaedian T, Madadi S, Rastgou F, Oveisi M, Bitarafan Rajabi A, Shiri I, Zaidi H. Myocardial Perfusion SPECT Imaging Radiomic Features and Machine Learning Algorithms for Cardiac Contractile Pattern Recognition. J Digit Imaging 2023; 36:497-509. [PMID: 36376780 PMCID: PMC10039187 DOI: 10.1007/s10278-022-00705-9] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2022] [Revised: 08/31/2022] [Accepted: 09/15/2022] [Indexed: 11/16/2022] Open
Abstract
A U-shaped contraction pattern was shown to be associated with a better cardiac resynchronization therapy (CRT) response. The main goal of this study was to automatically recognize left ventricular contractile patterns using machine learning algorithms trained on conventional quantitative features (ConQuaFea) and radiomic features extracted from gated single-photon emission computed tomography myocardial perfusion imaging (GSPECT MPI). Among the 98 patients with standard resting GSPECT MPI included in this study, 29 received CRT and 69 did not (the latter also met CRT inclusion criteria but, at the time of data collection, had not yet received or had refused treatment). The 69 non-CRT patients were used for training, and the 29 CRT patients for testing. The models were built using features from three distinct feature sets (ConQuaFea, radiomics, and ConQuaFea + radiomics (combined)), which were chosen using recursive feature elimination (RFE) feature selection (FS), and then trained using seven different machine learning (ML) classifiers. In addition, CRT outcome prediction was assessed by different treatment inclusion criteria as the study's final phase. The MLP classifier had the highest performance among ConQuaFea models (AUC, SEN, SPE = 0.80, 0.85, 0.76). RF achieved the best performance among radiomic models, with AUC, SEN, and SPE of 0.65, 0.62, and 0.68, respectively. GB and RF achieved the best performance among the combined models, with AUC, SEN, and SPE of 0.78, 0.92, and 0.63 and of 0.74, 0.93, and 0.56, respectively. A promising outcome was obtained when using radiomics and ConQuaFea from GSPECT MPI to detect left ventricular contractile patterns by machine learning.
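RFE, the feature selection method used above, repeatedly discards the lowest-ranked feature until the desired number remains. A minimal pure-Python sketch with a hypothetical importance function (a real pipeline, e.g. scikit-learn's `RFE`, would refit the classifier at each step; the feature names and scores below are illustrative):

```python
def rfe(features, importance_fn, n_keep):
    """Recursive feature elimination: score the surviving features,
    drop the weakest one, and repeat until n_keep remain."""
    selected = list(features)
    while len(selected) > n_keep:
        scores = importance_fn(selected)  # one score per surviving feature
        weakest = min(range(len(selected)), key=lambda i: scores[i])
        del selected[weakest]
    return selected

# Hypothetical fixed importances standing in for refit model weights
weights = {"glcm_contrast": 0.9, "mean_suv": 0.4, "ngldm_dcnn": 0.8, "volume": 0.1}
kept = rfe(list(weights), lambda feats: [weights[f] for f in feats], n_keep=2)
# kept == ["glcm_contrast", "ngldm_dcnn"]
```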
Affiliation(s)
- Maziar Sabouri
  - Department of Medical Physics, School of Medicine, Iran University of Medical Science, Tehran, Iran
  - Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Science, Tehran, Iran
- Ghasem Hajianfar
  - Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Science, Tehran, Iran
- Zahra Hosseini
  - Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Science, Tehran, Iran
- Mehdi Amini
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Mobin Mohebi
  - Department of Biomedical Engineering, Tarbiat Modares University, Tehran, Iran
- Tahereh Ghaedian
  - Nuclear Medicine and Molecular Imaging Research Center, School of Medicine, Namazi Teaching Hospital, Shiraz University of Medical Sciences, Shiraz, Iran
- Shabnam Madadi
  - Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Science, Tehran, Iran
- Fereydoon Rastgou
  - Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Science, Tehran, Iran
- Mehrdad Oveisi
  - Comprehensive Cancer Centre, School of Cancer & Pharmaceutical Sciences, Faculty of Life Sciences & Medicine, King's College London, London, UK
  - Department of Computer Science, University of British Columbia, Vancouver, BC, Canada
- Ahmad Bitarafan Rajabi
  - Echocardiography Research Center, Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
  - Cardiovascular Interventional Research Center, Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Isaac Shiri
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Habib Zaidi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
  - Geneva University Neurocenter, Geneva University, Geneva, Switzerland
  - Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
  - Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
14
Salehi M, Vafaei Sadr A, Mahdavi SR, Arabi H, Shiri I, Reiazi R. Deep Learning-based Non-rigid Image Registration for High-dose Rate Brachytherapy in Inter-fraction Cervical Cancer. J Digit Imaging 2023; 36:574-587. [PMID: 36417026 PMCID: PMC10039214 DOI: 10.1007/s10278-022-00732-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 07/04/2022] [Accepted: 07/18/2022] [Indexed: 11/25/2022] Open
Abstract
In this study, an inter-fraction organ deformation simulation framework for locally advanced cervical cancer (LACC), which considers anatomical flexibility, rigidity, and motion within an image deformation, was proposed. Data included 57 CT scans (7202 2D slices) of patients with LACC, randomly divided into training (n = 42) and test (n = 15) datasets. In addition to the CT images and the corresponding RT structures (bladder, cervix, and rectum), the bone was segmented, and the couches were removed. A correlated stochastic field of the same size as the target image (used for deformation) was simulated to produce a general random deformation. The deformation field was optimized to have maximum amplitude in the rectum region, moderate amplitude in the bladder region, and minimal amplitude within bony structures. DIRNet, a convolutional neural network consisting of convolutional regressor, spatial transformation, and resampling blocks, was implemented with different parameter settings. Mean Dice indices of 0.89 ± 0.02, 0.96 ± 0.01, and 0.93 ± 0.02 were obtained for the cervix, bladder, and rectum (defined as organs at risk), respectively. Furthermore, mean average symmetric surface distances of 1.61 ± 0.46 mm for the cervix, 1.17 ± 0.15 mm for the bladder, and 1.06 ± 0.42 mm for the rectum were achieved. In addition, mean Jaccard indices of 0.86 ± 0.04 for the cervix, 0.93 ± 0.01 for the bladder, and 0.88 ± 0.04 for the rectum were observed on the test dataset (15 subjects). Deep learning-based non-rigid image registration is therefore proposed for high-dose-rate brachytherapy in inter-fraction cervical cancer, since it outperformed conventional algorithms.
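The Dice and Jaccard indices reported above are standard overlap measures between a predicted and a reference segmentation mask; a minimal sketch on flat binary masks (actual evaluation would run on 3D voxel arrays, and the masks below are illustrative):

```python
def dice_jaccard(mask_a, mask_b):
    """Dice = 2|A∩B| / (|A| + |B|); Jaccard = |A∩B| / |A∪B|
    for flat binary (0/1) masks of equal length."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    size_a, size_b = sum(mask_a), sum(mask_b)
    union = size_a + size_b - inter
    dice = 2 * inter / (size_a + size_b) if (size_a + size_b) else 1.0
    jacc = inter / union if union else 1.0
    return dice, jacc

# Illustrative masks, e.g. predicted vs. reference bladder voxels
dice, jacc = dice_jaccard([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
# dice == 4/6 ≈ 0.667, jacc == 2/4 == 0.5
```

Note that Dice always weighs the intersection more heavily than Jaccard, which is why the reported Dice values sit slightly above the corresponding Jaccard values.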
Affiliation(s)
- Mohammad Salehi
  - Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Alireza Vafaei Sadr
  - Department of Theoretical Physics and Center for Astroparticle Physics, University of Geneva, Geneva, Switzerland
  - Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Seied Rabi Mahdavi
  - Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Hossein Arabi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Isaac Shiri
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Reza Reiazi
  - Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
  - Division of Radiation Oncology, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, USA
15
Dai J, Wang H, Xu Y, Chen X, Tian R. Clinical application of AI-based PET images in oncological patients. Semin Cancer Biol 2023; 91:124-142. [PMID: 36906112 DOI: 10.1016/j.semcancer.2023.03.005] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2022] [Revised: 02/28/2023] [Accepted: 03/07/2023] [Indexed: 03/11/2023]
Abstract
Based on its ability to reveal the functional status and molecular expression of tumor cells, positron emission tomography (PET) imaging has been performed for diagnosis and monitoring in numerous types of malignant disease. However, insufficient image quality, the lack of convincing evaluation tools, and intra- and interobserver variation are well-known limitations of nuclear medicine imaging and restrict its clinical application. Artificial intelligence (AI) has gained increasing interest in the field of medical imaging due to its powerful ability to collect and interpret information. The combination of AI and PET imaging can potentially provide great assistance to physicians managing patients. Radiomics, an important branch of AI applied in medical imaging, can extract hundreds of abstract mathematical features from images for further analysis. In this review, an overview of the applications of AI in PET imaging is provided, focusing on image enhancement, tumor detection, response and prognosis prediction, and correlation analyses with pathology or specific gene mutations in several types of tumors. Our aim is to describe recent clinical applications of AI-based PET imaging in malignant diseases and to outline possible future developments.
Affiliation(s)
- Jiaona Dai
  - Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
- Hui Wang
  - Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
- Yuchao Xu
  - School of Nuclear Science and Technology, University of South China, Hengyang City 421001, China
- Xiyang Chen
  - Division of Vascular Surgery, Department of General Surgery, West China Hospital, Sichuan University, Chengdu 610041, China
- Rong Tian
  - Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
16
Shiri I, Vafaei Sadr A, Akhavan A, Salimi Y, Sanaat A, Amini M, Razeghi B, Saberi A, Arabi H, Ferdowsi S, Voloshynovskiy S, Gündüz D, Rahmim A, Zaidi H. Decentralized collaborative multi-institutional PET attenuation and scatter correction using federated deep learning. Eur J Nucl Med Mol Imaging 2023; 50:1034-1050. [PMID: 36508026 PMCID: PMC9742659 DOI: 10.1007/s00259-022-06053-8] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2022] [Accepted: 11/18/2022] [Indexed: 12/15/2022]
Abstract
PURPOSE Attenuation correction and scatter compensation (AC/SC) are two main steps toward quantitative PET imaging, which remain challenging in PET-only and PET/MRI systems. These can be effectively tackled via deep learning (DL) methods. However, trustworthy and generalizable DL models commonly require well-curated, heterogeneous, and large datasets from multiple clinical centers. At the same time, owing to legal/ethical issues and privacy concerns, forming a large collective, centralized dataset poses significant challenges. In this work, we aimed to develop a DL-based model in a multicenter setting, without direct sharing of data, using federated learning (FL) for AC/SC of PET images. METHODS Non-attenuation/scatter-corrected and CT-based attenuation/scatter-corrected (CT-ASC) 18F-FDG PET images of 300 patients were enrolled in this study. The dataset came from 6 different centers, each contributing 50 patients, with scanner, image acquisition, and reconstruction protocols varying across the centers. CT-based ASC PET images served as the standard reference. All images were reviewed to include only high-quality, artifact-free PET images. Both corrected and uncorrected PET images were converted to standardized uptake values (SUVs). We used a modified nested U-Net utilizing a residual U-block in a U-shaped architecture. We evaluated two FL models, namely sequential (FL-SQ) and parallel (FL-PL), and compared their performance with a baseline centralized (CZ) learning model, wherein the data were pooled on one server, as well as with center-based (CB) models, where a model was built and evaluated separately for each center. Data from each center were divided into training (30 patients), validation (10 patients), and test (10 patients) sets. Final evaluations and reports were performed on 60 patients (10 patients from each center).
RESULTS In terms of percent SUV absolute relative error (ARE%), both the FL-SQ (CI: 12.21-14.81%) and FL-PL (CI: 11.82-13.84%) models demonstrated excellent agreement with the centralized framework (CI: 10.32-12.00%), while FL-based algorithms improved model performance by over 11% compared to the CB training strategy (CI: 22.34-26.10%). Furthermore, the Mann-Whitney test between strategies revealed no significant differences between CZ and FL-based algorithms (p-value > 0.05) in center-categorized mode. At the same time, a significant difference was observed between the different training approaches on the overall dataset (p-value < 0.05). In addition, voxel-wise comparison with respect to the reference CT-ASC exhibited similar performance for images predicted by CZ (R2 = 0.94), FL-SQ (R2 = 0.93), and FL-PL (R2 = 0.92), while the CB model achieved a far lower coefficient of determination (R2 = 0.74). Despite the strong correlations between CZ and FL-based methods relative to the reference CT-ASC, a slight underestimation of predicted voxel values was observed. CONCLUSION Deep learning-based models provide promising results toward quantitative PET image reconstruction. Specifically, we developed two FL models and compared their performance with center-based and centralized models. The proposed FL-based models achieved higher performance than center-based models, comparable to centralized models. Our work provides strong empirical evidence that the FL framework can fully benefit from the generalizability and robustness of DL models used for AC/SC in PET, while obviating the need for direct sharing of datasets between clinical imaging centers.
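The paper's exact FL-SQ and FL-PL update rules are not reproduced here; the sketch below shows only the generic federated-averaging step that parallel FL schemes typically build on, aggregating per-center model weights in proportion to local dataset size (the weight vectors and center sizes are hypothetical):

```python
def federated_average(center_weights, center_sizes):
    """Aggregate per-center model weights into a global model by
    averaging each parameter, weighted by local dataset size."""
    total = sum(center_sizes)
    n_params = len(center_weights[0])
    return [
        sum(w[i] * s for w, s in zip(center_weights, center_sizes)) / total
        for i in range(n_params)
    ]

# Hypothetical 3-parameter models from two centers of 30 patients each
global_w = federated_average([[1.0, 2.0, 0.0], [3.0, 0.0, 2.0]], [30, 30])
# global_w == [2.0, 1.0, 1.0]
```

In such a scheme only the weight vectors leave each center, which is what allows the raw PET data to stay on-site.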
Affiliation(s)
- Isaac Shiri
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Alireza Vafaei Sadr
  - Department of Theoretical Physics and Center for Astroparticle Physics, University of Geneva, Geneva, Switzerland
  - Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Azadeh Akhavan
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Yazdan Salimi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Amirhossein Sanaat
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Mehdi Amini
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Behrooz Razeghi
  - Department of Computer Science, University of Geneva, Geneva, Switzerland
- Abdollah Saberi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Hossein Arabi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Deniz Gündüz
  - Department of Electrical and Electronic Engineering, Imperial College London, London, UK
- Arman Rahmim
  - Departments of Radiology and Physics, University of British Columbia, Vancouver, Canada
  - Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Habib Zaidi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
  - Geneva University Neurocenter, Geneva University, Geneva, Switzerland
  - Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
  - Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
17
Nguyen TX, Ran AR, Hu X, Yang D, Jiang M, Dou Q, Cheung CY. Federated Learning in Ocular Imaging: Current Progress and Future Direction. Diagnostics (Basel) 2022; 12:2835. [PMID: 36428895 PMCID: PMC9689273 DOI: 10.3390/diagnostics12112835] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 11/11/2022] [Accepted: 11/14/2022] [Indexed: 11/18/2022] Open
Abstract
Advances in artificial intelligence, particularly deep learning (DL), have made tremendous impacts on the field of ocular imaging over the last few years. Specifically, DL has been utilised to detect and classify various ocular diseases on retinal photographs, optical coherence tomography (OCT) images, and OCT-angiography images. In order to achieve good robustness and generalisability of model performance, DL training strategies traditionally require extensive and diverse training datasets from various sites to be transferred and pooled into a "centralised location". However, such a data transfer process raises practical concerns related to data security and patient privacy. Federated learning (FL) is a distributed collaborative learning paradigm which enables the coordination of multiple collaborators without the need for sharing confidential data. This distributed training approach has great potential to ensure data privacy among different institutions and reduce the potential risk of data leakage from data pooling or centralisation. This review article aims to introduce the concept of FL, provide current evidence of FL in ocular imaging, and discuss potential challenges as well as future applications.
Affiliation(s)
- Truong X. Nguyen
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- An Ran Ran
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Xiaoyan Hu
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Dawei Yang
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Meirui Jiang
  - Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Qi Dou
  - Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Carol Y. Cheung
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
18
Manafi-Farid R, Askari E, Shiri I, Pirich C, Asadi M, Khateri M, Zaidi H, Beheshti M. [ 18F]FDG-PET/CT radiomics and artificial intelligence in lung cancer: Technical aspects and potential clinical applications. Semin Nucl Med 2022; 52:759-780. [PMID: 35717201 DOI: 10.1053/j.semnuclmed.2022.04.004] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 04/10/2022] [Accepted: 04/13/2022] [Indexed: 02/07/2023]
Abstract
Lung cancer is the second most common cancer and the leading cause of cancer-related death worldwide. Molecular imaging using [18F]fluorodeoxyglucose positron emission tomography and/or computed tomography ([18F]FDG-PET/CT) plays an essential role in diagnosis, evaluation of response to treatment, and prediction of outcomes. The images are evaluated using qualitative and conventional quantitative indices. However, far more information is embedded in the images, which can be extracted by sophisticated algorithms. Recently, the concept of uncovering and analyzing this invisible data extracted from medical images, called radiomics, has been gaining more attention. [18F]FDG-PET/CT radiomics is currently being evaluated increasingly in lung cancer to determine whether it enhances the diagnostic performance or clinical utility of [18F]FDG-PET/CT in the management of lung cancer. In this review, we provide a short overview of the technical aspects, as discussed in different articles of this special issue. We mainly focus on the diagnostic performance of [18F]FDG-PET/CT-based radiomics and the role of artificial intelligence in non-small cell lung cancer, impacting early detection, staging, and the prediction of tumor subtypes, biomarkers, and patient outcomes.
Affiliation(s)
- Reyhaneh Manafi-Farid
  - Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Emran Askari
  - Department of Nuclear Medicine, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Isaac Shiri
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Christian Pirich
  - Division of Molecular Imaging and Theranostics, Department of Nuclear Medicine, University Hospital Salzburg, Paracelsus Medical University, Salzburg, Austria
- Mahboobeh Asadi
  - Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Maziar Khateri
  - Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Habib Zaidi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
  - Geneva University Neurocenter, Geneva University, Geneva, Switzerland
  - Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
  - Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- Mohsen Beheshti
  - Division of Molecular Imaging and Theranostics, Department of Nuclear Medicine, University Hospital Salzburg, Paracelsus Medical University, Salzburg, Austria