1
Dorri Giv M, Arabi H, Naseri S, Alipour Firouzabad L, Aghaei A, Askari E, Raeisi N, Saber Tanha A, Bakhshi Golestani Z, Dabbagh Kakhki AH, Dabbagh Kakhki VR. Evaluation of the prostate cancer and its metastases in the [68Ga]Ga-PSMA PET/CT images: deep learning method vs. conventional PET/CT processing. Nucl Med Commun 2024; 45:974-983. PMID: 39224922. DOI: 10.1097/mnm.0000000000001891.
Abstract
PURPOSE This study demonstrates the feasibility and benefits of using a deep learning-based approach for attenuation correction in [68Ga]Ga-PSMA PET scans. METHODS A dataset of 700 prostate cancer patients (mean age: 67.6 ± 5.9 years, range: 45-85 years) who underwent [68Ga]Ga-PSMA PET/computed tomography was collected. A deep learning model was trained to perform attenuation correction on these images. Quantitative accuracy was assessed using clinical data from 92 patients, comparing the deep learning-based attenuation correction (DLAC) to computed tomography-based PET attenuation correction (PET-CTAC) using mean error, mean absolute error, and root mean square error based on standard uptake value. Clinical evaluation was conducted by three specialists who performed a blinded assessment of lesion detectability and overall image quality in a subset of 50 subjects, comparing DLAC and PET-CTAC images. RESULTS The DLAC model yielded mean error, mean absolute error, and root mean square error values of -0.007 ± 0.032, 0.08 ± 0.033, and 0.252 ± 125 standard uptake value, respectively. Regarding lesion detection and image quality, DLAC showed superior performance in 16 of the 50 cases, while in 56% of the cases, the images generated by DLAC and PET-CTAC were found to have closely comparable quality and lesion detectability. CONCLUSION This study highlights significant improvements in image quality and lesion detection capabilities through the integration of DLAC in [68Ga]Ga-PSMA PET imaging. This innovative approach not only addresses challenges such as bladder radioactivity but also represents a promising method to minimize patient radiation exposure by integrating low-dose computed tomography and DLAC, ultimately improving diagnostic accuracy and patient outcomes.
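As a rough illustration of the SUV error metrics reported in this abstract, a minimal sketch is given below, assuming simple voxel-wise definitions inside a body mask; the array names and the mask are placeholders, not the authors' code.

```python
# Hedged sketch: voxel-wise SUV error metrics between a deep-learning attenuation-corrected
# PET volume (DLAC) and the CT-based reference (PET-CTAC). Names are illustrative.
import numpy as np

def suv_error_metrics(suv_dlac: np.ndarray, suv_ctac: np.ndarray, body_mask: np.ndarray):
    """Return mean error (ME), mean absolute error (MAE), and RMSE in SUV units."""
    diff = (suv_dlac - suv_ctac)[body_mask > 0]   # restrict to voxels inside the patient
    me = diff.mean()
    mae = np.abs(diff).mean()
    rmse = np.sqrt((diff ** 2).mean())
    return me, mae, rmse

# Toy example with random volumes standing in for reconstructed SUV maps
rng = np.random.default_rng(0)
ref = rng.gamma(2.0, 1.0, size=(64, 64, 64))        # "PET-CTAC" SUV volume
pred = ref + rng.normal(0, 0.05, size=ref.shape)    # "DLAC" SUV volume
mask = np.ones_like(ref)
print(suv_error_metrics(pred, ref, mask))
```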
Affiliation(s)
- Masoumeh Dorri Giv
- Nuclear Medicine Research Center, Department of Nuclear Medicine, Ghaem Hospital, Mashhad University of Medical Science, Mashhad, Iran
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology and Medical Informatics, Geneva University Hospital, Geneva, Switzerland
- Shahrokh Naseri
- Department of Medical Physics, Faculty of Medicine, Mashhad University of Medical Science, Mashhad, Iran
- Leila Alipour Firouzabad
- Department of Radiation Technology, Radiation Biology Research Center, Iran University of Medical Sciences, Tehran, Iran
- Atena Aghaei
- Nuclear Medicine Research Center, Department of Nuclear Medicine, Ghaem Hospital, Mashhad University of Medical Science, Mashhad, Iran
- Emran Askari
- Nuclear Medicine Research Center, Department of Nuclear Medicine, Ghaem Hospital, Mashhad University of Medical Science, Mashhad, Iran
- Nasrin Raeisi
- Nuclear Medicine Research Center, Department of Nuclear Medicine, Ghaem Hospital, Mashhad University of Medical Science, Mashhad, Iran
- Amin Saber Tanha
- Nuclear Medicine Research Center, Department of Nuclear Medicine, Ghaem Hospital, Mashhad University of Medical Science, Mashhad, Iran
- Zahra Bakhshi Golestani
- Nuclear Medicine Research Center, Department of Nuclear Medicine, Ghaem Hospital, Mashhad University of Medical Science, Mashhad, Iran
- Vahid Reza Dabbagh Kakhki
- Nuclear Medicine Research Center, Department of Nuclear Medicine, Ghaem Hospital, Mashhad University of Medical Science, Mashhad, Iran
2
Dell'Oro M, Huff DT, Lokre O, Kendrick J, Munian Govindan R, Ong JSL, Ebert MA, Perk TG, Francis RJ. Assessing the Heterogeneity of Response of [68Ga]Ga-PSMA-11 PET/CT Lesions in Patients With Biochemical Recurrence of Prostate Cancer. Clin Genitourin Cancer 2024; 22:102155. PMID: 39096564. DOI: 10.1016/j.clgc.2024.102155.
Abstract
INTRODUCTION Treatment of men with metastatic prostate cancer can be difficult due to the heterogeneity of response of lesions. [68Ga]Ga-PSMA-11 (PSMA) PET/CT assists with monitoring and directing clinical intervention; however, the impact of response heterogeneity has yet to be related to outcome measures. The aim of this study was to assess the impact of quantitative imaging information on the value of PSMA PET/CT to assess patient outcomes in response evaluation. PATIENTS AND METHODS Baseline and follow-up (6 months) PSMA PET/CT of 162 men with oligometastatic PC treated with standard clinical care were acquired between 2015 and 2016 for analysis. An augmentative software medical device was used to track lesions between scans and quantify lesion change to categorize them as either new, increasing, stable, decreasing, or disappeared. Quantitative imaging features describing the size, intensity, extent, change, and heterogeneity of change (based on percent change in SUVtotal) among lesions were extracted and evaluated for association with overall survival (OS) using Cox regression models. Model performance was evaluated using the c-index. RESULTS Forty-one (25%) of the subjects demonstrated a heterogeneous response at follow-up, defined as having at least 1 new or increasing lesion and at least 1 decreasing or disappeared lesion. Subjects with a heterogeneous response demonstrated significantly shorter OS than subjects without (median OS = 76.6 months vs. median OS not reached, P < .05, c-index = 0.61). In univariate analyses, SUVtotal at follow-up was most strongly associated with OS (HR = 1.29 [1.19, 1.40], P < .001, c-index = 0.73). Multivariable models applied using heterogeneity of change features demonstrated higher performance (c-index = 0.79) than models without (c-index = 0.71-0.76, P < .05). CONCLUSION Augmentative software tools enhance the evaluation of change on serial PSMA PET scans and can facilitate lesional evaluation between timepoints. This study demonstrates that a heterogeneous response at a lesional level may adversely impact patient outcomes and supports further investigation to evaluate the role of imaging to guide individualized patient management to improve clinical outcomes.
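The lesion-level response logic described above can be sketched as follows; the ±30% thresholds and variable names are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: categorize each tracked lesion from its percent change in SUVtotal, then
# flag a patient as a "heterogeneous responder" when at least one new/increasing lesion
# coexists with at least one decreasing/disappeared lesion.
from typing import List, Optional

def categorize(suv_total_baseline: Optional[float], suv_total_followup: Optional[float]) -> str:
    if suv_total_baseline is None:
        return "new"
    if suv_total_followup is None or suv_total_followup == 0:
        return "disappeared"
    pct = 100.0 * (suv_total_followup - suv_total_baseline) / suv_total_baseline
    if pct > 30:          # assumed threshold for illustration
        return "increasing"
    if pct < -30:         # assumed threshold for illustration
        return "decreasing"
    return "stable"

def heterogeneous_response(categories: List[str]) -> bool:
    progressing = any(c in ("new", "increasing") for c in categories)
    responding = any(c in ("decreasing", "disappeared") for c in categories)
    return progressing and responding

lesions = [(12.4, 3.1), (None, 8.0), (5.5, 5.9)]   # (baseline, follow-up) SUVtotal per lesion
cats = [categorize(b, f) for b, f in lesions]
print(cats, heterogeneous_response(cats))
```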
Affiliation(s)
- Mikaela Dell'Oro
- Australian Centre for Quantitative Imaging, School of Medicine, The University of Western Australia, Perth, Australia
- Jake Kendrick
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Australia; Centre for Advanced Technologies in Cancer Research, Perth, Australia
- Jeremy S L Ong
- Department of Nuclear Medicine, Fiona Stanley Hospital, Murdoch, Australia
- Martin A Ebert
- Australian Centre for Quantitative Imaging, School of Medicine, The University of Western Australia, Perth, Australia; School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Australia; Centre for Advanced Technologies in Cancer Research, Perth, Australia; Department of Radiation Oncology, Sir Charles Gairdner Hospital, Nedlands, Australia
- Roslyn J Francis
- Australian Centre for Quantitative Imaging, School of Medicine, The University of Western Australia, Perth, Australia; Department of Nuclear Medicine, Sir Charles Gairdner Hospital, Nedlands, Australia
3
Pang L, Zhang Z, Liu G, Hu P, Chen S, Gu Y, Huang Y, Zhang J, Shi Y, Cao T, Zhang Y, Shi H. Comparison of the Accuracy of a Deep Learning Method for Lesion Detection in PET/CT and PET/MRI Images. Mol Imaging Biol 2024; 26:802-811. PMID: 39141195. DOI: 10.1007/s11307-024-01943-9.
Abstract
PURPOSE Develop a universal lesion recognition algorithm for PET/CT and PET/MRI, validate it, and explore factors affecting performance. PROCEDURES The 2022 AutoPet Challenge's 1014 PET/CT dataset was used to train the lesion detection model based on 2D and 3D fractional-residual (F-Res) models. To extend this to PET/MRI, a network for converting MR images to synthetic CT (sCT) was developed, using 41 sets of whole-body MR and corresponding CT data. 38 patients' PET/CT and PET/MRI data were used to verify the universal lesion recognition algorithm. Image quality was assessed using signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). Total lesion glycolysis (TLG), metabolic tumor volume (MTV), and lesion count were calculated from the resultant lesion masks. Experienced physicians reviewed and corrected the model's outputs, establishing the ground truth. The performance of the lesion detection deep-learning model on different PET images was assessed by detection accuracy, precision, recall, and dice coefficients. Data with a detection accuracy score (DAS) less than 1 was used for analysis of outliers. RESULTS Compared to PET/CT, PET/MRI scans had a significantly longer delay time (135 ± 45 min vs 61 ± 12 min) and lower SNR (6.17 ± 1.11 vs 9.27 ± 2.77). However, CNR values were similar (7.37 ± 5.40 vs 5.86 ± 6.69). PET/MRI detected more lesions (with a mean difference of -3.184). TLG and MTV showed no significant differences between PET/CT and PET/MRI (TLG: 119.18 ± 203.15 vs 123.57 ± 151.58, p = 0.41; MTV: 36.58 ± 57.00 vs 39.16 ± 48.34, p = 0.33). A total of 12 PET/CT and 14 PET/MRI datasets were included in the analysis of outliers. Outlier analysis revealed PET/CT anomalies in intestines, ureters, and muscles, while PET/MRI anomalies were in intestines, testicles, and low tracer uptake regions, with false positives in ureters (PET/CT) and intestines/testicles (PET/MRI). CONCLUSION The deep learning lesion detection model performs well with both PET/CT and PET/MRI. SNR, CNR and reconstruction parameters minimally impact recognition accuracy, but delay time post-injection is significant.
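For readers unfamiliar with the image-quality metrics used here, the sketch below assumes the common ROI-based definitions (SNR = background mean / background SD; CNR = (lesion mean − background mean) / background SD); the study's exact ROI placement and definitions may differ, and all names are placeholders.

```python
# Hedged sketch of ROI-based SNR and CNR from a PET volume and two binary masks.
import numpy as np

def snr_cnr(image: np.ndarray, lesion_mask: np.ndarray, background_mask: np.ndarray):
    bg = image[background_mask > 0]
    lesion = image[lesion_mask > 0]
    snr = bg.mean() / bg.std()
    cnr = (lesion.mean() - bg.mean()) / bg.std()
    return snr, cnr

rng = np.random.default_rng(1)
img = rng.normal(2.0, 0.3, size=(32, 32, 32))   # synthetic background uptake
img[10:14, 10:14, 10:14] += 4.0                 # synthetic "lesion"
les = np.zeros_like(img); les[10:14, 10:14, 10:14] = 1
bg = np.zeros_like(img); bg[20:30, 20:30, 20:30] = 1
print(snr_cnr(img, les, bg))
```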
Affiliation(s)
- Lifang Pang
- Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, No. 180, Fenglin Road, Shanghai, 200032, People's Republic of China
- Shanghai Institute of Medical Imaging, Shanghai, 200032, China
- Institute of Nuclear Medicine, Fudan University, Shanghai, 200032, China
- Cancer Prevention and Treatment Center, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Zheng Zhang
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai, 201807, China
- Guobing Liu
- Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, No. 180, Fenglin Road, Shanghai, 200032, People's Republic of China
- Shanghai Institute of Medical Imaging, Shanghai, 200032, China
- Institute of Nuclear Medicine, Fudan University, Shanghai, 200032, China
- Cancer Prevention and Treatment Center, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Pengcheng Hu
- Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, No. 180, Fenglin Road, Shanghai, 200032, People's Republic of China
- Shanghai Institute of Medical Imaging, Shanghai, 200032, China
- Institute of Nuclear Medicine, Fudan University, Shanghai, 200032, China
- Cancer Prevention and Treatment Center, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Shuguang Chen
- Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, No. 180, Fenglin Road, Shanghai, 200032, People's Republic of China
- Shanghai Institute of Medical Imaging, Shanghai, 200032, China
- Institute of Nuclear Medicine, Fudan University, Shanghai, 200032, China
- Yushen Gu
- Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, No. 180, Fenglin Road, Shanghai, 200032, People's Republic of China
- Shanghai Institute of Medical Imaging, Shanghai, 200032, China
- Institute of Nuclear Medicine, Fudan University, Shanghai, 200032, China
- Cancer Prevention and Treatment Center, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Yukun Huang
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai, 201807, China
- Jia Zhang
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai, 201807, China
- Yuhang Shi
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai, 201807, China
- Tuoyu Cao
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai, 201807, China
- Yiqiu Zhang
- Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, No. 180, Fenglin Road, Shanghai, 200032, People's Republic of China
- Shanghai Institute of Medical Imaging, Shanghai, 200032, China
- Institute of Nuclear Medicine, Fudan University, Shanghai, 200032, China
- Cancer Prevention and Treatment Center, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Hongcheng Shi
- Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, No. 180, Fenglin Road, Shanghai, 200032, People's Republic of China
- Shanghai Institute of Medical Imaging, Shanghai, 200032, China
- Institute of Nuclear Medicine, Fudan University, Shanghai, 200032, China
- Cancer Prevention and Treatment Center, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
4
Lancia A, Ingrosso G, Detti B, Festa E, Bonzano E, Linguanti F, Camilli F, Bertini N, La Mattina S, Orsatti C, Francolini G, Abenavoli EM, Livi L, Aristei C, de Jong D, Al Feghali KA, Siva S, Becherini C. Biology-guided radiotherapy in metastatic prostate cancer: time to push the envelope? Front Oncol 2024; 14:1455428. PMID: 39314633. PMCID: PMC11417306. DOI: 10.3389/fonc.2024.1455428.
Abstract
The therapeutic landscape of metastatic prostate cancer has undergone a profound revolution in recent years. In addition to the introduction of novel molecules in the clinics, the field has witnessed a tremendous development of functional imaging modalities adding new biological insights which can ultimately inform tailored treatment strategies, including local therapies. The evolution and rise of Stereotactic Body Radiotherapy (SBRT) have been particularly notable in patients with oligometastatic disease, where it has been demonstrated to be a safe and effective treatment strategy yielding favorable results in terms of disease control and improved oncological outcomes. The possibility of debulking all sites of disease, matched with the ambition of potentially extending this treatment paradigm to polymetastatic patients in the not-too-distant future, makes Biology-guided Radiotherapy (BgRT) an attractive paradigm which can be used in conjunction with systemic therapy in the management of patients with metastatic prostate cancer.
Affiliation(s)
- Andrea Lancia
- Department of Radiation Oncology, San Matteo Hospital Foundation Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Pavia, Italy
- Beatrice Detti
- Radiotherapy Unit Prato, Usl Centro Toscana, Presidio Villa Fiorita, Prato, Italy
- Eleonora Festa
- Radiation Oncology Section, University of Perugia, Perugia, Italy
- Elisabetta Bonzano
- Department of Radiation Oncology, San Matteo Hospital Foundation Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Pavia, Italy
- Federico Camilli
- Radiation Oncology Section, University of Perugia, Perugia, Italy
- Niccolò Bertini
- Radiation Oncology Unit, Oncology Department, Azienda Ospedaliero Universitaria Careggi, Florence, Italy
- Salvatore La Mattina
- Department of Radiation Oncology, San Matteo Hospital Foundation Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Pavia, Italy
- Carolina Orsatti
- Radiation Oncology Unit, Oncology Department, Azienda Ospedaliero Universitaria Careggi, Florence, Italy
- Giulio Francolini
- Radiation Oncology Unit, Oncology Department, Azienda Ospedaliero Universitaria Careggi, Florence, Italy
- Lorenzo Livi
- Radiation Oncology Unit, Oncology Department, Azienda Ospedaliero Universitaria Careggi, Florence, Italy
- Cynthia Aristei
- Radiation Oncology Section, University of Perugia, Perugia, Italy
- Dorine de Jong
- Medical Affairs, RefleXion Medical, Inc., Hayward, CA, United States
- Shankar Siva
- Department of Radiation Oncology, Sir Peter MacCallum Cancer Centre, Melbourne, VIC, Australia
- Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, VIC, Australia
- Carlotta Becherini
- Radiation Oncology Unit, Oncology Department, Azienda Ospedaliero Universitaria Careggi, Florence, Italy
5
Seifert R, Gafita A, Solnes LB, Iagaru A. Prostate-specific Membrane Antigen: Interpretation Criteria, Standardized Reporting, and the Use of Machine Learning. PET Clin 2024; 19:363-369. PMID: 38705743. DOI: 10.1016/j.cpet.2024.03.002.
Abstract
Prostate-specific membrane antigen targeting positron emission tomography (PSMA-PET) is routinely used for the staging and restaging of patients with various stages of prostate cancer. For clear communication with referring physicians and to improve inter-reader agreement, the use of standardized reporting templates is mandatory. Increasingly, tumor volume is used by reporting and response assessment frameworks to prognosticate patient outcome or measure response to therapy. However, the quantification of tumor volume is often too time-consuming in routine clinical practice. Machine learning-based tools can facilitate the quantification of tumor volume for improved outcome prognostication.
Affiliation(s)
- Robert Seifert
- Department of Nuclear Medicine, Inselspital, University Hospital Bern, Bern, Switzerland; Department of Nuclear Medicine, University of Duisburg-Essen and German Cancer Consortium (DKTK)-University Hospital Essen, Essen, Germany
- Andrei Gafita
- Division of Nuclear Medicine and Molecular Imaging, Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Lilja B Solnes
- Division of Nuclear Medicine and Molecular Imaging, Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Andrei Iagaru
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Stanford University, 300 Pasteur Drive H2200, Stanford 94305, USA
6
Moraitis A, Küper A, Tran-Gia J, Eberlein U, Chen Y, Seifert R, Shi K, Kim M, Herrmann K, Fragoso Costa P, Kersting D. Future Perspectives of Artificial Intelligence in Bone Marrow Dosimetry and Individualized Radioligand Therapy. Semin Nucl Med 2024; 54:460-469. PMID: 39013673. DOI: 10.1053/j.semnuclmed.2024.06.003.
Abstract
Radioligand therapy is an emerging and effective treatment option for various types of malignancies, but may be intricately linked to hematological side effects such as anemia, lymphopenia or thrombocytopenia. The safety and efficacy of novel theranostic agents, targeting increasingly complex targets, can be well served by comprehensive dosimetry. However, optimization in patient management and patient selection based on risk-factors predicting adverse events and built upon reliable dose-response relations is still an open demand. In this context, artificial intelligence methods, especially machine learning and deep learning algorithms, may play a crucial role. This review provides an overview of upcoming opportunities for integrating artificial intelligence methods into the field of dosimetry in nuclear medicine by improving bone marrow and blood dosimetry accuracy, enabling early identification of potential hematological risk-factors, and allowing for adaptive treatment planning. It will further exemplify inspirational success stories from neighboring disciplines that may be translated to nuclear medicine practices, and will provide conceptual suggestions for future directions. In the future, we expect artificial intelligence-assisted (predictive) dosimetry combined with clinical parameters to pave the way towards truly personalized theranostics in radioligand therapy.
Affiliation(s)
- Alexandros Moraitis
- Department of Nuclear Medicine, West German Cancer Center (WTZ), University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Alina Küper
- Department of Nuclear Medicine, West German Cancer Center (WTZ), University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Johannes Tran-Gia
- Department of Nuclear Medicine, University Hospital Würzburg, Würzburg, Germany
- Uta Eberlein
- Department of Nuclear Medicine, University Hospital Würzburg, Würzburg, Germany
- Yizhou Chen
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Robert Seifert
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Moon Kim
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Ken Herrmann
- Department of Nuclear Medicine, West German Cancer Center (WTZ), University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Pedro Fragoso Costa
- Department of Nuclear Medicine, West German Cancer Center (WTZ), University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- David Kersting
- Department of Nuclear Medicine, West German Cancer Center (WTZ), University Hospital Essen, University of Duisburg-Essen, Essen, Germany
7
Talyshinskii A, Hameed BMZ, Ravinder PP, Naik N, Randhawa P, Shah M, Rai BP, Tokas T, Somani BK. Catalyzing Precision Medicine: Artificial Intelligence Advancements in Prostate Cancer Diagnosis and Management. Cancers (Basel) 2024; 16:1809. PMID: 38791888. PMCID: PMC11119252. DOI: 10.3390/cancers16101809.
Abstract
BACKGROUND The aim was to analyze the current state of deep learning (DL)-based prostate cancer (PCa) diagnosis with a focus on magnetic resonance (MR) prostate reconstruction; PCa detection/stratification/reconstruction; positron emission tomography/computed tomography (PET/CT); androgen deprivation therapy (ADT); prostate biopsy; associated challenges and their clinical implications. METHODS A search of the PubMed database was conducted based on the inclusion and exclusion criteria for the use of DL methods within the abovementioned areas. RESULTS A total of 784 articles were found, of which 64 were included. Reconstruction of the prostate, the detection and stratification of prostate cancer, the reconstruction of prostate cancer, and diagnosis on PET/CT, ADT, and biopsy were analyzed in 21, 22, 6, 7, 2, and 6 studies, respectively. Among studies describing DL use for MR-based purposes, datasets with magnetic field power of 3 T, 1.5 T, and 3/1.5 T were used in 18/19/5, 0/1/0, and 3/2/1 studies, respectively; 6 of the 7 studies analyzing DL for PET/CT diagnosis used data from a single institution. Among the radiotracers, [68Ga]Ga-PSMA-11, [18F]DCFPyl, and [18F]PSMA-1007 were used in 5, 1, and 1 study, respectively. Only two studies that analyzed DL in the context of ADT met the inclusion criteria. Both were performed with a single-institution dataset with only manual labeling of training data. Three studies, each analyzing DL for prostate biopsy, were performed with single- and multi-institutional datasets. TeUS, TRUS, and MRI were used as input modalities in two, three, and one study, respectively. CONCLUSION DL models in prostate cancer diagnosis show promise but are not yet ready for clinical use due to variability in methods, labels, and evaluation criteria. Conducting additional research while acknowledging all the limitations outlined is crucial for reinforcing the utility and effectiveness of DL-based models in clinical settings.
Affiliation(s)
- Ali Talyshinskii
- Department of Urology and Andrology, Astana Medical University, Astana 010000, Kazakhstan
- Prajwal P. Ravinder
- Department of Urology, Kasturba Medical College, Mangaluru, Manipal Academy of Higher Education, Manipal 576104, India
- Nithesh Naik
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Princy Randhawa
- Department of Mechatronics, Manipal University Jaipur, Jaipur 303007, India
- Milap Shah
- Department of Urology, Aarogyam Hospital, Ahmedabad 380014, India
- Bhavan Prasad Rai
- Department of Urology, Freeman Hospital, Newcastle upon Tyne NE7 7DN, UK
- Theodoros Tokas
- Department of Urology, Medical School, University General Hospital of Heraklion, University of Crete, 14122 Heraklion, Greece
- Bhaskar K. Somani
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Department of Urology, University Hospital Southampton NHS Trust, Southampton SO16 6YD, UK
8
Jafari E, Zarei A, Dadgar H, Keshavarz A, Manafi-Farid R, Rostami H, Assadi M. A convolutional neural network-based system for fully automatic segmentation of whole-body [68Ga]Ga-PSMA PET images in prostate cancer. Eur J Nucl Med Mol Imaging 2024; 51:1476-1487. PMID: 38095671. DOI: 10.1007/s00259-023-06555-z.
Abstract
PURPOSE The aim of this study was the development and evaluation of a fully automated tool for the detection and segmentation of mPCa lesions in whole-body [68Ga]Ga-PSMA-11 PET scans using a nnU-Net framework. METHODS In this multicenter study, a cohort of 412 patients from three different centers, with any indication of PCa, who underwent [68Ga]Ga-PSMA-11 PET/CT were enrolled. Two hundred cases from the center 1 dataset were used for training the model. A fully 3D convolutional neural network (CNN) based on the self-configuring nnU-Net framework is proposed. A subset of the center 1 dataset and the cases from center 2 and center 3 were used for testing of the model. The performance of the segmentation pipeline that was developed was evaluated by comparing the fully automatic segmentation mask with the manual segmentation of the corresponding internal and external test sets at three levels: patient-level scan classification, lesion-level detection, and voxel-level segmentation. In addition, for comparison of PET-derived quantitative biomarkers between automated and manual segmentation, whole-body PSMA tumor volume (PSMA-TV) and total lesion PSMA uptake (TL-PSMA) were calculated. RESULTS In terms of patient-level classification, the model achieved an accuracy of 83%, sensitivity of 92%, PPV of 77%, and NPV of 91% for the internal testing set. For lesion-level detection, the model achieved an accuracy of 87-94%, sensitivity of 88-95%, PPV of 98-100%, and F1-score of 93-97% across all testing sets. For voxel-level segmentation, the automated method achieved average values of 65-70% for DSC, 72-79% for PPV, 53-58% for IoU, and 62-73% for sensitivity across all testing sets. In the evaluation of volumetric parameters, there was a strong correlation between the manual and automated measurements of PSMA-TV and TL-PSMA for all centers. CONCLUSIONS The deep learning networks presented here offer promising solutions for automatically segmenting malignant lesions in prostate cancer patients using [68Ga]Ga-PSMA PET. These networks achieve a high level of accuracy in whole-body segmentation, as measured by the DSC and PPV at the voxel level. The resulting segmentations can be used for the extraction of PET-derived quantitative biomarkers and utilized for treatment response assessment and radiomic studies.
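The two PET-derived burden metrics compared here, PSMA-TV and TL-PSMA, can be computed from a segmentation mask and an SUV volume roughly as sketched below; variable names and the voxel volume are illustrative, and the study's exact implementation may differ.

```python
# Hedged sketch: whole-body PSMA tumor volume (PSMA-TV, mL) and total lesion PSMA uptake
# (TL-PSMA, commonly SUVmean x volume over all lesion voxels).
import numpy as np

def psma_burden(suv: np.ndarray, lesion_mask: np.ndarray, voxel_volume_ml: float):
    lesion_voxels = lesion_mask > 0
    psma_tv = lesion_voxels.sum() * voxel_volume_ml                       # total lesion volume (mL)
    tl_psma = suv[lesion_voxels].mean() * psma_tv if lesion_voxels.any() else 0.0
    return psma_tv, tl_psma

rng = np.random.default_rng(2)
suv = rng.gamma(2.0, 1.0, size=(48, 48, 48))          # synthetic SUV volume
mask = (suv > 6.0).astype(np.uint8)                   # stand-in for the network's segmentation
print(psma_burden(suv, mask, voxel_volume_ml=0.064))  # e.g. 4 x 4 x 4 mm voxels
```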
Affiliation(s)
- Esmail Jafari
- The Persian Gulf Nuclear Medicine Research Center, Department of Nuclear Medicine, Molecular Imaging, and Theranostics, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Amin Zarei
- IoT and Signal Processing Research Group, ICT Research Institute, Faculty of Intelligent Systems Engineering and Data Science, Persian Gulf University, Bushehr, Iran
- Habibollah Dadgar
- Cancer Research Center, RAZAVI Hospital, Imam Reza International University, Mashhad, Iran
- Ahmad Keshavarz
- IoT and Signal Processing Research Group, ICT Research Institute, Faculty of Intelligent Systems Engineering and Data Science, Persian Gulf University, Bushehr, Iran
- Reyhaneh Manafi-Farid
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Habib Rostami
- Computer Engineering Department, Faculty of Intelligent Systems Engineering and Data Science, Persian Gulf University, Bushehr, Iran
- Majid Assadi
- The Persian Gulf Nuclear Medicine Research Center, Department of Nuclear Medicine, Molecular Imaging, and Theranostics, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
9
Huang B, Yang Q, Li X, Wu Y, Liu Z, Pan Z, Zhong S, Song S, Zuo C. Deep learning-based whole-body characterization of prostate cancer lesions on [68Ga]Ga-PSMA-11 PET/CT in patients with post-prostatectomy recurrence. Eur J Nucl Med Mol Imaging 2024; 51:1173-1184. PMID: 38049657. DOI: 10.1007/s00259-023-06551-3.
Abstract
PURPOSE The automatic segmentation and detection of prostate cancer (PC) lesions throughout the body are extremely challenging due to the lesions' complexity and variability in appearance, shape, and location. In this study, we investigated the performance of a three-dimensional (3D) convolutional neural network (CNN) to automatically characterize metastatic lesions throughout the body in a dataset of PC patients with recurrence after radical prostatectomy. METHODS We retrospectively collected [68Ga]Ga-PSMA-11 PET/CT images from 116 patients with metastatic PC at two centers: center 1 provided the data for fivefold cross validation (n = 78) and internal testing (n = 19), and center 2 provided the data for external testing (n = 19). PET and CT data were jointly input into a 3D U-Net to achieve whole-body segmentation and detection of PC lesions. The performance in both the segmentation and the detection of lesions throughout the body was evaluated using established metrics, including the Dice similarity coefficient (DSC) for segmentation and the recall, precision, and F1-score for detection. The correlation and consistency between tumor burdens (PSMA-TV and TL-PSMA) calculated from automatic segmentation and artificial ground truth were assessed by linear regression and Bland-Altman plots. RESULTS On the internal test set, the DSC, precision, recall, and F1-score values were 0.631, 0.961, 0.721, and 0.824, respectively. On the external test set, the corresponding values were 0.596, 0.888, 0.792, and 0.837, respectively. Our approach outperformed previous studies in segmenting and detecting metastatic lesions throughout the body. Tumor burden indicators derived from deep learning and ground truth showed strong correlation (R2 ≥ 0.991, all P < 0.05) and consistency. CONCLUSION Our 3D CNN accurately characterizes whole-body tumors in relapsed PC patients; its results are highly consistent with those of manual contouring. This automatic method is expected to improve work efficiency and to aid in the assessment of tumor burden.
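The overlap and detection metrics reported here (DSC, precision, recall, F1) can be illustrated with the voxel-wise sketch below; the study additionally reports lesion-level detection, and all names here are placeholders.

```python
# Hedged sketch: Dice similarity coefficient, precision, recall, and F1 from binary masks.
import numpy as np

def overlap_metrics(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dsc = 2 * tp / (2 * tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return dsc, precision, recall, f1

truth = np.zeros((32, 32, 32), dtype=np.uint8); truth[8:16, 8:16, 8:16] = 1
pred = np.zeros_like(truth); pred[9:17, 9:17, 9:17] = 1
print(overlap_metrics(pred, truth))
```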
Affiliation(s)
- Bingsheng Huang
- Medical AI Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Qinqin Yang
- Department of Nuclear Medicine, Shanghai Changhai Hospital, Shanghai, China
- Xiao Li
- Department of Nuclear Medicine, Shanghai Changhai Hospital, Shanghai, China
- Yuxuan Wu
- Medical AI Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Zhantao Liu
- Medical AI Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Zhaohong Pan
- Medical AI Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Shaonan Zhong
- Medical AI Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Shaoli Song
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai, China
- Changjing Zuo
- Department of Nuclear Medicine, Shanghai Changhai Hospital, Shanghai, China
10
Yazdani E, Karamzadeh-Ziarati N, Cheshmi SS, Sadeghi M, Geramifar P, Vosoughi H, Jahromi MK, Kheradpisheh SR. Automated segmentation of lesions and organs at risk on [68Ga]Ga-PSMA-11 PET/CT images using self-supervised learning with Swin UNETR. Cancer Imaging 2024; 24:30. PMID: 38424612. PMCID: PMC10903052. DOI: 10.1186/s40644-024-00675-x.
Abstract
BACKGROUND Prostate-specific membrane antigen (PSMA) PET/CT imaging is widely used for quantitative image analysis, especially in radioligand therapy (RLT) for metastatic castration-resistant prostate cancer (mCRPC). Unknown features influencing PSMA biodistribution can be explored by analyzing segmented organs at risk (OAR) and lesions. Manual segmentation is time-consuming and labor-intensive, so automated segmentation methods are desirable. Training deep-learning segmentation models is challenging due to the scarcity of high-quality annotated images. Addressing this, we developed shifted windows UNEt TRansformers (Swin UNETR) for fully automated segmentation. Within a self-supervised framework, the model's encoder was pre-trained on unlabeled data. The entire model was fine-tuned, including its decoder, using labeled data. METHODS In this work, 752 whole-body [68Ga]Ga-PSMA-11 PET/CT images were collected from two centers. For self-supervised model pre-training, 652 unlabeled images were employed. The remaining 100 images were manually labeled for supervised training. In the supervised training phase, 5-fold cross-validation was used with 64 images for model training and 16 for validation, from one center. For testing, 20 hold-out images, evenly distributed between two centers, were used. Image segmentation and quantification metrics were evaluated on the test set compared to the ground-truth segmentation conducted by a nuclear medicine physician. RESULTS The model generates high-quality OARs and lesion segmentation in lesion-positive cases, including mCRPC. The results show that self-supervised pre-training significantly improved the average dice similarity coefficient (DSC) for all classes by about 3%. Compared to nnU-Net, a well-established model in medical image segmentation, our approach outperformed with a 5% higher DSC. This improvement was attributed to our model's combined use of self-supervised pre-training and supervised fine-tuning, specifically when applied to PET/CT input. Our best model had the lowest DSC for lesions at 0.68 and the highest for liver at 0.95. CONCLUSIONS We developed a state-of-the-art neural network using self-supervised pre-training on whole-body [68Ga]Ga-PSMA-11 PET/CT images, followed by fine-tuning on a limited set of annotated images. The model generates high-quality OARs and lesion segmentation for PSMA image analysis. The generalizable model holds potential for various clinical applications, including enhanced RLT and patient-specific internal dosimetry.
Affiliation(s)
- Elmira Yazdani
- Medical Physics Department, School of Medicine, Iran University of Medical Sciences, Tehran, 14155-6183, Iran
- Fintech in Medicine Research Center, Iran University of Medical Sciences, Tehran, Iran
- Seyyed Saeid Cheshmi
- Department of Computer and Data Sciences, Faculty of Mathematical Sciences, Shahid Beheshti University, Tehran, Iran
- Mahdi Sadeghi
- Medical Physics Department, School of Medicine, Iran University of Medical Sciences, Tehran, 14155-6183, Iran
- Fintech in Medicine Research Center, Iran University of Medical Sciences, Tehran, Iran
- Parham Geramifar
- Research Center for Nuclear Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Habibeh Vosoughi
- Research Center for Nuclear Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Nuclear Medicine and Molecular Imaging Department, Imam Reza International University, Razavi Hospital, Mashhad, Iran
- Mahmood Kazemi Jahromi
- Medical Physics Department, School of Medicine, Iran University of Medical Sciences, Tehran, 14155-6183, Iran
- Fintech in Medicine Research Center, Iran University of Medical Sciences, Tehran, Iran
- Saeed Reza Kheradpisheh
- Department of Computer and Data Sciences, Faculty of Mathematical Sciences, Shahid Beheshti University, Tehran, Iran
11
Yang X, Silosky M, Wehrend J, Litwiller DV, Nachiappan M, Metzler SD, Ghosh D, Xing F, Chin BB. Improving Generalizability of PET DL Algorithms: List-Mode Reconstructions Improve DOTATATE PET Hepatic Lesion Detection Performance. Bioengineering (Basel) 2024; 11:226. PMID: 38534501. DOI: 10.3390/bioengineering11030226.
Abstract
Deep learning (DL) algorithms used for DOTATATE PET lesion detection typically require large, well-annotated training datasets. These are difficult to obtain due to low incidence of gastroenteropancreatic neuroendocrine tumors (GEP-NETs) and the high cost of manual annotation. Furthermore, networks trained and tested with data acquired from site specific PET/CT instrumentation, acquisition and processing protocols have reduced performance when tested with offsite data. This lack of generalizability requires even larger, more diverse training datasets. The objective of this study is to investigate the feasibility of improving DL algorithm performance by better matching the background noise in training datasets to higher noise, out-of-domain testing datasets. 68Ga-DOTATATE PET/CT datasets were obtained from two scanners: Scanner1, a state-of-the-art digital PET/CT (GE DMI PET/CT; n = 83 subjects), and Scanner2, an older-generation analog PET/CT (GE STE; n = 123 subjects). Set1, the data set from Scanner1, was reconstructed with standard clinical parameters (5 min; Q.Clear) and list-mode reconstructions (VPFXS 2, 3, 4, and 5-min). Set2, data from Scanner2 representing out-of-domain clinical scans, used standard iterative reconstruction (5 min; OSEM). A deep neural network was trained with each dataset: Network1 for Scanner1 and Network2 for Scanner2. DL performance (Network1) was tested with out-of-domain test data (Set2). To evaluate the effect of training sample size, we tested DL model performance using a fraction (25%, 50% and 75%) of Set1 for training. Scanner1, list-mode 2-min reconstructed data demonstrated the most similar noise level compared that of Set2, resulting in the best performance (F1 = 0.713). This was not significantly different compared to the highest performance, upper-bound limit using in-domain training for Network2 (F1 = 0.755; p-value = 0.103). Regarding sample size, the F1 score significantly increased from 25% training data (F1 = 0.478) to 100% training data (F1 = 0.713; p < 0.001). List-mode data from modern PET scanners can be reconstructed to better match the noise properties of older scanners. Using existing data and their associated annotations dramatically reduces the cost and effort in generating these datasets and significantly improves the performance of existing DL algorithms. List-mode reconstructions can provide an efficient, low-cost method to improve DL algorithm generalizability.
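The core noise-matching idea can be sketched as below: reconstruct the modern-scanner list-mode data at several durations, measure background noise in a reference region, and pick the duration closest to the noise of the older, out-of-domain scanner. The liver-ROI coefficient-of-variation metric and all names here are assumptions for illustration, not the authors' exact procedure.

```python
# Hedged sketch: select the list-mode reconstruction duration whose ROI noise best matches
# a target noise level measured on an out-of-domain scanner.
import numpy as np

def roi_noise(image: np.ndarray, roi_mask: np.ndarray) -> float:
    vals = image[roi_mask > 0]
    return float(vals.std() / vals.mean())           # coefficient of variation

def best_matching_duration(recons: dict, roi_mask: np.ndarray, target_noise: float) -> str:
    return min(recons, key=lambda k: abs(roi_noise(recons[k], roi_mask) - target_noise))

rng = np.random.default_rng(3)
roi = np.zeros((32, 32, 32)); roi[8:24, 8:24, 8:24] = 1
recons = {f"{t}min": rng.normal(5.0, 1.5 / np.sqrt(t), size=roi.shape) for t in (2, 3, 4, 5)}
target = 0.30                                        # noise measured on the older scanner
print(best_matching_duration(recons, roi, target))
```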
Affiliation(s)
- Xinyi Yang
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
- Michael Silosky
- Department of Radiology, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
- Jonathan Wehrend
- Department of Radiology, Santa Clara Valley Medical Center, San Jose, CA 95128, USA
- Muthiah Nachiappan
- Department of Radiology, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
- Scott D Metzler
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Debashis Ghosh
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
- Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
- The Computational Bioscience Program, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
- University of Colorado Cancer Center, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
- Bennett B Chin
- Department of Radiology, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
- University of Colorado Cancer Center, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
12
Yang X, Chin BB, Silosky M, Wehrend J, Litwiller DV, Ghosh D, Xing F. Learning Without Real Data Annotations to Detect Hepatic Lesions in PET Images. IEEE Trans Biomed Eng 2024; 71:679-688. PMID: 37708016. DOI: 10.1109/tbme.2023.3315268.
Abstract
OBJECTIVE Deep neural networks have been recently applied to lesion identification in fluorodeoxyglucose (FDG) positron emission tomography (PET) images, but they typically rely on a large amount of well-annotated data for model training. This is extremely difficult to achieve for neuroendocrine tumors (NETs), because of the low incidence of NETs and expensive lesion annotation in PET images. The objective of this study is to design a novel, adaptable deep learning method, which uses no real lesion annotations but instead low-cost, list mode-simulated data, for hepatic lesion detection in real-world clinical NET PET images. METHODS We first propose a region-guided generative adversarial network (RG-GAN) for lesion-preserved image-to-image translation. Then, we design a specific data augmentation module for our list-mode simulated data and incorporate this module into the RG-GAN to improve model training. Finally, we combine the RG-GAN, the data augmentation module and a lesion detection neural network into a unified framework for joint-task learning to adaptively identify lesions in real-world PET data. RESULTS The proposed method outperforms recent state-of-the-art lesion detection methods in real clinical 68Ga-DOTATATE PET images, and produces very competitive performance with the target model that is trained with real lesion annotations. CONCLUSION With RG-GAN modeling and specific data augmentation, we can obtain good lesion detection performance without using any real data annotations. SIGNIFICANCE This study introduces an adaptable deep learning method for hepatic lesion identification in NETs, which can significantly reduce human effort for data annotation and improve model generalizability for lesion detection with PET imaging.
13
Yazdani E, Geramifar P, Karamzade-Ziarati N, Sadeghi M, Amini P, Rahmim A. Radiomics and Artificial Intelligence in Radiotheranostics: A Review of Applications for Radioligands Targeting Somatostatin Receptors and Prostate-Specific Membrane Antigens. Diagnostics (Basel) 2024; 14:181. PMID: 38248059. PMCID: PMC10814892. DOI: 10.3390/diagnostics14020181.
Abstract
Radiotheranostics refers to the pairing of radioactive imaging biomarkers with radioactive therapeutic compounds that deliver ionizing radiation. Given the introduction of very promising radiopharmaceuticals, the radiotheranostics approach is creating a novel paradigm in personalized, targeted radionuclide therapies (TRTs), also known as radiopharmaceuticals (RPTs). Radiotherapeutic pairs targeting somatostatin receptors (SSTR) and prostate-specific membrane antigens (PSMA) are increasingly being used to diagnose and treat patients with metastatic neuroendocrine tumors (NETs) and prostate cancer. In parallel, radiomics and artificial intelligence (AI), as important areas in quantitative image analysis, are paving the way for significantly enhanced workflows in diagnostic and theranostic fields, from data and image processing to clinical decision support, improving patient selection, personalized treatment strategies, response prediction, and prognostication. Furthermore, AI has the potential for tremendous effectiveness in patient dosimetry which copes with complex and time-consuming tasks in the RPT workflow. The present work provides a comprehensive overview of radiomics and AI application in radiotheranostics, focusing on pairs of SSTR- or PSMA-targeting radioligands, describing the fundamental concepts and specific imaging/treatment features. Our review includes ligands radiolabeled by 68Ga, 18F, 177Lu, 64Cu, 90Y, and 225Ac. Specifically, contributions via radiomics and AI towards improved image acquisition, reconstruction, treatment response, segmentation, restaging, lesion classification, dose prediction, and estimation as well as ongoing developments and future directions are discussed.
Affiliation(s)
- Elmira Yazdani
- Medical Physics Department, School of Medicine, Iran University of Medical Sciences, Tehran 14496-14535, Iran
- Finetech in Medicine Research Center, Iran University of Medical Sciences, Tehran 14496-14535, Iran
- Parham Geramifar
- Research Center for Nuclear Medicine, Tehran University of Medical Sciences, Tehran 14117-13135, Iran
- Najme Karamzade-Ziarati
- Research Center for Nuclear Medicine, Tehran University of Medical Sciences, Tehran 14117-13135, Iran
- Mahdi Sadeghi
- Medical Physics Department, School of Medicine, Iran University of Medical Sciences, Tehran 14496-14535, Iran
- Finetech in Medicine Research Center, Iran University of Medical Sciences, Tehran 14496-14535, Iran
- Payam Amini
- Department of Biostatistics, School of Public Health, Iran University of Medical Sciences, Tehran 14496-14535, Iran
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC V5Z 1L3, Canada
- Departments of Radiology and Physics, University of British Columbia, Vancouver, BC V5Z 1L3, Canada
14
Zang S, Jiang C, Zhang L, Fu J, Meng Q, Wu W, Shao G, Sun H, Jia R, Wang F. Deep learning based on 68Ga-PSMA-11 PET/CT for predicting pathological upgrading in patients with prostate cancer. Front Oncol 2024; 13:1273414. PMID: 38260839. PMCID: PMC10800856. DOI: 10.3389/fonc.2023.1273414.
Abstract
Objectives To explore the feasibility and importance of deep learning (DL) based on 68Ga-prostate-specific membrane antigen (PSMA)-11 PET/CT in predicting pathological upgrading from biopsy to radical prostatectomy (RP) in patients with prostate cancer (PCa). Methods In this retrospective study, all patients underwent 68Ga-PSMA-11 PET/CT, transrectal ultrasound (TRUS)-guided systematic biopsy, and RP for PCa sequentially between January 2017 and December 2022. Two DL models (three-dimensional [3D] ResNet-18 and 3D DenseNet-121) based on 68Ga-PSMA-11 PET and support vector machine (SVM) models integrating clinical data with DL signature were constructed. The model performance was evaluated using area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. Results Of 109 patients, 87 (44 upgrading, 43 non-upgrading) were included in the training set and 22 (11 upgrading, 11 non-upgrading) in the test set. The combined SVM model, incorporating clinical features and signature of 3D ResNet-18 model, demonstrated satisfactory prediction in the test set with an AUC value of 0.628 (95% confidence interval [CI]: 0.365, 0.891) and accuracy of 0.727 (95% CI: 0.498, 0.893). Conclusion A DL method based on 68Ga-PSMA-11 PET may have a role in predicting pathological upgrading from biopsy to RP in patients with PCa.
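The combined model described here can be sketched as follows: a per-patient deep-learning signature (e.g., a score from the 3D ResNet-18) is concatenated with clinical variables and fed to a support vector machine, evaluated with ROC AUC and accuracy. The synthetic data, cohort size, and feature names below are placeholders, not the study's dataset or code.

```python
# Hedged sketch: SVM on clinical features plus a DL signature, with AUC and accuracy.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(4)
n = 109                                                     # cohort size used only for illustration
dl_signature = rng.normal(size=(n, 1))                      # score from the PET DL model (placeholder)
clinical = rng.normal(size=(n, 3))                          # e.g., PSA, age, biopsy grade group (placeholders)
X = np.hstack([dl_signature, clinical])
y = (dl_signature[:, 0] + 0.5 * clinical[:, 0] + rng.normal(0, 1, n) > 0).astype(int)  # "upgrading" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, prob), "accuracy:", accuracy_score(y_te, model.predict(X_te)))
```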
Affiliation(s)
- Shiming Zang
- Department of Nuclear Medicine, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Cuiping Jiang
- Department of Ultrasound, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Lele Zhang
- Department of Nuclear Medicine, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Jingjing Fu
- Department of Nuclear Medicine, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Qingle Meng
- Department of Nuclear Medicine, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Wenyu Wu
- Department of Nuclear Medicine, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Guoqiang Shao
- Department of Nuclear Medicine, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Hongbin Sun
- Department of Urology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Ruipeng Jia
- Department of Urology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Feng Wang
- Department of Nuclear Medicine, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
15
Mohseninia N, Zamani-Siahkali N, Harsini S, Divband G, Pirich C, Beheshti M. Bone Metastasis in Prostate Cancer: Bone Scan Versus PET Imaging. Semin Nucl Med 2024; 54:97-118. PMID: 37596138. DOI: 10.1053/j.semnuclmed.2023.07.004.
Abstract
Prostate cancer is the second most common cause of malignancy among men, with bone metastasis being a significant source of morbidity and mortality in advanced cases. Detecting and treating bone metastasis at an early stage is crucial to improve the quality of life and survival of prostate cancer patients. This objective strongly relies on imaging studies. While CT and MRI have their specific utilities, they also possess certain drawbacks. Bone scintigraphy, although cost-effective and widely available, presents high false-positive rates. The emergence of PET/CT and PET/MRI, with their ability to overcome the limitations of standard imaging methods, offers promising alternatives for the detection of bone metastasis. Various radiotracers targeting cell division activity or cancer-specific membrane proteins, as well as bone seeking agents, have been developed and tested. The use of positron-emitting isotopes such as fluorine-18 and gallium-68 for labeling allows for a reduced radiation dose and unaffected biological properties. Furthermore, the integration of artificial intelligence (AI) and radiomics techniques in medical imaging has shown significant advancements in reducing interobserver variability, improving accuracy, and saving time. This article provides an overview of the advantages and limitations of bone scan using SPECT and SPECT/CT and PET imaging methods with different radiopharmaceuticals and highlights recent developments in hybrid scanners, AI, and radiomics for the identification of prostate cancer bone metastasis using molecular imaging.
Affiliation(s)
- Nasibeh Mohseninia
- Division of Molecular Imaging and Theranostics, Department of Nuclear Medicine, University Hospital, Paracelsus Medical University, Salzburg, Austria
- Nazanin Zamani-Siahkali
- Division of Molecular Imaging and Theranostics, Department of Nuclear Medicine, University Hospital, Paracelsus Medical University, Salzburg, Austria; Department of Nuclear Medicine, Research center for Nuclear Medicine and Molecular Imaging, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Sara Harsini
- Department of Molecular Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Christian Pirich
- Division of Molecular Imaging and Theranostics, Department of Nuclear Medicine, University Hospital, Paracelsus Medical University, Salzburg, Austria
- Mohsen Beheshti
- Division of Molecular Imaging and Theranostics, Department of Nuclear Medicine, University Hospital, Paracelsus Medical University, Salzburg, Austria
16
Lindgren Belal S, Frantz S, Minarik D, Enqvist O, Wikström E, Edenbrandt L, Trägårdh E. Applications of Artificial Intelligence in PSMA PET/CT for Prostate Cancer Imaging. Semin Nucl Med 2024; 54:141-149. PMID: 37357026. DOI: 10.1053/j.semnuclmed.2023.06.001.
Abstract
Prostate-specific membrane antigen (PSMA) positron emission tomography/computed tomography (PET/CT) has emerged as an important imaging technique for prostate cancer. The use of PSMA PET/CT is rapidly increasing, while the number of nuclear medicine physicians and radiologists to interpret these scans is limited. Additionally, there is variability in interpretation among readers. Artificial intelligence techniques, including traditional machine learning and deep learning algorithms, are being used to address these challenges and provide additional insights from the images. The aim of this scoping review was to summarize the available research on the development and applications of AI in PSMA PET/CT for prostate cancer imaging. A systematic literature search was performed in PubMed, Embase and Cinahl according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 26 publications were included in the synthesis. The included studies focus on different aspects of artificial intelligence in PSMA PET/CT, including detection of primary tumor, local recurrence and metastatic lesions, lesion classification, tumor quantification and prediction/prognostication. Several studies show similar performances of artificial intelligence algorithms compared to human interpretation. Few artificial intelligence tools are approved for use in clinical practice. Major limitations include the lack of external validation and prospective design. Demonstrating the clinical impact and utility of artificial intelligence tools is crucial for their adoption in healthcare settings. To take the next step towards a clinically valuable artificial intelligence tool that provides quantitative data, independent validation studies are needed across institutions and equipment to ensure robustness.
Affiliation(s)
- Sarah Lindgren Belal
- Department of Translational Medicine and Wallenberg Centre for Molecular Medicine, Lund University, Malmö, Sweden; Department of Surgery, Skåne University Hospital, Malmö, Sweden
- Sophia Frantz
- Department of Translational Medicine and Wallenberg Centre for Molecular Medicine, Lund University, Malmö, Sweden; Department of Health Technology Assessment South, Skåne University Hospital, Lund, Sweden
- David Minarik
- Department of Translational Medicine and Wallenberg Centre for Molecular Medicine, Lund University, Malmö, Sweden; Department of Radiation Physics, Skåne University Hospital, Malmö, Sweden
- Olof Enqvist
- Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden; Department of Clinical Physiology and Nuclear Medicine, Malmö Sweden
- Erik Wikström
- Department of Health Technology Assessment South, Skåne University Hospital, Lund, Sweden
- Lars Edenbrandt
- Department of Molecular and Clinical Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Sweden
- Elin Trägårdh
- Department of Translational Medicine and Wallenberg Centre for Molecular Medicine, Lund University, Malmö, Sweden; Department of Clinical Physiology and Nuclear Medicine, Skåne University Hospital, Malmö, Sweden.
17
Mirshahvalad SA, Eisazadeh R, Shahbazi-Akbari M, Pirich C, Beheshti M. Application of Artificial Intelligence in Oncologic Molecular PET-Imaging: A Narrative Review on Beyond [ 18F]F-FDG Tracers - Part I. PSMA, Choline, and DOTA Radiotracers. Semin Nucl Med 2024; 54:171-180. [PMID: 37752032 DOI: 10.1053/j.semnuclmed.2023.08.004] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2023] [Accepted: 08/29/2023] [Indexed: 09/28/2023]
Abstract
Artificial intelligence (AI) has evolved significantly in the past few decades. This thriving trend has also been seen in medicine in recent years, particularly in the field of imaging. Machine learning (ML), deep learning (DL), and their methods (e.g., SVM, CNN), as well as radiomics, are the terminologies that have been introduced to this field and have, to some extent, become familiar to expert clinicians. PET is one of the modalities that has been enhanced by these state-of-the-art algorithms. This robust imaging technique has further been merged with anatomical modalities, such as computed tomography (CT) and magnetic resonance imaging (MRI), to provide the reliable hybrid modalities PET/CT and PET/MRI. Applying AI-based algorithms to the different components (PET, CT, and MRI) has produced promising results, maximizing the value of PET imaging. However, [18F]F-FDG, the most commonly utilized tracer in molecular imaging, has mainly been in the spotlight. Thus, in this review we aimed to look into the less discussed tracers, moving beyond [18F]F-FDG. The novel non-[18F]F-FDG agents have also been shown to be valuable in various clinical tasks, including lesion detection, tumor characterization, accurate delineation, and prognostication. In prostate cancer patients, PSMA-based models were highly accurate in locating and delineating tumoral lesions, particularly within the prostate gland, and could also automatically assess whole-body images to detect extra-prostatic lesions. Considering the prognostic value of prostate-specific membrane antigen (PSMA) PET using AI, it could predict response to treatment and patient survival, which are crucial in patient management. Choline imaging, another non-[18F]F-FDG tracer, similarly showed acceptable results that may be of benefit in the clinic, though the current evidence is considerably more limited than for PSMA. Lastly, different subtypes of DOTA ligands were found to be valuable: they could diagnose tumoral lesions in challenging sites and even predict histopathology grade, making them a highly advantageous noninvasive tool. In conclusion, the current limited investigations have shown promising results, pointing to a bright future for AI in molecular imaging beyond [18F]F-FDG.
Affiliation(s)
- Seyed Ali Mirshahvalad
- Division of Molecular Imaging & Theranostics, Department of Nuclear Medicine, University Hospital, Paracelsus Medical University, Salzburg, Austria; Joint Department of Medical Imaging, University Health Network, University of Toronto, Toronto, Canada
- Roya Eisazadeh
- Division of Molecular Imaging & Theranostics, Department of Nuclear Medicine, University Hospital, Paracelsus Medical University, Salzburg, Austria
- Malihe Shahbazi-Akbari
- Division of Molecular Imaging & Theranostics, Department of Nuclear Medicine, University Hospital, Paracelsus Medical University, Salzburg, Austria; Research Center for Nuclear Medicine, Department of Nuclear Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Christian Pirich
- Division of Molecular Imaging & Theranostics, Department of Nuclear Medicine, University Hospital, Paracelsus Medical University, Salzburg, Austria
- Mohsen Beheshti
- Division of Molecular Imaging & Theranostics, Department of Nuclear Medicine, University Hospital, Paracelsus Medical University, Salzburg, Austria.
18
Leung VWS, Ng CKC, Lam SK, Wong PT, Ng KY, Tam CH, Lee TC, Chow KC, Chow YK, Tam VCW, Lee SWY, Lim FMY, Wu JQ, Cai J. Computed Tomography-Based Radiomics for Long-Term Prognostication of High-Risk Localized Prostate Cancer Patients Received Whole Pelvic Radiotherapy. J Pers Med 2023; 13:1643. [PMID: 38138870 PMCID: PMC10744672 DOI: 10.3390/jpm13121643] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2023] [Revised: 11/21/2023] [Accepted: 11/23/2023] [Indexed: 12/24/2023] Open
Abstract
Given the high death rate caused by high-risk prostate cancer (PCa) (>40%) and the reliability issues associated with traditional prognostic markers, the purpose of this study is to investigate planning computed tomography (pCT)-based radiomics for the long-term prognostication of high-risk localized PCa patients who received whole pelvic radiotherapy (WPRT). This is a retrospective study with methods based on best practice procedures for radiomics research. Sixty-four patients were selected and randomly assigned to training (n = 45) and testing (n = 19) cohorts for radiomics model development with five major steps: pCT image acquisition using a Philips Big Bore CT simulator; multiple manual segmentations of clinical target volume for the prostate (CTVprostate) on the pCT images; feature extraction from the CTVprostate using PyRadiomics; feature selection for overfitting avoidance; and model development with three-fold cross-validation. The radiomics model and signature performances were evaluated based on the area under the receiver operating characteristic curve (AUC) as well as accuracy, sensitivity and specificity. This study's results show that our pCT-based radiomics model was able to predict the six-year progression-free survival of the high-risk localized PCa patients who received the WPRT with highly consistent performances (mean AUC: 0.76 (training) and 0.71 (testing)). These are comparable to findings of other similar studies including those using magnetic resonance imaging (MRI)-based radiomics. The accuracy, sensitivity and specificity of our radiomics signature that consisted of two texture features were 0.778, 0.833 and 0.556 (training) and 0.842, 0.867 and 0.750 (testing), respectively. Since CT is more readily available than MRI and is the standard-of-care modality for PCa WPRT planning, pCT-based radiomics could be used as a routine non-invasive approach to the prognostic prediction of WPRT treatment outcomes in high-risk localized PCa.
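As a concrete illustration of the kind of pipeline this abstract describes, the minimal Python sketch below extracts PyRadiomics first-order and texture features from a planning-CT/CTV mask pair, applies simple feature selection, and evaluates a classifier with three-fold cross-validation. It is a sketch, not the authors' code: the file paths, cohort table columns, and the choice of keeping two features are illustrative assumptions.

```python
# Minimal pCT radiomics sketch: PyRadiomics feature extraction, feature
# selection to limit overfitting, and a 3-fold cross-validated classifier.
# Paths, column names, and parameter choices are hypothetical placeholders.
import pandas as pd
from radiomics import featureextractor
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")
extractor.enableFeatureClassByName("glcm")  # texture features

def extract_case(ct_path: str, mask_path: str) -> dict:
    """Extract radiomic features for one planning-CT / CTV-prostate mask pair."""
    result = extractor.execute(ct_path, mask_path)
    # Keep only numeric feature values, dropping diagnostic metadata.
    return {k: float(v) for k, v in result.items() if not k.startswith("diagnostics")}

# Hypothetical cohort table: ct_path, mask_path, progressed (6-year PFS label).
cohort = pd.read_csv("cohort.csv")
features = pd.DataFrame([extract_case(r.ct_path, r.mask_path) for r in cohort.itertuples()])

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=2),          # e.g. retain two texture features
    LogisticRegression(max_iter=1000),
)
auc = cross_val_score(model, features, cohort["progressed"], cv=3, scoring="roc_auc")
print(f"3-fold cross-validated AUC: {auc.mean():.2f}")
```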
Affiliation(s)
- Vincent W. S. Leung
- Department of Health Technology and Informatics, Faculty of Health and Social Sciences, The Hong Kong Polytechnic University, Hong Kong SAR, China; (P.-T.W.); (V.C.W.T.); (S.W.Y.L.); (J.C.)
- Curtise K. C. Ng
- Curtin Medical School, Curtin University, GPO Box U1987, Perth, WA 6845, Australia;
- Curtin Health Innovation Research Institute (CHIRI), Faculty of Health Sciences, Curtin University, GPO Box U1987, Perth, WA 6845, Australia
- Sai-Kit Lam
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong SAR, China;
- Po-Tsz Wong
- Department of Health Technology and Informatics, Faculty of Health and Social Sciences, The Hong Kong Polytechnic University, Hong Kong SAR, China; (P.-T.W.); (V.C.W.T.); (S.W.Y.L.); (J.C.)
- Ka-Yan Ng
- Department of Health Technology and Informatics, Faculty of Health and Social Sciences, The Hong Kong Polytechnic University, Hong Kong SAR, China; (P.-T.W.); (V.C.W.T.); (S.W.Y.L.); (J.C.)
- Cheuk-Hong Tam
- Department of Health Technology and Informatics, Faculty of Health and Social Sciences, The Hong Kong Polytechnic University, Hong Kong SAR, China; (P.-T.W.); (V.C.W.T.); (S.W.Y.L.); (J.C.)
- Tsz-Ching Lee
- Department of Health Technology and Informatics, Faculty of Health and Social Sciences, The Hong Kong Polytechnic University, Hong Kong SAR, China; (P.-T.W.); (V.C.W.T.); (S.W.Y.L.); (J.C.)
- Kin-Chun Chow
- Department of Health Technology and Informatics, Faculty of Health and Social Sciences, The Hong Kong Polytechnic University, Hong Kong SAR, China; (P.-T.W.); (V.C.W.T.); (S.W.Y.L.); (J.C.)
- Yan-Kate Chow
- Department of Health Technology and Informatics, Faculty of Health and Social Sciences, The Hong Kong Polytechnic University, Hong Kong SAR, China; (P.-T.W.); (V.C.W.T.); (S.W.Y.L.); (J.C.)
- Victor C. W. Tam
- Department of Health Technology and Informatics, Faculty of Health and Social Sciences, The Hong Kong Polytechnic University, Hong Kong SAR, China; (P.-T.W.); (V.C.W.T.); (S.W.Y.L.); (J.C.)
- Shara W. Y. Lee
- Department of Health Technology and Informatics, Faculty of Health and Social Sciences, The Hong Kong Polytechnic University, Hong Kong SAR, China; (P.-T.W.); (V.C.W.T.); (S.W.Y.L.); (J.C.)
- Fiona M. Y. Lim
- Department of Oncology, Princess Margaret Hospital, Hong Kong SAR, China;
- Jackie Q. Wu
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC 27708, USA;
- Jing Cai
- Department of Health Technology and Informatics, Faculty of Health and Social Sciences, The Hong Kong Polytechnic University, Hong Kong SAR, China; (P.-T.W.); (V.C.W.T.); (S.W.Y.L.); (J.C.)
19
Kendrick J, Francis RJ, Hassan GM, Rowshanfarzad P, Ong JS, McCarthy M, Alexander S, Ebert MA. Prognostic utility of RECIP 1.0 with manual and AI-based segmentations in biochemically recurrent prostate cancer from [ 68Ga]Ga-PSMA-11 PET images. Eur J Nucl Med Mol Imaging 2023; 50:4077-4086. [PMID: 37550494 PMCID: PMC10611879 DOI: 10.1007/s00259-023-06382-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2023] [Accepted: 08/02/2023] [Indexed: 08/09/2023]
Abstract
PURPOSE This study aimed to (i) validate the Response Evaluation Criteria in PSMA (RECIP 1.0) criteria in a cohort of biochemically recurrent (BCR) prostate cancer (PCa) patients and (ii) determine if this classification could be performed fully automatically using a trained artificial intelligence (AI) model. METHODS One hundred ninety-nine patients were imaged with [68Ga]Ga-PSMA-11 PET/CT once at the time of biochemical recurrence and then a second time a median of 6.0 months later to assess disease progression. Standard-of-care treatments were administered to patients in the interim. Whole-body tumour volume was quantified semi-automatically (TTVman) in all patients and using a novel AI method (TTVAI) in a subset (n = 74, the remainder were used in the training process of the model). Patients were classified as having progressive disease (RECIP-PD), or non-progressive disease (non RECIP-PD). Association of RECIP classifications with patient overall survival (OS) was assessed using the Kaplan-Meier method with the log rank test and univariate Cox regression analysis with derivation of hazard ratios (HRs). Concordance of manual and AI response classifications was evaluated using the Cohen's kappa statistic. RESULTS Twenty-six patients (26/199 = 13.1%) presented with RECIP-PD according to semi-automated delineations, which was associated with a significantly lower survival probability (log rank p < 0.005) and higher risk of death (HR = 3.78 (1.96-7.28), p < 0.005). Twelve patients (12/74 = 16.2%) presented with RECIP-PD according to AI-based segmentations, which was also associated with a significantly lower survival (log rank p = 0.013) and higher risk of death (HR = 3.75 (1.23-11.47), p = 0.02). Overall, semi-automated and AI-based RECIP classifications were in fair agreement (Cohen's k = 0.31). CONCLUSION RECIP 1.0 was demonstrated to be prognostic in a BCR PCa population and is robust to two different segmentation methods, including a novel AI-based method. RECIP 1.0 can be used to assess disease progression in PCa patients with less advanced disease. This study was registered with the Australian New Zealand Clinical Trials Registry (ACTRN12615000608561) on 11 June 2015.
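For orientation, the following is a minimal sketch of the survival and agreement analyses reported in such RECIP studies, using lifelines and scikit-learn. The cohort file and column names (os_months, death, recip_pd_manual, recip_pd_ai) are hypothetical, not from the cited trial.

```python
# Sketch of Kaplan-Meier/log-rank, univariate Cox regression, and Cohen's kappa
# for manual vs. AI-based RECIP classifications. Data columns are illustrative.
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test
from sklearn.metrics import cohen_kappa_score

df = pd.read_csv("recip_cohort.csv")  # one row per patient

# Kaplan-Meier estimate and log-rank test, stratified by RECIP progressive disease.
km = KaplanMeierFitter()
pd_mask = df["recip_pd_manual"] == 1
km.fit(df.loc[pd_mask, "os_months"], df.loc[pd_mask, "death"], label="RECIP-PD")
print("median OS (RECIP-PD):", km.median_survival_time_)
result = logrank_test(
    df.loc[pd_mask, "os_months"], df.loc[~pd_mask, "os_months"],
    event_observed_A=df.loc[pd_mask, "death"], event_observed_B=df.loc[~pd_mask, "death"],
)
print("log-rank p =", result.p_value)

# Univariate Cox regression for the hazard ratio associated with RECIP-PD.
cox = CoxPHFitter()
cox.fit(df[["os_months", "death", "recip_pd_manual"]], duration_col="os_months", event_col="death")
print("HR =", float(cox.hazard_ratios_["recip_pd_manual"]))

# Agreement between manual and AI-based RECIP classifications.
print("Cohen's kappa =", cohen_kappa_score(df["recip_pd_manual"], df["recip_pd_ai"]))
```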
Affiliation(s)
- Jake Kendrick
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, Australia.
- Centre for Advanced Technologies in Cancer Research (CATCR), Perth, Western Australia, Australia.
- Roslyn J Francis
- Medical School, The University of Western Australia, Crawley, Western Australia, Australia
- Department of Nuclear Medicine, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Ghulam Mubashar Hassan
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, Australia
- Pejman Rowshanfarzad
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, Australia
- Centre for Advanced Technologies in Cancer Research (CATCR), Perth, Western Australia, Australia
- Jeremy SL Ong
- Department of Nuclear Medicine, Fiona Stanley Hospital, Murdoch, Western Australia, Australia
- Michael McCarthy
- Department of Nuclear Medicine, Fiona Stanley Hospital, Murdoch, Western Australia, Australia
- Sweeka Alexander
- Department of Nuclear Medicine, Fiona Stanley Hospital, Murdoch, Western Australia, Australia
- Martin A Ebert
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, Australia
- Centre for Advanced Technologies in Cancer Research (CATCR), Perth, Western Australia, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- 5D Clinics, Claremont, Western Australia, Australia
20
Rich JM, Bhardwaj LN, Shah A, Gangal K, Rapaka MS, Oberai AA, Fields BKK, Matcuk GR, Duddalwar VA. Deep learning image segmentation approaches for malignant bone lesions: a systematic review and meta-analysis. FRONTIERS IN RADIOLOGY 2023; 3:1241651. [PMID: 37614529 PMCID: PMC10442705 DOI: 10.3389/fradi.2023.1241651] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/17/2023] [Accepted: 07/28/2023] [Indexed: 08/25/2023]
Abstract
Introduction Image segmentation is an important process for quantifying characteristics of malignant bone lesions, but this task is challenging and laborious for radiologists. Deep learning has shown promise in automating image segmentation in radiology, including for malignant bone lesions. The purpose of this review is to investigate deep learning-based image segmentation methods for malignant bone lesions on Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron-Emission Tomography/CT (PET/CT). Method The literature search of deep learning-based image segmentation of malignant bony lesions on CT and MRI was conducted in PubMed, Embase, Web of Science, and Scopus electronic databases following the guidelines of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). A total of 41 original articles published between February 2017 and March 2023 were included in the review. Results The majority of papers studied MRI, followed by CT, PET/CT, and PET/MRI. There was relatively even distribution of papers studying primary vs. secondary malignancies, as well as utilizing 3-dimensional vs. 2-dimensional data. Many papers utilize custom built models as a modification or variation of U-Net. The most common metric for evaluation was the dice similarity coefficient (DSC). Most models achieved a DSC above 0.6, with medians for all imaging modalities between 0.85-0.9. Discussion Deep learning methods show promising ability to segment malignant osseous lesions on CT, MRI, and PET/CT. Some strategies which are commonly applied to help improve performance include data augmentation, utilization of large public datasets, preprocessing including denoising and cropping, and U-Net architecture modification. Future directions include overcoming dataset and annotation homogeneity and generalizing for clinical applicability.
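The Dice similarity coefficient (DSC) reported throughout these segmentation studies can be computed directly from binary masks. A minimal sketch follows, with toy masks standing in for real lesion segmentations.

```python
# Dice similarity coefficient between a predicted and a reference binary mask.
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks of identical shape."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return float(2.0 * intersection / (pred.sum() + ref.sum() + eps))

# Toy example: two overlapping 3D lesion masks.
pred = np.zeros((4, 4, 4), dtype=np.uint8); pred[1:3, 1:3, 1:3] = 1
ref = np.zeros((4, 4, 4), dtype=np.uint8);  ref[1:4, 1:3, 1:3] = 1
print(f"DSC = {dice_coefficient(pred, ref):.2f}")  # 0.80 for this toy pair
```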
Affiliation(s)
- Joseph M. Rich
- Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Lokesh N. Bhardwaj
- Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Aman Shah
- Department of Applied Biostatistics and Epidemiology, University of Southern California, Los Angeles, CA, United States
- Krish Gangal
- Bridge UnderGrad Science Summer Research Program, Irvington High School, Fremont, CA, United States
- Mohitha S. Rapaka
- Department of Biology, University of Texas at Austin, Austin, TX, United States
- Assad A. Oberai
- Department of Aerospace and Mechanical Engineering Department, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- Brandon K. K. Fields
- Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- George R. Matcuk
- Department of Radiology, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Vinay A. Duddalwar
- Department of Radiology, Keck School of Medicine of the University of Southern California, Los Angeles, CA, United States
- Department of Radiology, USC Radiomics Laboratory, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
21
Bi L, Fulham M, Song S, Feng DD, Kim J. Hyper-Connected Transformer Network for Multi-Modality PET-CT Segmentation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2023; 2023:1-4. [PMID: 38083369 DOI: 10.1109/embc40787.2023.10340635] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
[18F]-Fluorodeoxyglucose (FDG) positron emission tomography-computed tomography (PET-CT) has become the imaging modality of choice for diagnosing many cancers. Co-learning complementary PET-CT imaging features is a fundamental requirement for automatic tumor segmentation and for developing computer-aided cancer diagnosis systems. In this study, we propose a hyper-connected transformer (HCT) network that integrates a transformer network (TN) with a hyper-connected fusion for multi-modality PET-CT images. The TN was leveraged for its ability to provide global dependencies in image feature learning, achieved by using image patch embeddings with a self-attention mechanism to capture image-wide contextual information. We extended the single-modality definition of the TN with multiple TN-based branches that separately extract image features. We also introduced a hyper-connected fusion to fuse the contextual and complementary image features across multiple transformers in an iterative manner. Our results on two clinical datasets show that HCT achieved better segmentation accuracy than existing methods. Clinical Relevance - We anticipate that our approach can be an effective and supportive tool to aid physicians in tumor quantification and in identifying image biomarkers for cancer treatment.
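A simplified PyTorch sketch of the general idea (separate transformer branches per modality whose token features are repeatedly fused across branches) is shown below. This is an illustrative toy, not the authors' HCT implementation, and all layer sizes and the 2D patch embedding are arbitrary assumptions.

```python
# Toy two-branch transformer with iterative cross-modal fusion for PET and CT.
import torch
import torch.nn as nn

class TwoBranchTransformerFusion(nn.Module):
    def __init__(self, in_ch: int = 1, embed_dim: int = 64, patch: int = 16, n_blocks: int = 2):
        super().__init__()
        # Patch embeddings for each modality (2D slices for simplicity).
        self.embed_pet = nn.Conv2d(in_ch, embed_dim, kernel_size=patch, stride=patch)
        self.embed_ct = nn.Conv2d(in_ch, embed_dim, kernel_size=patch, stride=patch)
        enc = lambda: nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.pet_blocks = nn.ModuleList(enc() for _ in range(n_blocks))
        self.ct_blocks = nn.ModuleList(enc() for _ in range(n_blocks))
        # Fusion layers mix the two token streams after every block.
        self.fuse = nn.ModuleList(nn.Linear(2 * embed_dim, embed_dim) for _ in range(n_blocks))
        self.head = nn.Linear(2 * embed_dim, 1)  # per-patch tumor logit

    def forward(self, pet: torch.Tensor, ct: torch.Tensor) -> torch.Tensor:
        # (B, C, H, W) -> (B, N_patches, embed_dim)
        p = self.embed_pet(pet).flatten(2).transpose(1, 2)
        c = self.embed_ct(ct).flatten(2).transpose(1, 2)
        for pet_blk, ct_blk, fuse in zip(self.pet_blocks, self.ct_blocks, self.fuse):
            p, c = pet_blk(p), ct_blk(c)
            shared = fuse(torch.cat([p, c], dim=-1))  # cross-modal context
            p, c = p + shared, c + shared             # iterative fusion
        return self.head(torch.cat([p, c], dim=-1))   # (B, N_patches, 1)

model = TwoBranchTransformerFusion()
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 16, 1])
```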
22
Dai J, Wang H, Xu Y, Chen X, Tian R. Clinical application of AI-based PET images in oncological patients. Semin Cancer Biol 2023; 91:124-142. [PMID: 36906112 DOI: 10.1016/j.semcancer.2023.03.005] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2022] [Revised: 02/28/2023] [Accepted: 03/07/2023] [Indexed: 03/11/2023]
Abstract
Based on the advantages of revealing the functional status and molecular expression of tumor cells, positron emission tomography (PET) imaging has been performed in numerous types of malignant diseases for diagnosis and monitoring. However, insufficient image quality, the lack of a convincing evaluation tool and intra- and interobserver variation in human work are well-known limitations of nuclear medicine imaging and restrict its clinical application. Artificial intelligence (AI) has gained increasing interest in the field of medical imaging due to its powerful information collection and interpretation ability. The combination of AI and PET imaging potentially provides great assistance to physicians managing patients. Radiomics, an important branch of AI applied in medical imaging, can extract hundreds of abstract mathematical features of images for further analysis. In this review, an overview of the applications of AI in PET imaging is provided, focusing on image enhancement, tumor detection, response and prognosis prediction and correlation analyses with pathology or specific gene mutations in several types of tumors. Our aim is to describe recent clinical applications of AI-based PET imaging in malignant diseases and to focus on the description of possible future developments.
Affiliation(s)
- Jiaona Dai
- Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
- Hui Wang
- Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
- Yuchao Xu
- School of Nuclear Science and Technology, University of South China, Hengyang City 421001, China
- Xiyang Chen
- Division of Vascular Surgery, Department of General Surgery, West China Hospital, Sichuan University, Chengdu 610041, China.
- Rong Tian
- Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China.
23
Mask R-CNN assisted 2.5D object detection pipeline of 68Ga-PSMA-11 PET/CT-positive metastatic pelvic lymph node after radical prostatectomy from solely CT imaging. Sci Rep 2023; 13:1696. [PMID: 36717727 PMCID: PMC9886937 DOI: 10.1038/s41598-023-28669-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2022] [Accepted: 01/23/2023] [Indexed: 02/01/2023] Open
Abstract
Prostate-specific membrane antigen (PSMA) positron emission tomography (PET)/computed tomography (CT) is a molecular and functional imaging modality with better restaging accuracy than conventional imaging for detecting prostate cancer in men suspected of lymph node (LN) progression after definitive therapy. However, the availability of PSMA PET/CT is limited in low-resource settings and for repeated imaging surveillance. In contrast, CT is widely available, cost-effective, and routinely performed as part of patient follow-up or the radiotherapy workflow. Compared with the molecular activity, the morphological and texture changes of subclinical LNs on CT are subtle, making manual detection of positive LNs infeasible. Instead, we harness the power of artificial intelligence for automated LN detection on CT. We examined 68Ga-PSMA-11 PET/CT images from 88 patients (including 739 PSMA PET/CT-positive pelvic LNs) who experienced a biochemical recurrence after radical prostatectomy and presented for salvage radiotherapy with prostate-specific antigen < 1 ng/mL. Scans were divided into a training set (nPatient = 52, nNode = 400), a validation set (nPatient = 18, nNode = 143), and a test set (nPatient = 18, nNode = 196). Using PSMA PET/CT as the ground truth and consensus pelvic LN clinical target volumes as search regions, a 2.5-dimensional (2.5D) Mask R-CNN-based object detection framework was trained. The entire framework comprised whole-slice imaging pretraining, masked-out region fine-tuning, prediction post-processing, and "window bagging". Following an additional preprocessing step (pelvic LN clinical target volume extraction), our pipeline located positive pelvic LNs based solely on CT scans. On the 196 positive pelvic LNs from 18 patients in the test set, our pipeline achieved a sensitivity of 83.351% and a specificity of 58.621%, and most of the false positives could subsequently be dismissed by radiologists. Our tool may aid CT-based detection of pelvic LN metastasis and triage patients most unlikely to benefit from the PSMA PET/CT scan.
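A hedged sketch of how such a Mask R-CNN detector can be assembled with torchvision's reference implementation is shown below. The 2.5D input is approximated by stacking three adjacent CT slices as channels; dataset wiring, training, and the study's pretraining/fine-tuning stages are omitted.

```python
# Mask R-CNN based lymph-node detector sketch built on torchvision.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_lymph_node_maskrcnn(num_classes: int = 2):  # background + positive LN
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box and mask heads for the two-class problem.
    in_feats = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)
    in_feats_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feats_mask, 256, num_classes)
    return model

model = build_lymph_node_maskrcnn()
model.eval()
# "2.5D" input: three neighbouring CT slices stacked as channels, values in [0, 1].
slab = torch.rand(3, 512, 512)
with torch.no_grad():
    prediction = model([slab])[0]
print(prediction["boxes"].shape, prediction["scores"].shape)
```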
24
Kendrick J, Francis RJ, Hassan GM, Rowshanfarzad P, Ong JSL, Ebert MA. Fully automatic prognostic biomarker extraction from metastatic prostate lesion segmentations in whole-body [ 68Ga]Ga-PSMA-11 PET/CT images. Eur J Nucl Med Mol Imaging 2022; 50:67-79. [PMID: 35976392 DOI: 10.1007/s00259-022-05927-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Accepted: 08/01/2022] [Indexed: 12/17/2022]
Abstract
PURPOSE This study aimed to develop and assess an automated segmentation framework based on deep learning for metastatic prostate cancer (mPCa) lesions in whole-body [68Ga]Ga-PSMA-11 PET/CT images for the purpose of extracting patient-level prognostic biomarkers. METHODS Three hundred thirty-seven [68Ga]Ga-PSMA-11 PET/CT images were retrieved from a cohort of biochemically recurrent PCa patients. A fully 3D convolutional neural network (CNN) is proposed which is based on the self-configuring nnU-Net framework, and was trained on a subset of these scans, with an independent test set reserved for model evaluation. Voxel-level segmentation results were assessed using the dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity. Sensitivity and PPV were calculated to assess lesion level detection; patient-level classification results were assessed by the accuracy, PPV, and sensitivity. Whole-body biomarkers total lesional volume (TLVauto) and total lesional uptake (TLUauto) were calculated from the automated segmentations, and Kaplan-Meier analysis was used to assess biomarker relationship with patient overall survival. RESULTS At the patient level, the accuracy, sensitivity, and PPV were all > 90%, with the best metric being the PPV (97.2%). PPV and sensitivity at the lesion level were 88.2% and 73.0%, respectively. DSC and PPV measured at the voxel level performed within measured inter-observer variability (DSC, median = 50.7% vs. second observer = 32%, p = 0.012; PPV, median = 64.9% vs. second observer = 25.7%, p < 0.005). Kaplan-Meier analysis of TLVauto and TLUauto showed they were significantly associated with patient overall survival (both p < 0.005). CONCLUSION The fully automated assessment of whole-body [68Ga]Ga-PSMA-11 PET/CT images using deep learning shows significant promise, yielding accurate scan classification, voxel-level segmentations within inter-observer variability, and potentially clinically useful prognostic biomarkers associated with patient overall survival. TRIAL REGISTRATION This study was registered with the Australian New Zealand Clinical Trials Registry (ACTRN12615000608561) on 11 June 2015.
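The whole-body biomarkers referred to here (total lesional volume and total lesional uptake) can be derived from a lesion mask and a co-registered SUV map. A minimal sketch follows; the arrays, voxel spacing, and lesion definitions are hypothetical illustrations rather than the study's exact computation.

```python
# Whole-body biomarker extraction from a binary lesion mask and an SUV volume.
import numpy as np
from scipy import ndimage

def whole_body_biomarkers(seg: np.ndarray, suv: np.ndarray, spacing_mm=(2.0, 2.0, 2.0)):
    """Return (TLV in mL, TLU in mL*SUV, lesion count) for a binary mask and SUV map."""
    voxel_ml = np.prod(spacing_mm) / 1000.0           # mm^3 -> mL
    labels, n_lesions = ndimage.label(seg > 0)         # connected components = lesions
    tlv = float((seg > 0).sum() * voxel_ml)
    # TLU: per-lesion mean SUV multiplied by lesion volume, summed over lesions.
    tlu = 0.0
    for lesion_id in range(1, n_lesions + 1):
        mask = labels == lesion_id
        tlu += float(suv[mask].mean()) * mask.sum() * voxel_ml
    return tlv, tlu, n_lesions

seg = np.zeros((50, 50, 50), dtype=np.uint8); seg[10:14, 10:14, 10:14] = 1
suv = np.random.default_rng(0).gamma(2.0, 2.0, size=seg.shape)
print(whole_body_biomarkers(seg, suv))
```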
Affiliation(s)
- Jake Kendrick
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia.
- Roslyn J Francis
- Medical School, University of Western Australia, Crawley, WA, Australia; Department of Nuclear Medicine, Sir Charles Gairdner Hospital, Perth, WA, Australia
- Ghulam Mubashar Hassan
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
- Pejman Rowshanfarzad
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
- Jeremy S L Ong
- Department of Nuclear Medicine, Fiona Stanley Hospital, Murdoch, WA, Australia
- Martin A Ebert
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia; Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, WA, Australia; 5D Clinics, Claremont, WA, Australia
25
Visvikis D, Lambin P, Beuschau Mauridsen K, Hustinx R, Lassmann M, Rischpler C, Shi K, Pruim J. Application of artificial intelligence in nuclear medicine and molecular imaging: a review of current status and future perspectives for clinical translation. Eur J Nucl Med Mol Imaging 2022; 49:4452-4463. [PMID: 35809090 PMCID: PMC9606092 DOI: 10.1007/s00259-022-05891-w] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 06/25/2022] [Indexed: 02/06/2023]
Abstract
Artificial intelligence (AI) will change the face of nuclear medicine and molecular imaging as it will in everyday life. In this review, we focus on the potential applications of AI in the field, both from a physical (radiomics, underlying statistics, image reconstruction and data analysis) and a clinical (neurology, cardiology, oncology) perspective. Challenges for transferability from research to clinical practice are being discussed as is the concept of explainable AI. Finally, we focus on the fields where challenges should be set out to introduce AI in the field of nuclear medicine and molecular imaging in a reliable manner.
Affiliation(s)
- Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology, Maastricht University Medical Center (MUMC +), Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, GROW - School for Oncology, Maastricht University Medical Center (MUMC +), Maastricht, The Netherlands
- Kim Beuschau Mauridsen
- Center of Functionally Integrative Neuroscience and MindLab, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Roland Hustinx
- GIGA-CRC in Vivo Imaging, University of Liège, GIGA, Avenue de l'Hôpital 11, 4000, Liege, Belgium
- Michael Lassmann
- Klinik Und Poliklinik Für Nuklearmedizin, Universitätsklinikum Würzburg, Würzburg, Germany
- Christoph Rischpler
- Department of Nuclear Medicine, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Kuangyu Shi
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland; Department of Informatics, Technical University of Munich, Munich, Germany
- Jan Pruim
- Medical Imaging Center, Dept. of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.
26
Trägårdh E, Enqvist O, Ulén J, Hvittfeldt E, Garpered S, Belal SL, Bjartell A, Edenbrandt L. Freely available artificial intelligence for pelvic lymph node metastases in PSMA PET-CT that performs on par with nuclear medicine physicians. Eur J Nucl Med Mol Imaging 2022; 49:3412-3418. [PMID: 35475912 PMCID: PMC9308591 DOI: 10.1007/s00259-022-05806-9] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2021] [Accepted: 04/16/2022] [Indexed: 01/25/2023]
Abstract
PURPOSE The aim of this study was to develop and validate an artificial intelligence (AI)-based method using convolutional neural networks (CNNs) for the detection of pelvic lymph node metastases in scans obtained using [18F]PSMA-1007 positron emission tomography-computed tomography (PET-CT) from patients with high-risk prostate cancer. The second goal was to make the AI-based method available to other researchers. METHODS [18F]PSMA PET-CT scans were collected from 211 patients. Suspected pelvic lymph node metastases were marked by three independent readers. A CNN was developed and trained on a training and validation group of 161 of the patients. The performance of the AI method and the inter-observer agreement between the three readers were assessed in a separate test group of 50 patients. RESULTS The sensitivity of the AI method for detecting pelvic lymph node metastases was 82%, and the corresponding sensitivity for the human readers was 77% on average. The average number of false positives was 1.8 per patient. A total of 5-17 false negative lesions in the whole cohort were found, depending on which reader was used as a reference. The method is available for researchers at www.recomia.org . CONCLUSION This study shows that AI can obtain a sensitivity on par with that of physicians with a reasonable number of false positives. The difficulty in achieving high inter-observer sensitivity emphasizes the need for automated methods. On the road to qualifying AI tools for clinical use, independent validation is critical and allows performance to be assessed in studies from different hospitals. Therefore, we have made our AI tool freely available to other researchers.
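A toy sketch of the detection metrics reported here (per-lesion sensitivity and false positives per patient) given AI detections matched to reference annotations is shown below. The centroid-distance matching rule and all numbers are illustrative assumptions, not the study's evaluation code.

```python
# Toy lesion-detection evaluation: sensitivity and mean false positives per patient.
import numpy as np

def evaluate_detections(per_patient, match_dist_mm: float = 10.0):
    """per_patient: list of (gt_centroids, pred_centroids) arrays, shape (n, 3), in mm."""
    tp = fn = fp = 0
    for gt, pred in per_patient:
        matched_pred = set()
        for g in gt:
            # Distance from this reference lesion to every detection.
            d = np.linalg.norm(pred - g, axis=1) if len(pred) else np.array([])
            if d.size and d.min() <= match_dist_mm:
                tp += 1
                matched_pred.add(int(d.argmin()))
            else:
                fn += 1
        fp += len(pred) - len(matched_pred)
    return tp / (tp + fn), fp / len(per_patient)

rng = np.random.default_rng(1)
cohort = [(rng.uniform(0, 100, (3, 3)), rng.uniform(0, 100, (4, 3))) for _ in range(5)]
print(evaluate_detections(cohort))  # (sensitivity, false positives per patient)
```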
Affiliation(s)
- Elin Trägårdh
- Department of Translational Medicine and Wallenberg Centre of Molecular Medicine, Lund University, Malmö, Sweden.
- Department of Clinical Physiology and Nuclear Medicine, Skåne University Hospital, Carl Bertil Laurells gata 9, 205 02, Malmö, Sweden.
- Olof Enqvist
- Eigenvision AB, Malmö, Sweden
- Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden
- Erland Hvittfeldt
- Department of Translational Medicine and Wallenberg Centre of Molecular Medicine, Lund University, Malmö, Sweden
- Department of Clinical Physiology and Nuclear Medicine, Skåne University Hospital, Carl Bertil Laurells gata 9, 205 02, Malmö, Sweden
- Sabine Garpered
- Department of Clinical Physiology and Nuclear Medicine, Skåne University Hospital, Carl Bertil Laurells gata 9, 205 02, Malmö, Sweden
- Sarah Lindgren Belal
- Department of Translational Medicine and Wallenberg Centre of Molecular Medicine, Lund University, Malmö, Sweden
- Department of Surgery, Skåne University Hospital, Malmö, Sweden
- Anders Bjartell
- Department of Urology, Skåne University Hospital and Lund University, Malmö, Sweden
- Lars Edenbrandt
- Department of Clinical Physiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
- Department of Molecular and Clinical Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
27
de Barros HA, van Oosterom MN, Donswijk ML, Hendrikx JJMA, Vis AN, Maurer T, van Leeuwen FWB, van der Poel HG, van Leeuwen PJ. Robot-assisted Prostate-specific Membrane Antigen-radioguided Salvage Surgery in Recurrent Prostate Cancer Using a DROP-IN Gamma Probe: The First Prospective Feasibility Study. Eur Urol 2022; 82:97-105. [PMID: 35339318 DOI: 10.1016/j.eururo.2022.03.002] [Citation(s) in RCA: 33] [Impact Index Per Article: 16.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Revised: 02/08/2022] [Accepted: 03/04/2022] [Indexed: 12/15/2022]
Abstract
BACKGROUND It has been proven that intraoperative prostate-specific membrane antigen (PSMA)-targeted radioguidance is valuable for the detection of prostate cancer (PCa) lesions during open surgery. Rapid extension of robot-assisted, minimally invasive surgery has increased the need to make PSMA-radioguided surgery (RGS) robot-compliant. OBJECTIVE To evaluate whether the miniaturized DROP-IN gamma probe facilitates translation of PSMA-RGS to robotic surgery in men with recurrent PCa. DESIGN, SETTING, AND PARTICIPANTS This prospective feasibility study included 20 patients with up to three pelvic PCa recurrences (nodal or local) on staging PSMA positron emission tomography (PET) after previous curative-intent therapy. SURGICAL PROCEDURE Robot-assisted PSMA-RGS using the DROP-IN gamma probe was carried out 19-23 h after intravenous injection of 99mtechnetium PSMA-Investigation & Surgery (99mTc-PSMA-I&S). MEASUREMENTS The primary endpoint was the feasibility of robot-assisted PSMA-RGS. Secondary endpoints were a comparison of the radioactive status (positive or negative) of resected specimens and final histopathology results, prostate-specific antigen (PSA) response following PSMA-RGS, and complications according to the Clavien-Dindo classification. RESULTS AND LIMITATIONS Using the DROP-IN probe, 19/21 (90%) PSMA-avid lesions could be resected robotically. On a per-lesion basis, the sensitivity and specificity of robot-assisted PSMA-RGS was 86% and 100%, respectively. A prostate-specific antigen (PSA) reduction of >50% and a complete biochemical response (PSA <0.2 ng/ml) were seen in 12/18 (67%) and 4/18 (22%) patients, respectively. During follow-up of up to 15 mo, 4/18 patients (22%) remained free of biochemical recurrence (PSA ≤0.2 ng/ml). One patient suffered from a Clavien-Dindo grade >III complication. CONCLUSIONS The DROP-IN probe helps in realizing robot-assisted PSMA-RGS. The procedure is technically feasible for intraoperative detection of nodal or local PSMA-avid PCa recurrences. PATIENT SUMMARY A device called the DROP-IN probe facilitates minimally invasive, robot-assisted surgery guided by radioactive tracers in patients with recurrent prostate cancer. This procedure holds promise for improving the intraoperative identification and removal of prostate cancer lesions.
Affiliation(s)
- Hilda A de Barros
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands; Netherlands Prostate Cancer Network, Amsterdam, The Netherlands.
- Matthias N van Oosterom
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands; Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Maarten L Donswijk
- Department of Nuclear Medicine, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Jeroen J M A Hendrikx
- Department of Nuclear Medicine, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands; Department of Pharmacy & Pharmacology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- André N Vis
- Netherlands Prostate Cancer Network, Amsterdam, The Netherlands; Department of Urology, Amsterdam University Medical Center, VU University, Amsterdam, The Netherlands
- Tobias Maurer
- Martini-Klinik Prostate Cancer Center, University Hospital Hamburg-Eppendorf, Hamburg, Germany; Department of Urology, University Hospital Hamburg-Eppendorf, Hamburg, Germany
- Fijs W B van Leeuwen
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands; Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Henk G van der Poel
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands; Netherlands Prostate Cancer Network, Amsterdam, The Netherlands; Department of Urology, Amsterdam University Medical Center, VU University, Amsterdam, The Netherlands
- Pim J van Leeuwen
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands; Netherlands Prostate Cancer Network, Amsterdam, The Netherlands
28
Yoshida A, Ueda D, Higashiyama S, Katayama Y, Matsumoto T, Yamanaga T, Miki Y, Kawabe J. Deep learning-based detection of parathyroid adenoma by 99mTc-MIBI scintigraphy in patients with primary hyperparathyroidism. Ann Nucl Med 2022; 36:468-478. [PMID: 35182328 DOI: 10.1007/s12149-022-01726-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Accepted: 02/06/2022] [Indexed: 11/01/2022]
Abstract
OBJECTIVE It is important to detect parathyroid adenomas by parathyroid scintigraphy with 99m-technetium sestamibi (99mTc-MIBI) before surgery. This study aimed to develop and validate deep learning (DL)-based models to detect parathyroid adenoma in patients with primary hyperparathyroidism, from parathyroid scintigrams with 99mTc-MIBI. METHODS DL-based models for detecting parathyroid adenoma in early- and late-phase parathyroid scintigrams were, respectively, developed and evaluated. The training dataset used to train the models was collected from 192 patients (165 adenoma cases, mean age: 64 years ± 13, 145 women) and the validation dataset used to tune the models was collected from 45 patients (30 adenoma cases, mean age: 67 years ± 12, 37 women). The images were collected from patients who were pathologically diagnosed with parathyroid adenomas or in whom no lesions could be detected by either parathyroid scintigraphy or ultrasonography at our institution from June 2010 to March 2019. The models were tested on a dataset collected from 44 patients (30 adenoma cases, mean age: 67 years ± 12, 38 women) who took scintigraphy from April 2019 to March 2020. The models' lesion-based sensitivity and mean false positive indications per image (mFPI) were assessed with the test dataset. RESULTS The sensitivity was 82% [95% confidence interval 72-92%] with mFPI of 0.44 for the scintigrams of the early-phase model and 83% [73-92%] with mFPI of 0.31 for the scintigrams of the delayed-phase model in the test dataset, respectively. CONCLUSIONS The DL-based models were able to detect parathyroid adenomas with a high sensitivity using parathyroid scintigraphy with 99m-technetium sestamibi.
Affiliation(s)
- Atsushi Yoshida
- Department of Nuclear Medicine, Graduate School of Medicine, Osaka City University, 1-4-3, Asahimachi, Abeno-ku, Osaka, 545-8585, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, 1-4-3, Asahimachi, Abeno-ku, Osaka, 545-8585, Japan.
- Shigeaki Higashiyama
- Department of Nuclear Medicine, Graduate School of Medicine, Osaka City University, 1-4-3, Asahimachi, Abeno-ku, Osaka, 545-8585, Japan
- Yutaka Katayama
- Department of Radiology, Osaka City University Hospital, 1-5-7, Asahimachi, Abeno-ku, Osaka, 545-8586, Japan
- Toshimasa Matsumoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, 1-4-3, Asahimachi, Abeno-ku, Osaka, 545-8585, Japan
- Takashi Yamanaga
- Department of Radiology, Osaka City University Hospital, 1-5-7, Asahimachi, Abeno-ku, Osaka, 545-8586, Japan
- Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, 1-4-3, Asahimachi, Abeno-ku, Osaka, 545-8585, Japan
- Joji Kawabe
- Department of Nuclear Medicine, Graduate School of Medicine, Osaka City University, 1-4-3, Asahimachi, Abeno-ku, Osaka, 545-8585, Japan
29
Smart materials: rational design in biosystems via artificial intelligence. Trends Biotechnol 2022; 40:987-1003. [DOI: 10.1016/j.tibtech.2022.01.005] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Revised: 01/09/2022] [Accepted: 01/10/2022] [Indexed: 12/12/2022]
30
Li MD, Ahmed SR, Choy E, Lozano-Calderon SA, Kalpathy-Cramer J, Chang CY. Artificial intelligence applied to musculoskeletal oncology: a systematic review. Skeletal Radiol 2022; 51:245-256. [PMID: 34013447 DOI: 10.1007/s00256-021-03820-w] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Revised: 05/13/2021] [Accepted: 05/13/2021] [Indexed: 02/02/2023]
Abstract
Developments in artificial intelligence have the potential to improve the care of patients with musculoskeletal tumors. We performed a systematic review of the published scientific literature to identify the current state of the art of artificial intelligence applied to musculoskeletal oncology, including both primary and metastatic tumors, and across the radiology, nuclear medicine, pathology, clinical research, and molecular biology literature. Through this search, we identified 252 primary research articles, of which 58 used deep learning and 194 used other machine learning techniques. Articles involving deep learning have mostly involved bone scintigraphy, histopathology, and radiologic imaging. Articles involving other machine learning techniques have mostly involved transcriptomic analyses, radiomics, and clinical outcome prediction models using medical records. These articles predominantly present proof-of-concept work, other than the automated bone scan index for bone metastasis quantification, which has translated to clinical workflows in some regions. We systematically review and discuss this literature, highlight opportunities for multidisciplinary collaboration, and identify potentially clinically useful topics with a relative paucity of research attention. Musculoskeletal oncology is an inherently multidisciplinary field, and future research will need to integrate and synthesize noisy siloed data from across clinical, imaging, and molecular datasets. Building the data infrastructure for collaboration will help to accelerate progress towards making artificial intelligence truly useful in musculoskeletal oncology.
Affiliation(s)
- Matthew D Li
- Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
- Syed Rakin Ahmed
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Harvard Medical School, Harvard Graduate Program in Biophysics, Harvard University, Cambridge, MA, USA; Geisel School of Medicine at Dartmouth, Dartmouth College, Hanover, NH, USA
- Edwin Choy
- Division of Hematology Oncology, Department of Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Santiago A Lozano-Calderon
- Department of Orthopedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Connie Y Chang
- Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
31
Ong JS, Hofman MS. PET imaging of prostate cancer. Nucl Med Mol Imaging 2022. [DOI: 10.1016/b978-0-12-822960-6.00111-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
32
Wendler T, van Leeuwen FWB, Navab N, van Oosterom MN. How molecular imaging will enable robotic precision surgery : The role of artificial intelligence, augmented reality, and navigation. Eur J Nucl Med Mol Imaging 2021; 48:4201-4224. [PMID: 34185136 PMCID: PMC8566413 DOI: 10.1007/s00259-021-05445-6] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Accepted: 06/01/2021] [Indexed: 02/08/2023]
Abstract
Molecular imaging is one of the pillars of precision surgery. Its applications range from early diagnostics to therapy planning, execution, and the accurate assessment of outcomes. In particular, molecular imaging solutions are in high demand in minimally invasive surgical strategies, such as the substantially increasing field of robotic surgery. This review aims at connecting the molecular imaging and nuclear medicine community to the rapidly expanding armory of surgical medical devices. Such devices entail technologies ranging from artificial intelligence and computer-aided visualization technologies (software) to innovative molecular imaging modalities and surgical navigation (hardware). We discuss technologies based on their role at different steps of the surgical workflow, i.e., from surgical decision and planning, over to target localization and excision guidance, all the way to (back table) surgical verification. This provides a glimpse of how innovations from the technology fields can realize an exciting future for the molecular imaging and surgery communities.
Affiliation(s)
- Thomas Wendler
- Chair for Computer Aided Medical Procedures and Augmented Reality, Technische Universität München, Boltzmannstr. 3, 85748 Garching bei München, Germany
- Fijs W. B. van Leeuwen
- Department of Radiology, Interventional Molecular Imaging Laboratory, Leiden University Medical Center, Leiden, The Netherlands
- Department of Urology, The Netherlands Cancer Institute - Antonie van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Orsi Academy, Melle, Belgium
- Nassir Navab
- Chair for Computer Aided Medical Procedures and Augmented Reality, Technische Universität München, Boltzmannstr. 3, 85748 Garching bei München, Germany
- Chair for Computer Aided Medical Procedures Laboratory for Computational Sensing + Robotics, Johns-Hopkins University, Baltimore, MD USA
- Matthias N. van Oosterom
- Department of Radiology, Interventional Molecular Imaging Laboratory, Leiden University Medical Center, Leiden, The Netherlands
- Department of Urology, The Netherlands Cancer Institute - Antonie van Leeuwenhoek Hospital, Amsterdam, The Netherlands
33
Ma K, Harmon SA, Klyuzhin IS, Rahmim A, Turkbey B. Clinical Application of Artificial Intelligence in Positron Emission Tomography: Imaging of Prostate Cancer. PET Clin 2021; 17:137-143. [PMID: 34809863 DOI: 10.1016/j.cpet.2021.09.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
PET imaging with novel targeted tracers is commonly used in the clinical management of prostate cancer. The use of artificial intelligence (AI) in PET imaging is a relatively new approach; in this article, we review current trends and categorize the available research into quantification of tumor burden within the organ, evaluation of metastatic disease, and translational/supplemental research that aims to support other AI research efforts.
Affiliation(s)
- Kevin Ma
- Artificial Intelligence Resource, Molecular Imaging Branch, NCI, NIH, Bethesda, MD, USA
- Stephanie A Harmon
- Artificial Intelligence Resource, Molecular Imaging Branch, NCI, NIH, Bethesda, MD, USA
- Ivan S Klyuzhin
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada; Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada; Department of Physics, University of British Columbia, Vancouver, British Columbia, Canada
- Baris Turkbey
- Artificial Intelligence Resource, Molecular Imaging Branch, NCI, NIH, Bethesda, MD, USA.
34
Yousefirizi F, Decazes P, Amyar A, Ruan S, Saboury B, Rahmim A. AI-Based Detection, Classification and Prediction/Prognosis in Medical Imaging: Towards Radiophenomics. PET Clin 2021; 17:183-212. [PMID: 34809866 DOI: 10.1016/j.cpet.2021.09.010] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Artificial intelligence (AI) techniques have significant potential to enable effective, robust, and automated image phenotyping including the identification of subtle patterns. AI-based detection searches the image space to find the regions of interest based on patterns and features. There is a spectrum of tumor histologies from benign to malignant that can be identified by AI-based classification approaches using image features. The extraction of minable information from images gives way to the field of "radiomics" and can be explored via explicit (handcrafted/engineered) and deep radiomics frameworks. Radiomics analysis has the potential to be used as a noninvasive technique for the accurate characterization of tumors to improve diagnosis and treatment monitoring. This work reviews AI-based techniques, with a special focus on oncological PET and PET/CT imaging, for different detection, classification, and prediction/prognosis tasks. We also discuss needed efforts to enable the translation of AI techniques to routine clinical workflows, and potential improvements and complementary techniques such as the use of natural language processing on electronic health records and neuro-symbolic AI techniques.
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada.
- Pierre Decazes
- Department of Nuclear Medicine, Henri Becquerel Centre, Rue d'Amiens - CS 11516 - 76038 Rouen Cedex 1, France; QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Amine Amyar
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France; General Electric Healthcare, Buc, France
- Su Ruan
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, USA; Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada; Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada; Department of Physics, University of British Columbia, Vancouver, British Columbia, Canada
35
Liu X, Sun Z, Han C, Cui Y, Huang J, Wang X, Zhang X, Wang X. Development and validation of the 3D U-Net algorithm for segmentation of pelvic lymph nodes on diffusion-weighted images. BMC Med Imaging 2021; 21:170. [PMID: 34774001 PMCID: PMC8590773 DOI: 10.1186/s12880-021-00703-3] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Accepted: 11/08/2021] [Indexed: 12/16/2022] Open
Abstract
Background The 3D U-Net model has been proven to perform well in automatic organ segmentation. The aim of this study is to evaluate the feasibility of the 3D U-Net algorithm for the automated detection and segmentation of lymph nodes (LNs) on pelvic diffusion-weighted imaging (DWI) images. Methods A total of 393 DWI images of patients suspected of having prostate cancer (PCa) between January 2019 and December 2020 were collected for model development. Seventy-seven DWI images from another group of PCa patients imaged between January 2021 and April 2021 were collected for temporal validation. Segmentation performance was assessed using the Dice score, positive predictive value (PPV), true positive rate (TPR), volumetric similarity (VS), Hausdorff distance (HD), average distance (AVD), and Mahalanobis distance (MHD), with manual annotation of pelvic LNs as the reference. The accuracy with which suspicious metastatic LNs (short diameter > 0.8 cm) were detected was evaluated using the area under the curve (AUC) at the patient level, and the precision, recall, and F1-score were determined at the lesion level. The consistency of LN staging on a hold-out test dataset between the model and a radiologist was assessed using Cohen’s kappa coefficient. Results In the testing set used for model development, the Dice score, TPR, PPV, VS, HD, AVD, and MHD values for the segmentation of suspicious LNs were 0.85, 0.82, 0.80, 0.86, 2.02 mm, 2.01 mm, and 1.54 mm, respectively. The precision, recall, and F1-score for the detection of suspicious LNs were 0.97, 0.98, and 0.97, respectively. In the temporal validation dataset, the AUC of the model for identifying PCa patients with suspicious LNs was 0.963 (95% CI: 0.892–0.993). High consistency of LN staging (kappa = 0.922) was achieved between the model and an expert radiologist. Conclusion The 3D U-Net algorithm can accurately detect and segment pelvic LNs on DWI images.
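The overlap metrics reported here are simple to reproduce from binary masks. Below is a minimal, generic sketch (not the study's own code; the function and array names are illustrative) of the Dice score, PPV, TPR, and volumetric similarity for a predicted lymph-node mask against a manual reference.

```python
import numpy as np

def overlap_metrics(pred, ref):
    """Dice, PPV, TPR and volumetric similarity for two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()          # true-positive voxels
    dice = 2.0 * tp / (pred.sum() + ref.sum())
    ppv = tp / pred.sum() if pred.sum() else 0.0  # positive predictive value
    tpr = tp / ref.sum() if ref.sum() else 0.0    # true positive rate (sensitivity)
    vs = 1.0 - abs(int(pred.sum()) - int(ref.sum())) / (pred.sum() + ref.sum())
    return dice, ppv, tpr, vs

# Toy example with random 3D masks
rng = np.random.default_rng(0)
pred = rng.random((32, 64, 64)) > 0.5
ref = rng.random((32, 64, 64)) > 0.5
print(overlap_metrics(pred, ref))
```

Distance-based metrics such as the Hausdorff or average surface distance additionally require surface extraction and are usually computed with dedicated evaluation packages rather than a few lines of code.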
Collapse
Affiliation(s)
- Xiang Liu
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Zhaonan Sun
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Chao Han
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Yingpu Cui
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Jiahao Huang
- Beijing Smart Tree Medical Technology Co. Ltd., No.24, Huangsi Street, Xicheng District, Beijing, 100011, China
| | - Xiangpeng Wang
- Beijing Smart Tree Medical Technology Co. Ltd., No.24, Huangsi Street, Xicheng District, Beijing, 100011, China
| | - Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Xiaoying Wang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China.
| |
Collapse
|
36
|
Yin P, Sun C, Wang S, Chen L, Hong N. Clinical-Deep Neural Network and Clinical-Radiomics Nomograms for Predicting the Intraoperative Massive Blood Loss of Pelvic and Sacral Tumors. Front Oncol 2021; 11:752672. [PMID: 34760700 PMCID: PMC8574215 DOI: 10.3389/fonc.2021.752672] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2021] [Accepted: 10/06/2021] [Indexed: 11/29/2022] Open
Abstract
Background Patients with pelvic and sacral tumors are prone to massive blood loss (MBL) during surgery, which may endanger their lives. Purposes This study aimed to determine the feasibility of using a deep neural network (DNN) and a radiomics nomogram (RN) based on 3D computed tomography (CT) features and clinical characteristics to predict the intraoperative MBL of pelvic and sacral tumors. Materials and Methods This single-center retrospective analysis included 810 patients with pelvic and sacral tumors. In total, 1316 radiomics features were extracted from CT and enhanced CT images. RN1 and RN2 were constructed by random grouping and time-node grouping, respectively. DNN models were constructed for comparison with the RNs. Clinical factors associated with MBL were also evaluated. The area under the receiver operating characteristic curve (AUC) and accuracy (ACC) were used to evaluate the different models. Results Radscore, tumor type, tumor location, and sex were significant predictors of the MBL of pelvic and sacral tumors (P < 0.05), of which radscore (OR ranging from 2.109 to 4.706, P < 0.001) was the most important. The clinical-DNN and clinical-RN models performed better than the DNN and RN alone. The best-performing clinical-DNN model based on CT features exhibited an AUC of 0.92 and an ACC of 0.97 in the training set, and an AUC of 0.92 and an ACC of 0.75 in the validation set. Conclusions The clinical-DNN and clinical-RN performed well in predicting the MBL of pelvic and sacral tumors and could be used for clinical decision-making.
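As a rough illustration of how a radiomics score can be combined with clinical factors in a predictive model and then evaluated by AUC and ACC, the sketch below fits a logistic regression on synthetic data; the feature names and data are hypothetical and are not drawn from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 810
radscore = rng.normal(size=n)                    # hypothetical radiomics score
tumor_type = rng.integers(0, 2, size=n)          # encoded clinical factors
tumor_location = rng.integers(0, 2, size=n)
sex = rng.integers(0, 2, size=n)
X = np.column_stack([radscore, tumor_type, tumor_location, sex])
# Synthetic outcome: massive blood loss, driven mostly by the radscore
y = (radscore + 0.5 * tumor_type + rng.normal(scale=1.0, size=n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, prob),
      "ACC:", accuracy_score(y_te, (prob > 0.5).astype(int)))
```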
Collapse
Affiliation(s)
- Ping Yin
- Department of Radiology, Peking University People's Hospital, Beijing, China
| | - Chao Sun
- Department of Radiology, Peking University People's Hospital, Beijing, China
| | - Sicong Wang
- Department of Pharmaceuticals Diagnosis, GE Healthcare (China), Shanghai, China
| | - Lei Chen
- Department of Radiology, Peking University People's Hospital, Beijing, China
| | - Nan Hong
- Department of Radiology, Peking University People's Hospital, Beijing, China
| |
Collapse
|
37
|
Kendrick J, Francis R, Hassan GM, Rowshanfarzad P, Jeraj R, Kasisi C, Rusanov B, Ebert M. Radiomics for Identification and Prediction in Metastatic Prostate Cancer: A Review of Studies. Front Oncol 2021; 11:771787. [PMID: 34790581 PMCID: PMC8591174 DOI: 10.3389/fonc.2021.771787] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Accepted: 10/11/2021] [Indexed: 12/21/2022] Open
Abstract
Metastatic Prostate Cancer (mPCa) is associated with a poor patient prognosis. mPCa spreads throughout the body, often to bones, with spatial and temporal variations that make the clinical management of the disease difficult. The evolution of the disease leads to spatial heterogeneity that is extremely difficult to characterise with solid biopsies. Imaging provides the opportunity to quantify disease spread. Advanced image analytics methods, including radiomics, offer the opportunity to characterise heterogeneity beyond what can be achieved with simple assessment. Radiomics analysis has the potential to yield useful quantitative imaging biomarkers that can improve the early detection of mPCa, predict disease progression, assess response, and potentially inform the choice of treatment procedures. Traditional radiomics analysis involves modelling with hand-crafted features designed using significant domain knowledge. On the other hand, artificial intelligence techniques such as deep learning can facilitate end-to-end automated feature extraction and model generation with minimal human intervention. Radiomics models have the potential to become vital pieces in the oncology workflow; however, the current limitations of the field, such as limited reproducibility, are impeding their translation into clinical practice. This review provides an overview of the radiomics methodology, detailing critical aspects affecting the reproducibility of features, and providing examples of how artificial intelligence techniques can be incorporated into the workflow. The current landscape of publications utilising radiomics methods in the assessment and treatment of mPCa is surveyed and reviewed. Associated studies have incorporated information from multiple imaging modalities, including bone scintigraphy, CT, PET with varying tracers, and multiparametric MRI, together with clinical covariates, spanning the prediction of progression through to overall survival in varying cohorts. The methodological quality of each study is quantified using the radiomics quality score. Multiple deficits were identified, with the lack of prospective design and external validation highlighted as major impediments to clinical translation. These results inform some recommendations for future directions of the field.
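Handcrafted ("explicit") radiomics typically starts from simple first-order statistics of voxel intensities inside a delineated lesion. The sketch below, assuming a NumPy image volume and a binary lesion mask, shows a few such features; dedicated libraries compute far larger, standardized feature sets, so this is an illustration of the idea rather than a radiomics pipeline.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def first_order_features(image, mask, bins=64):
    """A few first-order radiomics features from voxels inside a lesion mask."""
    vals = image[mask > 0].astype(float)
    counts, _ = np.histogram(vals, bins=bins)
    p = counts / counts.sum()          # discretized intensity probabilities
    p = p[p > 0]
    return {
        "mean": vals.mean(),
        "std": vals.std(),
        "skewness": skew(vals),
        "kurtosis": kurtosis(vals),
        "entropy": float(-(p * np.log2(p)).sum()),
    }
```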
Collapse
Affiliation(s)
- Jake Kendrick
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
| | - Roslyn Francis
- Medical School, University of Western Australia, Crawley, WA, Australia
- Department of Nuclear Medicine, Sir Charles Gairdner Hospital, Perth, WA, Australia
| | - Ghulam Mubashar Hassan
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
| | - Pejman Rowshanfarzad
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
| | - Robert Jeraj
- Department of Medical Physics, University of Wisconsin, Madison, WI, United States
- Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia
| | - Collin Kasisi
- Department of Nuclear Medicine, Sir Charles Gairdner Hospital, Perth, WA, Australia
| | - Branimir Rusanov
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
| | - Martin Ebert
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, WA, Australia
- 5D Clinics, Claremont, WA, Australia
| |
Collapse
|
38
|
Diao Z, Jiang H, Han XH, Yao YD, Shi T. EFNet: evidence fusion network for tumor segmentation from PET-CT volumes. Phys Med Biol 2021; 66. [PMID: 34555816 DOI: 10.1088/1361-6560/ac299a] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Accepted: 09/23/2021] [Indexed: 11/11/2022]
Abstract
Precise delineation of the target tumor from positron emission tomography-computed tomography (PET-CT) is a key step in clinical practice and radiation therapy. PET-CT co-segmentation uses the complementary information of the two modalities to reduce the uncertainty of single-modality segmentation and thus obtain more accurate results. At present, PET-CT segmentation methods based on fully convolutional networks (FCNs) mainly adopt image fusion and feature fusion. Current fusion strategies do not consider the uncertainty of multi-modal segmentation, and complex feature fusion consumes more computing resources, especially when dealing with 3D volumes. In this work, we analyze PET-CT co-segmentation from the perspective of uncertainty and propose the evidence fusion network (EFNet). Through the proposed evidence loss, the network outputs a PET result and a CT result that each carry uncertainty; these are used as PET evidence and CT evidence. Evidence fusion is then used to reduce the uncertainty of the single-modality evidence, and the final segmentation result is obtained from the fusion of PET evidence and CT evidence. EFNet uses a basic 3D U-Net as the backbone and only simple unidirectional feature fusion. In addition, EFNet can train and predict PET evidence and CT evidence separately, without the need for parallel training of two branch networks. We conduct experiments on soft-tissue sarcoma and lymphoma datasets. Compared with 3D U-Net, our proposed method improves the Dice score by 8% and 5%, respectively. Compared with a complex feature fusion method, our proposed method improves the Dice score by 7% and 2%, respectively. Our results show that, in FCN-based PET-CT segmentation methods, outputting uncertainty evidence and applying evidence fusion can simplify the network and improve segmentation results.
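The sketch below is only a toy illustration of the general idea of down-weighting the more uncertain modality when combining single-modality predictions. It uses a simple inverse-entropy weighting and is not the evidence loss or evidence fusion rule proposed in the paper.

```python
import numpy as np

def entropy(p, eps=1e-6):
    """Binary entropy of a foreground-probability map."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def fuse_by_uncertainty(p_pet, p_ct, eps=1e-6):
    """Weight each modality's prediction by the inverse of its voxel-wise entropy."""
    w_pet = 1.0 / (entropy(p_pet) + eps)
    w_ct = 1.0 / (entropy(p_ct) + eps)
    return (w_pet * p_pet + w_ct * p_ct) / (w_pet + w_ct)

# Toy example: the PET branch is confident, the CT branch is not
p_pet = np.array([0.95, 0.10, 0.90])
p_ct = np.array([0.55, 0.45, 0.60])
print(fuse_by_uncertainty(p_pet, p_ct))   # stays close to the confident PET values
```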
Collapse
Affiliation(s)
- Zhaoshuo Diao
- Software College, Northeastern University, Shenyang 110819, People's Republic of China
| | - Huiyan Jiang
- Software College, Northeastern University, Shenyang 110819, People's Republic of China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, People's Republic of China
| | - Xian-Hua Han
- Graduate School of Sciences and Technology for Innovation, Yamaguchi University, Yamaguchi-shi 7538511, Japan
| | - Yu-Dong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken NJ 07030, United States of America
| | - Tianyu Shi
- Software College, Northeastern University, Shenyang 110819, People's Republic of China
| |
Collapse
|
39
|
Rogasch JMM, Penzkofer T. AI in nuclear medicine - what, why and how? Nuklearmedizin 2021; 60:321-324. [PMID: 34607369 DOI: 10.1055/a-1542-6231] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
40
|
Brosch-Lenz J, Yousefirizi F, Zukotynski K, Beauregard JM, Gaudet V, Saboury B, Rahmim A, Uribe C. Role of Artificial Intelligence in Theranostics: Toward Routine Personalized Radiopharmaceutical Therapies. PET Clin 2021; 16:627-641. [PMID: 34537133 DOI: 10.1016/j.cpet.2021.06.002] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
We highlight emerging uses of artificial intelligence (AI) in the field of theranostics, focusing on its significant potential to enable routine and reliable personalization of radiopharmaceutical therapies (RPTs). Personalized RPTs require patient-specific dosimetry calculations accompanying therapy. Additionally, we discuss the potential to exploit biological information from diagnostic and therapeutic molecular images to derive biomarkers for absorbed dose and outcome prediction, toward the personalization of therapies. We aim to motivate the nuclear medicine community to expand and align efforts toward making routine and reliable personalization of RPTs a reality.
Collapse
Affiliation(s)
- Julia Brosch-Lenz
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
| | - Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
| | - Katherine Zukotynski
- Department of Medicine and Radiology, McMaster University, 1200 Main Street West, Hamilton, Ontario L9G 4X5, Canada
| | - Jean-Mathieu Beauregard
- Department of Radiology and Nuclear Medicine, Cancer Research Centre, Université Laval, 2325 Rue de l'Université, Québec City, Quebec G1V 0A6, Canada; Department of Medical Imaging, Research Center (Oncology Axis), CHU de Québec - Université Laval, 2325 Rue de l'Université, Québec City, Quebec G1V 0A6, Canada
| | - Vincent Gaudet
- Department of Electrical and Computer Engineering, University of Waterloo, 200 University Avenue West, Waterloo, Ontario N2L 3G1, Canada
| | - Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
| | - Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada; Department of Radiology, University of British Columbia, 11th Floor, 2775 Laurel St, Vancouver, British Columbia V5Z 1M9, Canada; Department of Physics, University of British Columbia, 325 - 6224 Agricultural Road, Vancouver, British Columbia V6T 1Z1, Canada
| | - Carlos Uribe
- Department of Radiology, University of British Columbia, 11th Floor, 2775 Laurel St, Vancouver, British Columbia V5Z 1M9, Canada; Department of Functional Imaging, BC Cancer, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada.
| |
Collapse
|
41
|
Yousefirizi F, Jha AK, Brosch-Lenz J, Saboury B, Rahmim A. Toward High-Throughput Artificial Intelligence-Based Segmentation in Oncological PET Imaging. PET Clin 2021; 16:577-596. [PMID: 34537131 DOI: 10.1016/j.cpet.2021.06.001] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
Artificial intelligence (AI) techniques for image-based segmentation have garnered much attention in recent years. Convolutional neural networks have shown impressive results and potential toward fully automated segmentation in medical imaging, and particularly in PET imaging. To cope with the limited access to the annotated data needed by supervised AI methods, since manual delineations are tedious and error-prone, semi-supervised and unsupervised AI techniques have also been explored for segmentation of tumors or normal organs in single- and bimodality scans. This work reviews existing AI techniques for segmentation tasks and the evaluation criteria for translational AI-based segmentation efforts toward routine adoption in clinical workflows.
Collapse
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada.
| | - Abhinav K Jha
- Department of Biomedical Engineering, Washington University in St. Louis, St Louis, MO 63130, USA; Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO 63110, USA
| | - Julia Brosch-Lenz
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
| | - Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
| | - Arman Rahmim
- Department of Radiology, University of British Columbia, BC Cancer, BC Cancer Research Institute, 675 West 10th Avenue, Office 6-112, Vancouver, British Columbia V5Z 1L3, Canada; Department of Physics, University of British Columbia, Senior Scientist & Provincial Medical Imaging Physicist, BC Cancer, BC Cancer Research Institute, 675 West 10th Avenue, Office 6-112, Vancouver, British Columbia V5Z 1L3, Canada
| |
Collapse
|
42
|
CDC-Net: Cascaded decoupled convolutional network for lesion-assisted detection and grading of retinopathy using optical coherence tomography (OCT) scans. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103030] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
|
43
|
Analytical performance of aPROMISE: automated anatomic contextualization, detection, and quantification of [18F]DCFPyL (PSMA) imaging for standardized reporting. Eur J Nucl Med Mol Imaging 2021; 49:1041-1051. [PMID: 34463809 PMCID: PMC8803714 DOI: 10.1007/s00259-021-05497-8] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Accepted: 07/09/2021] [Indexed: 11/21/2022]
Abstract
Purpose The application of automated image analyses could improve and facilitate standardization and consistency of quantification in [18F]DCFPyL (PSMA) PET/CT scans. In the current study, we analytically validated aPROMISE, a software as a medical device that segments organs in low-dose CT images with deep learning, and subsequently detects and quantifies potential pathological lesions in PSMA PET/CT. Methods To evaluate the deep learning algorithm, the automated segmentations of the low-dose CT component of PSMA PET/CT scans from 20 patients were compared to manual segmentations. Dice scores were used to quantify the similarities between the automated and manual segmentations. Next, the automated quantification of tracer uptake in the reference organs and the detection and pre-segmentation of potential lesions were evaluated in 339 patients with prostate cancer, who were all enrolled in the phase II/III OSPREY study. Three nuclear medicine physicians performed the retrospective independent reads of OSPREY images with aPROMISE. Quantitative consistency was assessed by the pairwise Pearson correlations and standard deviation between the readers and aPROMISE. The sensitivity of detection and pre-segmentation of potential lesions was evaluated by determining the percentage of manually selected abnormal lesions that were automatically detected by aPROMISE. Results The Dice scores for bone segmentations ranged from 0.88 to 0.95. The Dice scores of the PSMA PET/CT reference organs, thoracic aorta and liver, were 0.89 and 0.97, respectively. Dice scores of other visceral organs, including the prostate, were observed to be above 0.79. The Pearson correlation for the blood pool reference was higher between any manual reader and aPROMISE than between any pair of manual readers. The standard deviations of reference organ uptake across all patients as determined by aPROMISE (SD = 0.21 blood pool and SD = 1.16 liver) were lower than those of the manual readers. Finally, the sensitivity of aPROMISE detection and pre-segmentation was 91.5% for regional lymph nodes, 90.6% for all lymph nodes, and 86.7% for bone in metastatic patients. Conclusion In this analytical study, we demonstrated the segmentation accuracy of the deep learning algorithm, the consistency in quantitative assessment across multiple readers, and the high sensitivity in detecting potential lesions. The study provides a foundational framework for clinical evaluation of aPROMISE in standardized reporting of PSMA PET/CT. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-021-05497-8.
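Reader-consistency analyses of this kind reduce to pairwise Pearson correlations and standard deviations over per-patient reference-organ values. The snippet below uses synthetic SUV measurements (the numbers are made up) purely to show the computation.

```python
import numpy as np
from itertools import combinations
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
truth = rng.normal(1.8, 0.2, size=30)                    # per-patient blood-pool SUVmean
readings = {
    "reader_a": truth + rng.normal(0, 0.10, size=30),
    "reader_b": truth + rng.normal(0, 0.10, size=30),
    "reader_c": truth + rng.normal(0, 0.10, size=30),
    "automated": truth + rng.normal(0, 0.05, size=30),   # hypothetically less variable
}
for a, b in combinations(readings, 2):
    r, _ = pearsonr(readings[a], readings[b])
    print(f"{a} vs {b}: r = {r:.3f}")
for name, vals in readings.items():
    print(f"{name}: SD across patients = {vals.std():.3f}")
```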
Collapse
|
44
|
Hassan B, Qin S, Ahmed R, Hassan T, Taguri AH, Hashmi S, Werghi N. Deep learning based joint segmentation and characterization of multi-class retinal fluid lesions on OCT scans for clinical use in anti-VEGF therapy. Comput Biol Med 2021; 136:104727. [PMID: 34385089 DOI: 10.1016/j.compbiomed.2021.104727] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2021] [Revised: 07/31/2021] [Accepted: 08/01/2021] [Indexed: 11/19/2022]
Abstract
BACKGROUND In anti-vascular endothelial growth factor (anti-VEGF) therapy, an accurate estimation of multi-class retinal fluid (MRF) is required for the activity prescription and intravitreal dose. This study proposes an end-to-end deep learning-based retinal fluids segmentation network (RFS-Net) to segment and recognize three MRF lesion manifestations, namely, intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED), from multi-vendor optical coherence tomography (OCT) imagery. The proposed image analysis tool will optimize anti-VEGF therapy and contribute to reducing inter- and intra-observer variability. METHOD The proposed RFS-Net architecture integrates atrous spatial pyramid pooling (ASPP), residual, and inception modules in the encoder path to learn better features and conserve more global information for precise segmentation and characterization of MRF lesions. The RFS-Net model is trained and validated using OCT scans from multiple vendors (Topcon, Cirrus, Spectralis), collected from three publicly available datasets. The first dataset, consisting of OCT volumes obtained from 112 subjects (a total of 11,334 B-scans), is used for both training and evaluation purposes. The remaining two datasets are used only for evaluation, to check the trained RFS-Net's generalizability on unseen OCT scans. These two evaluation datasets contain a total of 1572 OCT B-scans from 1255 subjects. The performance of the proposed RFS-Net model is assessed through various evaluation metrics. RESULTS The proposed RFS-Net model achieved mean F1 scores of 0.762, 0.796, and 0.805 for segmenting IRF, SRF, and PED, respectively. Moreover, with the automated segmentation of the three retinal manifestations, the RFS-Net brings a considerable gain in efficiency compared to the tedious and demanding manual segmentation of MRF. CONCLUSIONS Our proposed RFS-Net is a potential diagnostic tool for the automatic segmentation of MRF (IRF, SRF, and PED) lesions. It is expected to strengthen inter-observer agreement, and standardization of dosimetry is envisaged as a result.
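The atrous spatial pyramid pooling (ASPP) idea mentioned above aggregates context at several dilation rates in parallel. A minimal 2D PyTorch sketch of such a block is shown below; the channel counts and dilation rates are illustrative and do not reproduce the RFS-Net encoder.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Minimal atrous spatial pyramid pooling block (2D)."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        # Parallel 3x3 convolutions with increasing dilation (same spatial output size)
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r) for r in rates
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [torch.relu(branch(x)) for branch in self.branches]
        return torch.relu(self.project(torch.cat(feats, dim=1)))

# Example: a 64-channel feature map from an OCT B-scan encoder
x = torch.randn(1, 64, 56, 56)
print(ASPP(64, 128)(x).shape)   # torch.Size([1, 128, 56, 56])
```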
Collapse
Affiliation(s)
- Bilal Hassan
- School of Automation Science and Electrical Engineering, Beihang University (BUAA), Beijing, 100191, China.
| | - Shiyin Qin
- School of Automation Science and Electrical Engineering, Beihang University (BUAA), Beijing, 100191, China; School of Electrical Engineering and Intelligentization, Dongguan University of Technology, Dongguan, 523808, China
| | - Ramsha Ahmed
- School of Computer and Communication Engineering, University of Science and Technology Beijing (USTB), Beijing, 100083, China
| | - Taimur Hassan
- Center for Cyber-Physical Systems, Khalifa University of Science and Technology, Abu Dhabi, 127788, United Arab Emirates
| | - Abdel Hakeem Taguri
- Abu Dhabi Healthcare Company (SEHA), Abu Dhabi, 127788, United Arab Emirates
| | - Shahrukh Hashmi
- Abu Dhabi Healthcare Company (SEHA), Abu Dhabi, 127788, United Arab Emirates
| | - Naoufel Werghi
- Center for Cyber-Physical Systems, Khalifa University of Science and Technology, Abu Dhabi, 127788, United Arab Emirates
| |
Collapse
|
45
|
Roll W, Schindler P, Masthoff M, Seifert R, Schlack K, Bögemann M, Stegger L, Weckesser M, Rahbar K. Evaluation of 68Ga-PSMA-11 PET-MRI in Patients with Advanced Prostate Cancer Receiving 177Lu-PSMA-617 Therapy: A Radiomics Analysis. Cancers (Basel) 2021; 13:cancers13153849. [PMID: 34359750 PMCID: PMC8345703 DOI: 10.3390/cancers13153849] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2021] [Accepted: 07/27/2021] [Indexed: 12/14/2022] Open
Abstract
177Lutetium PSMA-617 (Lu-PSMA) therapy in patients with metastatic castration-resistant prostate cancer (mCRPC) has gained visibility through the ongoing phase III trial. Data on the prediction of therapy outcome and survival from pretherapeutic imaging parameters are still sparse. In this study, the predictive and prognostic value of radiomic features from 68Ga-PSMA-11 PET-MRI is analyzed. In total, 21 patients with mCRPC underwent 68Ga-PSMA-11 PET-MRI before Lu-PSMA therapy. The PET-positive tumor volume was defined and transferred to whole-body T2-, T1- and contrast-enhanced T1-weighted MRI sequences. Radiomic features from the PET and MRI sequences were extracted using a freely available software package. To select features that allow differentiation of biochemical response (PSA decrease > 50%), a stepwise dimension reduction was performed. Logistic regression models were fitted, and selected features were tested for their prognostic value (overall survival) in all patients. Eight patients achieved biochemical response after Lu-PSMA therapy. Ten independent radiomic features differentiated well between responders and non-responders. The logistic regression model including the feature interquartile range from T2-weighted images revealed the highest accuracy (AUC = 0.83) for the prediction of biochemical response after Lu-PSMA therapy. Within the final model, patients with a biochemical response (p = 0.003) and higher T2 interquartile range values in pre-therapeutic imaging (p = 0.038) survived significantly longer. This proof-of-concept study provides first evidence of the potential predictive and prognostic value of radiomic analysis of pretherapeutic 68Ga-PSMA-11 PET-MRI before Lu-PSMA therapy.
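The interquartile-range feature retained in the final model is a simple first-order statistic. Assuming a co-registered T2-weighted volume and a PET-derived tumor mask as NumPy arrays, it could be computed roughly as follows (a sketch, not the software package used in the study):

```python
import numpy as np

def iqr_in_pet_volume(t2_volume, pet_mask):
    """Interquartile range of T2-weighted intensities within the PET-positive tumor volume."""
    vals = t2_volume[pet_mask > 0]
    q25, q75 = np.percentile(vals, [25, 75])
    return q75 - q25
```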
Collapse
Affiliation(s)
- Wolfgang Roll
- Department of Nuclear Medicine, University Hospital Muenster, 48149 Muenster, Germany; (R.S.); (L.S.); (M.W.); (K.R.)
- Correspondence: ; Tel.: +49-251-8347362; Fax: +49-251-8347363
| | - Philipp Schindler
- Department of Radiology, University Hospital Muenster, 48149 Muenster, Germany; (P.S.); (M.M.)
| | - Max Masthoff
- Department of Radiology, University Hospital Muenster, 48149 Muenster, Germany; (P.S.); (M.M.)
| | - Robert Seifert
- Department of Nuclear Medicine, University Hospital Muenster, 48149 Muenster, Germany; (R.S.); (L.S.); (M.W.); (K.R.)
- Department of Nuclear Medicine, University Hospital Essen, 45147 Essen, Germany
| | - Katrin Schlack
- Department of Urology, University Hospital Muenster, 48149 Muenster, Germany; (K.S.); (M.B.)
| | - Martin Bögemann
- Department of Urology, University Hospital Muenster, 48149 Muenster, Germany; (K.S.); (M.B.)
| | - Lars Stegger
- Department of Nuclear Medicine, University Hospital Muenster, 48149 Muenster, Germany; (R.S.); (L.S.); (M.W.); (K.R.)
| | - Matthias Weckesser
- Department of Nuclear Medicine, University Hospital Muenster, 48149 Muenster, Germany; (R.S.); (L.S.); (M.W.); (K.R.)
| | - Kambiz Rahbar
- Department of Nuclear Medicine, University Hospital Muenster, 48149 Muenster, Germany; (R.S.); (L.S.); (M.W.); (K.R.)
| |
Collapse
|
46
|
Capobianco N, Sibille L, Chantadisai M, Gafita A, Langbein T, Platsch G, Solari EL, Shah V, Spottiswoode B, Eiber M, Weber WA, Navab N, Nekolla SG. Whole-body uptake classification and prostate cancer staging in 68Ga-PSMA-11 PET/CT using dual-tracer learning. Eur J Nucl Med Mol Imaging 2021; 49:517-526. [PMID: 34232350 PMCID: PMC8803695 DOI: 10.1007/s00259-021-05473-2] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Accepted: 06/17/2021] [Indexed: 01/16/2023]
Abstract
Purpose In PSMA-ligand PET/CT imaging, standardized evaluation frameworks and image-derived parameters are increasingly used to support prostate cancer staging. Clinical applicability remains challenging wherever manual measurements of numerous suspected lesions are required. Deep learning methods are promising for automated image analysis, typically requiring extensive expert-annotated image datasets to reach sufficient accuracy. We developed a deep learning method to support image-based staging, investigating the use of training information from two radiotracers. Methods In 173 subjects imaged with 68Ga-PSMA-11 PET/CT, divided into development (121) and test (52) sets, we trained and evaluated a convolutional neural network to both classify sites of elevated tracer uptake as nonsuspicious or suspicious for cancer and assign them an anatomical location. We evaluated training strategies to leverage information from a larger dataset of 18F-FDG PET/CT images and expert annotations, including transfer learning and combined training encoding the tracer type as input to the network. We assessed the agreement between the N and M stage assigned based on the network annotations and expert annotations, according to the PROMISE miTNM framework. Results In the development set, including 18F-FDG training data improved classification performance in four-fold cross validation. In the test set, compared to expert assessment, training with 18F-FDG data and the development set yielded 80.4% average precision [confidence interval (CI): 71.1–87.8] for identification of suspicious uptake sites, 77% (CI: 70.0–83.4) accuracy for anatomical location classification of suspicious findings, 81% agreement for identification of regional lymph node involvement, and 77% agreement for identification of metastatic stage. Conclusion The evaluated algorithm showed good agreement with expert assessment for identification and anatomical location classification of suspicious uptake sites in whole-body 68Ga-PSMA-11 PET/CT. With restricted PSMA-ligand data available, the use of training examples from a different radiotracer improved performance. The investigated methods are promising for enabling efficient assessment of cancer stage and tumor burden. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-021-05473-2.
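One way to train a single network on two radiotracers, as described above, is to pass the tracer type to the model as an additional input. The toy PyTorch sketch below concatenates a one-hot tracer code to pooled image features before the classification head; it illustrates the general idea only and is not the architecture used in the study.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TracerConditionedClassifier(nn.Module):
    """Toy classifier that receives the radiotracer type as an extra input."""
    def __init__(self, n_features=64, n_tracers=2, n_classes=2):
        super().__init__()
        self.n_tracers = n_tracers
        self.backbone = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU())
        self.head = nn.Linear(128 + n_tracers, n_classes)

    def forward(self, uptake_features, tracer_id):
        onehot = F.one_hot(tracer_id, num_classes=self.n_tracers).float()
        h = self.backbone(uptake_features)
        return self.head(torch.cat([h, onehot], dim=1))

# Example: 8 uptake sites; tracer codes 0 and 1 are hypothetical labels for the two tracers
model = TracerConditionedClassifier()
logits = model(torch.randn(8, 64), torch.randint(0, 2, (8,)))
print(logits.shape)   # torch.Size([8, 2])
```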
Collapse
Affiliation(s)
- Nicolò Capobianco
- Technische Universität München, Munich, Germany; Siemens Healthcare GmbH, Erlangen, Germany.
| | | | - Maythinee Chantadisai
- School of Medicine, Department of Nuclear Medicine, Technische Universität München, Munich, Germany; Faculty of Medicine, King Chulalongkorn Memorial Hospital, The Thai Red Cross Society, Chulalongkorn University, Bangkok, Thailand
| | - Andrei Gafita
- School of Medicine, Department of Nuclear Medicine, Technische Universität München, Munich, Germany
| | - Thomas Langbein
- School of Medicine, Department of Nuclear Medicine, Technische Universität München, Munich, Germany
| | | | - Esteban Lucas Solari
- School of Medicine, Department of Nuclear Medicine, Technische Universität München, Munich, Germany
| | - Vijay Shah
- Siemens Medical Solutions USA, Inc., Knoxville, TN, USA
| | | | - Matthias Eiber
- School of Medicine, Department of Nuclear Medicine, Technische Universität München, Munich, Germany
| | - Wolfgang A Weber
- School of Medicine, Department of Nuclear Medicine, Technische Universität München, Munich, Germany
| | - Nassir Navab
- Computer Aided Medical Procedures (CAMP), Technische Universität München, Munich, Germany
| | - Stephan G Nekolla
- School of Medicine, Department of Nuclear Medicine, Technische Universität München, Munich, Germany
| |
Collapse
|
47
|
Deep supervised learning using self-adaptive auxiliary loss for COVID-19 diagnosis from imbalanced CT images. Neurocomputing 2021; 458:232-245. [PMID: 34121811 PMCID: PMC8180474 DOI: 10.1016/j.neucom.2021.06.012] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2020] [Revised: 05/02/2021] [Accepted: 06/04/2021] [Indexed: 12/21/2022]
Abstract
The outbreak and rapid spread of coronavirus disease 2019 (COVID-19) has had a huge impact on the lives and safety of people around the world. Chest CT is considered an effective tool for the diagnosis and follow-up of COVID-19. For faster examination, automatic COVID-19 diagnostic techniques using deep learning on CT images have received increasing attention. However, the number and category of existing datasets for COVID-19 diagnosis that can be used for training are limited, and the number of initial COVID-19 samples is much smaller than that of normal cases, which leads to the problem of class imbalance. This makes it difficult for classification algorithms to learn discriminative boundaries, since the data of some classes are rich while those of others are scarce. Therefore, training robust deep neural networks with imbalanced data is a fundamentally challenging but important task in the diagnosis of COVID-19. In this paper, we create a challenging clinical dataset (named COVID19-Diag) with category diversity and propose a novel imbalanced data classification method using deep supervised learning with a self-adaptive auxiliary loss (DSN-SAAL) for COVID-19 diagnosis. The loss function considers both the effects of data overlap between CT slices and possible noisy labels in clinical datasets on a multi-scale, deep supervised network framework by integrating the effective number of samples and a weighting regularization term. The learning process jointly and automatically optimizes all parameters over the deep supervised network, making our model generally applicable to a wide range of datasets. Extensive experiments are conducted on COVID19-Diag and three public COVID-19 diagnosis datasets. The results show that our DSN-SAAL outperforms the state-of-the-art methods and is effective for the diagnosis of COVID-19 under varying degrees of data imbalance.
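The "effective number of samples" referenced in the loss above comes from class-balanced weighting (Cui et al., CVPR 2019). The sketch below shows that weighting component only, applied to a standard cross-entropy loss; the paper's self-adaptive auxiliary loss additionally combines it with a weighting regularization term and deep supervision, which are not reproduced here.

```python
import numpy as np
import torch
import torch.nn as nn

def class_balanced_weights(samples_per_class, beta=0.999):
    """Weights proportional to 1 / E_n, with E_n = (1 - beta**n) / (1 - beta)."""
    effective_num = (1.0 - np.power(beta, samples_per_class)) / (1.0 - beta)
    weights = 1.0 / effective_num
    weights = weights / weights.sum() * len(samples_per_class)   # normalize to mean 1
    return torch.tensor(weights, dtype=torch.float32)

# Example: 2000 normal CT slices vs. 150 COVID-19 slices (hypothetical counts)
w = class_balanced_weights(np.array([2000, 150]))
criterion = nn.CrossEntropyLoss(weight=w)
print(w)   # the minority class receives the larger weight
```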
Collapse
|
48
|
Yin L, Cao Z, Wang K, Tian J, Yang X, Zhang J. A review of the application of machine learning in molecular imaging. ANNALS OF TRANSLATIONAL MEDICINE 2021; 9:825. [PMID: 34268438 PMCID: PMC8246214 DOI: 10.21037/atm-20-5877] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Accepted: 10/02/2020] [Indexed: 12/12/2022]
Abstract
Molecular imaging (MI) uses imaging methods to reflect molecular-level changes in the living state and to study their biological behavior qualitatively and quantitatively. Optical molecular imaging (OMI) and nuclear medical imaging are two key research fields of MI. OMI captures the optical information generated by an imaging target (such as a tumor) in response to drug intervention and other causes; by collecting this optical information, researchers can track the imaging target at the molecular level. Owing to its high specificity and sensitivity, OMI has been widely used in preclinical research and clinical surgery. Nuclear medical imaging mainly detects ionizing radiation emitted by radioactive substances. It can provide molecular information for the early diagnosis, effective treatment, and basic research of diseases, and it has become one of the frontiers and hot topics in medicine worldwide. Both OMI and nuclear medical imaging require extensive data processing and analysis. In recent years, artificial intelligence, especially neural network-based machine learning (ML), has been widely used in MI because of its powerful data processing capability, providing a feasible strategy for handling the large and complex data that MI requires. In this review, we focus on the applications of ML methods in OMI and nuclear medical imaging.
Collapse
Affiliation(s)
- Lin Yin
- Key Laboratory of Molecular Imaging of Chinese Academy of Sciences, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
| | - Zhen Cao
- Peking University First Hospital, Beijing, China
| | - Kun Wang
- Key Laboratory of Molecular Imaging of Chinese Academy of Sciences, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
| | - Jie Tian
- Key Laboratory of Molecular Imaging of Chinese Academy of Sciences, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, China
| | - Xing Yang
- Peking University First Hospital, Beijing, China
| | | |
Collapse
|
49
|
Bi L, Fulham M, Li N, Liu Q, Song S, Dagan Feng D, Kim J. Recurrent feature fusion learning for multi-modality pet-ct tumor segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 203:106043. [PMID: 33744750 DOI: 10.1016/j.cmpb.2021.106043] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/02/2020] [Accepted: 03/04/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE [18F]-fluorodeoxyglucose (FDG) positron emission tomography - computed tomography (PET-CT) is now the preferred imaging modality for staging many cancers. PET images characterize tumoral glucose metabolism while CT depicts the complementary anatomical localization of the tumor. Automatic tumor segmentation is an important step in image analysis in computer aided diagnosis systems. Recently, fully convolutional networks (FCNs), with their ability to leverage annotated datasets and extract image feature representations, have become the state-of-the-art in tumor segmentation. There are limited FCN-based methods that support multi-modality images, and current methods have primarily focused on the fusion of multi-modality image features at various stages, i.e., early-fusion where the multi-modality image features are fused prior to the FCN, late-fusion with the resultant features fused, and hyper-fusion where multi-modality image features are fused across multiple image feature scales. Early- and late-fusion methods, however, have inherent, limited freedom to fuse complementary multi-modality image features. The hyper-fusion methods learn different image features across different image feature scales, which can result in inaccurate segmentations, in particular in situations where the tumors have heterogeneous textures. METHODS We propose a recurrent fusion network (RFN), which consists of multiple recurrent fusion phases to progressively fuse the complementary multi-modality image features with intermediary segmentation results derived at individual recurrent fusion phases: (1) the recurrent fusion phases iteratively learn the image features and then refine the subsequent segmentation results; and (2) the intermediary segmentation results allow our method to focus on learning the multi-modality image features around these intermediary segmentation results, which minimizes the risk of inconsistent feature learning. RESULTS We evaluated our method on two pathologically proven non-small cell lung cancer PET-CT datasets. We compared our method to the commonly used fusion methods (early-fusion, late-fusion and hyper-fusion) and the state-of-the-art PET-CT tumor segmentation methods on various network backbones (ResNet, DenseNet and 3D U-Net). Our results show that the RFN provides more accurate segmentation compared to the existing methods and is generalizable to different datasets. CONCLUSIONS We show that learning through multiple recurrent fusion phases allows the iterative re-use of multi-modality image features that refines tumor segmentation results. We also identify that our RFN produces consistent segmentation results across different network architectures.
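To make the recurrent-fusion idea concrete, the toy PyTorch sketch below feeds an intermediate segmentation map back into a fusion layer for a fixed number of refinement steps. It is a heavily simplified illustration of the mechanism described in the abstract, not the RFN architecture itself.

```python
import torch
import torch.nn as nn

class ToyRecurrentFusion(nn.Module):
    """Iteratively refine a segmentation by re-fusing PET/CT features with the previous mask."""
    def __init__(self, ch=16, steps=3):
        super().__init__()
        self.steps = steps
        self.pet_enc = nn.Conv3d(1, ch, kernel_size=3, padding=1)
        self.ct_enc = nn.Conv3d(1, ch, kernel_size=3, padding=1)
        self.fuse = nn.Conv3d(2 * ch + 1, ch, kernel_size=3, padding=1)  # +1 for previous mask
        self.head = nn.Conv3d(ch, 1, kernel_size=1)

    def forward(self, pet, ct):
        f_pet = torch.relu(self.pet_enc(pet))
        f_ct = torch.relu(self.ct_enc(ct))
        seg = torch.zeros_like(pet)                      # start from an empty mask
        for _ in range(self.steps):                      # recurrent fusion phases
            fused = torch.relu(self.fuse(torch.cat([f_pet, f_ct, seg], dim=1)))
            seg = torch.sigmoid(self.head(fused))        # intermediate segmentation
        return seg

pet = torch.randn(1, 1, 16, 32, 32)
ct = torch.randn(1, 1, 16, 32, 32)
print(ToyRecurrentFusion()(pet, ct).shape)   # torch.Size([1, 1, 16, 32, 32])
```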
Collapse
Affiliation(s)
- Lei Bi
- School of Computer Science, University of Sydney, NSW, Australia; Australian Research Council Training Centre for Innovative Bioengineering, NSW, Australia.
| | - Michael Fulham
- School of Computer Science, University of Sydney, NSW, Australia; Australian Research Council Training Centre for Innovative Bioengineering, NSW, Australia; Department of Molecular Imaging, Royal Prince Alfred Hospital, NSW, Australia
| | - Nan Li
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Fudan University, Shanghai, China
| | - Qiufang Liu
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Fudan University, Shanghai, China
| | - Shaoli Song
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Fudan University, Shanghai, China
| | - David Dagan Feng
- School of Computer Science, University of Sydney, NSW, Australia; Australian Research Council Training Centre for Innovative Bioengineering, NSW, Australia; Med-X Research Institute, Shanghai Jiao Tong University, Shanghai, China
| | - Jinman Kim
- School of Computer Science, University of Sydney, NSW, Australia; Australian Research Council Training Centre for Innovative Bioengineering, NSW, Australia.
| |
Collapse
|
50
|
Where to next prostate-specific membrane antigen PET imaging frontiers? Curr Opin Urol 2020; 30:672-678. [PMID: 32701718 DOI: 10.1097/mou.0000000000000797] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
PURPOSE OF REVIEW Technical improvements in imaging equipment and the availability of radiotracers such as PSMA ligands have increased the synergy between urology and nuclear medicine. Meanwhile, artificial intelligence is being introduced into nuclear imaging. This review gives an overview of recent technical and clinical developments and an outlook on their application in the near future. RECENT FINDINGS Digital PET/CT has shown gradual improvement in lesion detection and demarcation over conventional PET/CT, but total-body PET/CT holds promise for a major improvement in scan duration and quality, quantification, and dose optimization. PET-guided decision-making with PSMA ligands has been shown to be useful for demonstrating and biopsying primary prostate cancer (PCa) lesions, guiding radiotherapy, guiding surgical resection of recurrent PCa, and assessing therapy response in PCa. Artificial intelligence made its way into nuclear imaging only recently, but encouraging progress promises clinical application with unprecedented possibilities. SUMMARY Evidence is growing on the clinical usefulness of PET-guided decision-making, with the still relatively new PSMA ligands as a prime example. The rapid evolution of PET instrumentation and the clinical introduction of artificial intelligence will be the game-changers of nuclear imaging in the near future, though their powers must still be mastered and incorporated into clinical practice.
Collapse
|