1
Dong X, Chen G, Zhu Y, Ma B, Ban X, Wu N, Ming Y. Artificial intelligence in skeletal metastasis imaging. Comput Struct Biotechnol J 2024; 23:157-164. [PMID: 38144945] [PMCID: PMC10749216] [DOI: 10.1016/j.csbj.2023.11.007]
Abstract
In the field of metastatic skeletal oncology imaging, the role of artificial intelligence (AI) is becoming more prominent. Bone metastasis typically indicates the terminal stage of various malignant neoplasms. Once identified, it necessitates a comprehensive revision of the initial treatment regimen, and palliative care is often the only resort. Given the gravity of the condition, the diagnosis of bone metastasis should be approached with utmost caution. AI techniques are being evaluated for their efficacy in a range of medical imaging tasks, including object detection, disease classification, region segmentation, and prognosis prediction. These methods offer a standardized solution to the frequently subjective challenge of image interpretation, and such standardization is especially desirable in bone metastasis imaging. This review describes the basic imaging modalities of bone metastasis imaging, along with recent developments and current applications of AI in the respective imaging studies. These concrete examples emphasize the importance of using computer-aided systems in the clinical setting. The review culminates with an examination of the current limitations and prospects of AI in the realm of bone metastasis imaging. To establish the credibility of AI in this domain, further research efforts are required to enhance reproducibility and attain a robust level of empirical support.
Affiliation(s)
- Xiying Dong
- Department of Orthopedic Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China
- Key Laboratory of Big Data for Spinal Deformities, Chinese Academy of Medical Sciences, Beijing 100730, China
- Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
- Department of Urology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, 100021 Beijing, China
- Guilin Chen
- Department of Orthopedic Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China
- Key Laboratory of Big Data for Spinal Deformities, Chinese Academy of Medical Sciences, Beijing 100730, China
- Graduate School of Peking Union Medical College, Beijing 100730, China
- Yuanpeng Zhu
- Department of Orthopedic Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China
- Key Laboratory of Big Data for Spinal Deformities, Chinese Academy of Medical Sciences, Beijing 100730, China
- Graduate School of Peking Union Medical College, Beijing 100730, China
- Boyuan Ma
- School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China
- Xiaojuan Ban
- School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China
- Nan Wu
- Department of Orthopedic Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China
- Key Laboratory of Big Data for Spinal Deformities, Chinese Academy of Medical Sciences, Beijing 100730, China
- Beijing Key Laboratory for Genetic Research of Skeletal Deformity, Beijing 100730, China
- Yue Ming
- Department of Nuclear Medicine (PET-CT Center), National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
2
Schott B, Pinchuk D, Santoro-Fernandes V, Klaneček Ž, Rivetti L, Deatsch A, Perlman S, Li Y, Jeraj R. Uncertainty quantification via localized gradients for deep learning-based medical image assessments. Phys Med Biol 2024; 69:155015. [PMID: 38981594] [DOI: 10.1088/1361-6560/ad611d]
Abstract
Objective. Deep learning models that aid in medical image assessment tasks must be both accurate and reliable to be deployed within clinical settings. While deep learning models have been shown to be highly accurate across a variety of tasks, measures that indicate the reliability of these models are less established. Increasingly, uncertainty quantification (UQ) methods are being introduced to inform users on the reliability of model outputs. However, most existing methods cannot be retrofitted to previously validated models because they are not post hoc, and they change a model's output. In this work, we overcome these limitations by introducing a novel post hoc UQ method, termed Local Gradients UQ, and demonstrate its utility for deep learning-based metastatic disease delineation. Approach. This method leverages a trained model's localized gradient space to assess sensitivities to trained model parameters. We compared the Local Gradients UQ method to non-gradient measures defined using model probability outputs. The performance of each uncertainty measure was assessed in four clinically relevant experiments: (1) response to artificially degraded image quality, (2) comparison between matched high- and low-quality clinical images, (3) false positive (FP) filtering, and (4) correspondence with physician-rated disease likelihood. Main results. (1) Response to artificially degraded image quality was enhanced by the Local Gradients UQ method, where the median percent difference between matching lesions in non-degraded and most degraded images was consistently higher for the Local Gradients uncertainty measure than for the non-gradient uncertainty measures (e.g. 62.35% vs. 2.16% for additive Gaussian noise). (2) The Local Gradients UQ measure responded better to high- and low-quality clinical images (p < 0.05 vs. p > 0.1 for both non-gradient uncertainty measures).
(3) FP filtering performance was enhanced by the Local Gradients UQ method compared with the non-gradient methods, increasing the area under the receiver operating characteristic curve (ROC AUC) by 20.1% and decreasing the false positive rate by 26%. (4) The Local Gradients UQ method also corresponded more closely with physician-rated likelihood for malignant lesions, increasing the corresponding ROC AUC by 16.2%. Significance. In summary, this work introduces and validates a novel gradient-based UQ method for deep learning-based medical image assessments to enhance user trust in deployed clinical models.
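Experiments (3) and (4) both reduce to computing a ROC AUC over a continuous score; for FP filtering, the question is how well an uncertainty score separates false-positive detections from true lesions. A minimal, dependency-light sketch via the Mann-Whitney formulation (function name and inputs are illustrative, not taken from the paper):

```python
import numpy as np

def auc_from_scores(scores_fp, scores_tp):
    """ROC AUC via the Mann-Whitney statistic: the probability that a
    randomly drawn false-positive detection receives a higher uncertainty
    score than a randomly drawn true-positive one (ties count 1/2)."""
    fp = np.asarray(scores_fp, dtype=float)
    tp = np.asarray(scores_tp, dtype=float)
    wins = (fp[:, None] > tp[None, :]).sum() + 0.5 * (fp[:, None] == tp[None, :]).sum()
    return float(wins / (fp.size * tp.size))
```

An AUC of 1.0 would mean the uncertainty measure perfectly separates false positives from true detections, so thresholding on it filters FPs without discarding real lesions.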
Affiliation(s)
- Brayden Schott
- Department of Medical Physics, School of Medicine and Public Health, University of Wisconsin, Madison, WI, United States of America
- Dmitry Pinchuk
- Department of Medical Physics, School of Medicine and Public Health, University of Wisconsin, Madison, WI, United States of America
- Victor Santoro-Fernandes
- Department of Medical Physics, School of Medicine and Public Health, University of Wisconsin, Madison, WI, United States of America
- Žan Klaneček
- Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia
- Luciano Rivetti
- Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia
- Alison Deatsch
- Department of Medical Physics, School of Medicine and Public Health, University of Wisconsin, Madison, WI, United States of America
- Scott Perlman
- Department of Radiology, Section of Nuclear Medicine, School of Medicine and Public Health, University of Wisconsin, Madison, WI, United States of America
- Yixuan Li
- Department of Computer Sciences, School of Computer, Data, & Information Sciences, University of Wisconsin, Madison, WI, United States of America
- Robert Jeraj
- Department of Medical Physics, School of Medicine and Public Health, University of Wisconsin, Madison, WI, United States of America
- Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia
3
Jafari E, Zarei A, Dadgar H, Keshavarz A, Manafi-Farid R, Rostami H, Assadi M. A convolutional neural network-based system for fully automatic segmentation of whole-body [68Ga]Ga-PSMA PET images in prostate cancer. Eur J Nucl Med Mol Imaging 2024; 51:1476-1487. [PMID: 38095671] [DOI: 10.1007/s00259-023-06555-z]
Abstract
PURPOSE The aim of this study was the development and evaluation of a fully automated tool for the detection and segmentation of mPCa lesions in whole-body [68Ga]Ga-PSMA-11 PET scans using the nnU-Net framework. METHODS In this multicenter study, a cohort of 412 patients from three different centers, all with an indication of PCa, who underwent [68Ga]Ga-PSMA-11 PET/CT was enrolled. Two hundred cases from the center 1 dataset were used for training the model. A fully 3D convolutional neural network (CNN) based on the self-configuring nnU-Net framework is proposed. A subset of the center 1 dataset and the cases from centers 2 and 3 were used for testing the model. The performance of the developed segmentation pipeline was evaluated by comparing the fully automatic segmentation masks with the manual segmentations of the corresponding internal and external test sets at three levels: patient-level scan classification, lesion-level detection, and voxel-level segmentation. In addition, to compare PET-derived quantitative biomarkers between automated and manual segmentation, whole-body PSMA tumor volume (PSMA-TV) and total lesion PSMA uptake (TL-PSMA) were calculated. RESULTS In terms of patient-level classification, the model achieved an accuracy of 83%, sensitivity of 92%, PPV of 77%, and NPV of 91% for the internal testing set. For lesion-level detection, the model achieved an accuracy of 87-94%, sensitivity of 88-95%, PPV of 98-100%, and F1-score of 93-97% across all testing sets. For voxel-level segmentation, the automated method achieved average values of 65-70% for DSC, 72-79% for PPV, 53-58% for IoU, and 62-73% for sensitivity across all testing sets. In the evaluation of volumetric parameters, there was a strong correlation between the manual and automated measurements of PSMA-TV and TL-PSMA for all centers.
CONCLUSIONS The deep learning networks presented here offer promising solutions for automatically segmenting malignant lesions in prostate cancer patients using [68Ga]Ga-PSMA PET. These networks achieve a high level of accuracy in whole-body segmentation, as measured by the DSC and PPV at the voxel level. The resulting segmentations can be used to extract PET-derived quantitative biomarkers for treatment response assessment and radiomic studies.
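The volumetric biomarkers compared above reduce to simple mask arithmetic: PSMA-TV is the segmented volume, and TL-PSMA is commonly defined, by analogy with total lesion glycolysis, as PSMA-TV multiplied by the mean SUV over the mask. A minimal numpy sketch under that assumed definition (the function name is illustrative):

```python
import numpy as np

def psma_biomarkers(suv, mask, voxel_vol_ml):
    """Return (PSMA-TV in mL, TL-PSMA) from an SUV map and a binary
    lesion mask of the same shape; TL-PSMA = PSMA-TV * SUVmean(mask)."""
    mask = np.asarray(mask, dtype=bool)
    n_vox = int(mask.sum())
    psma_tv = n_vox * voxel_vol_ml                                   # segmented volume
    suv_mean = float(np.asarray(suv, dtype=float)[mask].mean()) if n_vox else 0.0
    return psma_tv, psma_tv * suv_mean
```

Computed over an automatic and a manual mask of the same scan, these two numbers are what the study correlates between the pipelines.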
Affiliation(s)
- Esmail Jafari
- The Persian Gulf Nuclear Medicine Research Center, Department of Nuclear Medicine, Molecular Imaging, and Theranostics, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Amin Zarei
- IoT and Signal Processing Research Group, ICT Research Institute, Faculty of Intelligent Systems Engineering and Data Science, Persian Gulf University, Bushehr, Iran
- Habibollah Dadgar
- Cancer Research Center, RAZAVI Hospital, Imam Reza International University, Mashhad, Iran
- Ahmad Keshavarz
- IoT and Signal Processing Research Group, ICT Research Institute, Faculty of Intelligent Systems Engineering and Data Science, Persian Gulf University, Bushehr, Iran
- Reyhaneh Manafi-Farid
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Habib Rostami
- Computer Engineering Department, Faculty of Intelligent Systems Engineering and Data Science, Persian Gulf University, Bushehr, Iran
- Majid Assadi
- The Persian Gulf Nuclear Medicine Research Center, Department of Nuclear Medicine, Molecular Imaging, and Theranostics, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran.
4
Yang E, Shankar K, Kumar S, Seo C, Moon I. Equilibrium Optimization Algorithm with Deep Learning Enabled Prostate Cancer Detection on MRI Images. Biomedicines 2023; 11:3200. [PMID: 38137421] [PMCID: PMC10740673] [DOI: 10.3390/biomedicines11123200]
Abstract
Enlargement of the prostate gland in the male reproductive system may be a sign of prostate cancer (PrC). The survival rate is considerably improved with earlier diagnosis of cancer; thus, timely intervention should be administered. In this study, a new automatic approach combining several deep learning (DL) techniques was introduced to detect PrC from MRI and ultrasound (US) images. Furthermore, the presented method describes why a certain decision was made given the input MRI or US images. Several custom-developed layers were added to a pretrained model and applied to the dataset. The study presents an Equilibrium Optimization Algorithm with Deep Learning-based Prostate Cancer Detection and Classification (EOADL-PCDC) technique on MRIs. The main goal of the EOADL-PCDC method lies in the detection and classification of PrC. To achieve this, the EOADL-PCDC technique applies image preprocessing to improve image quality. In addition, the EOADL-PCDC technique uses the CapsNet (capsule network) model for feature extraction. The EOA is used for hyperparameter tuning to increase the efficiency of CapsNet. The EOADL-PCDC algorithm makes use of the stacked bidirectional long short-term memory (SBiLSTM) model for prostate cancer classification. A comprehensive set of simulations of the EOADL-PCDC algorithm was run on a benchmark MRI dataset. The experimental outcome revealed the superior performance of the EOADL-PCDC approach over existing methods in terms of different metrics.
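As rough intuition for the hyperparameter-tuning step, EO-family metaheuristics maintain a population of candidate vectors that are repeatedly pulled toward the best ("equilibrium") candidates with a randomized exploration term. The following is a deliberately simplified, illustrative sketch of that idea on a toy objective; it is not the published EOADL-PCDC implementation, and all names and constants are assumptions:

```python
import numpy as np

def equilibrium_search(objective, bounds, pop=20, iters=50, seed=0):
    """Simplified equilibrium-style search: contract candidates toward the
    best solution found so far, plus Gaussian exploration noise."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(pop, lo.size))
    best = min(x, key=objective).copy()
    for _ in range(iters):
        f = rng.uniform(0.0, 1.0, size=x.shape)               # contraction factor
        x = best + (x - best) * f + rng.normal(0.0, 0.05, x.shape) * (hi - lo)
        x = np.clip(x, lo, hi)
        cand = min(x, key=objective)
        if objective(cand) < objective(best):                 # best only improves
            best = cand.copy()
    return best
```

In the paper's setting, `objective` would be a validation metric of CapsNet as a function of its hyperparameters; here a toy quadratic suffices to exercise the search.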
Affiliation(s)
- Eunmok Yang
- Department of Financial Information Security, Kookmin University, Seoul 02707, Republic of Korea;
- K. Shankar
- Department of Computer Science and Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai 602105, India;
- Big Data and Machine Learning Lab, South Ural State University, Chelyabinsk 454080, Russia
- Sachin Kumar
- College of IBS, National University of Science and Technology, MISiS, Moscow 119049, Russia;
- Changho Seo
- Department of Convergence Science, Kongju National University, Gongju-si 32588, Republic of Korea
- Inkyu Moon
- Department of Robotics & Mechatronics Engineering, Daegu Gyeongbuk Institute of Science & Technology (DGIST), Daegu 42988, Republic of Korea
5
Çevik HB, Ruggieri P, Giannoudis PV. Management of metastatic bone disease of the pelvis: current concepts. Eur J Trauma Emerg Surg 2023. [PMID: 37934294] [DOI: 10.1007/s00068-023-02382-x]
Abstract
PURPOSE Metastatic disease of the pelvis is frequently associated with severe pain and impaired ambulatory function. Depending on the patient's characteristics, the primary tumor, and the metastatic pelvic disease, the choice of treatment may vary. This study aims to report on the current management options for metastatic pelvic disease. METHODS We comprehensively searched multiple databases and evaluated the essential studies on current concepts of managing metastatic bone disease of the pelvis, focusing on specific indications as well as on the results of treatment. RESULTS Pelvic metastases outside the periacetabular region can be managed with modification of weight-bearing, analgesics, bisphosphonates, chemotherapy and/or radiotherapy. Minimally invasive approaches include radiofrequency ablation, cryoablation, embolization, percutaneous osteoplasty, and percutaneous screw placement. Pathological or impending periacetabular fracture, excessive periacetabular bone defect, radioresistant tumor, and persistent debilitating pain despite non-surgical treatment and/or minimally invasive procedures can be managed with different surgical techniques. Overall, treatment can be divided into nonoperative, minimally invasive, and operative, based on specific indications, the patient's expectations, and the characteristics of the lesion. CONCLUSION Different treatment modalities exist to manage metastatic pelvic bone disease. Decision-making for the most appropriate treatment should be multidisciplinary and made on a case-by-case basis.
Affiliation(s)
- Hüseyin Bilgehan Çevik
- Orthopaedics and Traumatology, Ankara Etlik City Hospital, University of Health Sciences, Ankara, Turkey.
- Pietro Ruggieri
- Orthopaedics and Orthopaedic Oncology, Department of Surgery, Oncology and Gastroenterology DiSCOG, University of Padova, Padua, Italy
- Peter V Giannoudis
- Academic Department of Trauma and Orthopaedics, School of Medicine, University of Leeds, Leeds, UK
6
Gong AJ, Ruchalski K, Kim HJ, Douek M, Gutierrez A, Patel M, Sai V, Coy H, Villegas B, Raman S, Goldin J. RECIST 1.1 Target Lesion Categorical Response in Metastatic Renal Cell Carcinoma: A Comparison of Conventional versus Volumetric Assessment. Radiol Imaging Cancer 2023; 5:e220166. [PMID: 37656041] [PMCID: PMC10546365] [DOI: 10.1148/rycan.220166]
Abstract
Purpose To investigate Response Evaluation Criteria in Solid Tumors version 1.1 (RECIST 1.1) approximations of target lesion tumor burden by comparing categorical treatment response according to conventional RECIST versus actual tumor volume measurements of RECIST target lesions. Materials and Methods This is a retrospective cohort study of individuals with metastatic renal cell carcinoma enrolled in a clinical trial (from 2003 to 2017) and includes individuals who underwent baseline and at least one follow-up chest, abdominal, and pelvic CT study and with at least one target lesion. Target lesion volume was assessed by (a) Vmodel, a spherical model of conventional RECIST 1.1, which was extrapolated from RECIST diameter, and (b) Vactual, manually contoured volume. Volumetric responses were determined by the sum of target lesion volumes (Vmodel-sum TL and Vactual-sum TL, respectively). Categorical volumetric thresholds were extrapolated from RECIST. McNemar tests were used to compare categorical volume responses. Results Target lesions were assessed at baseline (638 participants), week 9 (593 participants), and week 17 (508 participants). Vmodel-sum TL classified more participants as having progressive disease (PD), compared with Vactual-sum TL at week 9 (52 vs 31 participants) and week 17 (57 vs 39 participants), with significant overall response discordance (P < .001). At week 9, 25 (48%) of 52 participants labeled with PD by Vmodel-sum TL were classified as having stable disease by Vactual-sum TL. Conclusion A model of RECIST 1.1 based on a single diameter measurement more frequently classified PD compared with response assessment by actual measured tumor volume. Keywords: Urinary, Kidney, Metastases, Oncology, Tumor Response, Volume Analysis, Outcomes Analysis. ClinicalTrials.gov registration no. NCT01865747. © RSNA, 2023. Supplemental material is available for this article.
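The Vmodel approximation above treats each target lesion as a sphere, so volume is extrapolated from the RECIST long-axis diameter as V = π·d³/6, and the RECIST diameter thresholds for progression (+20%) and partial response (−30%) cube to volume thresholds of roughly +72.8% and −65.7%. A minimal sketch of that extrapolation (deliberately simplified: the complete-response category and the 5 mm absolute-growth rule are omitted):

```python
import math

def v_model(diameter_mm):
    """Spherical volume (mm^3) extrapolated from a RECIST lesion diameter."""
    return math.pi / 6.0 * diameter_mm ** 3

def categorize(baseline_sum, followup_sum):
    """Categorical response from sums of modeled target-lesion volumes,
    using the cubed RECIST diameter thresholds
    (+20% diameter -> +72.8% volume, -30% diameter -> -65.7% volume)."""
    change = (followup_sum - baseline_sum) / baseline_sum
    if change >= 1.2 ** 3 - 1:
        return "PD"   # progressive disease
    if change <= 0.7 ** 3 - 1:
        return "PR"   # partial response
    return "SD"       # stable disease
```

Because volume scales with the cube of the diameter, small diameter measurement differences are amplified threefold in relative volume terms, which is consistent with the spherical model over-calling PD relative to contoured volumes.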
Affiliation(s)
- Amanda J. Gong, Kathleen Ruchalski, Hyun J. Kim, Michael Douek, Antonio Gutierrez, Maitraya Patel, Victor Sai, Heidi Coy, Bianca Villegas, Steven Raman, Jonathan Goldin
- From the David Geffen School of Medicine, University of California, Los Angeles, Calif (A.J.G., K.R., H.J.K., M.D., A.G., M.P., V.S., H.C., S.R., J.G.); Department of Radiological Sciences, UCLA, Los Angeles, Calif (K.R., H.J.K., M.D., A.G., M.P., V.S., S.R., J.G.); and UCLA Center for Computer Vision and Imaging Biomarkers, 924 Westwood Blvd, Ste 615, Los Angeles, CA 90024 (A.J.G., H.J.K., H.C., B.V., J.G.)
7
Rich JM, Bhardwaj LN, Shah A, Gangal K, Rapaka MS, Oberai AA, Fields BKK, Matcuk GR, Duddalwar VA. Deep learning image segmentation approaches for malignant bone lesions: a systematic review and meta-analysis. Front Radiol 2023; 3:1241651. [PMID: 37614529] [PMCID: PMC10442705] [DOI: 10.3389/fradi.2023.1241651]
Abstract
Introduction Image segmentation is an important process for quantifying characteristics of malignant bone lesions, but this task is challenging and laborious for radiologists. Deep learning has shown promise in automating image segmentation in radiology, including for malignant bone lesions. The purpose of this review is to investigate deep learning-based image segmentation methods for malignant bone lesions on Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron-Emission Tomography/CT (PET/CT). Method The literature search for deep learning-based image segmentation of malignant bony lesions on CT and MRI was conducted in the PubMed, Embase, Web of Science, and Scopus electronic databases following the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). A total of 41 original articles published between February 2017 and March 2023 were included in the review. Results The majority of papers studied MRI, followed by CT, PET/CT, and PET/MRI. There was a relatively even distribution of papers studying primary vs. secondary malignancies, as well as utilizing 3-dimensional vs. 2-dimensional data. Many papers utilized custom-built models as a modification or variation of U-Net. The most common metric for evaluation was the dice similarity coefficient (DSC). Most models achieved a DSC above 0.6, with medians for all imaging modalities between 0.85 and 0.9. Discussion Deep learning methods show promising ability to segment malignant osseous lesions on CT, MRI, and PET/CT. Some strategies which are commonly applied to help improve performance include data augmentation, utilization of large public datasets, preprocessing including denoising and cropping, and U-Net architecture modification. Future directions include overcoming dataset and annotation homogeneity and generalizing for clinical applicability.
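The dice similarity coefficient, the review's most common evaluation metric, is straightforward to compute from binary masks. A minimal numpy sketch (function names are illustrative), also showing IoU, which relates to DSC by DSC = 2·IoU/(1 + IoU):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = np.asarray(pred, dtype=bool), np.asarray(gt, dtype=bool)
    denom = int(pred.sum()) + int(gt.sum())
    return 2.0 * int(np.logical_and(pred, gt).sum()) / denom if denom else 1.0

def iou(pred, gt):
    """Intersection over union (Jaccard index) between two binary masks."""
    pred, gt = np.asarray(pred, dtype=bool), np.asarray(gt, dtype=bool)
    union = int(np.logical_or(pred, gt).sum())
    return int(np.logical_and(pred, gt).sum()) / union if union else 1.0
```

Both are computed voxel-wise against the manual reference segmentation; DSC weights the intersection twice, which is why reported DSC values run higher than IoU on the same masks.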
Affiliation(s)
- Joseph M. Rich
- Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Lokesh N. Bhardwaj
- Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Aman Shah
- Department of Applied Biostatistics and Epidemiology, University of Southern California, Los Angeles, CA, United States
- Krish Gangal
- Bridge UnderGrad Science Summer Research Program, Irvington High School, Fremont, CA, United States
- Mohitha S. Rapaka
- Department of Biology, University of Texas at Austin, Austin, TX, United States
- Assad A. Oberai
- Department of Aerospace and Mechanical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- Brandon K. K. Fields
- Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- George R. Matcuk
- Department of Radiology, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Vinay A. Duddalwar
- Department of Radiology, Keck School of Medicine of the University of Southern California, Los Angeles, CA, United States
- Department of Radiology, USC Radiomics Laboratory, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
8
Morris JM, Wentworth A, Houdek MT, Karim SM, Clarke MJ, Daniels DJ, Rose PS. The Role of 3D Printing in Treatment Planning of Spine and Sacral Tumors. Neuroimaging Clin N Am 2023; 33:507-529. [PMID: 37356866] [DOI: 10.1016/j.nic.2023.05.001]
Abstract
Three-dimensional (3D) printing technology has proven to have many advantages in spine and sacrum surgery. 3D printing allows the manufacturing of life-size patient-specific anatomic and pathologic models to improve preoperative understanding of patient anatomy and pathology. Additionally, virtual surgical planning using medical computer-aided design software has enabled surgeons to create patient-specific surgical plans and simulate procedures in a virtual environment. This has resulted in reduced operative times, decreased complications, and improved patient outcomes. Combined with new surgical techniques, 3D-printed custom medical devices and instruments using titanium and biocompatible resins and polyamides have allowed innovative reconstructions.
Affiliation(s)
- Jonathan M Morris
- Division of Neuroradiology, Department of Radiology, Anatomic Modeling Unit, Biomedical and Scientific Visualization, Mayo Clinic, 200 1st Street, Southwest, Rochester, MN, 55905, USA.
- Adam Wentworth
- Department of Radiology, Anatomic Modeling Unit, Mayo Clinic, Rochester, MN, USA
- Matthew T Houdek
- Division of Orthopedic Oncology, Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- S Mohammed Karim
- Division of Orthopedic Oncology, Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- Peter S Rose
- Division of Orthopedic Oncology, Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
9
He M, Cao Y, Chi C, Yang X, Ramin R, Wang S, Yang G, Mukhtorov O, Zhang L, Kazantsev A, Enikeev M, Hu K. Research progress on deep learning in magnetic resonance imaging-based diagnosis and treatment of prostate cancer: a review on the current status and perspectives. Front Oncol 2023; 13:1189370. [PMID: 37546423] [PMCID: PMC10400334] [DOI: 10.3389/fonc.2023.1189370]
Abstract
Multiparametric magnetic resonance imaging (mpMRI) has emerged as a first-line screening and diagnostic tool for prostate cancer, aiding in treatment selection and noninvasive radiotherapy guidance. However, manual interpretation of MRI data is challenging and time-consuming, which may impact sensitivity and specificity. With recent technological advances, artificial intelligence (AI) in the form of computer-aided diagnosis (CAD) based on MRI data has been applied to prostate cancer diagnosis and treatment. Among AI techniques, deep learning involving convolutional neural networks contributes to the detection, segmentation, scoring, grading, and prognostic evaluation of prostate cancer. CAD systems operate automatically, process data rapidly, and achieve high accuracy by incorporating multiple sequences of multiparametric MRI data of the prostate gland into the deep learning model. They have therefore become a research direction of great interest, especially in smart healthcare. This review highlights the current progress of deep learning technology in MRI-based diagnosis and treatment of prostate cancer. The key elements of deep learning-based MRI image processing in CAD systems and radiotherapy of prostate cancer are briefly described, making them understandable not only for radiologists but also for general physicians without specialized imaging interpretation training. Deep learning technology enables lesion identification, detection, and segmentation; grading and scoring of prostate cancer; and prediction of postoperative recurrence and prognostic outcomes. Its diagnostic accuracy can be improved by optimizing models and algorithms, expanding medical database resources, and combining multi-omics data with comprehensive analysis of various morphological data. Deep learning has the potential to become the key diagnostic method in prostate cancer diagnosis and treatment in the future.
Affiliation(s)
- Mingze He
- Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Yu Cao
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Changliang Chi
- Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
- Xinyi Yang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Rzayev Ramin
- Department of Radiology, The Second University Clinic, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Shuowen Wang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Guodong Yang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Otabek Mukhtorov
- Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I., Avenue Mira, Kostroma, Russia
- Liqun Zhang
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, Liaoning, China
- Anton Kazantsev
- Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I., Avenue Mira, Kostroma, Russia
- Mikhail Enikeev
- Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Kebang Hu
- Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
|
10
|
Liu X, Zhu Z, Wang K, Zhang Y, Li J, Wang X, Zhang X, Wang X. Semiautomated pelvic lymph node treatment response evaluation for patients with advanced prostate cancer: based on MET-RADS-P guidelines. Cancer Imaging 2023; 23:7. [PMID: 36650584 PMCID: PMC9847043 DOI: 10.1186/s40644-023-00523-4] [Received: 10/04/2022] [Accepted: 01/05/2023] [Indexed: 01/19/2023] Open
Abstract
BACKGROUND The evaluation of treatment response according to the METastasis Reporting and Data System for Prostate Cancer (MET-RADS-P) criteria is an important but time-consuming task for patients with advanced prostate cancer (APC). A deep learning-based algorithm has the potential to assist with this assessment. OBJECTIVE To develop and evaluate a deep learning-based algorithm for semiautomated treatment response assessment of pelvic lymph nodes. METHODS A total of 162 patients who had undergone at least two scans for follow-up assessment after APC metastasis treatment were enrolled. A previously reported deep learning model was used to perform automated segmentation of pelvic lymph nodes. The performance of the deep learning algorithm was evaluated using the Dice similarity coefficient (DSC) and volumetric similarity (VS). The consistency of the short-diameter measurement with the radiologist was evaluated using Bland-Altman plotting. Based on the segmentation of lymph nodes, the treatment response was assessed automatically with a rule-based program according to the MET-RADS-P criteria. Kappa statistics were used to assess the accuracy and consistency of the treatment response assessment by the deep learning model and two radiologists [attending radiologist (R1) and fellow radiologist (R2)]. RESULTS The mean DSC and VS of the pelvic lymph node segmentation were 0.82 ± 0.09 and 0.88 ± 0.12, respectively. Bland-Altman plotting showed that most of the lymph node measurements were within the upper and lower limits of agreement (LOA). The accuracies of automated segmentation-based assessment were 0.92 (95% CI: 0.85-0.96), 0.91 (95% CI: 0.86-0.95) and 0.75 (95% CI: 0.46-0.92) for target lesions, nontarget lesions and nonpathological lesions, respectively. The consistency of treatment response assessment based on automated segmentation and manual segmentation was excellent for target lesions [K value: 0.92 (0.86-0.98)], good for nontarget lesions [0.82 (0.74-0.90)] and moderate for nonpathological lesions [0.71 (0.50-0.92)]. CONCLUSION The deep learning-based semiautomated algorithm showed high accuracy for the treatment response assessment of pelvic lymph nodes and demonstrated performance comparable with that of radiologists.
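The segmentation metrics reported in this abstract (DSC and VS) are simple functions of voxel counts. The sketch below is a generic stdlib-only illustration on flattened binary masks, not the authors' implementation; the function name is hypothetical:

```python
def overlap_metrics(pred, truth):
    """Dice similarity coefficient (DSC) and volumetric similarity (VS)
    for two binary masks given as equal-length flat sequences of 0/1 labels."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    vol_p, vol_t = sum(pred), sum(truth)
    total = vol_p + vol_t
    if total == 0:  # both masks empty: treat as perfect agreement
        return 1.0, 1.0
    dsc = 2 * tp / total                 # overlap relative to combined volume
    vs = 1 - abs(vol_p - vol_t) / total  # agreement of volumes, ignoring location
    return dsc, vs
```

Note that VS can be perfect while DSC is not: two masks of equal volume placed in different locations give VS = 1.0 but a low DSC, which is why the abstract reports both.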
Affiliation(s)
- Xiang Liu
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Zemin Zhu
- Department of Hepatobiliary and Pancreatic Surgery, Zhuzhou Central Hospital, Zhuzhou, 412000, China
- Kexin Wang
- School of Basic Medical Sciences, Capital Medical University, Beijing, 100069, China
- Yaofeng Zhang
- Beijing Smart Tree Medical Technology Co. Ltd, Beijing, 100011, China
- Jialun Li
- Beijing Smart Tree Medical Technology Co. Ltd, Beijing, 100011, China
- Xiangpeng Wang
- Beijing Smart Tree Medical Technology Co. Ltd, Beijing, 100011, China
- Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Xiaoying Wang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
|
11
|
Zhu L, Gao G, Zhu Y, Han C, Liu X, Li D, Liu W, Wang X, Zhang J, Zhang X, Wang X. Fully automated detection and localization of clinically significant prostate cancer on MR images using a cascaded convolutional neural network. Front Oncol 2022; 12:958065. [PMID: 36249048 PMCID: PMC9558117 DOI: 10.3389/fonc.2022.958065] [Received: 05/31/2022] [Accepted: 09/12/2022] [Indexed: 11/13/2022] Open
Abstract
Purpose To develop a cascaded deep learning model trained with apparent diffusion coefficient (ADC) and T2-weighted imaging (T2WI) for fully automated detection and localization of clinically significant prostate cancer (csPCa). Methods This retrospective study included 347 consecutive patients (235 csPCa, 112 non-csPCa) with high-quality prostate MRI data, which were randomly split for training, validation, and testing. The ground truth was obtained using manual csPCa lesion segmentation, according to pathological results. The proposed cascaded model based on Res-UNet takes prostate MR images (T2WI+ADC or ADC alone) as inputs and automatically segments the whole prostate gland, the anatomic zones, and the csPCa region step by step. The performance of the models was evaluated and compared with PI-RADS (version 2.1) assessment using sensitivity, specificity, accuracy, and the Dice similarity coefficient (DSC) in the held-out test set. Results In the test set, the per-lesion sensitivities of the biparametric (ADC + T2WI) model, the ADC model, and PI-RADS assessment were 95.5% (84/88), 94.3% (83/88), and 94.3% (83/88), respectively (all p > 0.05). The mean DSCs for csPCa lesions were 0.64 ± 0.24 and 0.66 ± 0.23 for the biparametric model and the ADC model, respectively. The sensitivity, specificity, and accuracy of the biparametric model were 95.6% (108/113), 91.5% (665/727), and 92.0% (773/840) based on sextants, and 98.6% (68/69), 64.8% (46/71), and 81.4% (114/140) based on patients. The biparametric model performed similarly to PI-RADS assessment (p > 0.05) and had higher specificity than the ADC model (86.8% [631/727], p < 0.001) based on sextants. Conclusion The cascaded deep learning model trained with ADC and T2WI achieves good performance for automated csPCa detection and localization.
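The sextant-level percentages quoted in this abstract are plain count ratios, which a short sketch makes explicit. The function name and counts layout are illustrative, not from the paper's code; the counts are back-derived from the reported fractions (108/113 detected, 665/727 correctly negative):

```python
def detection_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from detection counts:
    tp = true positives, fn = false negatives, tn = true negatives, fp = false positives."""
    sensitivity = tp / (tp + fn)           # fraction of positives detected
    specificity = tn / (tn + fp)           # fraction of negatives correctly ruled out
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Sextant-level counts implied by the abstract's biparametric-model figures
sens, spec, acc = detection_metrics(tp=108, fn=5, tn=665, fp=62)
```

Rounded to three decimals, this reproduces the abstract's 95.6% / 91.5% / 92.0% sextant figures.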
Affiliation(s)
- Lina Zhu
- Department of Radiology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Ge Gao
- Department of Radiology, Peking University First Hospital, Beijing, China
- Yi Zhu
- Department of Clinical & Technical Support, Philips Healthcare, Beijing, China
- Chao Han
- Department of Radiology, Peking University First Hospital, Beijing, China
- Xiang Liu
- Department of Radiology, Peking University First Hospital, Beijing, China
- Derun Li
- Department of Urology, Peking University First Hospital, Beijing, China
- Weipeng Liu
- Department of Development and Research, Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Xiangpeng Wang
- Department of Development and Research, Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Jingyuan Zhang
- Department of Development and Research, Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, Beijing, China
- Xiaoying Wang
- Department of Radiology, Peking University First Hospital, Beijing, China
- Correspondence: Xiaoying Wang
|
12
|
Kendrick J, Francis RJ, Hassan GM, Rowshanfarzad P, Ong JSL, Ebert MA. Fully automatic prognostic biomarker extraction from metastatic prostate lesion segmentations in whole-body [68Ga]Ga-PSMA-11 PET/CT images. Eur J Nucl Med Mol Imaging 2022; 50:67-79. [PMID: 35976392 DOI: 10.1007/s00259-022-05927-1] [Received: 06/01/2022] [Accepted: 08/01/2022] [Indexed: 12/17/2022]
Abstract
PURPOSE This study aimed to develop and assess a deep learning-based automated segmentation framework for metastatic prostate cancer (mPCa) lesions in whole-body [68Ga]Ga-PSMA-11 PET/CT images, for the purpose of extracting patient-level prognostic biomarkers. METHODS Three hundred thirty-seven [68Ga]Ga-PSMA-11 PET/CT images were retrieved from a cohort of biochemically recurrent PCa patients. A fully 3D convolutional neural network (CNN) based on the self-configuring nnU-Net framework was trained on a subset of these scans, with an independent test set reserved for model evaluation. Voxel-level segmentation results were assessed using the Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity. Sensitivity and PPV were calculated to assess lesion-level detection; patient-level classification results were assessed using accuracy, PPV, and sensitivity. The whole-body biomarkers total lesional volume (TLVauto) and total lesional uptake (TLUauto) were calculated from the automated segmentations, and Kaplan-Meier analysis was used to assess the relationship of these biomarkers with patient overall survival. RESULTS At the patient level, the accuracy, sensitivity, and PPV were all > 90%, with the best metric being the PPV (97.2%). PPV and sensitivity at the lesion level were 88.2% and 73.0%, respectively. DSC and PPV measured at the voxel level performed within measured inter-observer variability (DSC, median = 50.7% vs. second observer = 32%, p = 0.012; PPV, median = 64.9% vs. second observer = 25.7%, p < 0.005). Kaplan-Meier analysis showed that TLVauto and TLUauto were significantly associated with patient overall survival (both p < 0.005). CONCLUSION The fully automated assessment of whole-body [68Ga]Ga-PSMA-11 PET/CT images using deep learning shows significant promise, yielding accurate scan classification, voxel-level segmentations within inter-observer variability, and potentially clinically useful prognostic biomarkers associated with patient overall survival. TRIAL REGISTRATION This study was registered with the Australian New Zealand Clinical Trials Registry (ACTRN12615000608561) on 11 June 2015.
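The Kaplan-Meier analysis used above to relate the TLV/TLU biomarkers to overall survival rests on the product-limit estimator, which can be sketched without any statistics library. This is a generic illustration under assumed inputs (follow-up times plus event/censoring flags), not the study's code:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.
    times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns a list of (event_time, S(t)) points."""
    records = sorted(zip(times, events))
    distinct_event_times = sorted({t for t, e in records if e == 1})
    surv, curve = 1.0, []
    for t in distinct_event_times:
        at_risk = sum(1 for tt, _ in records if tt >= t)          # still under follow-up at t
        deaths = sum(1 for tt, e in records if tt == t and e == 1)
        surv *= 1 - deaths / at_risk                               # multiply conditional survival
        curve.append((t, surv))
    return curve
```

Censored subjects (event flag 0) drop out of the risk set without stepping the curve down, which is the estimator's key property. In the study's setting, patients would be split into groups (e.g. by a biomarker threshold) and a curve computed per group before comparing them.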
Affiliation(s)
- Jake Kendrick
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia.
- Roslyn J Francis
- Medical School, University of Western Australia, Crawley, WA, Australia; Department of Nuclear Medicine, Sir Charles Gairdner Hospital, Perth, WA, Australia
- Ghulam Mubashar Hassan
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
- Pejman Rowshanfarzad
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
- Jeremy S L Ong
- Department of Nuclear Medicine, Fiona Stanley Hospital, Murdoch, WA, Australia
- Martin A Ebert
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia; Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, WA, Australia; 5D Clinics, Claremont, WA, Australia
|
13
|
Madireddy I, Wu T. Rule and Neural Network-Based Image Segmentation of Mice Vertebrae Images. Cureus 2022; 14:e27247. [PMID: 36039207 PMCID: PMC9401637 DOI: 10.7759/cureus.27247] [Accepted: 07/23/2022] [Indexed: 12/03/2022] Open
Abstract
Background Image segmentation is a fundamental technique that allows researchers to process images from various sources into individual components for applications such as visual or numerical evaluation. Image segmentation is beneficial when studying medical images for healthcare purposes. However, existing semantic image segmentation models such as the U-net are computationally intensive. This work aimed to develop less complicated models that could still accurately segment images. Methodology Rule-based and linear-layer neural network models were developed in Mathematica and trained on mouse vertebra micro-computed tomography scans. These models were tasked with segmenting the cortical shell from the whole-bone image. A U-net model was also set up for comparison. Results The linear-layer neural network had accuracy comparable to the U-net model in segmenting the mouse vertebra scans. Conclusions This work provides two separate models that allow automated segmentation of mouse vertebral scans, which could be valuable in applications such as pre-processing murine vertebral scans for further evaluation of the effects of drug treatment on bone micro-architecture.
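A rule-based segmentation of the kind described above can be illustrated with a toy sketch: threshold the image into bone and background, then keep bone pixels that touch background. The threshold-plus-boundary rule here is an assumed example for illustration; the paper's actual Mathematica rules are not reproduced:

```python
def segment_cortical_shell(image, threshold=0.5):
    """Toy rule-based sketch: bone = pixels with intensity >= threshold;
    shell = bone pixels with at least one non-bone 4-neighbour
    (out-of-bounds neighbours count as non-bone)."""
    rows, cols = len(image), len(image[0])
    bone = [[image[r][c] >= threshold for c in range(cols)] for r in range(rows)]
    shell = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if not bone[r][c]:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if not (0 <= rr < rows and 0 <= cc < cols) or not bone[rr][cc]:
                    shell[r][c] = True   # bone pixel on the boundary
                    break
    return shell
```

On a solid 3x3 block of bone, only the eight border pixels are marked as shell; the interior pixel, surrounded entirely by bone, is excluded.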
|