1
Mota SM, Priester A, Shubert J, Bong J, Sayre J, Berry-Pusey B, Brisbane WG, Natarajan S. Artificial Intelligence Improves the Ability of Physicians to Identify Prostate Cancer Extent. J Urol 2024; 212:52-62. PMID: 38860576; PMCID: PMC11178250; DOI: 10.1097/ju.0000000000003960.
Abstract
PURPOSE Defining prostate cancer contours is a complex task, undermining the efficacy of interventions such as focal therapy. A multireader multicase study compared physicians' performance using artificial intelligence (AI) vs standard-of-care methods for tumor delineation. MATERIALS AND METHODS Cases were interpreted by 7 urologists and 3 radiologists from 5 institutions with 2 to 23 years of experience. Each reader evaluated 50 prostatectomy cases retrospectively eligible for focal therapy. Each case included a T2-weighted MRI, contours of the prostate and region(s) of interest suspicious for cancer, and a biopsy report. First, readers defined cancer contours cognitively, manually delineating tumor boundaries to encapsulate all clinically significant disease. Then, after ≥ 4 weeks, readers contoured the same cases using AI software. Using tumor boundaries on whole-mount histopathology slides as ground truth, AI-assisted, cognitively-defined, and hemigland cancer contours were evaluated. Primary outcome measures were the accuracy and negative margin rate of cancer contours. All statistical analyses were performed using generalized estimating equations. RESULTS The balanced accuracy (mean of voxel-wise sensitivity and specificity) of AI-assisted cancer contours (84.7%) was superior to cognitively-defined (67.2%) and hemigland contours (75.9%; P < .0001). Cognitively-defined cancer contours systematically underestimated cancer extent, with a negative margin rate of 1.6% compared to 72.8% for AI-assisted cancer contours (P < .0001). CONCLUSIONS AI-assisted cancer contours reduce underestimation of prostate cancer extent, significantly improving contouring accuracy and negative margin rate achieved by physicians. This technology can potentially improve outcomes, as accurate contouring informs patient management strategy and underpins the oncologic efficacy of treatment.
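The study's primary accuracy metric, balanced accuracy, is the mean of voxel-wise sensitivity and specificity. A minimal sketch of how it could be computed for a pair of binary contour masks (the toy arrays are illustrative, not study data):

```python
import numpy as np

def balanced_accuracy(pred, truth):
    """Voxel-wise balanced accuracy: mean of sensitivity and specificity."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    sensitivity = (pred & truth).sum() / truth.sum()        # TP / (TP + FN)
    specificity = (~pred & ~truth).sum() / (~truth).sum()   # TN / (TN + FP)
    return 0.5 * (sensitivity + specificity)

# Toy 1-D "volume": 4 cancer voxels, 4 benign voxels.
truth = [1, 1, 1, 1, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 0, 0, 0, 1]   # one missed voxel, one false positive
print(balanced_accuracy(pred, truth))  # 0.75
```

Because the two rates are averaged, the metric penalizes the systematic under-contouring described above even when specificity is high.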
Affiliation(s)
- Alan Priester
- Avenda Health, Inc
- Department of Urology, David Geffen School of Medicine, Los Angeles, California
- James Sayre
- Department of Radiological Sciences and Biostatistics, University of California, Los Angeles, California
- Wayne G Brisbane
- Department of Urology, David Geffen School of Medicine, Los Angeles, California
- Shyam Natarajan
- Avenda Health, Inc
- Department of Urology, David Geffen School of Medicine, Los Angeles, California
2
Rajagopal A, Westphalen AC, Velarde N, Simko JP, Nguyen H, Hope TA, Larson PEZ, Magudia K. Mixed Supervision of Histopathology Improves Prostate Cancer Classification From MRI. IEEE Trans Med Imaging 2024; 43:2610-2622. PMID: 38547000; DOI: 10.1109/tmi.2024.3382909.
Abstract
Non-invasive prostate cancer classification from MRI has the potential to revolutionize patient care by providing early detection of clinically significant disease, but has thus far shown limited positive predictive value. To address this, we present an image-based deep learning method to predict clinically significant prostate cancer from screening MRI in patients who subsequently underwent biopsy, with results ranging from benign pathology to the highest-grade tumors. Specifically, we demonstrate that mixed supervision via diverse histopathological ground truth improves classification performance despite the cost of reduced concordance with image-based segmentation. Where prior approaches have utilized pathology results as ground truth derived from targeted biopsies and whole-mount prostatectomy to strongly supervise the localization of clinically significant cancer, our approach also utilizes weak supervision signals extracted from nontargeted systematic biopsies with regional localization to improve overall performance. Our key innovation is performing regression by distribution rather than simply by value, enabling use of additional pathology findings traditionally ignored by deep learning strategies. We evaluated our model on a dataset of 973 (testing n = 198) multi-parametric prostate MRI exams collected at UCSF from 2016-2019, followed by MRI/ultrasound fusion (targeted) biopsy and systematic (nontargeted) biopsy of the prostate gland, demonstrating that deep networks trained with mixed supervision of histopathology can feasibly exceed the performance of the Prostate Imaging-Reporting and Data System (PI-RADS) clinical standard for prostate MRI interpretation (71.6% vs 66.7% balanced accuracy and 0.724 vs 0.716 AUC).
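The "regression by distribution rather than by value" idea can be illustrated generically: instead of a one-hot grade target, the network is trained against a probability distribution over grades, so a nontargeted systematic biopsy that only loosely localizes disease can still contribute a soft label. The sketch below is a generic soft-target cross-entropy loss, not the authors' implementation; the 5-way grade-group head and the example distributions are hypothetical.

```python
import numpy as np

def soft_cross_entropy(logits, target_dist):
    """Cross-entropy between softmax(logits) and a soft target distribution."""
    z = logits - logits.max()              # stabilize the exponentials
    log_p = z - np.log(np.exp(z).sum())    # log-softmax
    return float(-(target_dist * log_p).sum())

logits = np.array([0.1, 0.4, 1.2, 0.3, -0.2])  # hypothetical 5-way grade head

# "By value": a targeted biopsy pins the grade group exactly (one-hot).
hard = np.array([0.0, 0.0, 1.0, 0.0, 0.0])

# "By distribution": a systematic biopsy only bounds the plausible grades,
# so the target spreads probability mass over them.
soft = np.array([0.0, 0.2, 0.5, 0.2, 0.1])

print(soft_cross_entropy(logits, hard), soft_cross_entropy(logits, soft))
```

With a one-hot target the loss reduces to the usual classification cross-entropy; with a soft target, samples whose pathology is only regionally localized still produce a usable gradient.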
3
Talyshinskii A, Hameed BMZ, Ravinder PP, Naik N, Randhawa P, Shah M, Rai BP, Tokas T, Somani BK. Catalyzing Precision Medicine: Artificial Intelligence Advancements in Prostate Cancer Diagnosis and Management. Cancers (Basel) 2024; 16:1809. PMID: 38791888; PMCID: PMC11119252; DOI: 10.3390/cancers16101809.
Abstract
BACKGROUND The aim was to analyze the current state of deep learning (DL)-based prostate cancer (PCa) diagnosis with a focus on magnetic resonance (MR) prostate reconstruction; PCa detection/stratification/reconstruction; positron emission tomography/computed tomography (PET/CT); androgen deprivation therapy (ADT); prostate biopsy; and the associated challenges and their clinical implications. METHODS A search of the PubMed database was conducted based on the inclusion and exclusion criteria for the use of DL methods within the abovementioned areas. RESULTS A total of 784 articles were found, of which 64 were included. Reconstruction of the prostate, detection and stratification of prostate cancer, reconstruction of prostate cancer, and diagnosis on PET/CT, ADT, and biopsy were analyzed in 21, 22, 6, 7, 2, and 6 studies, respectively. Among studies describing DL use for MR-based purposes, datasets acquired at 3 T, 1.5 T, and mixed 3/1.5 T field strengths were used in 18/19/5, 0/1/0, and 3/2/1 studies, respectively. Six of the 7 studies analyzing DL for PET/CT diagnosis used data from a single institution. Among the radiotracers, [68Ga]Ga-PSMA-11, [18F]DCFPyL, and [18F]PSMA-1007 were used in 5, 1, and 1 study, respectively. Only two studies that analyzed DL in the context of ADT met the inclusion criteria; both were performed with a single-institution dataset with only manual labeling of training data. Three studies analyzing DL for prostate biopsy were performed with single- and multi-institutional datasets; TeUS, TRUS, and MRI were used as input modalities in two, three, and one study, respectively. CONCLUSION DL models in prostate cancer diagnosis show promise but are not yet ready for clinical use due to variability in methods, labels, and evaluation criteria. Conducting additional research while acknowledging all the limitations outlined is crucial for reinforcing the utility and effectiveness of DL-based models in clinical settings.
Affiliation(s)
- Ali Talyshinskii
- Department of Urology and Andrology, Astana Medical University, Astana 010000, Kazakhstan
- Prajwal P. Ravinder
- Department of Urology, Kasturba Medical College, Mangaluru, Manipal Academy of Higher Education, Manipal 576104, India
- Nithesh Naik
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Princy Randhawa
- Department of Mechatronics, Manipal University Jaipur, Jaipur 303007, India
- Milap Shah
- Department of Urology, Aarogyam Hospital, Ahmedabad 380014, India
- Bhavan Prasad Rai
- Department of Urology, Freeman Hospital, Newcastle upon Tyne NE7 7DN, UK
- Theodoros Tokas
- Department of Urology, Medical School, University General Hospital of Heraklion, University of Crete, 14122 Heraklion, Greece
- Bhaskar K. Somani
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Department of Urology, University Hospital Southampton NHS Trust, Southampton SO16 6YD, UK
4
Guenzel K, Baumgaertner GL, Padhani AR, Luckau J, Lock UC, Ozimek T, Heinrich S, Schlegel J, Busch J, Magheli A, Struck J, Borgmann H, Penzkofer T, Hamm B, Hinz S, Hamm CA. Diagnostic Utility of Artificial Intelligence-assisted Transperineal Biopsy Planning in Prostate Cancer Suspected Men: A Prospective Cohort Study. Eur Urol Focus 2024; S2405-4569(24)00059-2. PMID: 38688825; DOI: 10.1016/j.euf.2024.04.007.
Abstract
BACKGROUND AND OBJECTIVE Accurate magnetic resonance imaging (MRI) reporting is essential for transperineal prostate biopsy (TPB) planning. Although approved computer-aided diagnosis (CAD) tools may assist urologists in this task, evidence of improved clinically significant prostate cancer (csPCa) detection is lacking. Therefore, we aimed to document the diagnostic utility of using Prostate Imaging Reporting and Data System (PI-RADS) and CAD for biopsy planning compared with PI-RADS alone. METHODS A total of 262 consecutive men scheduled for TPB at our referral centre were analysed. Reported PI-RADS lesions and an US Food and Drug Administration-cleared CAD tool were used for TPB planning. PI-RADS and CAD lesions were targeted on TPB, while four (interquartile range: 2-5) systematic biopsies were taken. The outcomes were the (1) proportion of csPCa (grade group ≥2) and (2) number of targeted lesions and false-positive rate. Performance was tested using free-response receiver operating characteristic curves and the exact Fisher-Yates test. KEY FINDINGS AND LIMITATIONS Overall, csPCa was detected in 56% (146/262) of men, with sensitivity of 92% and 97% (p = 0.007) for PI-RADS- and CAD-directed TPB, respectively. In 4% (10/262), csPCa was detected solely by CAD-directed biopsies; in 8% (22/262), additional csPCa lesions were detected. However, the number of targeted lesions increased by 54% (518 vs 336) and the false-positive rate doubled (0.66 vs 1.39; p = 0.009). Limitations include biopsies only for men at clinical/radiological suspicion and no multidisciplinary review of MRI before biopsy. CONCLUSIONS AND CLINICAL IMPLICATIONS The tested CAD tool for TPB planning improves csPCa detection at the cost of an increased number of lesions sampled and false positives. This may enable more personalised biopsy planning depending on urological and patient preferences. 
PATIENT SUMMARY The computer-aided diagnosis tool tested for transperineal prostate biopsy planning improves the detection of clinically significant prostate cancer at the cost of an increased number of lesions sampled and false positives. This may enable more personalised biopsy planning depending on urological and patient preferences.
Affiliation(s)
- Karsten Guenzel
- Department of Urology, Vivantes Klinikum Am Urban, Berlin, Germany; Prostate-Diagnostic-Centre Berlin, PDZB, Berlin, Germany; Department of Urology, Faculty of Health Sciences Brandenburg, Brandenburg Medical School Theodor Fontane, Neuruppin, Germany
- Anwar R Padhani
- Paul Strickland Scanner Centre, Mount Vernon Hospital, Middlesex, UK
- Johannes Luckau
- Department of Urology, Vivantes Klinikum Am Urban, Berlin, Germany
- Tomasz Ozimek
- Department of Urology, Vivantes Klinikum Am Urban, Berlin, Germany
- Stefan Heinrich
- Department of Urology, Vivantes Klinikum Am Urban, Berlin, Germany
- Jakob Schlegel
- Department of Urology, Vivantes Klinikum Am Urban, Berlin, Germany
- Jonas Busch
- Department of Urology, Vivantes Klinikum Am Urban, Berlin, Germany
- Ahmed Magheli
- Department of Urology, Vivantes Klinikum Am Urban, Berlin, Germany
- Julian Struck
- Department of Urology, Faculty of Health Sciences Brandenburg, Brandenburg Medical School Theodor Fontane, Neuruppin, Germany
- Hendrik Borgmann
- Department of Urology, Faculty of Health Sciences Brandenburg, Brandenburg Medical School Theodor Fontane, Neuruppin, Germany
- Tobias Penzkofer
- Department of Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany; Berlin Institute of Health (BIH), Berlin, Germany
- Bernd Hamm
- Department of Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Stefan Hinz
- Department of Urology, Vivantes Klinikum Am Urban, Berlin, Germany; Department of Urology, Magdeburg University Medical Center, Otto von Guericke University, Magdeburg, Germany
- Charlie Alexander Hamm
- Department of Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany; Berlin Institute of Health (BIH), Berlin, Germany
5
Yin X, Wang K, Wang L, Yang Z, Zhang Y, Wu P, Zhao C, Zhang J. Algorithms for classification of sequences and segmentation of prostate gland: an external validation study. Abdom Radiol (NY) 2024; 49:1275-1287. PMID: 38436698; DOI: 10.1007/s00261-024-04241-8.
Abstract
OBJECTIVES The aim of the study was to externally validate two AI models for the classification of prostate mpMRI sequences and segmentation of the prostate gland on T2WI. MATERIALS AND METHODS MpMRI data from 719 patients were retrospectively collected from two hospitals, utilizing nine MR scanners from four different vendors, over the period from February 2018 to May 2022. A Med3D deep learning pretrained architecture was used to perform image classification, and UNet-3D was used to segment the prostate gland. The images were classified into one of nine image types by the mode of the predictions. The segmentation model was validated using T2WI images. The accuracy of the segmentation was evaluated by measuring the Dice similarity coefficient (DSC), volumetric similarity (VS), and average Hausdorff distance (AHD). Finally, the efficacy of the models was compared for different MR field strengths and sequences. RESULTS 20,551 image groups were obtained from 719 MR studies. The classification model accuracy was 99%, with a kappa of 0.932. The precision, recall, and F1 values for the nine image types had statistically significant differences (all P < 0.001). The accuracy for scanners at 1.436 T, 1.5 T, and 3.0 T was 87%, 86%, and 98%, respectively (P < 0.001). For the segmentation model, the median DSC was 0.942 to 0.955, the median VS was 0.974 to 0.982, and the median AHD was 5.55 to 6.49 mm, respectively. These values also had statistically significant differences across the three magnetic field strengths (all P < 0.001). CONCLUSION The AI models for mpMRI image classification and prostate segmentation demonstrated good performance during external validation, which could enhance efficiency in prostate volume measurement and cancer detection with mpMRI. CLINICAL RELEVANCE STATEMENT These models can greatly improve work efficiency in cancer detection, measurement of prostate volume, and guided biopsies.
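The DSC reported above can be reproduced for any pair of binary masks: it is twice the overlap divided by the total volume of the two masks. A minimal sketch with toy 1-D masks (not study data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

model_mask  = [1, 1, 1, 0, 0]   # toy "automatic segmentation"
manual_mask = [1, 1, 0, 1, 0]   # toy "manual ground truth"
print(dice(model_mask, manual_mask))  # 2*2 / (3+3) ≈ 0.667
```

A DSC of 1.0 means perfect overlap; the study's median values of 0.942-0.955 indicate near-complete agreement with manual contours.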
Affiliation(s)
- Xuemei Yin
- Department of Medical Imaging, First Hospital of Qinhuangdao, 066000, Qinhuangdao City, Hebei Province, China
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, 100050, Beijing, China
- Kexin Wang
- School of Basic Medical Sciences, Capital Medical University, 100052, Beijing, China
- Liang Wang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, 100050, Beijing, China
- Zhenghan Yang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, 100050, Beijing, China
- Yaofeng Zhang
- Beijing Smart Tree Medical Technology Co. Ltd, 100011, Beijing, China
- Pengsheng Wu
- Beijing Smart Tree Medical Technology Co. Ltd, 100011, Beijing, China
- Chenglin Zhao
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, 100050, Beijing, China
- Jun Zhang
- Department of Medical Imaging, First Hospital of Qinhuangdao, 066000, Qinhuangdao City, Hebei Province, China
6
Nai R, Wang K, Li X, Du S, E T, Xiao H, Quan S, Zhang Y, Yu J, Li J, Zhang X, Wang X. Quantitative measurement of the ureter on three-dimensional magnetic resonance urography images using deep learning. Med Phys 2024. PMID: 38477634; DOI: 10.1002/mp.17025.
Abstract
BACKGROUND Accurate measurement of ureteral diameters plays a pivotal role in diagnosing and monitoring urinary tract obstruction (UTO). While three-dimensional magnetic resonance urography (3D MRU) represents a significant advancement in imaging, the traditional manual methods for assessing ureteral diameters are characterized by labor-intensive procedures and inherent variability. In the realm of medical image analysis, deep learning has led to a paradigm shift, yet the development of a comprehensive automated tool for the precise segmentation and measurement of ureters in MR images is an unaddressed challenge. PURPOSE To quantitatively measure the ureter on 3D MRU images using a deep learning model. METHODS A retrospective cohort of 445 3D MRU scans (443 patients, 52 ± 18 years; 217 female patients) was collected and split into training, validation, and internal testing cohorts. A 3D V-Net model was trained for urinary tract segmentation, and a post-processing algorithm was developed for ureteral measurements. The accuracy of the segmentation was evaluated using the Dice similarity coefficient (DSC) and volume intraclass correlation coefficient (ICC), with ground truth segmentations provided by experienced radiologists. The external cohort comprised 50 scans (50 patients, 55 ± 21 years; 30 female patients), and the model-predicted ureteral diameter measurements were compared with manual measurements to assess system performance. The diameter parameters of the ureter obtained by the different measurement methods (ground truth, auto-segmentation with automatic diameter extraction, and manual segmentation with automatic diameter extraction) were compared with Friedman tests and post hoc Dunn tests. The effectiveness of UTO diagnosis by each method was assessed with receiver operating characteristic (ROC) curves and their respective areas under the curve (AUC).
RESULTS In both the internal test and external cohorts, the mean DSC values for bilateral ureters exceeded 0.70. The ICCs for the bilateral ureter volume obtained by comparing the model and manual segmentation were all greater than 0.96 (p < 0.05), except for the right ureter in the internal test cohort, for which the ICC was 0.773 (p < 0.05). The mean DSCs for interobserver and intraobserver reliability were all above 0.97. The maximum diameter of the ureter exhibited no statistically significant differences either in the dilated (p = 0.08) or in the non-dilated (p = 0.32) ureters across the three measurement methods. The AUCs of ground truth, auto-segmentation with automatic diameter extraction, and manual segmentation with automatic diameter extraction in diagnosing UTO were 0.988 (95% CI: 0.934, 1.000), 0.961 (95% CI: 0.893, 0.991), and 0.979 (95% CI: 0.919, 0.998), respectively. There was no statistical difference between AUCs of the different methods (p > 0.05). CONCLUSION The proposed deep learning model and post-processing algorithm provide an effective means for the quantitative evaluation of urinary diseases using 3D MRU images.
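The AUC values compared above have a simple rank interpretation: the probability that a randomly chosen positive case (here, an obstructed ureter) receives a higher score than a randomly chosen negative one. A sketch of this rank-based (Mann-Whitney) computation, with made-up diameter values rather than study data:

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    positive case scores higher than a random negative case (ties count half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Toy example: maximum diameters (mm) for obstructed (1) and normal (0) ureters.
diameters = [9.1, 7.4, 6.8, 4.2, 3.9, 3.1]
labels    = [1,   1,   1,   0,   0,   0]
print(auc(diameters, labels))  # 1.0 — perfect separation
```

An AUC near 0.98, as reported for the ground-truth measurements, means obstructed ureters almost always out-rank non-obstructed ones on the measured diameter.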
Affiliation(s)
- Rile Nai
- Department of Radiology, Peking University First Hospital, Beijing, China
- Kexin Wang
- School of Basic Medical Sciences, Capital Medical University, Beijing, China
- Xiaoqing Li
- Department of Radiology, Peking University First Hospital, Beijing, China
- Shangsong Du
- Department of Radiology, Peking University First Hospital, Beijing, China
- Tuya E
- Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- He Xiao
- Department of Radiology, Beijing Changping Hospital, Beijing, China
- Shuo Quan
- Department of Radiology, Peking University First Hospital, Beijing, China
- Yaofeng Zhang
- Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Junhua Yu
- Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Jialun Li
- Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, Beijing, China
- Xiaoying Wang
- Department of Radiology, Peking University First Hospital, Beijing, China
7
Ramacciotti LS, Hershenhouse JS, Mokhtar D, Paralkar D, Kaneko M, Eppler M, Gill K, Mogoulianitis V, Duddalwar V, Abreu AL, Gill I, Cacciamani GE. Comprehensive Assessment of MRI-based Artificial Intelligence Frameworks Performance in the Detection, Segmentation, and Classification of Prostate Lesions Using Open-Source Databases. Urol Clin North Am 2024; 51:131-161. PMID: 37945098; DOI: 10.1016/j.ucl.2023.08.003.
Abstract
Numerous artificial intelligence (AI) frameworks have been designed for prostate cancer lesion detection, segmentation, and classification on MRI, motivated by the intrareader and interreader variability inherent to traditional interpretation. Open-source data sets have been released with the intention of providing freely available MRIs for the testing of diverse AI frameworks in automated or semiautomated tasks. Here, an in-depth assessment of the performance of MRI-based AI frameworks for detecting, segmenting, and classifying prostate lesions using open-source databases was performed. Among 17 data sets, 12 were specific to prostate cancer detection/classification, with 52 studies meeting the inclusion criteria.
Affiliation(s)
- Lorenzo Storino Ramacciotti
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Jacob S Hershenhouse
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Daniel Mokhtar
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Divyangi Paralkar
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Masatomo Kaneko
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Urology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Michael Eppler
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Karanvir Gill
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Vasileios Mogoulianitis
- Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Vinay Duddalwar
- Department of Radiology, University of Southern California, Los Angeles, CA, USA
- Andre L Abreu
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Radiology, University of Southern California, Los Angeles, CA, USA
- Inderbir Gill
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Giovanni E Cacciamani
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Radiology, University of Southern California, Los Angeles, CA, USA
8
Kaneko M, Magoulianitis V, Ramacciotti LS, Raman A, Paralkar D, Chen A, Chu TN, Yang Y, Xue J, Yang J, Liu J, Jadvar DS, Gill K, Cacciamani GE, Nikias CL, Duddalwar V, Kuo CCJ, Gill IS, Abreu AL. The Novel Green Learning Artificial Intelligence for Prostate Cancer Imaging: A Balanced Alternative to Deep Learning and Radiomics. Urol Clin North Am 2024; 51:1-13. PMID: 37945095; DOI: 10.1016/j.ucl.2023.08.001.
Abstract
The application of artificial intelligence (AI) to prostate magnetic resonance imaging (MRI) has shown promising results. Several AI systems have been developed to automatically analyze prostate MRI for segmentation, cancer detection, and region of interest characterization, thereby assisting clinicians in their decision-making process. Deep learning, the current trend in imaging AI, has limitations including a lack of transparency (the "black box" problem), large data-processing requirements, and excessive energy consumption. In this narrative review, the authors provide an overview of the recent advances in AI for prostate cancer diagnosis and introduce their next-generation AI model, Green Learning, as a promising solution.
Affiliation(s)
- Masatomo Kaneko
- USC Institute of Urology and Catherine & Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; USC Institute of Urology, Center for Image-Guided Surgery, Focal Therapy and Artificial Intelligence for Prostate Cancer; Department of Urology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Lorenzo Storino Ramacciotti, Divyangi Paralkar, Andrew Chen, Timothy N Chu, Karanvir Gill
- USC Institute of Urology and Catherine & Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; USC Institute of Urology, Center for Image-Guided Surgery, Focal Therapy and Artificial Intelligence for Prostate Cancer
- Vasileios Magoulianitis, Yijing Yang, Jintang Xue, Jiaxin Yang, Jinyuan Liu, Chrysostomos L Nikias, C-C Jay Kuo
- Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Alex Raman
- Western University of Health Sciences, Pomona, CA, USA
- Donya S Jadvar
- Dornsife School of Letters and Science, University of Southern California, Los Angeles, CA, USA
- Giovanni E Cacciamani, Andre Luis Abreu
- USC Institute of Urology and Catherine & Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; USC Institute of Urology, Center for Image-Guided Surgery, Focal Therapy and Artificial Intelligence for Prostate Cancer; Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Inderbir S Gill
- USC Institute of Urology and Catherine & Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
9
Matsuoka Y, Ueno Y, Uehara S, Tanaka H, Kobayashi M, Tanaka H, Yoshida S, Yokoyama M, Kumazawa I, Fujii Y. Deep-learning prostate cancer detection and segmentation on biparametric versus multiparametric magnetic resonance imaging: Added value of dynamic contrast-enhanced imaging. Int J Urol 2023; 30:1103-1111. [PMID: 37605627] [DOI: 10.1111/iju.15280]
Abstract
OBJECTIVES To develop diagnostic algorithms for multisequence prostate magnetic resonance imaging for cancer detection and segmentation using deep learning, and to explore the value of dynamic contrast-enhanced imaging in multiparametric imaging compared with biparametric imaging. METHODS We collected 3227 multiparametric imaging sets from 332 patients, including 218 cancer patients (291 biopsy-proven foci) and 114 noncancer patients. Diagnostic algorithms for T2-weighted, T2-weighted plus dynamic contrast-enhanced, biparametric, and multiparametric imaging were built using 2578 sets, and their performance for clinically significant cancer was evaluated using 649 sets. RESULTS Biparametric and multiparametric imaging had the following region-based performance: sensitivity of 71.9% and 74.8% (p = 0.394) and positive predictive value of 61.3% and 74.8% (p = 0.013), respectively. In side-specific analyses of cancer images, specificity was 72.6% and 89.5% (p < 0.001) and negative predictive value was 78.9% and 83.5% (p = 0.364), respectively. False-negative cancer foci on multiparametric imaging were smaller (p = 0.002) and more often grade group ≤2 (p = 0.028) than true-positive foci. In the peripheral zone, false-positive regions on biparametric imaging proved true negative on multiparametric imaging more frequently than in the transition zone (78.3% vs. 47.2%, p = 0.018). In contrast, T2-weighted plus dynamic contrast-enhanced imaging had lower specificity than T2-weighted imaging alone (41.1% vs. 51.6%, p = 0.042). CONCLUSIONS With deep learning, multiparametric imaging provides superior specificity and positive predictive value compared with biparametric imaging, especially in the peripheral zone. Dynamic contrast-enhanced imaging helps reduce overdiagnosis in multiparametric imaging.
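The four metrics reported above (sensitivity, specificity, positive and negative predictive value) all follow from a 2×2 confusion table. A minimal sketch, using hypothetical counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic performance metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts for illustration only (not from the study)
m = diagnostic_metrics(tp=80, fp=20, tn=90, fn=10)
```

Note that sensitivity and specificity are properties of the test alone, while PPV and NPV also depend on disease prevalence in the evaluated set, which is why the abstract reports them separately.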
Affiliation(s)
- Yoh Matsuoka
- Department of Urology, Tokyo Medical and Dental University, Tokyo, Japan; Department of Urology, Saitama Cancer Center, Saitama, Japan
- Sho Uehara, Masaki Kobayashi, Hajime Tanaka, Soichiro Yoshida, Minato Yokoyama, Yasuhisa Fujii
- Department of Urology, Tokyo Medical and Dental University, Tokyo, Japan
- Yoshihiko Ueno
- Department of Information and Communications Engineering, Tokyo Institute of Technology, Yokohama, Kanagawa, Japan
- Hiroshi Tanaka
- Department of Radiology, Ochanomizu Surugadai Clinic, Tokyo, Japan
- Itsuo Kumazawa
- Laboratory for Future Interdisciplinary Research of Science and Technology, Institute of Innovative Research, Tokyo Institute of Technology, Yokohama, Kanagawa, Japan
10
Kovacs B, Netzer N, Baumgartner M, Schrader A, Isensee F, Weißer C, Wolf I, Görtz M, Jaeger PF, Schütz V, Floca R, Gnirs R, Stenzinger A, Hohenfellner M, Schlemmer HP, Bonekamp D, Maier-Hein KH. Addressing image misalignments in multi-parametric prostate MRI for enhanced computer-aided diagnosis of prostate cancer. Sci Rep 2023; 13:19805. [PMID: 37957250] [PMCID: PMC10643562] [DOI: 10.1038/s41598-023-46747-z]
Abstract
Prostate cancer (PCa) diagnosis on multi-parametric magnetic resonance images (MRI) requires radiologists with a high level of expertise. Misalignments between the MRI sequences can be caused by patient movement, elastic soft-tissue deformations, and imaging artifacts, and they further increase the complexity of the interpretation task for radiologists. Recently, computer-aided diagnosis (CAD) tools have demonstrated potential for PCa diagnosis, typically relying on complex co-registration of the input modalities. However, there is no consensus among research groups on whether CAD systems profit from using registration. Furthermore, alternative strategies to handle multi-modal misalignments have not been explored so far. Our study introduces and compares different strategies to cope with image misalignments and evaluates them with respect to their direct effect on the diagnostic accuracy of PCa. In addition to established registration algorithms, we propose 'misalignment augmentation' as a concept to increase CAD robustness. As the results demonstrate, misalignment augmentation can not only compensate for a complete lack of registration but, when used in conjunction with registration, can also improve overall performance on an independent test set.
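The "misalignment augmentation" concept can be illustrated with a toy sketch: during training, randomly translate one modality relative to the other so the model learns to tolerate inter-sequence offsets. This integer-voxel version using `np.roll` is an illustrative assumption, not the authors' implementation, which would plausibly use sub-voxel affine or elastic transforms with proper boundary handling:

```python
import numpy as np

def misalignment_augmentation(fixed, moving, max_shift_vox=3, rng=None):
    """Simulate inter-sequence misalignment by randomly translating one
    modality relative to the other. Toy integer-voxel version: wrap-around
    via np.roll stands in for a real resampling transform."""
    rng = rng or np.random.default_rng()
    offsets = rng.integers(-max_shift_vox, max_shift_vox + 1, size=moving.ndim)
    moved = np.roll(moving, shift=tuple(offsets), axis=tuple(range(moving.ndim)))
    return fixed, moved, offsets

# Toy 3D volume standing in for an ADC map (hypothetical data)
vol = np.arange(27, dtype=float).reshape(3, 3, 3)
t2_out, adc_out, off = misalignment_augmentation(vol, vol.copy(),
                                                 rng=np.random.default_rng(0))
```

Applied on the fly, each training iteration sees a differently perturbed modality pair, which is what lets the augmentation substitute for (or complement) explicit registration.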
Affiliation(s)
- Balint Kovacs
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany; Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany; Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- Nils Netzer, Adrian Schrader, Cedric Weißer
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany; Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- Michael Baumgartner
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany; Helmholtz Imaging, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- Fabian Isensee
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany; Helmholtz Imaging, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Ivo Wolf
- Mannheim University of Applied Sciences, Mannheim, Germany
- Magdalena Görtz
- Junior Clinical Cooperation Unit 'Multiparametric Methods for Early Detection of Prostate Cancer', German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany; Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Paul F Jaeger
- Helmholtz Imaging, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany; Interactive Machine Learning Group, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Victoria Schütz, Markus Hohenfellner
- Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Ralf Floca
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Regula Gnirs
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Albrecht Stenzinger
- Institute of Pathology, University of Heidelberg Medical Center, Heidelberg, Germany
- Heinz-Peter Schlemmer
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany; German Cancer Consortium (DKTK), DKFZ, Core Center Heidelberg, Heidelberg, Germany
- David Bonekamp
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany; Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany; German Cancer Consortium (DKTK), DKFZ, Core Center Heidelberg, Heidelberg, Germany
- Klaus H Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany; Helmholtz Imaging, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany; German Cancer Consortium (DKTK), DKFZ, Core Center Heidelberg, Heidelberg, Germany; Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
11
Kim HS, Kim EJ, Kim J. Emerging Trends in Artificial Intelligence-Based Urological Imaging Technologies and Practical Applications. Int Neurourol J 2023; 27:S73-81. [PMID: 38048821] [DOI: 10.5213/inj.2346286.143]
Abstract
The integration of artificial intelligence (AI) into medical imaging has notably expanded its significance within urology. AI applications offer a broad spectrum of utilities in this domain, ranging from precise diagnosis achieved through image segmentation and anomaly detection to improved procedural assistance in biopsies and surgical interventions. Although challenges persist concerning data security, transparency, and integration into existing clinical workflows, extensive research has been conducted on AI-assisted imaging technologies while recognizing their potential to reshape urological practices. This review paper outlines current AI techniques employed for image analysis to offer an overview of the latest technological trends and applications in the field of urology.
Affiliation(s)
- Hyun Suh Kim
- School of Photography and Videography, Kyungil University, Gyeongsan, Korea
- Eun Joung Kim
- Culture Contents Technology Institute, Gachon University, Seongnam, Korea
- JungYoon Kim
- Department of Game Media, College of Future Industry, Gachon University, Seongnam, Korea
12
Garg P, Mohanty A, Ramisetty S, Kulkarni P, Horne D, Pisick E, Salgia R, Singhal SS. Artificial intelligence and allied subsets in early detection and preclusion of gynecological cancers. Biochim Biophys Acta Rev Cancer 2023; 1878:189026. [PMID: 37980945] [DOI: 10.1016/j.bbcan.2023.189026]
Abstract
Gynecological cancers, including breast, cervical, ovarian, uterine, and vaginal cancers, pose a grave threat to world health, and early identification is crucial to patient outcomes and survival rates. The application of machine learning (ML) and artificial intelligence (AI) approaches to the study of gynecological cancer has shown potential to revolutionize cancer detection and diagnosis. The current review outlines the significant advancements, obstacles, and prospects brought about by AI and ML technologies in the timely identification and accurate diagnosis of different types of gynecological cancers. AI-powered technologies can use genomic data to discover genetic alterations and biomarkers linked to a particular form of gynecologic cancer, assisting in the creation of targeted treatments. Furthermore, AI and ML technologies have been shown to greatly increase the accuracy and efficacy of cancer diagnosis in gynecologic tumors, reduce diagnostic delays, and potentially eliminate the need for needless invasive procedures. In conclusion, the review focuses on the role of integrated AI- and ML-based tools and techniques in the early detection and exclusion of various cancer types, and suggests that collaborative coordination between research clinicians, data scientists, and regulatory authorities is needed to realize the full potential of AI and ML in gynecologic cancer care.
Affiliation(s)
- Pankaj Garg
- Department of Chemistry, GLA University, Mathura, Uttar Pradesh 281406, India
- Atish Mohanty, Sravani Ramisetty, Prakash Kulkarni, Ravi Salgia, Sharad S Singhal
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- David Horne
- Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Evan Pisick
- Department of Medical Oncology, City of Hope, Chicago, IL 60099, USA
13
Hagiwara A, Fujita S, Kurokawa R, Andica C, Kamagata K, Aoki S. Multiparametric MRI: From Simultaneous Rapid Acquisition Methods and Analysis Techniques Using Scoring, Machine Learning, Radiomics, and Deep Learning to the Generation of Novel Metrics. Invest Radiol 2023; 58:548-560. [PMID: 36822661] [PMCID: PMC10332659] [DOI: 10.1097/rli.0000000000000962]
Abstract
ABSTRACT With recent advancements in rapid imaging methods, a growing number of contrasts and quantitative parameters can be acquired in ever shorter scan times. Some acquisition models simultaneously obtain multiparametric images and quantitative maps to reduce scan times and avoid potential issues associated with the registration of different images. Multiparametric magnetic resonance imaging (MRI) has the potential to provide complementary information on a target lesion and thus overcome the limitations of individual techniques. In this review, we introduce methods to acquire multiparametric MRI data in a clinically feasible scan time, with a particular focus on simultaneous acquisition techniques, and we discuss how multiparametric MRI data can be analyzed as a whole rather than each parameter separately. Such data analysis approaches include clinical scoring systems, machine learning, radiomics, and deep learning. Other techniques combine multiple images to create new quantitative maps associated with meaningful aspects of human biology. They include the magnetic resonance g-ratio, the ratio of the inner to the outer diameter of a nerve fiber, and the aerobic glycolytic index, which captures the metabolic status of tumor tissues.
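The g-ratio mentioned above is the ratio of inner (axon) to outer (axon plus myelin sheath) fiber diameter. One widely cited aggregate formulation, assumed here rather than taken from this abstract, estimates it voxel-wise from MRI-derived myelin (MVF) and axon (AVF) volume fractions:

```python
import math

def aggregate_g_ratio(mvf, avf):
    """Aggregate MR g-ratio from myelin volume fraction (MVF) and axon
    volume fraction (AVF): g = sqrt(AVF / (AVF + MVF)), i.e. the ratio of
    inner to outer fiber diameter under area-based assumptions."""
    return math.sqrt(avf / (avf + mvf))

# Hypothetical volume fractions, for illustration only
g = aggregate_g_ratio(mvf=0.1, avf=0.3)
```

With these illustrative fractions g is about 0.87; healthy white matter is typically reported near 0.6 to 0.8, but the exact value depends on how MVF and AVF are estimated.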
Affiliation(s)
- Akifumi Hagiwara, Christina Andica, Koji Kamagata, Shigeki Aoki
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Shohei Fujita
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan; Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Ryo Kurokawa
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Division of Neuroradiology, Department of Radiology, University of Michigan, Ann Arbor, Michigan
14
Priester A, Fan RE, Shubert J, Rusu M, Vesal S, Shao W, Khandwala YS, Marks LS, Natarajan S, Sonn GA. Prediction and Mapping of Intraprostatic Tumor Extent with Artificial Intelligence. Eur Urol Open Sci 2023; 54:20-27. [PMID: 37545845] [PMCID: PMC10403686] [DOI: 10.1016/j.euros.2023.05.018]
Abstract
Background Magnetic resonance imaging (MRI) underestimation of prostate cancer extent complicates the definition of focal treatment margins. Objective To validate focal treatment margins produced by an artificial intelligence (AI) model. Design, setting, and participants Testing was conducted retrospectively in an independent dataset of 50 consecutive patients who had radical prostatectomy for intermediate-risk cancer. An AI deep learning model incorporated multimodal imaging and biopsy data to produce three-dimensional cancer estimation maps and margins. AI margins were compared with conventional MRI regions of interest (ROIs), 10-mm margins around ROIs, and hemigland margins. The AI model also furnished predictions of negative surgical margin probability, which were assessed for accuracy. Outcome measurements and statistical analysis Comparing AI with conventional margins, sensitivity was evaluated using Wilcoxon signed-rank tests and negative margin rates using chi-square tests. Predicted versus observed negative margin probability was assessed using linear regression. Clinically significant prostate cancer (International Society of Urological Pathology grade ≥2) delineated on whole-mount histopathology served as ground truth. Results and limitations The mean sensitivity for cancer-bearing voxels was higher for AI margins (97%) than for conventional ROIs (37%, p < 0.001), 10-mm ROI margins (93%, p = 0.24), and hemigland margins (94%, p < 0.001). For index lesions, AI margins were more often negative (90%) than conventional ROIs (0%, p < 0.001), 10-mm ROI margins (82%, p = 0.24), and hemigland margins (66%, p = 0.004). Predicted and observed negative margin probabilities were strongly correlated (R² = 0.98, median error = 4%). Limitations include a validation dataset derived from a single institution's prostatectomy population. Conclusions The AI model was accurate and effective in an independent test set. This approach could improve and standardize treatment margin definition, potentially reducing cancer recurrence rates. Furthermore, an accurate assessment of negative margin probability could facilitate informed decision-making for patients and physicians. Patient summary Artificial intelligence was used to predict the extent of tumors in surgically removed prostate specimens. It predicted tumor margins more accurately than conventional methods.
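The two outcome measures above, voxel-wise sensitivity and the negative-margin criterion, can be sketched on binary masks. This is a simplified illustration with hypothetical arrays, not the study's evaluation pipeline:

```python
import numpy as np

def margin_metrics(margin_mask, cancer_mask):
    """Voxel-wise sensitivity of a treatment margin for cancer-bearing
    voxels, and whether the margin is 'negative' (all cancer covered)."""
    margin = np.asarray(margin_mask, dtype=bool)
    cancer = np.asarray(cancer_mask, dtype=bool)
    sensitivity = np.logical_and(margin, cancer).sum() / cancer.sum()
    return float(sensitivity), bool(sensitivity == 1.0)

# Hypothetical 2x2 masks for illustration only
cancer = np.array([[1, 1], [0, 0]])
margin = np.array([[1, 0], [0, 0]])
sens, neg = margin_metrics(margin, cancer)  # margin covers 1 of 2 cancer voxels
```

In practice the masks would be co-registered 3D volumes, with ground-truth cancer voxels derived from whole-mount histopathology as described above.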
Affiliation(s)
- Alan Priester, Shyam Natarajan
- Department of Urology, David Geffen School of Medicine, Los Angeles, CA, USA; Avenda Health, Inc., Culver City, CA, USA
- Richard E. Fan, Sulaiman Vesal, Yash Samir Khandwala
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA; Department of Medicine, University of Florida, Gainesville, FL, USA
- Leonard S. Marks
- Department of Urology, David Geffen School of Medicine, Los Angeles, CA, USA
- Geoffrey A. Sonn
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA; Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
15
Zhu M, Liang Z, Feng T, Mai Z, Jin S, Wu L, Zhou H, Chen Y, Yan W. Up-to-Date Imaging and Diagnostic Techniques for Prostate Cancer: A Literature Review. Diagnostics (Basel) 2023; 13:2283. [PMID: 37443677] [DOI: 10.3390/diagnostics13132283]
Abstract
Prostate cancer (PCa) poses great challenges for early diagnosis, which often leads not only to unnecessary, invasive procedures but also to over-diagnosis and over-treatment, highlighting the need for modern PCa diagnostic techniques. This review aims to provide an up-to-date summary of existing diagnostic approaches for PCa in chronological order, as well as their potential to improve the diagnosis of clinically significant PCa (csPCa) and to reduce the over-diagnosis and monitoring burden of PCa. Our review presents the primary outcomes of the most significant studies and compares the diagnostic efficacy of different PCa tests. Since prostate biopsy, the current mainstay of PCa diagnosis, is an invasive procedure with a high risk of post-biopsy complications, it is vital to identify specific, sensitive, and accurate diagnostic approaches for PCa and to conduct more studies with milestone findings and comparable sample sizes to validate and corroborate these findings.
Affiliation(s)
- Ming Zhu, Zhen Liang, Tianrui Feng, Zhipeng Mai, Shijie Jin, Liyi Wu, Huashan Zhou, Yuliang Chen, Weigang Yan
- Department of Urology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
16
Bashkanov O, Rak M, Meyer A, Engelage L, Lumiani A, Muschter R, Hansen C. Automatic detection of prostate cancer grades and chronic prostatitis in biparametric MRI. Comput Methods Programs Biomed 2023; 239:107624. [PMID: 37271051] [DOI: 10.1016/j.cmpb.2023.107624]
Abstract
BACKGROUND AND OBJECTIVE With emerging evidence to improve prostate cancer (PCa) screening, multiparametric magnetic resonance imaging of the prostate is becoming an essential noninvasive component of the diagnostic routine. Computer-aided diagnostic (CAD) tools powered by deep learning can help radiologists interpret multiple volumetric images. In this work, our objective was to examine promising methods recently proposed for the multigrade prostate cancer detection task and to suggest practical considerations regarding model training in this context. METHODS We collected 1647 fine-grained biopsy-confirmed findings, including Gleason scores and prostatitis, to form a training dataset. In our experimental framework for lesion detection, all models utilized a 3D nnU-Net architecture that accounts for anisotropy in the MRI data. First, we explore the optimal range of b-values for the diffusion-weighted imaging (DWI) modality and its effect on the detection of clinically significant prostate cancer (csPCa) and prostatitis using deep learning, as the optimal range is not yet clearly defined in this domain. Next, we propose a simulated multimodal shift as a data augmentation technique to compensate for the multimodal shift present in the data. Third, we study the effect of incorporating the prostatitis class alongside cancer-related findings at three granularities of the prostate cancer class (coarse, medium, and fine) and its impact on the detection rate of the target csPCa. Furthermore, ordinal and one-hot encoded (OHE) output formulations were tested. RESULTS The optimal model configuration, with fine class granularity (prostatitis included) and OHE, scored a lesion-wise partial Free-Response Receiver Operating Characteristic (FROC) area under the curve (AUC) of 1.94 (95% CI: 1.76-2.11) and a patient-wise ROC AUC of 0.874 (95% CI: 0.793-0.938) in the detection of csPCa. Inclusion of the auxiliary prostatitis class demonstrated a stable relative improvement in specificity at a false positive rate (FPR) of 1.0 per patient, with increases of 3%, 7%, and 4% for coarse, medium, and fine class granularities, respectively. CONCLUSIONS This paper examines several configurations for model training in the biparametric MRI setup and proposes optimal value ranges. It also shows that the fine-grained class configuration, including prostatitis, is beneficial for detecting csPCa. The ability to detect prostatitis in all low-risk cancer lesions suggests the potential to improve the quality of early diagnosis of prostate diseases, and implies improved interpretability of the results by the radiologist.
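The one-hot versus ordinal output formulations compared above differ in how an ordered grade label is turned into training targets. A small illustrative sketch: the encodings are standard, but the grade values and class count are hypothetical, not taken from the paper:

```python
import numpy as np

def one_hot(labels, n_classes):
    """Standard one-hot encoding: exactly one active column per label."""
    out = np.zeros((len(labels), n_classes), dtype=int)
    out[np.arange(len(labels)), labels] = 1
    return out

def ordinal(labels, n_classes):
    """Cumulative ('thermometer') encoding for ordered classes: column k
    means 'label >= k+1', so adjacent grades share most targets."""
    thresholds = np.arange(1, n_classes)
    return (np.asarray(labels)[:, None] >= thresholds).astype(int)

grades = [0, 1, 3]           # hypothetical ordered grade indices
oh = one_hot(grades, 4)      # rows: [1,0,0,0], [0,1,0,0], [0,0,0,1]
od = ordinal(grades, 4)      # rows: [0,0,0], [1,0,0], [1,1,1]
```

Ordinal encoding penalizes predictions that are off by one grade less than distant errors, whereas one-hot treats all misclassifications as equally wrong; which works better is an empirical question, as the study's comparison illustrates.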
Affiliation(s)
- Oleksii Bashkanov, Marko Rak, Anneke Meyer, Christian Hansen
- Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Universitätsplatz 2, Magdeburg 39106, Germany
17
Rajagopal A, Redekop E, Kemisetti A, Kulkarni R, Raman S, Sarma K, Magudia K, Arnold CW, Larson PEZ. Federated Learning with Research Prototypes: Application to Multi-Center MRI-based Detection of Prostate Cancer with Diverse Histopathology. Acad Radiol 2023; 30:644-657. [PMID: 36914501] [PMCID: PMC10869141] [DOI: 10.1016/j.acra.2023.02.012]
Abstract
RATIONALE AND OBJECTIVES Early prostate cancer detection and staging from MRI is extremely challenging for both radiologists and deep learning algorithms, but the potential to learn from large and diverse datasets remains a promising avenue to increase their performance within and across institutions. To enable this for prototype-stage algorithms, where the majority of existing research remains, we introduce a flexible federated learning framework for cross-site training, validation, and evaluation of custom deep learning prostate cancer detection algorithms. MATERIALS AND METHODS We introduce an abstraction of prostate cancer groundtruth that represents diverse annotation and histopathology data. We maximize use of this groundtruth if and when they are available using UCNet, a custom 3D UNet that enables simultaneous supervision of pixel-wise, region-wise, and gland-wise classification. We leverage these modules to perform cross-site federated training using 1400+ heterogeneous multi-parameteric prostate MRI exams from two University hospitals. RESULTS We observe a positive result, with significant improvements in cross-site generalization performance with negligible intra-site performance degradation for both lesion segmentation and per-lesion binary classification of clinically-significant prostate cancer. Cross-site lesion segmentation performance intersection-over-union (IoU) improved by 100%, while cross-site lesion classification performance overall accuracy improved by 9.5-14.8%, depending on the optimal checkpoint selected by each site. CONCLUSION Federated learning can improve the generalization performance of prostate cancer detection models across institutions while protecting patient health information and institution-specific code and data. However, even more data and participating institutions are likely required to improve the absolute performance of prostate cancer classification models. 
To enable adoption of federated learning with limited re-engineering of federated components, we open-source our FLtools system at https://federated.ucsf.edu, including examples that can be easily adapted to other medical imaging deep learning projects.
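The cross-site training described above rests on a central aggregation step in which per-site model updates are merged. A minimal sketch of size-weighted federated averaging (FedAvg-style); the function name, toy weight vectors, and site sizes are purely illustrative and not taken from the FLtools code:

```python
# Hypothetical sketch of federated weight averaging: each site trains
# locally, then a server merges the weights proportionally to local
# dataset size. Toy flat weight vectors stand in for real model tensors.

def fedavg(site_weights, site_sizes):
    """Average per-site model weights, weighted by local dataset size."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    merged = [0.0] * n_params
    for weights, size in zip(site_weights, site_sizes):
        for i, w in enumerate(weights):
            merged[i] += w * (size / total)
    return merged

# Two hypothetical sites with different local dataset sizes:
site_a = [0.2, 0.4]   # site with 1000 exams
site_b = [0.6, 0.0]   # site with 400 exams
print(fedavg([site_a, site_b], [1000, 400]))
```

In practice the same weighting is applied tensor-by-tensor to the full model state, and only the weights (not the exams) leave each institution.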
|
18
|
Nematollahi H, Moslehi M, Aminolroayaei F, Maleki M, Shahbazi-Gahrouei D. Diagnostic Performance Evaluation of Multiparametric Magnetic Resonance Imaging in the Detection of Prostate Cancer with Supervised Machine Learning Methods. Diagnostics (Basel) 2023; 13:diagnostics13040806. [PMID: 36832294 PMCID: PMC9956028 DOI: 10.3390/diagnostics13040806] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2023] [Revised: 02/15/2023] [Accepted: 02/17/2023] [Indexed: 02/25/2023] Open
Abstract
Prostate cancer is the second leading cause of cancer-related death in men. Early and correct diagnosis is particularly important for controlling the disease and preventing it from spreading to other tissues. Artificial intelligence and machine learning have effectively detected and graded several cancers, in particular prostate cancer. The purpose of this review is to show the diagnostic performance (accuracy and area under the curve) of supervised machine learning algorithms in detecting prostate cancer using multiparametric MRI, and to compare the performance of different supervised machine learning methods. This review covers the recent literature, sourced from scientific citation databases such as Google Scholar, PubMed, Scopus, and Web of Science, up to the end of January 2023. The findings reveal that supervised machine learning techniques perform well, with high accuracy and area under the curve, for prostate cancer diagnosis and prediction using multiparametric MR imaging. Among supervised machine learning methods, deep learning, random forest, and logistic regression algorithms appear to perform best.
|
19
|
Rouvière O, Jaouen T, Baseilhac P, Benomar ML, Escande R, Crouzet S, Souchon R. Artificial intelligence algorithms aimed at characterizing or detecting prostate cancer on MRI: How accurate are they when tested on independent cohorts? – A systematic review. Diagn Interv Imaging 2022; 104:221-234. [PMID: 36517398 DOI: 10.1016/j.diii.2022.11.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2022] [Accepted: 11/22/2022] [Indexed: 12/14/2022]
Abstract
PURPOSE The purpose of this study was to perform a systematic review of the literature on the diagnostic performance, in independent test cohorts, of artificial intelligence (AI)-based algorithms aimed at characterizing/detecting prostate cancer on magnetic resonance imaging (MRI). MATERIALS AND METHODS Medline, Embase and Web of Science were searched for studies published between January 2018 and September 2022, using a histological reference standard, and assessing prostate cancer characterization/detection by AI-based MRI algorithms in test cohorts composed of more than 40 patients and with at least one of the following independency criteria as compared to the training cohort: different institution, different population type, different MRI vendor, different magnetic field strength or strict temporal splitting. RESULTS Thirty-five studies were selected. The overall risk of bias was low. However, 23 studies did not use predefined diagnostic thresholds, which may have optimistically biased the results. Test cohorts fulfilled one to three of the five independency criteria. The diagnostic performance of the algorithms used as standalones was good, challenging that of human reading. In the 12 studies with predefined diagnostic thresholds, radiomics-based computer-aided diagnosis systems (assessing regions-of-interest drawn by the radiologist) tended to provide more robust results than deep learning-based computer-aided detection systems (providing probability maps). Two of the six studies comparing unassisted and assisted reading showed significant improvement due to the algorithm, mostly by reducing false positive findings. CONCLUSION Prostate MRI AI-based algorithms showed promising results, especially for the relatively simple task of characterizing predefined lesions. The best management of discrepancies between human reading and algorithm findings still needs to be defined.
|
20
|
Lu X, Zhang S, Liu Z, Liu S, Huang J, Kong G, Li M, Liang Y, Cui Y, Yang C, Zhao S. Ultrasonographic pathological grading of prostate cancer using automatic region-based Gleason grading network. Comput Med Imaging Graph 2022; 102:102125. [PMID: 36257091 DOI: 10.1016/j.compmedimag.2022.102125] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Revised: 08/26/2022] [Accepted: 09/20/2022] [Indexed: 11/05/2022]
Abstract
The Gleason scoring system is a reliable method for quantifying the aggressiveness of prostate cancer, providing an important reference for clinical assessment of therapeutic strategies. However, to the best of our knowledge, no study has addressed the pathological grading of prostate cancer from single ultrasound images. In this work, a novel Automatic Region-based Gleason Grading (ARGG) network for prostate cancer based on deep learning is proposed. ARGG consists of two stages: (1) a region labeling object detection (RLOD) network is designed to label the prostate cancer lesion region; (2) a Gleason grading network (GNet) is proposed for pathological grading of prostate ultrasound images. In RLOD, a new feature fusion structure, the Skip-connected Feature Pyramid Network (CFPN), is proposed as an auxiliary branch for extracting features and enhancing the fusion of high-level and low-level features, which helps to detect small lesions and extract image detail. In GNet, a synchronized pulse enhancement module (SPEM) based on pulse-coupled neural networks enhances the RLOD detection results for use as training samples; the enhanced results and the originals are then fed into the channel attention classification network (CACN), which introduces an attention mechanism to benefit the prediction of cancer grading. Experiments on a dataset of prostate ultrasound images collected from hospitals show that the proposed Gleason grading model outperforms manual diagnosis by physicians, with a precision of 0.830. In addition, the lesion detection performance of RLOD achieves a mean Dice metric of 0.815.
|
21
|
Huang SY, Hsu WL, Hsu RJ, Liu DW. Fully Convolutional Network for the Semantic Segmentation of Medical Images: A Survey. Diagnostics (Basel) 2022; 12:diagnostics12112765. [PMID: 36428824 PMCID: PMC9689961 DOI: 10.3390/diagnostics12112765] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2022] [Revised: 10/19/2022] [Accepted: 11/04/2022] [Indexed: 11/16/2022] Open
Abstract
There have been major developments in deep learning in computer vision since the 2010s. Deep learning has contributed to a wealth of data in medical image processing, and semantic segmentation is a salient technique in this field. This study retrospectively reviews recent studies on the application of deep learning for segmentation tasks in medical imaging and proposes potential directions for future development, including model development, data augmentation processing, and dataset creation. The strengths and deficiencies of studies on models and data augmentation, as well as their application to medical image segmentation, were analyzed. Fully convolutional network developments have led to the creation of the U-Net and its derivatives. Another noteworthy image segmentation model is DeepLab. Regarding data augmentation, due to the low data volume of medical images, most studies focus on means to increase the wealth of medical image data. Generative adversarial networks (GAN) increase data volume via deep learning. Despite the increasing types of medical image datasets, there is still a deficiency of datasets on specific problems, which should be improved moving forward. Considering the wealth of ongoing research on the application of deep learning processing to medical image segmentation, the data volume and practical clinical application problems must be addressed to ensure that the results are properly applied.
|
22
|
Adams LC, Makowski MR, Engel G, Rattunde M, Busch F, Asbach P, Niehues SM, Vinayahalingam S, van Ginneken B, Litjens G, Bressem KK. Dataset of prostate MRI annotated for anatomical zones and cancer. Data Brief 2022; 45:108739. [DOI: 10.1016/j.dib.2022.108739] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 11/03/2022] [Accepted: 11/04/2022] [Indexed: 11/11/2022] Open
|
23
|
Kim H, Margolis DJA, Nagar H, Sabuncu MR. Pulse Sequence Dependence of a Simple and Interpretable Deep Learning Method for Detection of Clinically Significant Prostate Cancer Using Multiparametric MRI. Acad Radiol 2022; 30:966-970. [PMID: 36334976 DOI: 10.1016/j.acra.2022.10.005] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2022] [Revised: 09/18/2022] [Accepted: 10/03/2022] [Indexed: 01/15/2023]
Abstract
RATIONALE AND OBJECTIVES Multiparametric magnetic resonance imaging (mpMRI) is increasingly used for risk stratification and localization of prostate cancer (PCa). Given the success of deep learning models in computer vision, their application to early detection of PCa using mpMRI is imminent. MATERIALS AND METHODS Deep learning analysis of the PROSTATEx dataset. RESULTS In this study, we show that a simple convolutional neural network (CNN) with mpMRI can achieve high performance for detection of clinically significant PCa (csPCa), depending on the pulse sequences used. The mpMRI model with T2-ADC-DWI achieved a 0.90 AUC score on the held-out test set, not significantly better than the model using Ktrans instead of DWI (AUC 0.89). Interestingly, the model incorporating T2-ADC-Ktrans better estimates grade. We also describe a saliency "heat" map. Our results show that csPCa detection models with mpMRI may be leveraged to guide clinical management strategies. CONCLUSION Convolutional neural networks incorporating multiple pulse sequences show high performance for detection of clinically significant prostate cancer, and the model including dynamic contrast-enhanced information correlates best with grade.
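The AUC values reported above (0.90 vs 0.89) summarize how well model scores rank positive lesions above negative ones. A minimal pure-Python sketch of AUC via the Mann-Whitney formulation; the toy labels and scores below are illustrative, not from the PROSTATEx experiments:

```python
# AUC = probability that a randomly chosen positive case receives a
# higher score than a randomly chosen negative case (ties count half).

def roc_auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # count pairwise "wins" of positives over negatives
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1, 0]                    # toy csPCa ground truth
scores = [0.9, 0.7, 0.6, 0.3, 0.8, 0.7]        # toy CNN outputs
print(roc_auc(labels, scores))
```

This pairwise definition is equivalent to the area under the ROC curve and is threshold-free, which is why it is the standard summary for detection models like those compared above.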
|
24
|
Li C, Li W, Liu C, Zheng H, Cai J, Wang S. Artificial intelligence in multi-parametric magnetic resonance imaging: A review. Med Phys 2022; 49:e1024-e1054. [PMID: 35980348 DOI: 10.1002/mp.15936] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 08/01/2022] [Accepted: 08/04/2022] [Indexed: 11/06/2022] Open
Abstract
Multi-parametric magnetic resonance imaging (mpMRI) is an indispensable tool in the clinical workflow for the diagnosis and treatment planning of various diseases. Machine learning-based artificial intelligence (AI) methods, especially those adopting the deep learning technique, have been extensively employed to perform mpMRI image classification, segmentation, registration, detection, reconstruction, and super-resolution. The current availability of increasing computational power and fast-improving AI algorithms have empowered numerous computer-based systems for applying mpMRI to disease diagnosis, imaging-guided radiotherapy, patient risk and overall survival time prediction, and the development of advanced quantitative imaging technology for magnetic resonance fingerprinting. However, the wide application of these developed systems in the clinic is still limited by a number of factors, including robustness, reliability, and interpretability. This survey aims to provide an overview for new researchers in the field as well as radiologists with the hope that they can understand the general concepts, main application scenarios, and remaining challenges of AI in mpMRI. This article is protected by copyright. All rights reserved.
|
25
|
Sunoqrot MRS, Saha A, Hosseinzadeh M, Elschot M, Huisman H. Artificial intelligence for prostate MRI: open datasets, available applications, and grand challenges. Eur Radiol Exp 2022; 6:35. [PMID: 35909214 PMCID: PMC9339427 DOI: 10.1186/s41747-022-00288-8] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Accepted: 05/09/2022] [Indexed: 11/29/2022] Open
Abstract
Artificial intelligence (AI) for prostate magnetic resonance imaging (MRI) is starting to play a clinical role for prostate cancer (PCa) patients. AI-assisted reading is feasible, allowing workflow reduction. A total of 3,369 multi-vendor prostate MRI cases are available in open datasets, acquired from 2003 to 2021 in Europe or the USA at 3 T (n = 3,018; 89.6%) or 1.5 T (n = 296; 8.8%). Of these, 346 cases (10.3%) were scanned with an endorectal coil and 3,023 (89.7%) with phased-array surface coils; 412 were collected for anatomical segmentation tasks and 3,096 for PCa detection/classification. Lesion delineations are available for 2,240 cases, matching histopathologic images for 56 cases, and the PSA level for 2,620 cases; the total size of all open datasets amounts to approximately 253 GB. Of note, the quality of the annotations differs greatly between datasets, and attention must be paid when using them (e.g., to data overlap). Seven grand challenges and commercial applications from eleven vendors are considered here. Few small studies provided prospective validation. More work is needed, in particular validation on large-scale, multi-institutional, well-curated public datasets to test general applicability. Moreover, AI needs to be explored for clinical stages other than detection/characterization (e.g., follow-up, prognosis, interventions, and focal treatment).
|
26
|
Adams LC, Makowski MR, Engel G, Rattunde M, Busch F, Asbach P, Niehues SM, Vinayahalingam S, van Ginneken B, Litjens G, Bressem KK. Prostate158 - An expert-annotated 3T MRI dataset and algorithm for prostate cancer detection. Comput Biol Med 2022; 148:105817. [PMID: 35841780 DOI: 10.1016/j.compbiomed.2022.105817] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2022] [Revised: 06/12/2022] [Accepted: 07/03/2022] [Indexed: 11/03/2022]
Abstract
BACKGROUND The development of deep learning (DL) models for prostate segmentation on magnetic resonance imaging (MRI) depends on expert-annotated data and reliable baselines, which are often not publicly available. This limits both reproducibility and comparability. METHODS Prostate158 consists of 158 expert-annotated biparametric 3T prostate MRIs comprising T2w sequences and diffusion-weighted sequences with apparent diffusion coefficient maps. Two U-ResNets trained for segmentation of anatomy (central gland, peripheral zone) and of suspicious lesions for prostate cancer (PCa) with a PI-RADS score of ≥4 served as baseline algorithms. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), the Hausdorff distance (HD), and the average surface distance (ASD). The Wilcoxon test with Bonferroni correction was used to evaluate differences in performance. The generalizability of the baseline model was assessed using the open datasets Medical Segmentation Decathlon and PROSTATEx. RESULTS Compared to Reader 1, the models achieved a DSC/HD/ASD of 0.88/18.3/2.2 for the central gland, 0.75/22.8/1.9 for the peripheral zone, and 0.45/36.7/17.4 for PCa. Compared with Reader 2, the DSC/HD/ASD were 0.88/17.5/2.6 for the central gland, 0.73/33.2/1.9 for the peripheral zone, and 0.4/39.5/19.1 for PCa. Interrater agreement measured in DSC/HD/ASD was 0.87/11.1/1.0 for the central gland, 0.75/15.8/0.74 for the peripheral zone, and 0.6/18.8/5.5 for PCa. Segmentation performance (DSC/HD/ASD) was 0.82/22.5/3.4 on the Medical Segmentation Decathlon and 0.86/18.6/2.5 on PROSTATEx for the central gland, and 0.64/29.2/4.7 and 0.71/26.3/2.2, respectively, for the peripheral zone. CONCLUSIONS We provide an openly accessible, expert-annotated 3T dataset of prostate MRI and a reproducible benchmark to foster the development of prostate segmentation algorithms.
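The DSC values above quantify voxel overlap between a predicted and a reference segmentation: DSC = 2|A∩B| / (|A| + |B|). A minimal sketch on toy 1-D binary masks (the real metric is computed over 3-D voxel masks, and the mask values here are invented for illustration):

```python
# Dice similarity coefficient on flattened binary masks.
# By convention, two empty masks are treated as perfect agreement.

def dice(a, b):
    inter = sum(x and y for x, y in zip(a, b))   # |A ∩ B|
    denom = sum(a) + sum(b)                      # |A| + |B|
    return 2.0 * inter / denom if denom else 1.0

pred  = [1, 1, 0, 1, 0, 0]   # toy predicted mask
truth = [1, 0, 0, 1, 1, 0]   # toy reference mask
print(dice(pred, truth))     # 2*2 / (3+3) = 0.666...
```

HD and ASD, by contrast, are boundary-distance metrics, which is why a model can score a reasonable DSC while still showing large HD outliers.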
|
27
|
Gunashekar DD, Bielak L, Hägele L, Oerther B, Benndorf M, Grosu AL, Brox T, Zamboglou C, Bock M. Explainable AI for CNN-based prostate tumor segmentation in multi-parametric MRI correlated to whole mount histopathology. Radiat Oncol 2022; 17:65. [PMID: 35366918 PMCID: PMC8976981 DOI: 10.1186/s13014-022-02035-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2022] [Accepted: 03/15/2022] [Indexed: 12/15/2022] Open
Abstract
Automatic prostate tumor segmentation often fails to identify the lesion even when multi-parametric MRI data are used as input, and the segmentation output is difficult to verify due to the lack of clinically established ground truth images. In this work we use an explainable deep learning model to interpret the predictions of a convolutional neural network (CNN) for prostate tumor segmentation. The CNN uses a U-Net architecture which was trained on multi-parametric MRI data from 122 patients to automatically segment the prostate gland and prostate tumor lesions. In addition, co-registered ground truth data from whole-mount histopathology images were available in 15 patients, which were used as a test set during CNN testing. To interpret the segmentation results of the CNN, heat maps were generated using the Gradient-weighted Class Activation Mapping (Grad-CAM) method. The CNN achieved a mean Dice-Sørensen coefficient of 0.62 for the prostate gland and 0.31 for the tumor lesions against the radiologist-drawn ground truth, and 0.32 for the tumor lesions against the whole-mount histology ground truth. Dice-Sørensen coefficients between CNN predictions and manual segmentations from MRI and histology data were not significantly different. Within the prostate, the Grad-CAM heat maps could differentiate between tumor and healthy prostate tissue, which indicates that the image information in the tumor was essential for the CNN segmentation.
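The Grad-CAM heat maps above weight each feature-map channel by its spatially averaged gradient and apply a ReLU to the weighted sum. A minimal sketch of that combination step with toy 2x2 activation and gradient maps (illustrative values only, not the authors' implementation, which operates on real CNN feature maps):

```python
# Grad-CAM combination step: alpha_k = global-average-pooled gradient of
# channel k; heat map = ReLU( sum_k alpha_k * A_k ).

def grad_cam(activations, gradients):
    weighted = []
    for act, grad in zip(activations, gradients):
        flat = [g for row in grad for g in row]
        alpha = sum(flat) / len(flat)            # channel weight alpha_k
        weighted.append([[alpha * a for a in row] for row in act])
    h, w = len(weighted[0]), len(weighted[0][0])
    # sum over channels, then ReLU to keep only positive evidence
    return [[max(0.0, sum(m[i][j] for m in weighted)) for j in range(w)]
            for i in range(h)]

acts  = [[[1.0, 0.0], [0.0, 2.0]], [[0.5, 0.5], [0.5, 0.5]]]  # toy A_k
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-2.0, -2.0], [-2.0, -2.0]]]
print(grad_cam(acts, grads))
```

The ReLU is what makes the map class-discriminative: channels whose gradients argue against the class are suppressed rather than subtracted into negative regions.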
|
28
|
Ayyad SM, Badawy MA, Shehata M, Alksas A, Mahmoud A, Abou El-Ghar M, Ghazal M, El-Melegy M, Abdel-Hamid NB, Labib LM, Ali HA, El-Baz A. A New Framework for Precise Identification of Prostatic Adenocarcinoma. Sensors 2022; 22:s22051848. [PMID: 35270995 PMCID: PMC8915102 DOI: 10.3390/s22051848] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Revised: 02/21/2022] [Accepted: 02/24/2022] [Indexed: 02/01/2023]
Abstract
Prostate cancer, also known as prostatic adenocarcinoma, is an unconstrained growth of epithelial cells in the prostate and has become one of the leading causes of cancer-related death worldwide. The survival of patients with prostate cancer relies on detection at an early, treatable stage. In this paper, we introduce a new comprehensive framework to precisely differentiate between malignant and benign prostate lesions. This framework proposes a noninvasive computer-aided diagnosis system that integrates two MR imaging modalities (diffusion-weighted (DW) and T2-weighted (T2W)). For the first time, it combines functional features, represented by apparent diffusion coefficient (ADC) maps estimated from DW-MRI of the whole prostate; texture features, in their first- and second-order representations, extracted from T2W-MRIs of the whole prostate; and shape features, represented by spherical harmonics constructed for the lesion inside the prostate, integrated with PSA screening results. The dataset presented in the paper includes 80 biopsy-confirmed patients with a mean age of 65.7 years (43 benign prostatic hyperplasia, 37 prostatic carcinomas). Experiments were conducted using well-known machine learning approaches, including support vector machine (SVM), random forest (RF), decision tree (DT), and linear discriminant analysis (LDA) classification models, to study which feature sets lead to better identification of prostatic adenocarcinoma.
Using a leave-one-out cross-validation approach, the diagnostic results obtained with the SVM classification model and the combined feature set after feature selection (88.75% accuracy, 81.08% sensitivity, 95.35% specificity, and 0.8821 AUC) indicated that the system, after integrating and reducing the different feature sets, achieved better diagnostic performance than any individual feature set or other machine learning classifier. In addition, the developed system provided consistent diagnostic performance under 10-fold and 5-fold cross-validation, confirming its reliability, generalization ability, and robustness.
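The leave-one-out protocol above fits the model once per patient, each time holding that single patient out for testing and averaging the resulting accuracy. A minimal sketch using a toy 1-nearest-neighbour classifier on a single hypothetical scalar feature; the feature values, labels, and classifier choice are invented for illustration and are not the paper's SVM pipeline:

```python
# Leave-one-out cross-validation: n folds of size 1, so every sample is
# tested exactly once on a model that never saw it during fitting.

def loo_accuracy(X, y):
    correct = 0
    for i in range(len(X)):
        train = [(x, label) for j, (x, label) in enumerate(zip(X, y))
                 if j != i]
        # toy 1-NN "model": predict the label of the closest training point
        pred = min(train, key=lambda t: abs(t[0] - X[i]))[1]
        correct += (pred == y[i])
    return correct / len(X)

X = [0.1, 0.2, 0.3, 0.8, 0.9, 1.0]   # hypothetical scalar feature values
y = [0, 0, 0, 1, 1, 1]               # 0 = benign, 1 = malignant (toy)
print(loo_accuracy(X, y))
```

Because each fold uses n-1 samples for training, LOOCV wastes no data, which is why it suits small cohorts like the 80-patient dataset above, at the cost of n model fits.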
|