1
Bhattacharya I, Lim DS, Aung HL, Liu X, Seetharaman A, Kunder CA, Shao W, Soerensen SJC, Fan RE, Ghanouni P, To'o KJ, Brooks JD, Sonn GA. Bridging the gap between prostate radiology and pathology through machine learning. Med Phys 2022; 49:5160-5181. [PMID: 35633505] [PMCID: PMC9543295] [DOI: 10.1002/mp.15777]
Abstract
Background Prostate cancer remains the second deadliest cancer for American men despite clinical advances. Magnetic resonance imaging (MRI) is currently considered the most sensitive non-invasive imaging modality for visualizing, detecting, and localizing prostate cancer, and is increasingly used to guide targeted biopsies for prostate cancer diagnosis. Its utility remains limited, however, by high false-positive and false-negative rates and low inter-reader agreement. Purpose Machine learning methods to detect and localize cancer on prostate MRI can help standardize radiologist interpretations. However, existing machine learning methods vary not only in model architecture but also in the ground-truth labeling strategy used for training. We compare different labeling strategies and their effects on the performance of different machine learning models for prostate cancer detection on MRI. Methods Four deep learning models (SPCNet, U-Net, branched U-Net, and DeepLabv3+) were trained to detect prostate cancer on MRI using 75 radical prostatectomy patients, and evaluated using 40 radical prostatectomy patients and 275 targeted biopsy patients. Each model was trained with four label types: pathology-confirmed radiologist labels; pathologist labels on whole-mount histopathology images; and lesion-level and pixel-level digital pathologist labels (from a previously validated deep learning algorithm that predicts pixel-level Gleason patterns on histopathology images) on whole-mount histopathology images. The pathologist and digital pathologist labels (collectively referred to as pathology labels) were mapped onto pre-operative MRI using an automated MRI-histopathology registration platform. Results Radiologist labels missed cancers (ROC-AUC: 0.75-0.84), covered smaller lesion volumes (~68% of pathology lesion volume), and had lower Dice overlaps (0.24-0.28) than pathology labels. Consequently, machine learning models trained with radiologist labels performed worse than models trained with pathology labels. Digital pathologist labels showed high concordance with pathologist cancer labels (lesion ROC-AUC: 0.97-1, lesion Dice: 0.75-0.93). Machine learning models trained with digital pathologist labels had the highest lesion detection rates in the radical prostatectomy cohort (aggressive lesion ROC-AUC: 0.91-0.94), and generalized to the targeted biopsy cohort with performance comparable to pathologist-label-trained models (aggressive lesion ROC-AUC: 0.87-0.88), irrespective of deep learning architecture. Moreover, models trained with pixel-level digital pathologist labels could selectively identify aggressive and indolent cancer components in mixed lesions on MRI, which is not possible with any human-annotated label type. Conclusions Machine learning models for prostate MRI interpretation trained with digital pathologist labels showed higher or comparable performance relative to pathologist-label-trained models in both the radical prostatectomy and targeted biopsy cohorts. Digital pathologist labels can reduce the labor, time, and inter- and intra-reader variability associated with human annotation, and can help bridge the gap between prostate radiology and pathology by enabling the training of reliable machine learning models to detect and localize prostate cancer on MRI.
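The Dice overlap used above to quantify agreement between label types can be illustrated with a minimal sketch (the function and the toy lesion masks below are illustrative, not taken from the paper):

```python
def dice_overlap(mask_a, mask_b):
    """Dice coefficient between two binary masks, each given as a set
    of (row, col) pixel coordinates labeled as cancer."""
    if not mask_a and not mask_b:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Toy example: two 4-pixel "lesion" outlines sharing 2 pixels
radiologist_label = {(1, 1), (1, 2), (2, 1), (2, 2)}
pathology_label = {(1, 2), (1, 3), (2, 2), (2, 3)}
print(dice_overlap(radiologist_label, pathology_label))  # 2*2/(4+4) = 0.5
```

A Dice of 0.24-0.28, as reported for radiologist versus pathology labels, thus indicates that the outlines share only about a quarter of their combined area.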
Affiliation(s)
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305; Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- David S Lim
- Department of Computer Science, Stanford University, Stanford, CA 94305
- Han Lin Aung
- Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA 94305
- Xingchen Liu
- Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA 94305
- Arun Seetharaman
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305
- Christian A Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA 94305
- Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305
- Simon J C Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305; Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, CA 94305
- Richard E Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305; Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- Katherine J To'o
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305; Department of Radiology, VA Palo Alto Health Care System, Palo Alto, CA 94304
- James D Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- Geoffrey A Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305; Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305
2
Wang NN, Zhou SR, Chen L, Tibshirani R, Fan RE, Ghanouni P, Thong AE, To'o KJ, Amirkhiz K, Nix JW, Gordetsky JB, Sprenkle P, Rais-Bahrami S, Sonn GA. The Stanford Prostate Cancer Calculator: Development and external validation of online nomograms incorporating PIRADS scores to predict clinically significant prostate cancer. Urol Oncol 2021; 39:831.e19-831.e27. [PMID: 34247909] [DOI: 10.1016/j.urolonc.2021.06.004]
Abstract
BACKGROUND While multiparametric MRI (mpMRI) has high sensitivity for detection of clinically significant prostate cancer (CSC), false positives and negatives remain common. Calculators that combine mpMRI with clinical variables can improve cancer risk assessment, while providing more accurate predictions for individual patients. We sought to create and externally validate nomograms incorporating Prostate Imaging Reporting and Data System (PIRADS) scores and clinical data to predict the presence of CSC in men of all biopsy backgrounds. METHODS Data from 2125 men undergoing mpMRI and MR fusion biopsy from 2014 to 2018 at Stanford, Yale, and UAB were prospectively collected. Clinical data included age, race, PSA, biopsy status, PIRADS scores, and prostate volume. A nomogram predicting detection of CSC on targeted or systematic biopsy was created. RESULTS Biopsy history, Prostate Specific Antigen (PSA) density, PIRADS score of 4 or 5, Caucasian race, and age were significant independent predictors. Our nomogram-the Stanford Prostate Cancer Calculator (SPCC)-combined these factors in a logistic regression to provide stronger predictive accuracy than PSA density or PIRADS alone. Validation of the SPCC using data from Yale and UAB yielded robust AUC values. CONCLUSIONS The SPCC combines pre-biopsy mpMRI with clinical data to more accurately predict the probability of CSC in men of all biopsy backgrounds. The SPCC demonstrates strong external generalizability with successful validation in two separate institutions. The calculator is available as a free web-based tool that can direct real-time clinical decision-making.
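The nomogram described above combines its predictors in a logistic regression. A minimal sketch of that structure follows; the coefficients are entirely hypothetical placeholders for illustration (the published SPCC coefficients are not reproduced here):

```python
import math

# Hypothetical coefficients, for illustration only.
COEFS = {
    "intercept": -4.0,
    "psa_density": 6.0,        # PSA (ng/mL) divided by prostate volume (mL)
    "pirads_4_or_5": 1.5,      # 1 if the highest PIRADS score is 4 or 5
    "prior_negative_biopsy": -0.8,
    "age_per_year": 0.04,
}

def predicted_risk(psa_density, pirads_4_or_5, prior_negative_biopsy, age):
    """Logistic-regression-style probability of clinically significant cancer."""
    z = (COEFS["intercept"]
         + COEFS["psa_density"] * psa_density
         + COEFS["pirads_4_or_5"] * pirads_4_or_5
         + COEFS["prior_negative_biopsy"] * prior_negative_biopsy
         + COEFS["age_per_year"] * age)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link maps z to (0, 1)

# Example: biopsy-naive 65-year-old, PSA density 0.15, PIRADS 4 lesion
risk = predicted_risk(psa_density=0.15, pirads_4_or_5=1,
                      prior_negative_biopsy=0, age=65)
print(round(risk, 3))
```

A web nomogram of this kind simply evaluates such a fitted equation for each patient's inputs, which is why it can combine PIRADS with clinical variables more informatively than any single predictor alone.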
Affiliation(s)
- Nancy N Wang
- Department of Urology, Stanford University School of Medicine, Stanford, CA
- Steve R Zhou
- Department of Urology, Stanford University School of Medicine, Stanford, CA
- Leo Chen
- Department of Urology, Stanford University School of Medicine, Stanford, CA
- Robert Tibshirani
- Departments of Biomedical Data Science and Statistics, Stanford University, Stanford, CA
- Richard E Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA
- Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA
- Alan E Thong
- Department of Urology, Stanford University School of Medicine, Stanford, CA
- Katherine J To'o
- Department of Radiology, Stanford University School of Medicine, Stanford, CA
- Kamyar Amirkhiz
- Department of Urology, Yale School of Medicine, New Haven, CT
- Jeffrey W Nix
- Department of Urology, University of Alabama at Birmingham, Birmingham, AL; O'Neal Comprehensive Cancer Center at UAB, University of Alabama at Birmingham, Birmingham, AL
- Jennifer B Gordetsky
- Department of Urology, University of Alabama at Birmingham, Birmingham, AL; Department of Pathology, University of Alabama at Birmingham, Birmingham, AL
- Soroush Rais-Bahrami
- Department of Urology, University of Alabama at Birmingham, Birmingham, AL; O'Neal Comprehensive Cancer Center at UAB, University of Alabama at Birmingham, Birmingham, AL; Department of Radiology, University of Alabama at Birmingham, Birmingham, AL
- Geoffrey A Sonn
- Department of Urology, Stanford University School of Medicine, Stanford, CA; Department of Radiology, Stanford University School of Medicine, Stanford, CA
3
Abstract
OBJECTIVE The objective of this study was to evaluate the performance of routine helical liver CT in the detection and grading of esophageal varices in cirrhotic patients. MATERIALS AND METHODS A total of 67 consecutive cirrhotic patients who underwent both upper endoscopy and helical liver CT within a 4-week interval were evaluated. The CT protocol included unenhanced, arterial, and portal phases with a collimation of 7-7.5 mm. Two blinded abdominal imagers (6 and 7 years' experience) retrospectively interpreted all CT images to detect the presence of esophageal varices on a 5-point confidence scale and measure the largest varix identified. Receiver operating characteristic (ROC) curve analysis was performed, and the correlation between CT measurements and endoscopic grading, the reference standard, was assessed. RESULTS Variceal detection rates for the two observers were 92% (11/12) and 92% (11/12) for large (i.e., clinically significant) varices, 53% (16/30) and 60% (18/30) for small varices, and 64% (27/42) and 69% (29/42) for all varices. The area under the ROC curve for the detection of esophageal varices of any size was 0.77 (observer 1) and 0.80 (observer 2). CT variceal grading showed a strong correlation with endoscopic grading for both observers (p ≤ 0.001). Using a variceal diameter threshold of 3 mm on CT, sensitivity, specificity, and accuracy for distinguishing large esophageal varices from small or no varices were 92% (11/12), 84% (46/55), and 85% (57/67), respectively, for both observers. CONCLUSION Liver CT is useful for the detection and grading of esophageal varices. A diameter of 3 mm may be an appropriate screening threshold for large clinically significant varices.
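The sensitivity, specificity, and accuracy reported for the 3 mm threshold follow directly from the stated counts (11 of 12 large varices detected; 46 of 55 patients without large varices correctly classified). A small sketch reproducing the arithmetic:

```python
def screening_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)          # detected / all with disease
    specificity = tn / (tn + fp)          # correctly cleared / all without
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Counts implied by the abstract for the 3 mm threshold
sens, spec, acc = screening_metrics(tp=11, fn=1, tn=46, fp=9)
print(f"{sens:.0%} {spec:.0%} {acc:.0%}")  # 92% 84% 85%
```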
Affiliation(s)
- Young Jun Kim
- Department of Radiological Sciences, David Geffen School of Medicine at UCLA, 10833 Le Conte Ave., Los Angeles, CA 90095-1721, USA
4
To'o KJ, Raman SS, Yu NC, Kim YJ, Crawford T, Kadell BM, Lu DSK. Pancreatic and peripancreatic diseases mimicking primary pancreatic neoplasia. Radiographics 2005; 25:949-65. [PMID: 16009817] [DOI: 10.1148/rg.254045167]
Abstract
A variety of anatomic variants and pathologic conditions in and around the pancreas may simulate primary pancreatic neoplasia at routine abdominal cross-sectional imaging. An ambiguous lesion whose appearance suggests a pancreatic origin requires a broad differential diagnosis that can subsequently be narrowed on the basis of both clinical history and features at optimal computed tomography (CT) and magnetic resonance (MR) imaging. Pancreas-specific multidetector CT and MR imaging techniques with thin collimation, multiplanar and multiphasic scans, and newly introduced curved planar reformation may help avoid potential diagnostic pitfalls. These techniques can help identify and characterize a mass in multiple viewing planes, thereby helping distinguish a true pancreatic neoplasm from peripancreatic adenopathy or from a tumor of the adjacent duodenum or small bowel. They can also help determine the cause of a tumor. It is important that the radiologist be familiar with the wide spectrum of anatomic variants and disease entities that can mimic primary pancreatic neoplasia in order to initiate the appropriate lesion-specific work-up and treatment and avoid unnecessary tests or procedures, including surgery.
Affiliation(s)
- Katherine J To'o
- Department of Radiology, David Geffen School of Medicine, UCLA, 10833 Le Conte Ave, BL-428 CHS, Box 951721, Los Angeles, CA 90095-1721, USA