1.
Shkolyar E, Zhou SR, Carlson CJ, Chang S, Laurie MA, Xing L, Bowden AK, Liao JC. Optimizing cystoscopy and TURBT: enhanced imaging and artificial intelligence. Nat Rev Urol 2024. [PMID: 38982304] [DOI: 10.1038/s41585-024-00904-9]
Abstract
Diagnostic cystoscopy in combination with transurethral resection of the bladder tumour is the standard for the diagnosis, surgical treatment and surveillance of bladder cancer. The ability to inspect the bladder in its current form stems from a long chain of advances in imaging science and endoscopy. Despite these advances, bladder cancer recurrence and progression rates remain high after endoscopic resection. This stagnation results from the heterogeneity of cancer biology as well as from limitations in surgical techniques and tools, as incomplete resection and provider-specific differences affect cancer persistence and early recurrence. An unmet clinical need remains for solutions that can improve tumour delineation and resection. Translational advances in enhanced cystoscopy technologies and artificial intelligence offer promising avenues for overcoming the progress plateau.
Affiliation(s)
- Eugene Shkolyar
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Veterans Affairs Palo Alto Health Care System, Palo Alto, CA, USA
- Steve R Zhou
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Camella J Carlson
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Shuang Chang
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Mark A Laurie
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA, USA
- Lei Xing
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA, USA
- Audrey K Bowden
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- Joseph C Liao
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Veterans Affairs Palo Alto Health Care System, Palo Alto, CA, USA
2.
Jiang X, Zheng H, Yuan Z, Lan K, Wu Y. HIMS-Net: Horizontal-vertical interaction and multiple side-outputs network for cyst segmentation in jaw images. Math Biosci Eng 2024; 21:4036-4055. [PMID: 38549317] [DOI: 10.3934/mbe.2024178]
Abstract
Jaw cysts are mainly caused by abnormal tooth development, chronic oral inflammation, or jaw damage, which may lead to facial swelling, deformity, tooth loss, and other symptoms. Due to the diversity and complexity of cyst images, deep-learning algorithms still face many difficulties and challenges. In response to these problems, we present a horizontal-vertical interaction and multiple side-outputs network for cyst segmentation in jaw images. First, the horizontal-vertical interaction mechanism facilitates complex communication paths in the vertical and horizontal dimensions, and it has the ability to capture a wide range of context dependencies. Second, the feature-fused unit is introduced to adjust the network's receptive field, which enhances the ability to acquire multi-scale context information. Third, the multiple side-outputs strategy intelligently combines feature maps to generate more accurate and detailed change maps. Finally, experiments were carried out on the self-established jaw cyst dataset and compared with different specialist physicians to evaluate its clinical usability. The research results indicate that the Matthews correlation coefficient (Mcc), Dice, and Jaccard of HIMS-Net were 93.61%, 93.66% and 88.10%, respectively, which may contribute to rapid and accurate diagnosis in clinical practice.
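The reported segmentation metrics can all be recovered from a pixel-wise confusion matrix between a predicted mask and a ground-truth mask. Below is a minimal sketch (not the authors' code) computing Dice, Jaccard, and the Matthews correlation coefficient; the 8-pixel toy masks are purely illustrative.

```python
import math

def confusion_counts(pred, truth):
    """Pixel-wise TP/FP/FN/TN for binary masks given as flat 0/1 sequences."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    return tp, fp, fn, tn

def dice(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)

def jaccard(tp, fp, fn):
    return tp / (tp + fp + fn)

def mcc(tp, fp, fn, tn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Toy masks, purely illustrative
pred  = [1, 1, 0, 0, 1, 0, 0, 0]
truth = [1, 1, 1, 0, 0, 0, 0, 0]
tp, fp, fn, tn = confusion_counts(pred, truth)
print(round(dice(tp, fp, fn), 3))     # 0.667
print(round(jaccard(tp, fp, fn), 3))  # 0.5
print(round(mcc(tp, fp, fn, tn), 3))  # 0.467
```

A useful consistency check when reading reported results: Jaccard follows from Dice as J = D / (2 - D), which holds for the toy example above.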
Affiliation(s)
- Xiaoliang Jiang
- College of Mechanical Engineering, Quzhou University, Quzhou 324000, China
- Huixia Zheng
- Department of Stomatology, The Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou People's Hospital, Quzhou 324000, China
- Zhenfei Yuan
- Department of Stomatology, The Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou People's Hospital, Quzhou 324000, China
- Kun Lan
- College of Mechanical Engineering, Quzhou University, Quzhou 324000, China
- Yaoyang Wu
- Department of Computer and Information Science, University of Macau, Macau 999078, China
3.
Tano ZE, Cumpanas AD, Gorgen ARH, Rojhani A, Altamirano-Villarroel J, Landman J. Surgical Artificial Intelligence: Endourology. Urol Clin North Am 2024; 51:77-89. [PMID: 37945104] [DOI: 10.1016/j.ucl.2023.06.004]
Abstract
Endourology is rife with information that includes patient factors, laboratory tests, outcomes, and visual data, and this information is becoming increasingly complex to assess. Artificial intelligence (AI) has the potential to explore and define these relationships; however, humans might not be involved in the input, the analysis, or even the choice of analytic methods. Herein, the authors present the current state of AI in endourology and highlight the need for urologists to share their proposed AI solutions for reproducibility outside their institutions and to prepare themselves to properly critique this new technology.
Affiliation(s)
- Zachary E Tano
- Department of Urology, University of California, Irvine, 3800 West Chapman Avenue, Suite 7200, Orange, CA 92868, USA.
- Andrei D Cumpanas
- Department of Urology, University of California, Irvine, 3800 West Chapman Avenue, Suite 7200, Orange, CA 92868, USA
- Antonio R H Gorgen
- Department of Urology, University of California, Irvine, 3800 West Chapman Avenue, Suite 7200, Orange, CA 92868, USA
- Allen Rojhani
- Department of Urology, University of California, Irvine, 3800 West Chapman Avenue, Suite 7200, Orange, CA 92868, USA
- Jaime Altamirano-Villarroel
- Department of Urology, University of California, Irvine, 3800 West Chapman Avenue, Suite 7200, Orange, CA 92868, USA
- Jaime Landman
- Department of Urology, University of California, Irvine, 3800 West Chapman Avenue, Suite 7200, Orange, CA 92868, USA
4.
Ikeda A, Nosato H. Overview of current applications and trends in artificial intelligence for cystoscopy and transurethral resection of bladder tumours. Curr Opin Urol 2024; 34:27-31. [PMID: 37902120] [DOI: 10.1097/mou.0000000000001135]
Abstract
PURPOSE OF REVIEW Accurate preoperative and intraoperative identification and complete resection of bladder cancer are essential. Adequate postoperative follow-up and observation are important to identify early intravesical recurrence or progression. However, the accuracy of diagnosis and treatment depends on the knowledge and experience of the physician. Artificial intelligence (AI) can be an important tool for physicians performing cystoscopies. RECENT FINDINGS Reports published over the past year and a half have identified an adequate number of cystoscopy datasets for deep learning, with rich datasets of multiple tumour types (including images of flat, carcinoma-in-situ, and elevated lesions) and more diverse applications. In addition to detecting bladder tumours, AI can assist in diagnosing interstitial cystitis. Application of AI to conventional white-light cystoscopy, as well as to bladder endoscopy with different image-enhancement techniques and from different manufacturers, is underway. A framework has also been proposed to standardise the management of clinical data from cystoscopy, to aid education and AI development, and to enable comparison with gastrointestinal endoscopic AI. Although real-world clinical applications have lagged, technological development is progressing. SUMMARY AI-based cystoscopy is likely to become an important tool and is expected to see real-world clinical applications that comprehensively link AI with imaging, data management systems, and clinicians. VIDEO ABSTRACT http://links.lww.com/COU/A45.
Affiliation(s)
- Atsushi Ikeda
- Department of Urology, Institute of Medicine, University of Tsukuba, Tsukuba City, Ibaraki Prefecture, Japan
- Hirokazu Nosato
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Tsukuba City, Ibaraki Prefecture, Japan
5.
Eminaga O, Lee TJ, Laurie M, Ge TJ, La V, Long J, Semjonow A, Bogemann M, Lau H, Shkolyar E, Xing L, Liao JC. Efficient Augmented Intelligence Framework for Bladder Lesion Detection. JCO Clin Cancer Inform 2023; 7:e2300031. [PMID: 37774313] [PMCID: PMC10569784] [DOI: 10.1200/cci.23.00031]
Abstract
PURPOSE Development of intelligence systems for bladder lesion detection is cost intensive. An efficient strategy to develop such intelligence solutions is needed. MATERIALS AND METHODS We used four deep learning models (ConvNeXt, PlexusNet, MobileNet, and SwinTransformer) covering a range of model complexity and efficacy. We trained these models on a previously published educational cystoscopy atlas (n = 312 images) to estimate the ratio between normal and cancer scores, and externally validated them on cystoscopy videos from 68 cases with regions of interest (ROIs) pathologically confirmed as benign or cancerous bladder lesions. Performance measurement included specificity and sensitivity at the frame level, the frame-sequence (block) level, and the ROI level for each case. RESULTS Specificity was comparable between the four models at the frame (range, 30.0%-44.8%) and block (56%-67%) levels. Although sensitivity at the frame level (range, 81.4%-88.1%) differed between the models, sensitivity at the block level (100%) and ROI level (100%) was comparable between these models. MobileNet and PlexusNet were computationally more efficient for real-time ROI detection than ConvNeXt and SwinTransformer. CONCLUSION An educational cystoscopy atlas and efficient models facilitate the development of a real-time intelligence system for bladder lesion detection.
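The three evaluation levels (frame, block, ROI) can be illustrated with a small sketch. This is not the authors' pipeline; the 0.5 score threshold and 10-frame block size are assumptions for illustration. It also shows why block- and ROI-level sensitivity can reach 100% even when frame-level sensitivity is lower: a block or ROI counts as positive when at least one of its frames is flagged.

```python
def frame_flags(scores, threshold=0.5):
    """Binary tumour call per frame from model cancer scores."""
    return [s >= threshold for s in scores]

def block_positive(flags, block_size=10):
    """A block (consecutive frame sequence) is positive if any frame in it is."""
    return [any(flags[i:i + block_size]) for i in range(0, len(flags), block_size)]

def roi_detected(flags):
    """An ROI counts as detected if at least one of its frames is flagged."""
    return any(flags)

scores = [0.2, 0.4, 0.9, 0.3] * 5            # 20 frames; tumour flagged intermittently
flags = frame_flags(scores)
print(sum(flags) / len(flags))               # frame-level positive rate: 0.25
print(block_positive(flags, block_size=10))  # [True, True]
print(roi_detected(flags))                   # True
```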
Affiliation(s)
- Okyaz Eminaga
- AI Vobis, Palo Alto, CA
- Center for Artificial Intelligence in Medicine and Imaging, Stanford University School of Medicine, Stanford, CA
- Timothy Jiyong Lee
- Department of Urology, Stanford University School of Medicine, Stanford, CA
- Veterans Affairs Palo Alto Health Care System, Palo Alto, CA
- Mark Laurie
- Department of Urology, Stanford University School of Medicine, Stanford, CA
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA
- T. Jessie Ge
- Department of Urology, Stanford University School of Medicine, Stanford, CA
- Vinh La
- Department of Urology, Stanford University School of Medicine, Stanford, CA
- Jin Long
- Center for Artificial Intelligence in Medicine and Imaging, Stanford University School of Medicine, Stanford, CA
- Axel Semjonow
- Department of Urology, Muenster University Hospital, Muenster, Germany
- Martin Bogemann
- Department of Urology, Muenster University Hospital, Muenster, Germany
- Hubert Lau
- Veterans Affairs Palo Alto Health Care System, Palo Alto, CA
- Eugene Shkolyar
- Department of Urology, Stanford University School of Medicine, Stanford, CA
- Lei Xing
- Center for Artificial Intelligence in Medicine and Imaging, Stanford University School of Medicine, Stanford, CA
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA
- Joseph C. Liao
- Center for Artificial Intelligence in Medicine and Imaging, Stanford University School of Medicine, Stanford, CA
- Department of Urology, Stanford University School of Medicine, Stanford, CA
- Veterans Affairs Palo Alto Health Care System, Palo Alto, CA
6.
Chang TC, Shkolyar E, Del Giudice F, Eminaga O, Lee T, Laurie M, Seufert C, Jia X, Mach KE, Xing L, Liao JC. Real-time Detection of Bladder Cancer Using Augmented Cystoscopy with Deep Learning: a Pilot Study. J Endourol 2023. [PMID: 37432899] [DOI: 10.1089/end.2023.0056]
Abstract
BACKGROUND Detection of bladder tumors under white light cystoscopy (WLC) is challenging yet has a major impact on treatment outcomes. Artificial intelligence (AI) holds the potential to improve tumor detection; however, its application in the real-time setting remains unexplored, as AI has so far been applied to previously recorded images for post hoc analysis. In this study, we evaluate the feasibility of real-time AI integration on live, streaming video during clinic cystoscopy and transurethral resection of bladder tumor (TURBT). METHODS Patients undergoing clinic flexible cystoscopy and TURBT were prospectively enrolled. A real-time alert device system (real-time CystoNet) was developed and integrated with standard cystoscopy towers. Streaming videos were processed in real time to display alert boxes in sync with live cystoscopy. Per-frame diagnostic accuracy was measured. RESULTS AND LIMITATIONS Real-time CystoNet was successfully integrated in the operating room during TURBT and in clinic cystoscopy in 50 consecutive patients. Fifty-five procedures met the inclusion criteria for analysis, including 21 clinic cystoscopies and 34 TURBTs. For clinic cystoscopy, real-time CystoNet achieved a per-frame tumor specificity of 98.8%, with a median error rate of 3.6% (range: 0-47%) of frames per cystoscopy. For TURBT, the per-frame tumor sensitivity was 52.9% and the per-frame tumor specificity was 95.4%, with an error rate of 16.7% for cases with pathologically confirmed bladder cancers. CONCLUSIONS This pilot study demonstrates the feasibility of using a real-time AI system (real-time CystoNet) during cystoscopy and TURBT to generate active feedback for the surgeon. Further optimization of CystoNet for real-time cystoscopy dynamics may allow for clinically useful AI-augmented cystoscopy.
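Per-frame specificity and the per-case error rate reported above can be computed as follows. This is an illustrative sketch, not the study's evaluation code, and the two toy cases are made up.

```python
from statistics import median

def per_frame_specificity(preds, labels):
    """Specificity over frames: TN / (TN + FP); labels use 1 = tumour frame."""
    tn = sum(1 for p, t in zip(preds, labels) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(preds, labels) if p == 1 and t == 0)
    return tn / (tn + fp) if (tn + fp) else None

def case_error_rate(preds, labels):
    """Fraction of frames where the alert disagrees with ground truth."""
    wrong = sum(1 for p, t in zip(preds, labels) if p != t)
    return wrong / len(preds)

cases = [
    ([0, 0, 1, 0], [0, 0, 1, 0]),  # perfect case
    ([1, 0, 0, 0], [0, 0, 0, 0]),  # one false alert
]
rates = [case_error_rate(p, t) for p, t in cases]
print(median(rates))  # 0.125
```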
Affiliation(s)
- Timothy Chan Chang
- Department of Urology, Stanford University School of Medicine, 453 Quarry Road, Urology - 5656, Palo Alto, California 94304, USA
- Eugene Shkolyar
- Department of Urology, Stanford University School of Medicine, 300 Pasteur Dr, Stanford, California 94305, USA
- Francesco Del Giudice
- Department of Maternal-Child and Urological Sciences, Sapienza Rome University, Rome, Italy
- Okyaz Eminaga
- Department of Urology, Stanford University School of Medicine, Stanford, California, USA
- Timothy Lee
- Department of Urology, Stanford University School of Medicine, Stanford, California, USA
- Mark Laurie
- Department of Urology, Stanford University School of Medicine, Stanford, California, USA
- Caleb Seufert
- Department of Urology, Stanford University School of Medicine, Stanford, California, USA
- Xiao Jia
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, California, USA
- Kathleen E Mach
- Department of Urology, Stanford University School of Medicine, Stanford, California, USA
- Lei Xing
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, California, USA
- Joseph C Liao
- Department of Urology, Stanford University, 300 Pasteur Dr., S-287, Stanford, California 94305-5118, USA
7.
Eminaga O, Lee TJ, Ge J, Shkolyar E, Laurie M, Long J, Hockman LG, Liao JC. Conceptual framework and documentation standards of cystoscopic media content for artificial intelligence. J Biomed Inform 2023; 142:104369. [PMID: 37088456] [PMCID: PMC10643098] [DOI: 10.1016/j.jbi.2023.104369]
Abstract
BACKGROUND The clinical documentation of cystoscopy includes visual and textual materials. However, the secondary use of visual cystoscopic data for educational and research purposes remains limited due to inefficient data management in routine clinical practice. METHODS A conceptual framework was designed to document cystoscopy in a standardized manner with three major sections: data management, annotation management, and utilization management. A Swiss-cheese model was proposed for quality control and root cause analyses. We defined the infrastructure required to implement the framework with respect to FAIR (findable, accessible, interoperable, reusable) principles. We applied two scenarios exemplifying data sharing for research and educational projects to ensure compliance with FAIR principles. RESULTS The framework was successfully implemented while following FAIR principles. The cystoscopy atlas produced from the framework could be presented in an educational web portal; a total of 68 full-length qualitative videos and corresponding annotation data were sharable for artificial intelligence projects covering frame classification and segmentation problems at case, lesion, and frame levels. CONCLUSION Our study shows that the proposed framework facilitates the storage of visual documentation in a standardized manner and enables FAIR data for education and artificial intelligence research.
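A FAIR-oriented annotation record of the kind the framework standardizes might be modeled as a small structured type. The field names below are hypothetical and only illustrate case-, lesion-, and frame-level annotation metadata; they are not the paper's actual schema.

```python
from dataclasses import dataclass, asdict, field

@dataclass
class CystoscopyAnnotation:
    """Hypothetical annotation record; fields are illustrative, not the paper's schema."""
    case_id: str
    frame_index: int
    level: str                 # "case", "lesion", or "frame"
    label: str                 # e.g. "papillary tumour"
    task: str                  # "classification" or "segmentation"
    tags: list = field(default_factory=list)

rec = CystoscopyAnnotation("case-001", 1042, "frame", "papillary tumour", "classification")
print(asdict(rec)["case_id"])  # case-001
```

Serializing records with `asdict` keeps the metadata machine-readable, which is one simple way to satisfy the "interoperable" and "reusable" parts of FAIR.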
Affiliation(s)
- Okyaz Eminaga
- Department of Urology, Stanford University School of Medicine, Stanford, USA
- Center for Artificial Intelligence and Medical Imaging, Stanford University School of Medicine, Stanford, CA, USA
- Timothy Jiyong Lee
- Department of Urology, Stanford University School of Medicine, Stanford, USA
- Jessie Ge
- Department of Urology, Stanford University School of Medicine, Stanford, USA
- Eugene Shkolyar
- Department of Urology, Stanford University School of Medicine, Stanford, USA
- Mark Laurie
- Department of Urology, Stanford University School of Medicine, Stanford, USA
- Jin Long
- Center for Artificial Intelligence and Medical Imaging, Stanford University School of Medicine, Stanford, CA, USA
- Joseph C Liao
- Department of Urology, Stanford University School of Medicine, Stanford, USA
- Center for Artificial Intelligence and Medical Imaging, Stanford University School of Medicine, Stanford, CA, USA
8.
Karabağ C, Ortega-Ruíz MA, Reyes-Aldasoro CC. Impact of Training Data, Ground Truth and Shape Variability in the Deep Learning-Based Semantic Segmentation of HeLa Cells Observed with Electron Microscopy. J Imaging 2023; 9:59. [PMID: 36976110] [PMCID: PMC10058680] [DOI: 10.3390/jimaging9030059]
Abstract
This paper investigates the impact of the amount of training data and of shape variability on the segmentation provided by the deep learning architecture U-Net. Further, the correctness of the ground truth (GT) was also evaluated. The input data consisted of a three-dimensional set of images of HeLa cells observed with an electron microscope, with dimensions 8192×8192×517. From there, a smaller region of interest (ROI) of 2000×2000×300 was cropped and manually delineated to obtain the ground truth necessary for a quantitative evaluation. A qualitative evaluation was performed on the 8192×8192 slices due to the lack of ground truth. Pairs of patches of data and labels for the classes nucleus, nuclear envelope, cell and background were generated to train U-Net architectures from scratch. Several training strategies were followed, and the results were compared against a traditional image processing algorithm. The correctness of GT, that is, the inclusion of one or more nuclei within the region of interest, was also evaluated. The impact of the extent of training data was evaluated by comparing results from 36,000 pairs of data and label patches extracted from the odd slices in the central region against 135,000 patches obtained from every other slice in the set. Then, 135,000 patches from several cells in the 8192×8192 slices were generated automatically using the image processing algorithm. Finally, the two sets of 135,000 pairs were combined to train once more with 270,000 pairs. As would be expected, the accuracy and Jaccard similarity index improved as the number of pairs increased for the ROI. This was also observed qualitatively for the 8192×8192 slices. When the 8192×8192 slices were segmented with U-Nets trained with 135,000 pairs, the architecture trained with automatically generated pairs provided better results than the architecture trained with the pairs from the manually segmented ground truths. This suggests that the pairs extracted automatically from many cells provided a better representation of the four classes across the various cells in the 8192×8192 slices than the pairs manually segmented from a single cell. Finally, the U-Net trained with the combined set of 270,000 pairs provided the best results.
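The patch-based training strategies reduce to choosing which slices to sample and how to tile each slice. The sketch below is illustrative only: the 128-pixel patch size and non-overlapping stride are assumptions, and the paper does not tie its 36,000/135,000 pair counts to these exact parameters.

```python
def patch_origins(height, width, patch=128, stride=128):
    """Top-left corners of non-overlapping patches tiling one slice."""
    return [(r, c)
            for r in range(0, height - patch + 1, stride)
            for c in range(0, width - patch + 1, stride)]

def odd_slices(n_slices):
    """Indices of the odd slices (1, 3, 5, ...) used in one training strategy."""
    return list(range(1, n_slices, 2))

# A 2000x2000 slice tiles into 15 x 15 = 225 non-overlapping 128 px patches,
# and a 300-slice ROI contains 150 odd slices.
print(len(patch_origins(2000, 2000)))  # 225
print(len(odd_slices(300)))            # 150
```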
Affiliation(s)
- Cefa Karabağ
- giCentre, Department of Computer Science, School of Science and Technology, City, University of London, London EC1V 0HB, UK
- Mauricio Alberto Ortega-Ruíz
- giCentre, Department of Computer Science, School of Science and Technology, City, University of London, London EC1V 0HB, UK
- Departamento de Ingeniería, Campus Coyoacán, Universidad del Valle de México, Ciudad de México C.P. 04910, Mexico
9.
Eminaga O, Ge TJ, Shkolyar E, Laurie MA, Lee TJ, Hockman L, Jia X, Xing L, Liao JC. An Efficient Framework for Video Documentation of Bladder Lesions for Cystoscopy: A Proof-of-Concept Study. J Med Syst 2022; 46:73. [PMID: 36190581] [PMCID: PMC10751224] [DOI: 10.1007/s10916-022-01862-8]
Abstract
Processing full-length cystoscopy videos is challenging for documentation and research purposes. We therefore designed a surgeon-guided framework to extract short video clips of bladder lesions for more efficient content navigation and extraction. Screenshots of bladder lesions were captured during transurethral resection of bladder tumor and then manually labeled according to case identification, date, lesion location, imaging modality, and pathology. The framework used each screenshot to search for and extract a corresponding 10-second video clip. Each video clip included a one-second placeholder with a QR barcode describing the video content. The success of the framework was measured by the secondary use of these short clips and by the reduction in storage volume required for video materials. From 86 cases, the framework successfully generated 249 video clips from 230 screenshots; 14 erroneous video clips from 8 screenshots were excluded. The HIPAA-compliant barcodes provided information on video contents with 100% data completeness. A web-based educational gallery was curated with various diagnostic categories and annotated frame sequences. Compared with the unedited videos, the informative short video clips reduced the storage volume by 99.5%. In conclusion, our framework expedites the generation of visual content with the surgeon's instruction for cystoscopy and enables potential incorporation of video data into applications including clinical documentation, education, and research.
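The clip-extraction step reduces to mapping a captured screenshot back to a frame range in the source video. A minimal sketch, assuming a 30 fps video, a clip centred on the screenshot frame, and the one-second QR placeholder prepended separately (all assumptions, not the paper's implementation):

```python
def clip_frame_range(screenshot_frame, fps=30, clip_seconds=10, holder_seconds=1):
    """Frame range for a short clip centred on the captured screenshot,
    leaving room for a one-second QR-code placeholder at the start."""
    half = (clip_seconds - holder_seconds) * fps // 2
    start = max(0, screenshot_frame - half)
    end = start + (clip_seconds - holder_seconds) * fps
    return start, end

start, end = clip_frame_range(900, fps=30)
print(start, end)           # 765 1035
print((end - start) / 30)   # 9.0 seconds of footage, plus the 1 s placeholder
```

Clamping `start` at 0 handles screenshots captured near the beginning of the recording.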
Affiliation(s)
- Okyaz Eminaga
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA.
- Department of Urology, Stanford University School of Medicine, 453 Quarry Road, Mail Code 5656, Palo Alto, CA, 94304, USA.
- T Jessie Ge
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Eugene Shkolyar
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Mark A Laurie
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Timothy J Lee
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Lukas Hockman
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Xiao Jia
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA, USA
- Lei Xing
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA, USA
- Joseph C Liao
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, 453 Quarry Road, Mail Code 5656, Palo Alto, CA 94304, USA