1. Vesal S, Gayo I, Bhattacharya I, Natarajan S, Marks LS, Barratt DC, Fan RE, Hu Y, Sonn GA, Rusu M. Domain generalization for prostate segmentation in transrectal ultrasound images: A multi-center study. Med Image Anal 2022; 82:102620. PMID: 36148705; PMCID: PMC10161676; DOI: 10.1016/j.media.2022.102620.
Abstract
Prostate biopsy and image-guided treatment procedures are often performed under the guidance of ultrasound fused with magnetic resonance imaging (MRI). Accurate image fusion relies on accurate segmentation of the prostate on ultrasound images. Yet the reduced signal-to-noise ratio and artifacts (e.g., speckle and shadowing) in ultrasound images limit the performance of automated prostate segmentation techniques, and generalizing these methods to new image domains is inherently difficult. In this study, we address these challenges by introducing a novel 2.5D deep neural network for prostate segmentation on ultrasound images. Our approach addresses the limitations of transfer learning and fine-tuning methods (i.e., a drop in performance on the original training data when the model weights are updated) by combining a supervised domain adaptation technique with a knowledge distillation loss. The knowledge distillation loss preserves previously learned knowledge and reduces the performance drop after model fine-tuning on new datasets. Furthermore, our approach relies on an attention module that incorporates positional information of model features to improve segmentation accuracy. We trained our model on 764 subjects from one institution and fine-tuned it using only ten subjects from subsequent institutions. We analyzed the performance of our method on three large datasets encompassing 2067 subjects from three different institutions. Our method achieved an average Dice similarity coefficient (Dice) of 94.0±0.03 and a 95th-percentile Hausdorff distance (HD95) of 2.28 mm on an independent set of subjects from the first institution. Moreover, our model generalized well to the studies from the other two institutions (Dice: 91.0±0.03; HD95: 3.7 mm and Dice: 82.0±0.03; HD95: 7.1 mm). We introduced an approach that successfully segmented the prostate on ultrasound images in a multi-center study, suggesting its clinical potential to facilitate the accurate fusion of ultrasound and MRI to drive biopsy and image-guided treatments.
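The combination of a fine-tuning loss with a knowledge distillation term can be illustrated with a short sketch. The PyTorch snippet below is a minimal, generic formulation (a Hinton-style soft-target KL term added to a supervised segmentation loss), not the authors' exact loss; `alpha` and `T` are illustrative hyperparameters.

```python
import torch
import torch.nn.functional as F

def finetune_with_distillation_loss(student_logits, teacher_logits, target,
                                    alpha=0.5, T=2.0):
    """Supervised segmentation loss plus a knowledge distillation term.

    student_logits, teacher_logits: (N, C, H, W); target: (N, H, W) class ids.
    The teacher is the frozen pre-fine-tuning model, so the KL term
    discourages forgetting the original training domain.
    """
    # Supervised term on the new institution's labels.
    seg_loss = F.cross_entropy(student_logits, target)
    # Distillation term: match the teacher's softened class posteriors.
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return (1 - alpha) * seg_loss + alpha * kd_loss
```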
Affiliation(s)
- Sulaiman Vesal
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA.
- Iani Gayo
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, 66-72 Gower St, London WC1E 6EA, UK
- Indrani Bhattacharya
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA; Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Shyam Natarajan
- Department of Urology, University of California Los Angeles, 200 Medical Plaza Driveway, Los Angeles, CA 90024, USA
- Leonard S Marks
- Department of Urology, University of California Los Angeles, 200 Medical Plaza Driveway, Los Angeles, CA 90024, USA
- Dean C Barratt
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, 66-72 Gower St, London WC1E 6EA, UK
- Richard E Fan
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Yipeng Hu
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, 66-72 Gower St, London WC1E 6EA, UK
- Geoffrey A Sonn
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Mirabela Rusu
- Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
2. Training deep neural networks with noisy clinical labels: toward accurate detection of prostate cancer in US data. Int J Comput Assist Radiol Surg 2022; 17:1697-1705. PMID: 35881210; DOI: 10.1007/s11548-022-02707-y.
Abstract
PURPOSE: Ultrasound is the standard of care for guiding systematic biopsy of the prostate. During the biopsy procedure, up to 12 biopsy cores are randomly sampled from six zones within the prostate, and the histopathology of those cores is used to determine the presence and grade of cancer. Histopathology reports provide only summary information on the presence of cancer and do not normally contain fine-grained information on the distribution of cancer within each core. This limitation hinders the development of machine learning models to detect cancer in ultrasound so that biopsies can be targeted to highly suspicious prostate regions.
METHODS: In this paper, we tackle this challenge by training with noisy labels derived from histopathology. Noisy labels often cause a model to overfit the training data, limiting its generalizability. To avoid overfitting, we focus on improving the generalization of the model's learned features and present an iterative data label refinement algorithm that amends the labels gradually. We simultaneously train two classifiers with the same structure and automatically stop training when we observe any sign of overfitting. We then use a confident learning approach to clean the data labels and continue training. This process is applied iteratively to the training data and labels until convergence.
RESULTS: We illustrate the performance of the proposed method by classifying prostate cancer in a dataset of ultrasound images from 353 biopsy cores obtained from 90 patients. We achieve an area under the curve, sensitivity, specificity, and accuracy of 0.73, 0.80, 0.63, and 0.69, respectively.
CONCLUSION: Our approach can provide clinicians with a visualization of regions that likely contain cancerous tissue, enabling more accurate biopsy sampling. The results demonstrate that our proposed method produces superior accuracy compared with state-of-the-art methods.
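The label-cleaning step can be sketched with a confident-learning-style loop. The snippet below is illustrative only: the paper trains two deep classifiers with an overfitting-based stopping rule, whereas this stand-in uses a single cross-validated scikit-learn classifier; a sample is flagged when its out-of-sample probability for the opposite class exceeds that class's average self-confidence.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def refine_labels(X, y, n_iters=5):
    """Iteratively amend noisy binary labels y (int array of 0/1)."""
    y = y.copy()
    for _ in range(n_iters):
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        # Out-of-sample probabilities avoid trusting memorized labels.
        probs = cross_val_predict(clf, X, y, cv=5, method="predict_proba")
        # Per-class threshold: mean predicted probability of class k
        # over samples currently labeled k (its average self-confidence).
        thresholds = np.array([probs[y == k, k].mean() for k in (0, 1)])
        # Flag samples the model confidently assigns to the other class.
        other = 1 - y
        noisy = probs[np.arange(len(y)), other] >= thresholds[other]
        if not noisy.any():
            break
        y[noisy] = other[noisy]  # flip confidently mislabeled samples
    return y
```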
3. Zhou B, Yang X, Curran WJ, Liu T. Artificial Intelligence in Quantitative Ultrasound Imaging: A Survey. J Ultrasound Med 2022; 41:1329-1342. PMID: 34467542; DOI: 10.1002/jum.15819.
Abstract
Quantitative ultrasound (QUS) imaging is a safe, reliable, inexpensive, real-time technique for extracting physically descriptive parameters to assess pathologies. Compared with other major imaging modalities such as computed tomography and magnetic resonance imaging, QUS suffers from major drawbacks: poor image quality and inter- and intra-observer variability. There is therefore a great need for automated methods that improve the image quality of QUS. In recent years, interest in artificial intelligence (AI) applications in medical imaging has grown, and a large number of studies on AI in QUS have been conducted. The purpose of this review is to describe and categorize recent research into AI applications in QUS. We first introduce the AI workflow and then discuss the various AI applications in QUS. Finally, challenges and potential future AI applications in QUS are discussed.
Affiliation(s)
- Boran Zhou
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
4. Bhattacharya I, Khandwala YS, Vesal S, Shao W, Yang Q, Soerensen SJ, Fan RE, Ghanouni P, Kunder CA, Brooks JD, Hu Y, Rusu M, Sonn GA. A review of artificial intelligence in prostate cancer detection on imaging. Ther Adv Urol 2022; 14:17562872221128791. PMID: 36249889; PMCID: PMC9554123; DOI: 10.1177/17562872221128791.
Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk stratification, and management. This review provides a comprehensive overview of the relevant literature on AI models for (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis and their current limitations, including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Affiliation(s)
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, 1201 Welch Road, Stanford, CA 94305, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yash S. Khandwala
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Sulaiman Vesal
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Qianye Yang
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Simon J.C. Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Epidemiology & Population Health, Stanford University School of Medicine, Stanford, CA, USA
- Richard E. Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Christian A. Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- James D. Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yipeng Hu
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Geoffrey A. Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
5. Song KD. Current status of deep learning applications in abdominal ultrasonography. Ultrasonography 2021; 40:177-182. PMID: 33242931; PMCID: PMC7994733; DOI: 10.14366/usg.20085.
Abstract
Deep learning is one of the most popular artificial intelligence techniques used in the medical field. Although its application to ultrasonography is at an earlier stage than deep learning analyses of computed tomography or magnetic resonance imaging, studies applying deep learning to ultrasound imaging have been actively conducted. This review analyzes recent studies that applied deep learning to ultrasound imaging of various abdominal organs and explains the challenges encountered in these applications.
Affiliation(s)
- Kyoung Doo Song
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
6. Shams R, Picot F, Grajales D, Sheehy G, Dallaire F, Birlea M, Saad F, Trudel D, Menard C, Leblond F, Kadoury S. Pre-clinical evaluation of an image-guided in-situ Raman spectroscopy navigation system for targeted prostate cancer interventions. Int J Comput Assist Radiol Surg 2020; 15:867-876. PMID: 32227280; DOI: 10.1007/s11548-020-02136-9.
Abstract
PURPOSE: Transrectal ultrasound (TRUS) image guidance is the standard of care for diagnostic and therapeutic interventions in prostate cancer (PCa) patients, but it can lead to high false-negative rates, compromising the downstream effectiveness of therapeutic choices. A promising approach to improving in-situ detection of PCa lies in using the optical properties of tissue to discern cancer from healthy tissue. In this work, we present the first in-situ image-guided navigation system for a spatially tracked Raman spectroscopy probe integrated into a PCa workflow, capturing the optical tissue fingerprint. The probe is guided with fused TRUS/MR imaging and tested on both tissue-simulating phantoms and ex-vivo prostates. The workflow was designed to be integrated into the clinical workflow for trans-perineal prostate biopsies, as well as for high-dose-rate (HDR) brachytherapy.
METHODS: The proposed system, developed in 3D Slicer, includes an electromagnetically tracked Raman spectroscopy probe, along with tracked TRUS imaging automatically registered to diagnostic MRI. The system is tested on both custom gelatin tissue-simulating optical phantoms and biological tissue phantoms. A random-forest classifier was then trained on optical spectra acquired with our probe from ex-vivo prostates following prostatectomy. Preliminary in-human results with the Raman spectroscopy instrument are presented for detecting malignant tissue in-situ with histopathology confirmation.
RESULTS: In five synthetic gelatin and biological tissue phantoms, we demonstrate the ability of the image-guided Raman system by detecting over 95% of lesions, based on biopsy samples. The included lesion volumes ranged from 0.1 to 0.61 cc. We showed the compatibility of our workflow with the current HDR brachytherapy setup. In ex-vivo prostates of PCa patients, the system showed an 81% detection accuracy for high-grade lesions.
CONCLUSION: Pre-clinical experiments demonstrated promising results for in-situ confirmation of lesion locations in the prostate using Raman spectroscopy, both in phantoms and in human ex-vivo prostate tissue, which is required for integration into HDR brachytherapy procedures.
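As a concrete illustration of the classification step, the sketch below trains a random-forest classifier on per-site spectra with scikit-learn. The array shapes, preprocessing, and hyperparameters are assumptions for illustration and do not reproduce the paper's acquisition or preprocessing pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: one row per measurement site, one column per
# Raman wavenumber bin; labels come from co-registered histopathology.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(200, 1024))   # placeholder spectra
labels = rng.integers(0, 2, size=200)    # 1 = malignant, 0 = benign

# Standardizing each wavenumber bin is a common preprocessing step;
# the paper's baseline removal and normalization are omitted here.
model = make_pipeline(StandardScaler(),
                      RandomForestClassifier(n_estimators=500, random_state=0))
scores = cross_val_score(model, spectra, labels, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")
```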
Affiliation(s)
- Mirela Birlea
- Centre Hospitalier de l'Universite de Montreal Research Center, Montreal, Canada
- Fred Saad
- Centre Hospitalier de l'Universite de Montreal Research Center, Montreal, Canada
- Dominique Trudel
- Centre Hospitalier de l'Universite de Montreal Research Center, Montreal, Canada
- Cynthia Menard
- Centre Hospitalier de l'Universite de Montreal Research Center, Montreal, Canada
- Samuel Kadoury
- Polytechnique Montreal, Montreal, Canada
- Centre Hospitalier de l'Universite de Montreal Research Center, Montreal, Canada
7. Azizi S, Bayat S, Yan P, Tahmasebi A, Kwak JT, Xu S, Turkbey B, Choyke P, Pinto P, Wood B, Mousavi P, Abolmaesumi P. Deep Recurrent Neural Networks for Prostate Cancer Detection: Analysis of Temporal Enhanced Ultrasound. IEEE Trans Med Imaging 2018; 37:2695-2703. PMID: 29994471; PMCID: PMC7983161; DOI: 10.1109/TMI.2018.2849959.
Abstract
Temporal enhanced ultrasound (TeUS), the analysis of variations in backscattered signals from tissue over a sequence of ultrasound frames, has previously been proposed as a new paradigm for tissue characterization. In this paper, we propose to use deep recurrent neural networks (RNNs) to explicitly model the temporal information in TeUS. By investigating several RNN models, we demonstrate that long short-term memory (LSTM) networks achieve the highest accuracy in separating cancer from benign tissue in the prostate. We also present algorithms for in-depth analysis of LSTM networks. Our in vivo study includes data from 255 prostate biopsy cores of 157 patients. We achieve an area under the curve, sensitivity, specificity, and accuracy of 0.96, 0.76, 0.98, and 0.93, respectively. Our results suggest that temporal modeling of TeUS using RNNs can significantly improve cancer detection accuracy over previously presented methods.
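A minimal PyTorch sketch of an LSTM classifier over TeUS time series is shown below. It is illustrative only: each tissue location is treated as a one-dimensional sequence of backscatter intensities, and the layer sizes are arbitrary rather than the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class TeUSClassifier(nn.Module):
    """LSTM over per-location TeUS intensity sequences (sketch)."""
    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # logit: cancer vs. benign

    def forward(self, x):            # x: (batch, seq_len) intensities
        x = x.unsqueeze(-1)          # -> (batch, seq_len, 1)
        _, (h_n, _) = self.lstm(x)   # final hidden state summarizes the series
        return self.head(h_n[-1])    # (batch, 1) logits

model = TeUSClassifier()
frames = torch.randn(8, 100)          # 8 locations, 100-frame sequences
probs = torch.sigmoid(model(frames))  # per-location probability of malignancy
```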