1
Johnson LA, Harmon SA, Yilmaz EC, Lin Y, Belue MJ, Merriman KM, Lay NS, Sanford TH, Sarma KV, Arnold CW, Xu Z, Roth HR, Yang D, Tetreault J, Xu D, Patel KR, Gurram S, Wood BJ, Citrin DE, Pinto PA, Choyke PL, Turkbey B. Automated prostate gland segmentation in challenging clinical cases: comparison of three artificial intelligence methods. Abdom Radiol (NY) 2024; 49:1545-1556. [PMID: 38512516] [DOI: 10.1007/s00261-024-04242-7]
Abstract
OBJECTIVE Automated methods for prostate segmentation on MRI are typically developed under ideal scanning and anatomical conditions. This study evaluates three prostate segmentation AI algorithms in a challenging population of patients with prior treatments, variable anatomic characteristics, complex clinical history, or atypical MRI acquisition parameters. MATERIALS AND METHODS A single-institution retrospective database was queried for the following conditions at prostate MRI: prior prostate-specific oncologic treatment, transurethral resection of the prostate (TURP), abdominal perineal resection (APR), hip prosthesis (HP), diversity of prostate volumes (large ≥ 150 cc, small ≤ 25 cc), whole-gland tumor burden, magnet strength, noted poor quality, and various scanners (outside/vendors). Final inclusion criteria required availability of an axial T2-weighted (T2W) sequence and a corresponding prostate organ segmentation from an expert radiologist. Three previously developed algorithms were evaluated: (1) a deep learning (DL)-based model, (2) a commercially available shape-based model, and (3) a federated DL-based model. The Dice Similarity Coefficient (DSC) was calculated against the expert segmentations. DSC by model and scan factors was evaluated with the Wilcoxon signed-rank test and a linear mixed-effects (LMER) model. RESULTS 683 scans (651 patients) met inclusion criteria (mean prostate volume 60.1 cc [9.05-329 cc]). Overall DSC scores for models 1, 2, and 3 were 0.916 (0.707-0.971), 0.873 (0-0.997), and 0.894 (0.025-0.961), respectively, with the DL-based models demonstrating significantly higher performance (p < 0.01). In sub-group analysis by factors, Model 1 outperformed Model 2 (all p < 0.05) and Model 3 (all p < 0.001). Performance of all models was negatively impacted by prostate volume and poor signal quality (p < 0.01). Shape-based factors influenced the DL models (p < 0.001), while signal factors influenced all models (p < 0.001). CONCLUSION Factors affecting the anatomical and signal conditions of the prostate gland can adversely impact both DL- and non-deep-learning-based segmentation models.
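For reference, the Dice Similarity Coefficient used throughout the studies in this list measures volumetric overlap between a predicted mask and the expert reference: 2|A∩B| / (|A| + |B|). Below is a minimal NumPy sketch of the standard definition; it is illustrative only, not any study's evaluation code, and the toy masks are invented.

```python
import numpy as np

def dice_similarity_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy example: two overlapping 3D masks (8 and 12 voxels, 8 shared)
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:4, 1:3, 1:3] = True
print(f"DSC = {dice_similarity_coefficient(a, b):.3f}")  # DSC = 0.800
```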
Affiliation(s)
- Latrice A Johnson
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Stephanie A Harmon
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Enis C Yilmaz
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Yue Lin
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Mason J Belue
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Katie M Merriman
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Nathan S Lay
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Karthik V Sarma
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA, USA
- Corey W Arnold
- Department of Radiology, University of California, Los Angeles, Los Angeles, CA, USA
- Ziyue Xu
- NVIDIA Corporation, Santa Clara, CA, USA
- Dong Yang
- NVIDIA Corporation, Santa Clara, CA, USA
- Daguang Xu
- NVIDIA Corporation, Santa Clara, CA, USA
- Krishnan R Patel
- Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Sandeep Gurram
- Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Bradford J Wood
- Center for Interventional Oncology, National Cancer Institute, NIH, Bethesda, MD, USA
- Department of Radiology, Clinical Center, NIH, Bethesda, MD, USA
- Deborah E Citrin
- Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Peter A Pinto
- Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Peter L Choyke
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Baris Turkbey
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr., MSC 1182, Building 10, Room B3B85, Bethesda, MD, 20892, USA
2
Fechter T, Sachpazidis I, Baltas D. The use of deep learning in interventional radiotherapy (brachytherapy): A review with a focus on open source and open data. Z Med Phys 2024; 34:180-196. [PMID: 36376203] [PMCID: PMC11156786] [DOI: 10.1016/j.zemedi.2022.10.005]
Abstract
Deep learning has advanced to become one of the most important technologies in almost all medical fields, and it plays an especially large role in areas related to medical imaging. In interventional radiotherapy (brachytherapy), however, deep learning is still at an early stage. In this review, we first investigated and scrutinised the role of deep learning in all processes of interventional radiotherapy and directly related fields, and summarised the most recent developments. For better understanding, we provide explanations of key terms and of approaches to solving common deep learning problems. Reproducing the results of deep learning algorithms requires both source code and training data to be available; therefore, a second focus of this work is an analysis of the availability of open source code, open data and open models. Our analysis shows that deep learning already plays a major role in some areas of interventional radiotherapy but is still hardly present in others. Nevertheless, its impact is increasing year by year, partly self-propelled but also influenced by closely related fields. Open source code, data and models are growing in number but remain scarce and unevenly distributed among research groups. Reluctance to publish code, data and models limits reproducibility and restricts evaluation to mono-institutional datasets. We conclude that deep learning can positively change the workflow of interventional radiotherapy, but there is still room for improvement in reproducibility and standardised evaluation methods.
Affiliation(s)
- Tobias Fechter
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Ilias Sachpazidis
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Dimos Baltas
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
3
Zhao JZ, Ni R, Chow R, Rink A, Weersink R, Croke J, Raman S. Artificial intelligence applications in brachytherapy: A literature review. Brachytherapy 2023; 22:429-445. [PMID: 37248158] [DOI: 10.1016/j.brachy.2023.04.003]
Abstract
PURPOSE Artificial intelligence (AI) has the potential to simplify and optimize various steps of the brachytherapy workflow, and this literature review aims to provide an overview of the work done in this field. METHODS AND MATERIALS We conducted a literature search in June 2022 on PubMed, Embase, and Cochrane for papers that proposed AI applications in brachytherapy. RESULTS A total of 80 papers satisfied the inclusion/exclusion criteria. These papers were categorized as follows: segmentation (24), registration and image processing (6), preplanning (13), dose prediction and treatment planning (11), applicator/catheter/needle reconstruction (16), and quality assurance (10). AI techniques ranged from classical models such as support vector machines and decision-tree-based learning to newer techniques such as U-Net and deep reinforcement learning, and were applied to facilitate small steps of a process (e.g., optimizing applicator selection) or even to automate an entire step of the workflow (e.g., end-to-end preplanning). Many of these algorithms demonstrated human-level performance and offer significant improvements in speed. CONCLUSIONS AI has the potential to augment, automate, and/or accelerate many steps of the brachytherapy workflow. We recommend that future studies adhere to standard reporting guidelines. We also stress the importance of using larger sample sizes and reporting results using clinically interpretable measures.
Affiliation(s)
- Jonathan ZL Zhao
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Ruiyan Ni
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Ronald Chow
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
- Alexandra Rink
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Robert Weersink
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
- Jennifer Croke
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada
- Srinivas Raman
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada
4
Karimi D, Rollins CK, Velasco-Annis C, Ouaalam A, Gholipour A. Learning to segment fetal brain tissue from noisy annotations. Med Image Anal 2023; 85:102731. [PMID: 36608414] [PMCID: PMC9974964] [DOI: 10.1016/j.media.2022.102731]
Abstract
Automatic fetal brain tissue segmentation can enhance the quantitative assessment of brain development at this critical stage. Deep learning methods represent the state of the art in medical image segmentation and have also achieved impressive results in brain segmentation. However, effective training of a deep learning model to perform this task requires a large number of training images to represent the rapid development of the transient fetal brain structures. On the other hand, manual multi-label segmentation of a large number of 3D images is prohibitive. To address this challenge, we segmented 272 training images, covering 19-39 gestational weeks, using an automatic multi-atlas segmentation strategy based on deformable registration and probabilistic atlas fusion, and manually corrected large errors in those segmentations. Since this process generated a large training dataset with noisy segmentations, we developed a novel label smoothing procedure and a loss function to train a deep learning model with smoothed noisy segmentations. Our proposed methods properly account for the uncertainty in tissue boundaries. We evaluated our method on 23 manually segmented test images of a separate set of fetuses. Results show that our method achieves average Dice similarity coefficients of 0.893 and 0.916 for the transient structures of younger and older fetuses, respectively. Our method generated results that were significantly more accurate than several state-of-the-art methods, including nnU-Net, which achieved the closest results to ours. Our trained model can serve as a valuable tool to enhance the accuracy and reproducibility of fetal brain analysis in MRI.
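The paper's label smoothing procedure and loss are tailored to its atlas-derived segmentations and boundary uncertainty; as a generic sketch only, training against uniformly smoothed soft labels with a soft Dice loss might look like the following (PyTorch; all names, the uniform smoothing, and the toy data are illustrative assumptions, not the authors' implementation).

```python
import torch
import torch.nn.functional as F

def smooth_labels(onehot: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """Uniform label smoothing: soften one-hot targets to reflect annotation noise."""
    n_classes = onehot.shape[1]
    return onehot * (1.0 - eps) + eps / n_classes

def soft_dice_loss(logits: torch.Tensor, soft_target: torch.Tensor,
                   smooth: float = 1e-5) -> torch.Tensor:
    """Dice loss computed directly against soft (smoothed) targets."""
    probs = torch.softmax(logits, dim=1)
    dims = (0, 2, 3, 4)  # batch + spatial dims of a 3D volume
    intersection = (probs * soft_target).sum(dims)
    cardinality = probs.sum(dims) + soft_target.sum(dims)
    dice = (2.0 * intersection + smooth) / (cardinality + smooth)
    return 1.0 - dice.mean()

# Toy 3D batch: 1 volume, 2 classes, 8^3 voxels
logits = torch.randn(1, 2, 8, 8, 8)
hard = F.one_hot(torch.randint(0, 2, (1, 8, 8, 8)), 2).permute(0, 4, 1, 2, 3).float()
print(soft_dice_loss(logits, smooth_labels(hard)).item())
```

The paper's method additionally concentrates the softening at tissue boundaries, which uniform smoothing does not capture.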
Affiliation(s)
- Davood Karimi
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Caitlin K Rollins
- Department of Neurology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Clemente Velasco-Annis
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Abdelhakim Ouaalam
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Ali Gholipour
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
5
Lu X, Zhang S, Liu Z, Liu S, Huang J, Kong G, Li M, Liang Y, Cui Y, Yang C, Zhao S. Ultrasonographic pathological grading of prostate cancer using automatic region-based Gleason grading network. Comput Med Imaging Graph 2022; 102:102125. [PMID: 36257091] [DOI: 10.1016/j.compmedimag.2022.102125]
Abstract
The Gleason scoring system is a reliable method for quantifying the aggressiveness of prostate cancer, providing an important reference for clinical assessment of therapeutic strategies. However, to the best of our knowledge, no study has addressed the pathological grading of prostate cancer from single ultrasound images. In this work, a novel Automatic Region-based Gleason Grading (ARGG) network for prostate cancer based on deep learning is proposed. ARGG consists of two stages: (1) a region labeling object detection (RLOD) network labels the prostate cancer lesion region; (2) a Gleason grading network (GNet) performs pathological grading of prostate ultrasound images. In RLOD, a new feature fusion structure, the Skip-connected Feature Pyramid Network (CFPN), is proposed as an auxiliary branch for extracting features and enhancing the fusion of high-level and low-level features, which helps detect small lesions and extract image detail. In GNet, we designed a synchronized pulse enhancement module (SPEM), based on pulse-coupled neural networks, to enhance the lesion regions detected by RLOD for use as training samples; the enhanced results and the originals are then fed into a channel attention classification network (CACN), which introduces an attention mechanism that benefits the prediction of cancer grading. Experiments on a dataset of prostate ultrasound images collected from hospitals show that the proposed Gleason grading model outperforms manual diagnosis by physicians, with a precision of 0.830. In addition, we evaluated the lesion detection performance of RLOD, which achieves a mean Dice metric of 0.815.
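The CACN's attention mechanism is not published in detail; a common way to realize channel attention is a squeeze-and-excitation block, sketched below (PyTorch; the class name, layer sizes, and reduction factor are placeholders, not the CACN architecture).

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: reweight feature
    channels using globally pooled per-channel statistics."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pool
        return x * w.view(b, c, 1, 1)     # excite: per-channel rescale

feat = torch.randn(2, 32, 64, 64)         # toy ultrasound feature map
print(ChannelAttention(32)(feat).shape)   # torch.Size([2, 32, 64, 64])
```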
Affiliation(s)
- Xu Lu
- Guangdong Polytechnic Normal University, Guangzhou 510665, China; Pazhou Lab, Guangzhou 510330, China
- Shulian Zhang
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Zhiyong Liu
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Shaopeng Liu
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Jun Huang
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Guoquan Kong
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Mingzhu Li
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Yinying Liang
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Yunneng Cui
- Department of Radiology, Foshan Maternity and Children's Healthcare Hospital Affiliated to Southern Medical University, Foshan 528000, China
- Chuan Yang
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Shen Zhao
- Department of Artificial Intelligence, Sun Yat-sen University, Guangzhou 510006, China
6
Vesal S, Gayo I, Bhattacharya I, Natarajan S, Marks LS, Barratt DC, Fan RE, Hu Y, Sonn GA, Rusu M. Domain generalization for prostate segmentation in transrectal ultrasound images: A multi-center study. Med Image Anal 2022; 82:102620. [PMID: 36148705] [PMCID: PMC10161676] [DOI: 10.1016/j.media.2022.102620]
Abstract
Prostate biopsy and image-guided treatment procedures are often performed under the guidance of ultrasound fused with magnetic resonance images (MRI). Accurate image fusion relies on accurate segmentation of the prostate on ultrasound images. Yet, the reduced signal-to-noise ratio and artifacts (e.g., speckle and shadowing) in ultrasound images limit the performance of automated prostate segmentation techniques and generalizing these methods to new image domains is inherently difficult. In this study, we address these challenges by introducing a novel 2.5D deep neural network for prostate segmentation on ultrasound images. Our approach addresses the limitations of transfer learning and finetuning methods (i.e., drop in performance on the original training data when the model weights are updated) by combining a supervised domain adaptation technique and a knowledge distillation loss. The knowledge distillation loss allows the preservation of previously learned knowledge and reduces the performance drop after model finetuning on new datasets. Furthermore, our approach relies on an attention module that considers model feature positioning information to improve the segmentation accuracy. We trained our model on 764 subjects from one institution and finetuned our model using only ten subjects from subsequent institutions. We analyzed the performance of our method on three large datasets encompassing 2067 subjects from three different institutions. Our method achieved an average Dice Similarity Coefficient (Dice) of 94.0±0.03 and Hausdorff Distance (HD95) of 2.28 mm in an independent set of subjects from the first institution. Moreover, our model generalized well in the studies from the other two institutions (Dice: 91.0±0.03; HD95: 3.7 mm and Dice: 82.0±0.03; HD95: 7.1 mm). We introduced an approach that successfully segmented the prostate on ultrasound images in a multi-center study, suggesting its clinical potential to facilitate the accurate fusion of ultrasound and MRI images to drive biopsy and image-guided treatments.
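The core idea, finetuning on a few subjects from a new institution while a distillation term keeps the updated model close to a frozen copy trained on the source domain, can be sketched generically (PyTorch; this is a textbook knowledge-distillation formulation with arbitrary weighting, not the authors' exact loss).

```python
import torch
import torch.nn.functional as F

def finetune_loss(student_logits, teacher_logits, target,
                  temperature: float = 2.0, alpha: float = 0.5):
    """Supervised segmentation loss on the new domain plus a distillation
    term that keeps the student close to the frozen source-domain teacher."""
    seg = F.cross_entropy(student_logits, target)
    t = temperature
    kd = F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(teacher_logits / t, dim=1),
        reduction="batchmean",
    ) * (t * t)
    return alpha * seg + (1.0 - alpha) * kd

# Toy 2.5D batch: 4 slices, 2 classes, 32x32 pixels
student = torch.randn(4, 2, 32, 32, requires_grad=True)
teacher = torch.randn(4, 2, 32, 32)   # outputs of the frozen teacher
labels = torch.randint(0, 2, (4, 32, 32))
print(finetune_loss(student, teacher, labels).item())
```

The distillation term is what limits the performance drop on the original training data that plain finetuning would cause.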
Affiliation(s)
- Sulaiman Vesal
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Iani Gayo
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, 66-72 Gower St, London WC1E 6EA, UK
- Indrani Bhattacharya
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA; Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Shyam Natarajan
- Department of Urology, University of California Los Angeles, 200 Medical Plaza Driveway, Los Angeles, CA 90024, USA
- Leonard S Marks
- Department of Urology, University of California Los Angeles, 200 Medical Plaza Driveway, Los Angeles, CA 90024, USA
- Dean C Barratt
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, 66-72 Gower St, London WC1E 6EA, UK
- Richard E Fan
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Yipeng Hu
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, 66-72 Gower St, London WC1E 6EA, UK
- Geoffrey A Sonn
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Mirabela Rusu
- Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
7
Bhattacharya I, Khandwala YS, Vesal S, Shao W, Yang Q, Soerensen SJ, Fan RE, Ghanouni P, Kunder CA, Brooks JD, Hu Y, Rusu M, Sonn GA. A review of artificial intelligence in prostate cancer detection on imaging. Ther Adv Urol 2022; 14:17562872221128791. [PMID: 36249889] [PMCID: PMC9554123] [DOI: 10.1177/17562872221128791]
Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as the current limitations including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Affiliation(s)
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, 1201 Welch Road, Stanford, CA 94305, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yash S Khandwala
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Sulaiman Vesal
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Qianye Yang
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Simon JC Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Epidemiology & Population Health, Stanford University School of Medicine, Stanford, CA, USA
- Richard E Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Christian A Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- James D Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yipeng Hu
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Geoffrey A Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
8
Artificial Intelligence and Machine Learning in Prostate Cancer Patient Management-Current Trends and Future Perspectives. Diagnostics (Basel) 2021; 11:354. [PMID: 33672608] [PMCID: PMC7924061] [DOI: 10.3390/diagnostics11020354]
Abstract
Artificial intelligence (AI) is the field of computer science that aims to build smart devices that perform tasks currently requiring human intelligence. Through machine learning (ML) and deep learning (DL), computers are taught to learn by example, something human beings do naturally. AI is revolutionizing healthcare. Digital pathology is increasingly assisted by AI, helping researchers analyze larger data sets and deliver faster and more accurate diagnoses of prostate cancer lesions. When applied to diagnostic imaging, AI has shown excellent accuracy in the detection of prostate lesions as well as in the prediction of patient outcomes in terms of survival and treatment response. The enormous quantity of data coming from the prostate tumor genome requires the fast, reliable and accurate computing power provided by machine learning algorithms. Radiotherapy is an essential part of the treatment of prostate cancer, and it is often difficult to predict its toxicity for patients. Artificial intelligence could play a future role in predicting how a patient will react to therapy side effects, and these technologies could give doctors better insights into how to plan radiotherapy treatment. Extending the capabilities of surgical robots toward more autonomous tasks will allow them to use information from the surgical field, recognize issues and implement the proper actions without the need for human intervention.
9
Abstract
Automatic and accurate prostate segmentation is an essential prerequisite for assisting diagnosis and treatment, such as guiding biopsy procedures and radiation therapy. Therefore, this paper proposes a cascaded dual attention network (CDA-Net) for automatic prostate segmentation in MRI scans. The network comprises two stages, RAS-FasterRCNN and RAU-Net. First, RAS-FasterRCNN uses an improved FasterRCNN and sequence correlation processing to extract regions of interest (ROI) containing the organ. This ROI extraction serves as a hard attention mechanism that focuses the subsequent network's segmentation on a specific area. Second, the addition of a residual convolution block and a self-attention mechanism in RAU-Net enables the network to gradually focus on the area where the organ exists while making full use of multiscale features. The algorithm was evaluated on the PROMISE12 and ASPS13 datasets, achieving Dice similarity coefficients of 92.88% and 92.65%, respectively, surpassing state-of-the-art algorithms. On a variety of complex slice images, especially at the base and apex of slice sequences, the algorithm also achieved credible segmentation performance.
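As a rough illustration of the self-attention idea in the second stage, here is a minimal non-local (dot-product) self-attention block for 2D feature maps (PyTorch; the class name, channel reduction, and residual weighting are generic assumptions, not the RAU-Net design).

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Non-local self-attention: every spatial position attends to all others."""
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (b, hw, c/8)
        k = self.k(x).flatten(2)                   # (b, c/8, hw)
        attn = torch.softmax(q @ k, dim=-1)        # (b, hw, hw) attention map
        v = self.v(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out                # residual connection

x = torch.randn(1, 16, 24, 24)
print(SelfAttention2d(16)(x).shape)                # torch.Size([1, 16, 24, 24])
```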
10
Song KD. Current status of deep learning applications in abdominal ultrasonography. Ultrasonography 2020; 40:177-182. [PMID: 33242931] [PMCID: PMC7994733] [DOI: 10.14366/usg.20085]
Abstract
Deep learning is one of the most popular artificial intelligence techniques used in the medical field. Although it is at an early stage compared to deep learning analyses of computed tomography or magnetic resonance imaging, studies applying deep learning to ultrasound imaging have been actively conducted. This review analyzes recent studies that applied deep learning to ultrasound imaging of various abdominal organs and explains the challenges encountered in these applications.
Affiliation(s)
- Kyoung Doo Song
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
11
Abstract
Artificial intelligence (AI) - the ability of a machine to perform cognitive tasks to achieve a particular goal based on provided data - is revolutionizing and reshaping our health-care systems. The current availability of ever-increasing computational power, highly developed pattern recognition algorithms and advanced image processing software working at very high speeds has led to the emergence of computer-based systems that are trained to perform complex tasks in bioinformatics, medical imaging and medical robotics. Accessibility to 'big data' enables the 'cognitive' computer to scan billions of bits of unstructured information, extract the relevant information and recognize complex patterns with increasing confidence. Computer-based decision-support systems based on machine learning (ML) have the potential to revolutionize medicine by performing complex tasks that are currently assigned to specialists, improving diagnostic accuracy, increasing throughput, improving clinical workflow, decreasing human resource costs and improving treatment choices. These characteristics could be especially helpful in the management of prostate cancer, with growing applications in diagnostic imaging, surgical interventions, skills training and assessment, digital pathology and genomics. Medicine must adapt to this changing world, and urologists, oncologists, radiologists and pathologists, as high-volume users of imaging and pathology, need to understand this burgeoning science and acknowledge that the development of highly accurate AI-based decision-support applications of ML will require collaboration between data scientists, computer researchers and engineers.
12
A partial augmented reality system with live ultrasound and registered preoperative MRI for guiding robot-assisted radical prostatectomy. Med Image Anal 2019; 60:101588. [PMID: 31739281] [DOI: 10.1016/j.media.2019.101588]
Abstract
We propose an image guidance system for robot assisted laparoscopic radical prostatectomy (RALRP). A virtual 3D reconstruction of the surgery scene is displayed underneath the endoscope's feed on the surgeon's console. This scene consists of an annotated preoperative Magnetic Resonance Image (MRI) registered to intraoperative 3D Trans-rectal Ultrasound (TRUS) as well as real-time sagittal 2D TRUS images of the prostate, 3D models of the prostate, the surgical instrument and the TRUS transducer. We display these components with accurate real-time coordinates with respect to the robot system. Since the scene is rendered from the viewpoint of the endoscope, given correct parameters of the camera, an augmented scene can be overlaid on the video output. The surgeon can rotate the ultrasound transducer and determine the position of the projected axial plane in the MRI using one of the registered da Vinci instruments. This system was tested in the laboratory on custom-made agar prostate phantoms. We achieved an average total registration accuracy of 3.2 ± 1.3 mm. We also report on the successful application of this system in the operating room in 12 patients. The average registration error between the TRUS and the da Vinci system for the last 8 patients was 1.4 ± 0.3 mm and average target registration error of 2.1 ± 0.8 mm, resulting in an in vivo overall robot system to MRI mean registration error of 3.5 mm or less, which is consistent with our laboratory studies.
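For readers unfamiliar with the reported error metrics: target registration error of this kind is the distance between corresponding landmarks after applying the estimated transform. A minimal sketch follows (NumPy; the rigid transform, landmark sets, and function name are synthetic examples, not the study's data or pipeline).

```python
import numpy as np

def target_registration_error(T: np.ndarray, moving_pts: np.ndarray,
                              fixed_pts: np.ndarray) -> float:
    """Mean Euclidean distance (mm) between moving landmarks mapped through a
    4x4 homogeneous transform T and their corresponding fixed landmarks."""
    homog = np.c_[moving_pts, np.ones(len(moving_pts))]   # (n, 4)
    mapped = (T @ homog.T).T[:, :3]
    return float(np.linalg.norm(mapped - fixed_pts, axis=1).mean())

# Synthetic example: a residual translation of about 1 mm
T = np.eye(4)
T[:3, 3] = [0.5, -0.5, 0.7]
pts = np.random.rand(6, 3) * 50.0                         # landmarks in mm
print(f"TRE = {target_registration_error(T, pts, pts):.2f} mm")
```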
13
Lei Y, Tian S, He X, Wang T, Wang B, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. Ultrasound prostate segmentation based on multidirectional deeply supervised V-Net. Med Phys 2019; 46:3194-3206. [PMID: 31074513] [PMCID: PMC6625925] [DOI: 10.1002/mp.13577]
Abstract
PURPOSE Transrectal ultrasound (TRUS) is a versatile and real-time imaging modality that is commonly used in image-guided prostate cancer interventions (e.g., biopsy and brachytherapy). Accurate segmentation of the prostate is key to biopsy needle placement, brachytherapy treatment planning, and motion management. Manual segmentation during these interventions is time-consuming and subject to inter- and intraobserver variation. To address these drawbacks, we aimed to develop a deep learning-based method that integrates deep supervision into a three-dimensional (3D) patch-based V-Net for prostate segmentation. METHODS AND MATERIALS We developed a multidirectional deep-learning-based method to automatically segment the prostate for ultrasound-guided radiation therapy. A 3D supervision mechanism is integrated into the V-Net stages to deal with the optimization difficulties of training a deep network with limited training data. We combine a binary cross-entropy (BCE) loss and a batch-based Dice loss into a stage-wise hybrid loss function for deeply supervised training. During the segmentation stage, patches are extracted from the newly acquired ultrasound image and fed into the trained network, which adaptively labels the prostate tissue. The final segmented prostate volume is reconstructed using patch fusion and further refined through a contour refinement process. RESULTS Forty-four patients' TRUS images were used to test our segmentation method. Our segmentation results were compared with manually segmented contours (ground truth). The mean prostate volume Dice similarity coefficient (DSC), Hausdorff distance (HD), mean surface distance (MSD), and residual mean surface distance (RMSD) were 0.92 ± 0.03, 3.94 ± 1.55 mm, 0.60 ± 0.23 mm, and 0.90 ± 0.38 mm, respectively. CONCLUSION We developed a novel deeply supervised deep learning-based approach with reliable contour refinement to automatically segment the prostate on TRUS, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for diagnostic and therapeutic applications in prostate cancer.
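A single-stage version of the hybrid BCE-plus-Dice loss described above can be sketched generically (PyTorch; the equal weighting, smoothing constant, and toy patch sizes are assumptions, not the paper's exact formulation).

```python
import torch
import torch.nn.functional as F

def hybrid_bce_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                         bce_weight: float = 0.5, smooth: float = 1e-5):
    """Binary cross-entropy plus batch-based Dice loss on 3D patches."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum()   # pooled over the whole batch
    union = probs.sum() + target.sum()
    dice = (2.0 * intersection + smooth) / (union + smooth)
    return bce_weight * bce + (1.0 - bce_weight) * (1.0 - dice)

# Toy batch: 2 patches of 16^3 voxels, single foreground channel
logits = torch.randn(2, 1, 16, 16, 16)
target = (torch.rand(2, 1, 16, 16, 16) > 0.5).float()
print(hybrid_bce_dice_loss(logits, target).item())
```

In a deeply supervised setup such as the one described, this loss would be applied to each V-Net stage's auxiliary output and the stage losses combined.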
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiuxiu He
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Bo Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA