1. Qi L, Li X, Yang Y, Zhao M, Lin A, Ma L. Accuracy of machine learning in the preoperative identification of ovarian borderline tumors: a meta-analysis. Clin Radiol 2024;79:501-514. PMID: 38670918. DOI: 10.1016/j.crad.2024.02.012.
Abstract
AIM The objective of this study was to explore the diagnostic value of machine learning (ML) in borderline ovarian tumors through meta-analysis. METHODS PubMed, Embase, Web of Science, and Cochrane Library databases were comprehensively searched from database inception until February 16, 2023. The Prediction Model Risk of Bias Assessment Tool (PROBAST) was adopted to evaluate the risk of bias in the original studies. Subgroup analyses of ML were conducted according to clinical features and radiomics features. We separately assessed the discriminative value of ML for borderline vs benign and borderline vs malignant tumors. RESULTS Eighteen studies involving 12,778 subjects were included in our analysis. The modeling variables mainly consisted of radiomics features (n=13) and a small number of clinical features (n=5). When distinguishing between borderline and benign tumors, the ML model based on radiomics features achieved a c-index of 0.782 (95% CI: 0.732-0.831), sensitivity of 0.75 (95% CI: 0.67-0.82), and specificity of 0.75 (95% CI: 0.67-0.81) in the validation set. When distinguishing between borderline and malignant tumors, the ML model based on radiomics features achieved a c-index of 0.916 (95% CI: 0.891-0.940), sensitivity of 0.86 (95% CI: 0.78-0.91), and specificity of 0.88 (95% CI: 0.82-0.92) in the validation set. In addition, we analyzed the discriminatory ability of radiologists and found that their sensitivity was 0.26 (95% CI: 0.12-0.46) and specificity was 0.94 (95% CI: 0.90-0.97). CONCLUSIONS ML has tremendous potential in the preoperative diagnosis and differentiation of borderline ovarian tumors and may be more accurate than radiologists.
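For readers who want to sanity-check pooled figures like these, the sensitivity/specificity arithmetic is straightforward. A minimal sketch with logit-based 95% CIs; the 2x2 counts below are hypothetical placeholders, not data from this meta-analysis:

```python
import math

def sens_spec_with_ci(tp: int, fn: int, tn: int, fp: int, z: float = 1.96):
    """Sensitivity/specificity from a 2x2 table with logit-based 95% CIs."""
    def prop_ci(successes, total):
        p = successes / total
        logit = math.log(p / (1 - p))          # logit keeps the CI inside (0, 1)
        se = math.sqrt(1 / successes + 1 / (total - successes))
        inv = lambda x: 1 / (1 + math.exp(-x))
        return p, inv(logit - z * se), inv(logit + z * se)
    return {"sensitivity": prop_ci(tp, tp + fn), "specificity": prop_ci(tn, tn + fp)}

# Hypothetical validation-set counts, not taken from the meta-analysis.
print(sens_spec_with_ci(tp=75, fn=25, tn=75, fp=25))
```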
Affiliation(s)
- L Qi: Department of Gynecology and Obstetrics, Yantai Yuhuangding Hospital Affiliated to Qingdao University, Yantai City, Shandong Province, China
- X Li: Department of Pathology, Yantai Yuhuangding Hospital Affiliated to Qingdao University, Yantai City, Shandong Province, China
- Y Yang: Emergency Department, HongQi Hospital Affiliated to MuDanJiang Medical University, MuDanJiang City, Heilongjiang Province, China
- M Zhao: Department of Gynecology and Obstetrics, Yantai Yuhuangding Hospital Affiliated to Qingdao University, Yantai City, Shandong Province, China
- A Lin: Department of Gynecology and Obstetrics, Yantai Yuhuangding Hospital Affiliated to Qingdao University, Yantai City, Shandong Province, China
- L Ma: Center for Laboratory Diagnosis, Yantai Yuhuangding Hospital Affiliated to Qingdao University, Yantai City, Shandong Province, China
2. Correia ETDO, Baydoun A, Li Q, Costa DN, Bittencourt LK. Emerging and anticipated innovations in prostate cancer MRI and their impact on patient care. Abdom Radiol (NY) 2024. PMID: 38877356. DOI: 10.1007/s00261-024-04423-4.
Abstract
Prostate cancer (PCa) remains the most common malignancy affecting men, with over 3 million men living with the disease in the United States and an estimated 288,000 new cases and almost 35,000 deaths there in 2023. Over the last few decades, imaging has been a cornerstone of PCa care, playing a crucial role in detection, staging, and assessment of recurrence, and in guiding diagnostic and therapeutic interventions. To improve diagnostic accuracy and outcomes in PCa care, remarkable advancements have been made across imaging modalities in recent years. This paper reviews the main innovations in PCa magnetic resonance imaging, including MRI protocols, MRI-guided procedural interventions, artificial intelligence algorithms, and positron emission tomography, which may shape PCa care in the future.
Affiliation(s)
- Atallah Baydoun: Department of Radiation Oncology, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Qiubai Li: Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Daniel N Costa: Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Leonardo Kayat Bittencourt: Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, OH, USA; Department of Radiology, Case Western Reserve University, 11100 Euclid Ave, Cleveland, OH 44106, USA
3. Peng J, Stowe HB, Samson PP, Robinson CG, Yang C, Hu W, Zhang Z, Kim T, Hugo GD, Mazur TR, Cai B. Inter-fractional portability of deep learning models for lung target tracking on cine imaging acquired in MRI-guided radiotherapy. Phys Eng Sci Med 2024;47:769-777. PMID: 38198064. DOI: 10.1007/s13246-023-01371-z.
Abstract
MRI-guided radiotherapy systems enable beam gating by tracking the target on planar, two-dimensional cine images acquired during treatment. This study evaluates how deep-learning (DL) models for target tracking that are trained on data from one fraction translate to subsequent fractions. Cine images were acquired for six patients treated on an MRI-guided radiotherapy platform (MRIdian, ViewRay Inc.) with an onboard 0.35 T MRI scanner. Three DL models (U-Net, attention U-Net, and nested U-Net) for target tracking were trained using two strategies: (1) uniform training using data obtained only from the first fraction, with testing performed on data from subsequent fractions, and (2) adaptive training, in which training was updated each fraction by adding 20 samples from the current fraction, with testing performed on the remaining images from that fraction. Tracking performance was compared between algorithms, models, and training strategies by evaluating the Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD95) between automatically generated and manually specified contours. The mean DSC across all six patients between manual contours and contours generated by the onboard tracking algorithm (OBT) was 0.68 ± 0.16. Compared with OBT, DSC values improved by 17.0-19.3% for the three DL models with uniform training and by 24.7-25.7% for the models based on adaptive training. HD95 values improved by 50.6-54.5% for the models based on adaptive training. DL-based techniques achieved better tracking performance than the onboard, registration-based tracking approach, and DL-based tracking improved further with an adaptive strategy that augments training data fraction by fraction.
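The two contour metrics used here, DSC and HD95, can be computed directly from binary masks. A minimal sketch with toy masks; the HD95 below uses all foreground pixels rather than extracted boundary points, a common simplification that differs from strict surface-based implementations:

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance, approximated over all
    foreground pixels (strict versions use boundary points only)."""
    pa = np.argwhere(a) * np.asarray(spacing)
    pb = np.argwhere(b) * np.asarray(spacing)
    d = cdist(pa, pb)
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))

# Two overlapping square "targets" standing in for contours on a cine frame.
a = np.zeros((64, 64), bool); a[20:40, 20:40] = True
b = np.zeros((64, 64), bool); b[22:42, 22:42] = True
print(f"DSC={dice(a, b):.3f}, HD95={hd95(a, b):.2f} (unit pixel spacing assumed)")
```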
Affiliation(s)
- Jiayuan Peng: Academy for Engineering and Technology, Fudan University, Shanghai, China; Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA; Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Hayley B Stowe: Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
- Pamela P Samson: Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
- Clifford G Robinson: Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
- Cui Yang: Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Weigang Hu: Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Zhen Zhang: Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Taeho Kim: Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
- Geoffrey D Hugo: Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
- Thomas R Mazur: Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
- Bin Cai: Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA; Department of Radiation Oncology's Division of Medical Physics & Engineering, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
4. Fechter T, Sachpazidis I, Baltas D. The use of deep learning in interventional radiotherapy (brachytherapy): A review with a focus on open source and open data. Z Med Phys 2024;34:180-196. PMID: 36376203. PMCID: PMC11156786. DOI: 10.1016/j.zemedi.2022.10.005.
Abstract
Deep learning has advanced to become one of the most important technologies in almost all medical fields, and it plays an especially large role in areas related to medical imaging. In interventional radiotherapy (brachytherapy), however, deep learning is still in an early phase. In this review, we first investigated and scrutinised the role of deep learning in all processes of interventional radiotherapy and directly related fields, and summarised the most recent developments. For better understanding, we provide explanations of key terms and approaches to solving common deep learning problems. Because reproducing the results of deep learning algorithms requires both source code and training data, a second focus of this work is the analysis of the availability of open source code, open data and open models. Our analysis shows that deep learning already plays a major role in some areas of interventional radiotherapy but is still hardly present in others. Nevertheless, its impact is increasing with the years, partly self-propelled but also influenced by closely related fields. Open source code, data and models are growing in number but are still scarce and unevenly distributed among research groups. The reluctance to publish code, data and models limits reproducibility and restricts evaluation to mono-institutional datasets. We conclude that deep learning can positively change the workflow of interventional radiotherapy, but there is still room for improvement when it comes to reproducible results and standardised evaluation methods.
Affiliation(s)
- Tobias Fechter: Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Ilias Sachpazidis: Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Dimos Baltas: Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
5. Bengs M, Sprenger J, Gerlach S, Neidhardt M, Schlaefer A. Real-Time Motion Analysis With 4D Deep Learning for Ultrasound-Guided Radiotherapy. IEEE Trans Biomed Eng 2023;70:2690-2699. PMID: 37030809. DOI: 10.1109/TBME.2023.3262422.
Abstract
Motion compensation in radiation therapy is a challenging scenario that requires estimating and forecasting the motion of tissue structures to deliver the target dose. Ultrasound offers direct, real-time imaging of tissue and is being considered for image guidance in radiation therapy. Recently, fast volumetric ultrasound has gained traction, but motion analysis with such high-dimensional data remains difficult. While deep learning could bring many advantages, such as fast data processing and high performance, it remains unclear how to process sequences of hundreds of image volumes efficiently and effectively. We present a 4D deep learning approach for real-time motion estimation and forecasting using long-term 4D ultrasound data. Using motion traces acquired during radiation therapy combined with various tissue types, our results demonstrate that long-term motion estimation can be performed without markers, with a tracking error of 0.35±0.2 mm and an inference time of less than 5 ms. We also demonstrate forecasting directly from the image data up to 900 ms into the future. Overall, our findings highlight that 4D deep learning is a promising approach for motion analysis during radiotherapy.
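As a rough illustration of the forecasting task (not the authors' 4D network), the sketch below extrapolates a 1D motion trace 900 ms ahead from its most recent samples; the sampling interval, window size, and synthetic respiratory trace are all arbitrary assumptions:

```python
import numpy as np

def forecast(trace: np.ndarray, dt_ms: float, horizon_ms: float) -> float:
    """Forecast a 1D motion trace 'horizon_ms' ahead by linear extrapolation
    over the last few samples -- a crude stand-in for a learned predictor."""
    k = min(5, len(trace))                       # short history window
    t = np.arange(k) * dt_ms
    slope, intercept = np.polyfit(t, trace[-k:], 1)
    return slope * (t[-1] + horizon_ms) + intercept

# Synthetic respiratory-like trace in mm, sampled every 50 ms.
t = np.arange(0, 5000, 50)
trace = 5.0 * np.sin(2 * np.pi * t / 4000.0)
print(f"forecast 900 ms ahead: {forecast(trace, 50, 900):.2f} mm")
```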
6. Zhao JZ, Ni R, Chow R, Rink A, Weersink R, Croke J, Raman S. Artificial intelligence applications in brachytherapy: A literature review. Brachytherapy 2023;22:429-445. PMID: 37248158. DOI: 10.1016/j.brachy.2023.04.003.
Abstract
PURPOSE Artificial intelligence (AI) has the potential to simplify and optimize various steps of the brachytherapy workflow, and this literature review aims to provide an overview of the work done in this field. METHODS AND MATERIALS We conducted a literature search in June 2022 on PubMed, Embase, and Cochrane for papers that proposed AI applications in brachytherapy. RESULTS A total of 80 papers satisfied the inclusion/exclusion criteria. These papers were categorized as follows: segmentation (24), registration and image processing (6), preplanning (13), dose prediction and treatment planning (11), applicator/catheter/needle reconstruction (16), and quality assurance (10). AI techniques ranged from classical models such as support vector machines and decision-tree-based learning to newer techniques such as U-Net and deep reinforcement learning, and were applied to facilitate small steps of a process (e.g., optimizing applicator selection) or to automate an entire workflow step (e.g., end-to-end preplanning). Many of these algorithms demonstrated human-level performance and offer significant improvements in speed. CONCLUSIONS AI has the potential to augment, automate, and/or accelerate many steps of the brachytherapy workflow. We recommend that future studies adhere to standard reporting guidelines. We also stress the importance of using larger sample sizes and reporting results using clinically interpretable measures.
Affiliation(s)
- Jonathan ZL Zhao: Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Ruiyan Ni: Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Ronald Chow: Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
- Alexandra Rink: Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Robert Weersink: Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
- Jennifer Croke: Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada
- Srinivas Raman: Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada
7. Qureshi I, Yan J, Abbas Q, Shaheed K, Riaz AB, Wahid A, Khan MWJ, Szczuko P. Medical image segmentation using deep semantic-based methods: A review of techniques, applications and emerging trends. Information Fusion 2023. DOI: 10.1016/j.inffus.2022.09.031.
8. Ni C, Feng B, Yao J, Zhou X, Shen J, Ou D, Peng C, Xu D. Value of deep learning models based on ultrasonic dynamic videos for distinguishing thyroid nodules. Front Oncol 2023;12:1066508. PMID: 36733368. PMCID: PMC9887311. DOI: 10.3389/fonc.2022.1066508.
Abstract
Objective This study was designed to distinguish benign and malignant thyroid nodules using deep learning (DL) models based on ultrasound dynamic videos. Methods Ultrasound dynamic videos of 1018 thyroid nodules were retrospectively collected from 657 patients in Zhejiang Cancer Hospital from January 2020 to December 2020 and used to test five DL models. Results In the internal test set, the area under the receiver operating characteristic curve (AUROC) was 0.929 (95% CI: 0.888-0.970) for the best-performing model, LSTM. Two radiologists interpreted the dynamic videos with AUROC values of 0.760 (95% CI: 0.653-0.867) and 0.815 (95% CI: 0.778-0.853). In the external test set, the best-performing DL model had an AUROC of 0.896 (95% CI: 0.847-0.945), and the two ultrasound radiologists had AUROC values of 0.754 (95% CI: 0.649-0.850) and 0.833 (95% CI: 0.797-0.869). Conclusion This study demonstrates that the DL model based on ultrasound dynamic videos performs better than the ultrasound radiologists in distinguishing thyroid nodules.
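AUROC comparisons like these are straightforward to reproduce on one's own data. A minimal sketch with synthetic labels/scores and a percentile-bootstrap 95% CI (the study's actual CI method is not stated in the abstract, so the bootstrap here is an assumption):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_with_ci(y_true, y_score, n_boot=2000, seed=0):
    """Point AUROC plus a percentile-bootstrap 95% CI."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    point = roc_auc_score(y_true, y_score)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:    # resample must keep both classes
            continue
        stats.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [2.5, 97.5])
    return point, lo, hi

# Toy labels/scores standing in for per-nodule model outputs.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
s = y * 0.6 + rng.normal(0, 0.4, 200)          # informative but noisy scores
print(auroc_with_ci(y, s))
```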
Affiliation(s)
- Chen Ni: The Second Clinical School of Zhejiang Chinese Medical University, Hangzhou, China
- Bojian Feng: Key Laboratory of Head and Neck Cancer Translational Research of Zhejiang Province, Hangzhou, China
- Jincao Yao: Department of Ultrasonography, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer, Chinese Academy of Sciences, Hangzhou, Zhejiang, China
- Xueqin Zhou: Clinical Research Department, Esaote (Shenzhen) Medical Equipment Co., Ltd., Xinyilingyu Research Center, Shenzhen, China
- Jiafei Shen: Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China; Key Laboratory of Head and Neck Cancer Translational Research of Zhejiang Province, Hangzhou, China
- Di Ou: Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China; Key Laboratory of Head and Neck Cancer Translational Research of Zhejiang Province, Hangzhou, China
- Chanjuan Peng: Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China; Key Laboratory of Head and Neck Cancer Translational Research of Zhejiang Province, Hangzhou, China
- Dong Xu (corresponding author): Key Laboratory of Head and Neck Cancer Translational Research of Zhejiang Province, Hangzhou, China; Department of Ultrasonography, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer, Chinese Academy of Sciences, Hangzhou, Zhejiang, China
9. Vesal S, Gayo I, Bhattacharya I, Natarajan S, Marks LS, Barratt DC, Fan RE, Hu Y, Sonn GA, Rusu M. Domain generalization for prostate segmentation in transrectal ultrasound images: A multi-center study. Med Image Anal 2022;82:102620. PMID: 36148705. PMCID: PMC10161676. DOI: 10.1016/j.media.2022.102620.
Abstract
Prostate biopsy and image-guided treatment procedures are often performed under the guidance of ultrasound fused with magnetic resonance images (MRI). Accurate image fusion relies on accurate segmentation of the prostate on ultrasound images. Yet, the reduced signal-to-noise ratio and artifacts (e.g., speckle and shadowing) in ultrasound images limit the performance of automated prostate segmentation techniques and generalizing these methods to new image domains is inherently difficult. In this study, we address these challenges by introducing a novel 2.5D deep neural network for prostate segmentation on ultrasound images. Our approach addresses the limitations of transfer learning and finetuning methods (i.e., drop in performance on the original training data when the model weights are updated) by combining a supervised domain adaptation technique and a knowledge distillation loss. The knowledge distillation loss allows the preservation of previously learned knowledge and reduces the performance drop after model finetuning on new datasets. Furthermore, our approach relies on an attention module that considers model feature positioning information to improve the segmentation accuracy. We trained our model on 764 subjects from one institution and finetuned our model using only ten subjects from subsequent institutions. We analyzed the performance of our method on three large datasets encompassing 2067 subjects from three different institutions. Our method achieved an average Dice Similarity Coefficient (Dice) of 94.0±0.03 and Hausdorff Distance (HD95) of 2.28 mm in an independent set of subjects from the first institution. Moreover, our model generalized well in the studies from the other two institutions (Dice: 91.0±0.03; HD95: 3.7 mm and Dice: 82.0±0.03; HD95: 7.1 mm). We introduced an approach that successfully segmented the prostate on ultrasound images in a multi-center study, suggesting its clinical potential to facilitate the accurate fusion of ultrasound and MRI images to drive biopsy and image-guided treatments.
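The pairing of a supervised loss with a knowledge-distillation term that keeps the fine-tuned model close to the frozen source model can be sketched generically. A minimal PyTorch illustration with a standard softened-KL distillation term; this is not the authors' exact formulation (their method also involves supervised domain adaptation and an attention module):

```python
import torch
import torch.nn.functional as F

def adaptation_loss(student_logits, teacher_logits, target, alpha=0.5, T=2.0):
    """Supervised segmentation loss plus a distillation term that anchors the
    fine-tuned (student) model to the frozen source (teacher) model.
    Shapes: logits (N, C, H, W); target (N, H, W) with class indices."""
    ce = F.cross_entropy(student_logits, target)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return (1 - alpha) * ce + alpha * kd

# Toy tensors standing in for prostate/background predictions.
s = torch.randn(2, 2, 64, 64, requires_grad=True)
t = torch.randn(2, 2, 64, 64)
y = torch.randint(0, 2, (2, 64, 64))
print(adaptation_loss(s, t, y).item())
```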
Affiliation(s)
- Sulaiman Vesal: Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Iani Gayo: Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, 66-72 Gower St, London WC1E 6EA, UK
- Indrani Bhattacharya: Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA; Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Shyam Natarajan: Department of Urology, University of California Los Angeles, 200 Medical Plaza Driveway, Los Angeles, CA 90024, USA
- Leonard S Marks: Department of Urology, University of California Los Angeles, 200 Medical Plaza Driveway, Los Angeles, CA 90024, USA
- Dean C Barratt: Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, 66-72 Gower St, London WC1E 6EA, UK
- Richard E Fan: Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Yipeng Hu: Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, 66-72 Gower St, London WC1E 6EA, UK
- Geoffrey A Sonn: Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Mirabela Rusu: Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
10. Zhou S, Xu X, Bai J, Bragin M. Combining multi-view ensemble and surrogate Lagrangian relaxation for real-time 3D biomedical image segmentation on the edge. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.09.039.
11. Wang S, Singh VK, Cheah E, Wang X, Li Q, Chou SH, Lehman CD, Kumar V, Samir AE. Stacked dilated convolutions and asymmetric architecture for U-Net-based medical image segmentation. Comput Biol Med 2022;148:105891. PMID: 35932729. PMCID: PMC9596264. DOI: 10.1016/j.compbiomed.2022.105891.
Abstract
Deep learning has been widely utilized for medical image segmentation. The most commonly used U-Net and its variants often share two common characteristics, but solid evidence for the effectiveness of either is lacking. First, each block (i.e., consecutive convolutions of feature maps of the same resolution) outputs feature maps from the last convolution, limiting the variety of the receptive fields. Second, the network has a symmetric structure in which the encoder and decoder paths have similar numbers of channels. We explored two novel revisions: a stacked dilated operation that outputs feature maps from multi-scale receptive fields to replace the consecutive convolutions, and an asymmetric architecture with fewer channels in the decoder path. Two novel models were developed: U-Net using the stacked dilated operation (SDU-Net) and asymmetric SDU-Net (ASDU-Net). We used both publicly available and private datasets to assess the efficacy of the proposed models. Extensive experiments confirmed that SDU-Net outperformed or matched the state-of-the-art while using fewer parameters (40% of U-Net). ASDU-Net further reduced the model parameters to 20% of U-Net with performance comparable to SDU-Net. In conclusion, the stacked dilated operation and the asymmetric structure are promising for improving the performance of U-Net and its variants.
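The multi-scale idea can be approximated as parallel dilated convolutions whose outputs are concatenated, so one block exposes several receptive fields at once. A hedged PyTorch sketch; the dilation rates and exact wiring are illustrative, not the published SDU-Net configuration:

```python
import torch
import torch.nn as nn

class StackedDilatedBlock(nn.Module):
    """Parallel dilated convolutions concatenated along channels -- a sketch
    of the multi-receptive-field block idea, not the authors' exact layer."""
    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        branch_ch = out_ch // len(dilations)
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, 3, padding=d, dilation=d),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

block = StackedDilatedBlock(32, 64)
print(block(torch.randn(1, 32, 128, 128)).shape)  # torch.Size([1, 64, 128, 128])
```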
Affiliation(s)
- Shuhang Wang, Vivek Kumar Singh, Eugene Cheah, Xiaohong Wang, Qian Li, Shinn-Huey Chou, Constance D Lehman, Viksit Kumar, Anthony E Samir (all authors): Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
12. Shu X, Gu Y, Zhang X, Hu C, Cheng K. FCRB U-Net: A novel fully connected residual block U-Net for fetal cerebellum ultrasound image segmentation. Comput Biol Med 2022;148:105693. PMID: 35717404. DOI: 10.1016/j.compbiomed.2022.105693.
Abstract
In this paper, we propose a novel U-Net with fully connected residual blocks (FCRB U-Net) for the fetal cerebellum ultrasound image segmentation task. FCRB U-Net, an improved convolutional neural network (CNN) based on U-Net, replaces the double convolution operation in the original model with the fully connected residual block and embeds an effective channel attention module to enhance the extraction of valid features. Moreover, in the decoding stage, a feature reuse module is employed to form a fully connected decoder that makes full use of deep features. FCRB U-Net can effectively alleviate the loss of feature information during the convolution process and improve segmentation accuracy. Experimental results demonstrate that the proposed approach is effective and promising for fetal cerebellar segmentation in real ultrasound images. The average IoU value and mean Dice index reach 86.72% and 90.45%, respectively, which are 3.07% and 5.25% higher than those of the basic U-Net.
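A generic squeeze-and-excitation-style channel attention module, of the kind this abstract describes, can be sketched as follows; the paper's "effective channel attention" module may differ in detail:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Global average pooling followed by a small bottleneck MLP that
    rescales each channel -- a generic channel-attention sketch."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))   # (N, C) channel descriptors
        return x * w[:, :, None, None]    # rescale feature maps per channel

print(ChannelAttention(32)(torch.randn(2, 32, 64, 64)).shape)
```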
Affiliation(s)
- Xin Shu: School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang 212100, China
- Yingyan Gu: School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang 212100, China
- Xin Zhang: Department of Medical Ultrasound, Affiliated Hospital of Jiangsu University, Zhenjiang 212003, China
- Chunlong Hu: School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang 212100, China
- Ke Cheng: School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang 212100, China
13. Peng T, Wu Y, Qin J, Wu QJ, Cai J. H-ProSeg: Hybrid ultrasound prostate segmentation based on explainability-guided mathematical model. Comput Methods Programs Biomed 2022;219:106752. PMID: 35338887. DOI: 10.1016/j.cmpb.2022.106752.
Abstract
BACKGROUND AND OBJECTIVE Accurate and robust prostate segmentation in transrectal ultrasound (TRUS) images is of great interest for image-guided prostate interventions and prostate cancer diagnosis. However, it remains a challenging task for various reasons, including a missing or ambiguous boundary between the prostate and surrounding tissues, the presence of shadow artifacts, intra-prostate intensity heterogeneity, and anatomical variations. METHODS Here, we present a hybrid method for prostate segmentation (H-ProSeg) in TRUS images, using a small number of radiologist-defined seed points as the prior. This method consists of three subnetworks. The first uses an improved principal-curve-based model to obtain data sequences consisting of the seed points and their corresponding projection indices. The second uses an improved differential-evolution-based artificial neural network for training to decrease the model error. The third uses the parameters of the artificial neural network to produce a smooth mathematical description of the prostate contour. The performance of the H-ProSeg method was assessed in 55 brachytherapy patients using Dice similarity coefficient (DSC), Jaccard similarity coefficient (Ω), and accuracy (ACC) values. RESULTS The H-ProSeg method achieved excellent segmentation accuracy, with DSC, Ω, and ACC values of 95.8%, 94.3%, and 95.4%, respectively. Even under Gaussian noise (standard deviation of the Gaussian function, σ = 50), the DSC, Ω, and ACC values remained as high as 93.3%, 91.9%, and 93%, respectively. As σ increased from 10 to 50, the DSC, Ω, and ACC values fluctuated by at most approximately 2.5%, demonstrating the excellent robustness of our method. CONCLUSIONS Here, we present a hybrid method for accurate and robust prostate ultrasound image segmentation. The H-ProSeg method achieved superior performance compared with current state-of-the-art techniques. Knowledge of the precise boundaries of the prostate is crucial for the conservation of risk structures. The proposed models have the potential to improve prostate cancer diagnosis and therapeutic outcomes.
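The noise-robustness experiment in the results can be reproduced as a simple harness: corrupt the input with Gaussian noise of increasing σ and re-evaluate the overlap metric. A sketch with a toy thresholding "segmenter" standing in for H-ProSeg (all inputs hypothetical):

```python
import numpy as np

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard index (the Ω metric in the abstract) between boolean masks."""
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def robustness_curve(image, segment_fn, reference, sigmas=(10, 20, 30, 40, 50), seed=0):
    """Re-run a segmentation function on noise-corrupted copies of the image
    and report the Jaccard index at each noise level."""
    rng = np.random.default_rng(seed)
    return {s: jaccard(segment_fn(image + rng.normal(0, s, image.shape)), reference)
            for s in sigmas}

# Toy phantom and a fixed-threshold stand-in segmenter.
image = np.zeros((128, 128)); image[40:90, 40:90] = 200.0
reference = image > 100
print(robustness_curve(image, lambda im: im > 100, reference))
```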
Affiliation(s)
- Tao Peng: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Yiyun Wu: Department of Medical Technology, Jiangsu Province Hospital, Nanjing, Jiangsu, China
- Jing Qin: Department of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Qingrong Jackie Wu: Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Jing Cai: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
14. Peng T, Tang C, Wu Y, Cai J. H-SegMed: A Hybrid Method for Prostate Segmentation in TRUS Images via Improved Closed Principal Curve and Improved Enhanced Machine Learning. Int J Comput Vis 2022. DOI: 10.1007/s11263-022-01619-3.
15. Orlando N, Gyacskov I, Gillies DJ, Guo F, Romagnoli C, D'Souza D, Cool DW, Hoover DA, Fenster A. Effect of dataset size, image quality, and image type on deep learning-based automatic prostate segmentation in 3D ultrasound. Phys Med Biol 2022;67. PMID: 35240585. DOI: 10.1088/1361-6560/ac5a93.
Abstract
Three-dimensional (3D) transrectal ultrasound (TRUS) is utilized in prostate cancer diagnosis and treatment, necessitating time-consuming manual prostate segmentation. We have previously developed an automatic 3D prostate segmentation algorithm involving deep learning prediction on radially sampled 2D images followed by 3D reconstruction, trained on a large, clinically diverse dataset with variable image quality. As large clinical datasets are rare, widespread adoption of automatic segmentation could be facilitated with efficient 2D-based approaches and the development of an image quality grading method. The complete training dataset of 6761 2D images, resliced from 206 3D TRUS volumes acquired using end-fire and side-fire acquisition methods, was split to train two separate networks using either end-fire or side-fire images. Split datasets were reduced to 1000, 500, 250, and 100 2D images. For deep learning prediction, modified U-Net and U-Net++ architectures were implemented and compared using an unseen test dataset of 40 3D TRUS volumes. A 3D TRUS image quality grading scale with three factors (acquisition quality, artifact severity, and boundary visibility) was developed to assess the impact on segmentation performance. For the complete training dataset, U-Net and U-Net++ networks demonstrated equivalent performance, but when trained using split end-fire/side-fire datasets, U-Net++ significantly outperformed the U-Net. Compared to the complete training datasets, U-Net++ trained using reduced-size end-fire and side-fire datasets demonstrated equivalent performance down to 500 training images. For this dataset, image quality had no impact on segmentation performance for end-fire images but did have a significant effect for side-fire images, with boundary visibility having the largest impact. Our algorithm provided fast (<1.5 s) and accurate 3D segmentations across clinically diverse images, demonstrating generalizability and efficiency when employed on smaller datasets, supporting the potential for widespread use, even when data is scarce. The development of an image quality grading scale provides a quantitative tool for assessing segmentation performance.
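The radial 2D sampling of a 3D volume that underlies this pipeline can be sketched with standard tools; the angle count, rotation axis, and interpolation order below are illustrative assumptions, not the authors' acquisition protocol:

```python
import numpy as np
from scipy.ndimage import rotate

def radial_slices(volume: np.ndarray, n_slices: int = 12):
    """Reslice a 3D volume into 2D planes rotated about its central axis,
    mimicking radial sampling of a 3D TRUS volume."""
    mid = volume.shape[1] // 2
    slices = []
    for angle in np.linspace(0.0, 180.0, n_slices, endpoint=False):
        rotated = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
        slices.append(rotated[:, mid, :])      # central plane after rotation
    return np.stack(slices)

vol = np.random.rand(32, 64, 64).astype(np.float32)
print(radial_slices(vol).shape)  # (12, 32, 64)
```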
Affiliation(s)
- Nathan Orlando: Department of Medical Biophysics, Western University, London, Ontario N6A 3K7, Canada; Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- Igor Gyacskov: Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- Derek J Gillies: London Health Sciences Centre, London, Ontario N6A 5W9, Canada
- Fumin Guo: Department of Medical Biophysics, University of Toronto, Toronto, Ontario M4N 3M5, Canada
- Cesare Romagnoli: London Health Sciences Centre, London, Ontario N6A 5W9, Canada; Department of Medical Imaging, Western University, London, Ontario N6A 3K7, Canada
- David D'Souza: London Health Sciences Centre, London, Ontario N6A 5W9, Canada; Department of Oncology, Western University, London, Ontario N6A 3K7, Canada
- Derek W Cool: London Health Sciences Centre, London, Ontario N6A 5W9, Canada; Department of Medical Imaging, Western University, London, Ontario N6A 3K7, Canada
- Douglas A Hoover: Department of Medical Biophysics, Western University, London, Ontario N6A 3K7, Canada; London Health Sciences Centre, London, Ontario N6A 5W9, Canada; Department of Oncology, Western University, London, Ontario N6A 3K7, Canada
- Aaron Fenster: Department of Medical Biophysics, Western University, London, Ontario N6A 3K7, Canada; Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada; Department of Medical Imaging, Western University, London, Ontario N6A 3K7, Canada; Department of Oncology, Western University, London, Ontario N6A 3K7, Canada
16. Shen Z, Wu H, Chen Z, Hu J, Pan J, Kong J, Lin T. The Global Research of Artificial Intelligence on Prostate Cancer: A 22-Year Bibliometric Analysis. Front Oncol 2022;12:843735. PMID: 35299747. PMCID: PMC8921533. DOI: 10.3389/fonc.2022.843735.
Abstract
Background With the rapid development of technology, artificial intelligence (AI) has been widely used in the diagnosis and prognosis prediction of a variety of diseases, including prostate cancer, and has shown broad promise for the accurate diagnosis and treatment of prostate cancer. Objective This study summarizes the research on the application of AI in the field of prostate cancer through bibliometric analysis and explores possible future research hotspots. Methods Articles and reviews regarding the application of AI in prostate cancer between 1999 and 2020 were selected from the Web of Science Core Collection on August 23, 2021. Microsoft Excel 2019 and GraphPad Prism 8 were used to analyze the targeted variables. VOSviewer (version 1.6.16), CiteSpace (version 5.8.R2), and a widely used online bibliometric platform were used to conduct co-authorship, co-citation, and co-occurrence analyses of countries, institutions, authors, references, and keywords in this field. Results A total of 2,749 articles were selected. AI-related research on prostate cancer increased exponentially in recent years; the USA was the most productive country, with 1,342 publications, and cooperated closely with many countries. The most productive institution and researcher were the Henry Ford Health System and Tewari, respectively. However, cooperation among most institutions and researchers was not close despite their high research output. Keyword analysis divided all studies into three clusters: "Diagnosis and Prediction AI-related study", "Non-surgery AI-related study", and "Surgery AI-related study". The current research hotspots were "deep learning" and "multiparametric MRI". Conclusions AI has broad application prospects in prostate cancer, and a growing number of scholars are devoted to AI-related research on prostate cancer. Cooperation among countries and institutions needs to be strengthened in the future. It can be projected that noninvasive diagnosis and accurate minimally invasive treatment through deep learning technology will remain the research focus over the next few years.
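The keyword co-occurrence counting behind VOSviewer-style cluster maps reduces to pair counting over per-publication keyword sets. A toy sketch (keywords below are invented examples, not records from this study):

```python
from itertools import combinations
from collections import Counter

def cooccurrence(records):
    """Count pairwise keyword co-occurrence across publication records --
    the raw matrix behind co-occurrence cluster maps."""
    pairs = Counter()
    for kws in records:
        pairs.update(frozenset(p) for p in combinations(sorted(kws), 2))
    return pairs

records = [
    {"deep learning", "multiparametric MRI", "prostate cancer"},
    {"deep learning", "segmentation", "prostate cancer"},
    {"robotic surgery", "prostate cancer"},
]
print(cooccurrence(records).most_common(3))
```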
Affiliation(s)
- Zefeng Shen: Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Haiyang Wu: Graduate School, Tianjin Medical University, Tianjin, China
- Zeshi Chen: Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Jintao Hu: Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Jiexin Pan: Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Jianqiu Kong: Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangzhou, China
- Tianxin Lin: Department of Urology, Sun Yat-Sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangzhou, China
17. Xu X, Sanford T, Turkbey B, Xu S, Wood BJ, Yan P. Polar transform network for prostate ultrasound segmentation with uncertainty estimation. Med Image Anal 2022;78:102418. PMID: 35349838. PMCID: PMC9082929. DOI: 10.1016/j.media.2022.102418.
Abstract
Automatic and accurate prostate ultrasound segmentation is a long-standing and challenging problem due to severe noise and ambiguous/missing prostate boundaries. In this work, we propose a novel polar transform network (PTN) that handles this problem from a fundamentally new perspective: the prostate is represented and segmented in the polar coordinate space rather than the original image grid space. This representation samples the prostate volume, especially the most challenging apex and base sub-areas, much more densely than the background, and thus facilitates the learning of discriminative features for accurate prostate segmentation. Moreover, in the polar representation the prostate surface can be efficiently parameterized using a 2D surface radius map with respect to a centroid coordinate, which allows the proposed PTN to achieve superior accuracy compared with its convolutional-neural-network counterparts while having significantly fewer (18%-41%) trainable parameters. We also equip our PTN with a novel strategy of centroid-perturbed test-time augmentation (CPTTA), designed to further improve segmentation accuracy while quantitatively assessing model uncertainty. The uncertainty estimate provides valuable feedback to clinicians when manual modification or approval of the segmentation is required, substantially improving the clinical significance of our work. We conduct a three-fold cross-validation on a clinical dataset of 315 transrectal ultrasound (TRUS) images to comprehensively evaluate the proposed method. The experimental results show that our PTN with CPTTA outperforms state-of-the-art methods with statistical significance on most metrics while exhibiting a much smaller model size. Source code of the proposed PTN is released at https://github.com/DIAL-RPI/PTN.
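The core representation change, resampling the image onto a polar grid around a centroid so that radial rays become image rows, can be sketched as follows (grid sizes and the toy "prostate" are illustrative; see the released PTN repository for the actual implementation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(image: np.ndarray, center, n_r=64, n_theta=128):
    """Resample a 2D image onto an (r, theta) grid around a centroid so that
    rays from the center become rows of the output."""
    cy, cx = center
    r_max = min(cy, cx, image.shape[0] - cy, image.shape[1] - cx)
    r = np.linspace(0, r_max, n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    coords = np.stack([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    return map_coordinates(image, coords, order=1)

img = np.zeros((128, 128)); img[32:96, 40:88] = 1.0   # toy "prostate"
print(to_polar(img, center=(64, 64)).shape)  # (64, 128)
```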
18. Prostate Segmentation via Dynamic Fusion Model. Arab J Sci Eng 2022. DOI: 10.1007/s13369-021-06502-w.
19. Bhattacharya I, Khandwala YS, Vesal S, Shao W, Yang Q, Soerensen SJ, Fan RE, Ghanouni P, Kunder CA, Brooks JD, Hu Y, Rusu M, Sonn GA. A review of artificial intelligence in prostate cancer detection on imaging. Ther Adv Urol 2022;14:17562872221128791. PMID: 36249889. PMCID: PMC9554123. DOI: 10.1177/17562872221128791.
Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as the current limitations including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Affiliation(s)
- Indrani Bhattacharya: Department of Radiology, Stanford University School of Medicine, 1201 Welch Road, Stanford, CA 94305, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yash S. Khandwala: Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Sulaiman Vesal: Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Wei Shao: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Qianye Yang: Centre for Medical Image Computing, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Simon J.C. Soerensen: Department of Urology, Stanford University School of Medicine, Stanford, CA, USA; Department of Epidemiology & Population Health, Stanford University School of Medicine, Stanford, CA, USA
- Richard E. Fan: Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Pejman Ghanouni: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Christian A. Kunder: Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- James D. Brooks: Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yipeng Hu: Centre for Medical Image Computing, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Mirabela Rusu: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Geoffrey A. Sonn: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
20. Fichtinger G, Mousavi P, Ungi T, Fenster A, Abolmaesumi P, Kronreif G, Ruiz-Alzola J, Ndoye A, Diao B, Kikinis R. Design of an Ultrasound-Navigated Prostate Cancer Biopsy System for Nationwide Implementation in Senegal. J Imaging 2021;7:154. PMID: 34460790. PMCID: PMC8404908. DOI: 10.3390/jimaging7080154.
Abstract
This paper presents the design of NaviPBx, an ultrasound-navigated prostate cancer biopsy system. NaviPBx is designed to support an affordable and sustainable national healthcare program in Senegal. It uses spatiotemporal navigation and multiparametric transrectal ultrasound to guide biopsies. NaviPBx integrates concepts and methods that have been independently validated previously in clinical feasibility studies and deploys them together in a practical prostate cancer biopsy system. NaviPBx is based entirely on free open-source software and will be shared as a free open-source program with no restriction on its use. NaviPBx is set to be deployed and sustained nationwide through the Senegalese Military Health Service. This paper reports on the results of the design process of NaviPBx. Our approach concentrates on "frugal technology", intended to be affordable for low-middle income (LMIC) countries. Our project promises the wide-scale application of prostate biopsy and will foster time-efficient development and programmatic implementation of ultrasound-guided diagnostic and therapeutic interventions in Senegal and beyond.
Affiliation(s)
- Gabor Fichtinger: School of Computing, Queen’s University, Kingston, ON K7L 2N8, Canada
- Parvin Mousavi: School of Computing, Queen’s University, Kingston, ON K7L 2N8, Canada
- Tamas Ungi: School of Computing, Queen’s University, Kingston, ON K7L 2N8, Canada
- Aaron Fenster: Department of Medical Biophysics, Schulich School of Medicine & Dentistry, Western University, London, ON N6A 5B7, Canada
- Purang Abolmaesumi: Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Gernot Kronreif: Austrian Center for Medical Innovation and Technology, 2700 Wiener Neustadt, Austria
- Juan Ruiz-Alzola: Departamento de Señales y Comunicaciones, University of Las Palmas de Gran Canaria, 35001 Las Palmas, Spain
- Alain Ndoye: Department of Urology, Hôpital Aristide Le Dantec, Cheikh Anta Diop University, Dakar 10700, Senegal
- Babacar Diao: Department of Urology, Hôpital Aristide Le Dantec, Cheikh Anta Diop University, Dakar 10700, Senegal; Department of Urology, Ouakam Military Hospital, Dakar BP 5321, Senegal
- Ron Kikinis: Harvard Medical School, Brigham and Women’s Hospital, Boston, MA 02115, USA
21. Han S, Hwang SI, Lee HJ. A Weak and Semi-supervised Segmentation Method for Prostate Cancer in TRUS Images. J Digit Imaging 2021;33:838-845. PMID: 32043178. DOI: 10.1007/s10278-020-00323-3.
Abstract
The purpose of this research is to exploit a weakly and semi-supervised deep learning framework to segment prostate cancer in TRUS images, alleviating the time-consuming work of radiologists in drawing lesion boundaries and allowing the network to be trained on data without complete annotations. A histologically proven benchmarking dataset of 102 case images was built, and 22 images were randomly selected for evaluation. A portion of the training images was strongly supervised, annotated pixel by pixel, and used to train a deep neural network. The remaining training images, with only weak supervision (just the location of the lesion), were fed to the trained network to produce intermediate pixelwise labels. We then retrained the network on all training images with the original and intermediate labels, and fed the training images to the retrained network to produce refined labels. Comparing the distances from the centers of mass of the refined and intermediate labels to the weak-supervision location, the closer one replaced the previous label; this constitutes the label update. After the label updates, test set images were fed to the retrained network for evaluation. The proposed method shows better results with weak and semi-supervised data than a method using only a small portion of strongly supervised data, although the improvement is not as large as when the fully supervised dataset is used. In terms of mean intersection over union (mIoU), the proposed method reached about 0.6 when the ratio of strongly supervised data was 40%, about 2% below the fully supervised case. The proposed method can help alleviate the time-consuming work of radiologists in drawing lesion boundaries and enables training the neural network on data without complete annotations.
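The label-update rule, keeping whichever candidate label has its center of mass closer to the weak-supervision point, is simple to state in code. A sketch (toy masks; empty masks and ties are not handled):

```python
import numpy as np
from scipy.ndimage import center_of_mass

def update_label(intermediate: np.ndarray, refined: np.ndarray, weak_xy):
    """Keep whichever candidate mask has its center of mass closer to the
    weakly supervised lesion location -- a sketch of the update rule above."""
    def dist(mask):
        return np.linalg.norm(np.subtract(center_of_mass(mask), weak_xy))
    return refined if dist(refined) <= dist(intermediate) else intermediate

inter = np.zeros((64, 64)); inter[10:20, 10:20] = 1   # drifted candidate
refin = np.zeros((64, 64)); refin[28:40, 28:40] = 1   # candidate near the click
chosen = update_label(inter, refin, weak_xy=(32, 32))
print(center_of_mass(chosen))
```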
Affiliation(s)
- Seokmin Han: Department of Computer Science and Information Engineering, Korea National University of Transportation, Uiwang-si, Kyunggi-do, South Korea
- Sung Il Hwang: Department of Radiology, Seoul National University Bundang Hospital, Seongnam-si, Kyunggi-do, South Korea
- Hak Jong Lee: Department of Radiology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam-si, Kyunggi-do, South Korea; Department of Nanoconvergence, Seoul National University Graduate School of Convergence Science and Technology, Suwon-si, Kyunggi-do, South Korea
22. Automatic Segmentation of Pancreatic Tumors Using Deep Learning on a Video Image of Contrast-Enhanced Endoscopic Ultrasound. J Clin Med 2021;10:3589. PMID: 34441883. PMCID: PMC8397137. DOI: 10.3390/jcm10163589.
Abstract
Background: Contrast-enhanced endoscopic ultrasound (CE-EUS) is useful for the differentiation of pancreatic tumors. Using deep learning for the segmentation and classification of pancreatic tumors might further improve the diagnostic capability of CE-EUS. Aims: The aim of this study was to evaluate the capability of deep learning for the automatic segmentation of pancreatic tumors on CE-EUS video images and possible factors affecting the automatic segmentation. Methods: This retrospective study included 100 patients who underwent CE-EUS for pancreatic tumors. The CE-EUS videos were converted from the originals to 90-s segments at six frames per second. Manual segmentation of pancreatic tumors from B-mode images served as the ground truth. Automatic segmentation was performed using U-Net with 100 epochs and evaluated with 4-fold cross-validation. The degree of respiratory movement (RM) and of tumor boundary (TB) clarity was graded on a three-point scale for each patient and evaluated as a possible factor affecting segmentation. The concordance rate was calculated using the intersection over union (IoU). Results: The median IoU of all cases was 0.77. The median IoUs in TB-1 (clear all around), TB-2, and TB-3 (unclear for more than half the boundary) were 0.80, 0.76, and 0.69, respectively. The IoU for TB-1 was significantly higher than that for TB-3 (p < 0.01), whereas there was no significant difference between the degrees of RM. Conclusions: Automatic segmentation of pancreatic tumors using U-Net on CE-EUS video images showed a decent concordance rate. The concordance rate was lowered by an unclear tumor boundary but was not affected by respiratory movement.
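The concordance metric used here, IoU, is computed per case from binary masks. A minimal sketch with random toy masks standing in for predictions and ground truth:

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union between predicted and ground-truth masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

# Median IoU over a toy per-case list of (prediction, ground truth) masks.
rng = np.random.default_rng(0)
cases = [(rng.random((64, 64)) > 0.5, rng.random((64, 64)) > 0.5) for _ in range(5)]
print(np.median([iou(p, t) for p, t in cases]))
```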
Collapse
|
23
|
DDV: A Taxonomy for Deep Learning Methods in Detecting Prostate Cancer. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10485-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
24
|
Brodie A, Dai N, Teoh JYC, Decaestecker K, Dasgupta P, Vasdev N. Artificial intelligence in urological oncology: An update and future applications. Urol Oncol 2021; 39:379-399. [PMID: 34024704 DOI: 10.1016/j.urolonc.2021.03.012] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2020] [Revised: 12/20/2020] [Accepted: 03/21/2021] [Indexed: 01/16/2023]
Abstract
There continue to be rapid developments and research in the field of Artificial Intelligence (AI) in Urological Oncology worldwide. In this review we discuss the basics of AI, the application of AI per tumour group (Renal, Prostate and Bladder Cancer), and the application of AI in Robotic Urological Surgery. We also discuss future applications of AI under development and their potential benefits to patients in Urological Oncology.
Collapse
Affiliation(s)
- Andrew Brodie
- Addenbrooke's Hospital, Cambridge University Hospitals NHS Foundation Trust, Cambridge, United Kingdom
| | - Nick Dai
- Addenbrooke's Hospital, Cambridge University Hospitals NHS Foundation Trust, Cambridge, United Kingdom
| | - Jeremy Yuen-Chun Teoh
- S.H. Ho Urology Centre, Department of Surgery, The Chinese University of Hong Kong, Hong Kong, China
| | | | - Prokar Dasgupta
- Faculty of Life Sciences and Medicine, King's College London, London, United Kingdom
| | - Nikhil Vasdev
- Hertfordshire and Bedfordshire Urological Cancer Centre, Department of Urology, Lister Hospital, Stevenage, United Kingdom; School of Medicine and Life Sciences, University of Hertfordshire, Hatfield, United Kingdom.
| |
Collapse
|
25
|
Abstract
PURPOSE OF REVIEW Over the last decade, major advancements in artificial intelligence technology have emerged and revolutionized the extent to which physicians are able to personalize treatment modalities and care for their patients. Artificial intelligence technologies aimed at mimicking/simulating human mental processes, such as deep learning artificial neural networks (ANNs), are composed of collections of individual units known as 'artificial neurons'. These 'neurons', when arranged and interconnected in complex architectural layers, are capable of analyzing the most complex patterns. The aim of this systematic review is to give a comprehensive summary of the contemporary applications of deep learning ANNs in urological medicine. RECENT FINDINGS Fifty-five articles were included in this systematic review, and each article was assigned an 'intermediate' score based on its overall quality. Of these 55 articles, nine studies were prospective, but no randomized controlled trials were identified. SUMMARY In urological medicine, the application of novel artificial intelligence technologies, particularly ANNs, has been considered a promising step toward improving physicians' diagnostic capabilities, especially with regard to predicting the aggressiveness and recurrence of various disorders. For benign urological disorders, for example, the use of highly predictive and reliable algorithms could help improve the diagnosis of male infertility, urinary tract infections, and pediatric malformations. In addition, articles reporting anecdotal experiences shed light on the potential of artificial intelligence-assisted surgeries, such as with the aid of virtual reality or augmented reality.
Collapse
|
26
|
Lee K, Kim JY, Lee MH, Choi CH, Hwang JY. Imbalanced Loss-Integrated Deep-Learning-Based Ultrasound Image Analysis for Diagnosis of Rotator-Cuff Tear. SENSORS 2021; 21:s21062214. [PMID: 33809972 PMCID: PMC8005102 DOI: 10.3390/s21062214] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Revised: 03/08/2021] [Accepted: 03/11/2021] [Indexed: 12/19/2022]
Abstract
A rotator cuff tear (RCT) is an injury in adults that causes difficulty in moving, weakness, and pain. Only limited diagnostic tools such as magnetic resonance imaging (MRI) and ultrasound imaging (UI) systems can be utilized for an RCT diagnosis. Although UI offers performance comparable to other diagnostic instruments such as MRI at a lower cost, speckle noise can degrade the image resolution. Conventional vision-based algorithms exhibit inferior performance for the segmentation of diseased regions in UI. In order to achieve better segmentation of diseased regions in UI, deep-learning-based diagnostic algorithms have been developed, but they have not yet reached an acceptable level of performance for application in orthopedic surgeries. In this study, we developed a novel end-to-end fully convolutional neural network, denoted as Segmentation Model Adopting a pRe-trained Classification Architecture (SMART-CA), with a novel integrated positive loss function (IPLF), to accurately diagnose the locations of RCTs during orthopedic examination using UI. Using the pre-trained network, SMART-CA can extract remarkably distinct features that cannot be extracted with a normal encoder, and can therefore improve segmentation accuracy. In addition, unlike conventional loss functions, which are not suited to optimizing deep learning models on an imbalanced dataset such as the RCT dataset, IPLF can efficiently optimize SMART-CA. Experimental results showed that SMART-CA offers improved precision, recall, and Dice coefficient of 0.604 (+38.4%), 0.942 (+14.0%), and 0.736 (+38.6%), respectively, for RCT segmentation from normal ultrasound images, and improved precision, recall, and Dice coefficient of 0.337 (+22.5%), 0.860 (+15.8%), and 0.484 (+28.5%), respectively, for RCT segmentation from ultrasound images with severe speckle noise. The experimental results demonstrated that IPLF outperforms other conventional loss functions, and the proposed SMART-CA optimized with IPLF showed better performance than other state-of-the-art networks for RCT segmentation, with high robustness to speckle noise.
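The published IPLF is not reproduced here; as a rough sketch of the general idea of up-weighting the rare positive (tear) class in an imbalanced segmentation problem, a per-batch positively weighted BCE in PyTorch might look like this (the weighting scheme is an assumption, not the authors' loss).

```python
import torch
import torch.nn.functional as F

def weighted_bce(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """BCE with a per-batch positive-class weight so that sparse tear
    pixels are not swamped by the abundant background pixels.
    target: float mask of 0s and 1s, same shape as logits."""
    n_pos = target.sum().clamp(min=1.0)
    n_neg = (1.0 - target).sum().clamp(min=1.0)
    return F.binary_cross_entropy_with_logits(
        logits, target, pos_weight=n_neg / n_pos)
```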
Collapse
Affiliation(s)
- Kyungsu Lee
- Information and Communication Engineering, Daegu Gyeongbuk Institute of Science and Technology, Daegu 42988, Korea; (K.L.); (M.H.L.)
| | - Jun Young Kim
- The Department of Orthopedic Surgery, School of Medicine, Catholic University, Daegu 42472, Korea;
| | - Moon Hwan Lee
- Information and Communication Engineering, Daegu Gyeongbuk Institute of Science and Technology, Daegu 42988, Korea; (K.L.); (M.H.L.)
| | - Chang-Hyuk Choi
- The Department of Orthopedic Surgery, School of Medicine, Catholic University, Daegu 42472, Korea;
- Correspondence: (C.-H.C.); (J.Y.H.)
| | - Jae Youn Hwang
- The Department of Orthopedic Surgery, School of Medicine, Catholic University, Daegu 42472, Korea;
- Correspondence: (C.-H.C.); (J.Y.H.)
| |
Collapse
|
27
|
Artificial Intelligence and Machine Learning in Prostate Cancer Patient Management-Current Trends and Future Perspectives. Diagnostics (Basel) 2021; 11:diagnostics11020354. [PMID: 33672608 PMCID: PMC7924061 DOI: 10.3390/diagnostics11020354] [Citation(s) in RCA: 52] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2021] [Revised: 02/16/2021] [Accepted: 02/17/2021] [Indexed: 12/24/2022] Open
Abstract
Artificial intelligence (AI) is the field of computer science that aims to build smart devices performing tasks that currently require human intelligence. Through machine learning (ML) and deep learning (DL), computers are taught to learn by example, something human beings do naturally. AI is revolutionizing healthcare. Digital pathology is becoming highly assisted by AI, helping researchers analyze larger data sets and deliver faster and more accurate diagnoses of prostate cancer lesions. When applied to diagnostic imaging, AI has shown excellent accuracy in the detection of prostate lesions as well as in the prediction of patient outcomes in terms of survival and treatment response. The enormous quantity of data coming from the prostate tumor genome requires the fast, reliable, and accurate computing power provided by machine learning algorithms. Radiotherapy is an essential part of the treatment of prostate cancer, and it is often difficult to predict its toxicity for patients. Artificial intelligence could play a future role in predicting how a patient will react to therapy side effects, and these technologies could provide doctors with better insights on how to plan radiotherapy treatment. The extension of the capabilities of surgical robots toward more autonomous tasks will allow them to use information from the surgical field, recognize issues, and implement the proper actions without the need for human intervention.
Collapse
|
28
|
Reinertsen I, Collins DL, Drouin S. The Essential Role of Open Data and Software for the Future of Ultrasound-Based Neuronavigation. Front Oncol 2021; 10:619274. [PMID: 33604299 PMCID: PMC7884817 DOI: 10.3389/fonc.2020.619274] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2020] [Accepted: 12/11/2020] [Indexed: 01/17/2023] Open
Abstract
With the recent developments in machine learning and modern graphics processing units (GPUs), there is a marked shift in the way intra-operative ultrasound (iUS) images can be processed and presented during surgery. Real-time processing of images to highlight important anatomical structures combined with in-situ display, has the potential to greatly facilitate the acquisition and interpretation of iUS images when guiding an operation. In order to take full advantage of the recent advances in machine learning, large amounts of high-quality annotated training data are necessary to develop and validate the algorithms. To ensure efficient collection of a sufficient number of patient images and external validity of the models, training data should be collected at several centers by different neurosurgeons, and stored in a standard format directly compatible with the most commonly used machine learning toolkits and libraries. In this paper, we argue that such effort to collect and organize large-scale multi-center datasets should be based on common open source software and databases. We first describe the development of existing open-source ultrasound based neuronavigation systems and how these systems have contributed to enhanced neurosurgical guidance over the last 15 years. We review the impact of the large number of projects worldwide that have benefited from the publicly available datasets “Brain Images of Tumors for Evaluation” (BITE) and “Retrospective evaluation of Cerebral Tumors” (RESECT) that include MR and US data from brain tumor cases. We also describe the need for continuous data collection and how this effort can be organized through the use of a well-adapted and user-friendly open-source software platform that integrates both continually improved guidance and automated data collection functionalities.
Collapse
Affiliation(s)
- Ingerid Reinertsen
- Department of Health Research, SINTEF Digital, Trondheim, Norway; Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
| | - D Louis Collins
- NIST Laboratory, McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, McGill University, Montréal, QC, Canada
| | - Simon Drouin
- Laboratoire Multimédia, École de Technologie Supérieure, Montréal, QC, Canada
| |
Collapse
|
29
|
Wang S, Liu M, Lian J, Shen D. Boundary Coding Representation for Organ Segmentation in Prostate Cancer Radiotherapy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:310-320. [PMID: 32956051 PMCID: PMC8202780 DOI: 10.1109/tmi.2020.3025517] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Accurate segmentation of the prostate and organs at risk (OARs, e.g., bladder and rectum) in male pelvic CT images is a critical step for prostate cancer radiotherapy. Unfortunately, the unclear organ boundary and large shape variation make the segmentation task very challenging. Previous studies usually used representations defined directly on unclear boundaries as context information to guide segmentation. Those boundary representations may not be so discriminative, resulting in limited performance improvement. To this end, we propose a novel boundary coding network (BCnet) to learn a discriminative representation for organ boundary and use it as the context information to guide the segmentation. Specifically, we design a two-stage learning strategy in the proposed BCnet: 1) Boundary coding representation learning. Two sub-networks under the supervision of the dilation and erosion masks transformed from the manually delineated organ mask are first separately trained to learn the spatial-semantic context near the organ boundary. Then we encode the organ boundary based on the predictions of these two sub-networks and design a multi-atlas based refinement strategy by transferring the knowledge from training data to inference. 2) Organ segmentation. The boundary coding representation as context information, in addition to the image patches, are used to train the final segmentation network. Experimental results on a large and diverse male pelvic CT dataset show that our method achieves superior performance compared with several state-of-the-art methods.
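The first stage's supervision targets, dilation and erosion masks bracketing the organ boundary, can be generated from a manual delineation with standard morphology; a minimal sketch (the margin width is an illustrative choice, not the paper's).

```python
import numpy as np
from scipy import ndimage

def boundary_supervision_masks(organ: np.ndarray, width: int = 3):
    """Dilation and erosion masks derived from a delineated organ mask;
    the band between them brackets the organ boundary."""
    dilated = ndimage.binary_dilation(organ, iterations=width)
    eroded = ndimage.binary_erosion(organ, iterations=width)
    return dilated.astype(np.uint8), eroded.astype(np.uint8)
```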
Collapse
|
30
|
Lara Hernandez KA, Rienmüller T, Baumgartner D, Baumgartner C. Deep learning in spatiotemporal cardiac imaging: A review of methodologies and clinical usability. Comput Biol Med 2020; 130:104200. [PMID: 33421825 DOI: 10.1016/j.compbiomed.2020.104200] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2020] [Revised: 12/16/2020] [Accepted: 12/21/2020] [Indexed: 12/24/2022]
Abstract
The use of different cardiac imaging modalities such as MRI, CT or ultrasound enables the visualization and interpretation of altered morphological structures and function of the heart. In recent years, there has been an increasing interest in AI and deep learning that take into account spatial and temporal information in medical image analysis. In particular, deep learning tools using temporal information in image processing have not yet found their way into daily clinical practice, despite its presumed high diagnostic and prognostic value. This review aims to synthesize the most relevant deep learning methods and discuss their clinical usability in dynamic cardiac imaging using for example the complete spatiotemporal image information of the heart cycle. Selected articles were categorized according to the following indicators: clinical applications, quality of datasets, preprocessing and annotation, learning methods and training strategy, and test performance. Clinical usability was evaluated based on these criteria by classifying the selected papers into (i) clinical level, (ii) robust candidate and (iii) proof of concept applications. Interestingly, not a single one of the reviewed papers was classified as a "clinical level" study. Almost 39% of the articles achieved a "robust candidate" and as many as 61% a "proof of concept" status. In summary, deep learning in spatiotemporal cardiac imaging is still strongly research-oriented and its implementation in clinical application still requires considerable efforts. Challenges that need to be addressed are the quality of datasets together with clinical verification and validation of the performance achieved by the used method.
Collapse
Affiliation(s)
- Karen Andrea Lara Hernandez
- Institute of Health Care Engineering with European Testing Center of Medical Devices, Graz University of Technology, Graz, Austria; Department of Biomedical Engineering, Galileo University, Guatemala City, Guatemala
| | - Theresa Rienmüller
- Institute of Health Care Engineering with European Testing Center of Medical Devices, Graz University of Technology, Graz, Austria
| | | | - Christian Baumgartner
- Institute of Health Care Engineering with European Testing Center of Medical Devices, Graz University of Technology, Graz, Austria.
| |
Collapse
|
31
|
Andersén C, Rydén T, Thunberg P, Lagerlöf JH. Deep learning-based digitization of prostate brachytherapy needles in ultrasound images. Med Phys 2020; 47:6414-6420. [PMID: 33012023 PMCID: PMC7821271 DOI: 10.1002/mp.14508] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2020] [Revised: 09/12/2020] [Accepted: 09/21/2020] [Indexed: 12/12/2022] Open
Abstract
PURPOSE To develop, and evaluate the performance of, a deep learning-based three-dimensional (3D) convolutional neural network (CNN) artificial intelligence (AI) algorithm aimed at finding needles in ultrasound images used in prostate brachytherapy. METHODS Transrectal ultrasound (TRUS) image volumes from 1102 treatments were used to create a clinical ground truth (CGT) including 24422 individual needles that had been manually digitized by medical physicists during brachytherapy procedures. A 3D CNN U-net with 128 × 128 × 128 TRUS image volumes as input was trained using 17215 needle examples. Predictions of voxels constituting a needle were combined to yield a 3D linear function describing the localization of each needle in a TRUS volume. Manual and AI digitizations were compared in terms of the root-mean-square distance (RMSD) along each needle, expressed as median and interquartile range (IQR). The method was evaluated on a data set including 7207 needle examples. A subgroup of the evaluation data set (n = 188) was created, where the needles were digitized once more by a medical physicist (G1) trained in brachytherapy. The digitization procedure was timed. RESULTS The RMSD between the AI and CGT was 0.55 (IQR: 0.35-0.86) mm. In the smaller subset, the RMSD between AI and CGT was similar (0.52 [IQR: 0.33-0.79] mm) but significantly smaller (P < 0.001) than the difference of 0.75 (IQR: 0.49-1.20) mm between AI and G1. The difference between CGT and G1 was 0.80 (IQR: 0.48-1.18) mm, implying that the AI performed as well as the CGT in relation to G1. The mean time needed for human digitization was 10 min 11 sec, while the time needed for the AI was negligible. CONCLUSIONS A 3D CNN can be trained to identify needles in TRUS images. The performance of the network was similar to that of a medical physicist trained in brachytherapy. Incorporating a CNN for needle identification can shorten brachytherapy treatment procedures substantially.
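One common way to turn voxelwise needle predictions into a 3D line, and to score it against a reference digitization, is a principal-axis fit followed by an RMS distance; the sketch below follows that generic recipe, not necessarily the authors' exact post-processing.

```python
import numpy as np

def fit_line(voxels: np.ndarray):
    """Least-squares 3D line through predicted needle voxels (N x 3),
    via the principal component of the point cloud."""
    center = voxels.mean(axis=0)
    _, _, vt = np.linalg.svd(voxels - center)
    return center, vt[0]                       # point on line, unit direction

def rmsd_to_line(points: np.ndarray, center: np.ndarray,
                 direction: np.ndarray) -> float:
    """RMS perpendicular distance from reference points to the fitted line."""
    diff = points - center
    proj = np.outer(diff @ direction, direction)
    return float(np.sqrt(((diff - proj) ** 2).sum(axis=1).mean()))
```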
Collapse
Affiliation(s)
- Christoffer Andersén
- Department of Medical Physics, Faculty of Medicine and Health, Örebro University, Örebro, Sweden
| | - Tobias Rydén
- Department of Medical Physics and Biomedical Engineering, Sahlgrenska University Hospital, Gothenburg, Sweden
| | - Per Thunberg
- Department of Medical Physics, Faculty of Medicine and Health, Örebro University, Örebro, Sweden
| | - Jakob H. Lagerlöf
- Department of Medical Physics, Faculty of Medicine and Health, Örebro University, Örebro, Sweden
- Department of Medical Physics, Karlstad Central Hospital, Karlstad, Sweden
| |
Collapse
|
32
|
Chen MY, Woodruff MA, Dasgupta P, Rukin NJ. Variability in accuracy of prostate cancer segmentation among radiologists, urologists, and scientists. Cancer Med 2020; 9:7172-7182. [PMID: 32810385 PMCID: PMC7541146 DOI: 10.1002/cam4.3386] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2020] [Revised: 07/19/2020] [Accepted: 07/27/2020] [Indexed: 12/11/2022] Open
Abstract
Background There is increasing research in using segmentation of prostate cancer to create a digital 3D model from magnetic resonance imaging (MRI) scans for purposes of education or surgical planning. However, the variation in segmentation of prostate cancer among users and potential inaccuracy has not been studied. Methods Four consultant radiologists, four consultant urologists, four urology trainees, and four nonclinician segmentation scientists were asked to segment a single slice of a lateral T3 prostate tumor on MRI (“Prostate 1”), an anterior zone prostate tumor MRI (“Prostate 2”), and a kidney tumor computed tomography (CT) scan (“Kidney”). Time taken and self‐rated subjective accuracy out of a maximum score of 10 were recorded. Root mean square error, Dice coefficient, Matthews correlation coefficient, Jaccard index, specificity, and sensitivity were calculated using the radiologists as the ground truth. Results There was high variance among the radiologists in segmentation of Prostate 1 and 2 tumors with mean Dice coefficients of 0.81 and 0.58, respectively, compared to 0.96 for the kidney tumor. Urologists and urology trainees had similar accuracy, while nonclinicians had the lowest accuracy scores for Prostate 1 and 2 tumors (0.60 and 0.47) but similar for kidney tumor (0.95). Mean sensitivity in Prostate 1 (0.63) and Prostate 2 (0.61) was lower than specificity (0.92 and 0.93) suggesting under‐segmentation of tumors in the non‐radiologist groups. Participants spent less time on the kidney tumor segmentation and self‐rated accuracy was higher than both prostate tumors. Conclusion Segmentation of prostate cancers is more difficult than other anatomy such as kidney tumors. Less experienced participants appear to under‐segment models and underestimate the size of prostate tumors. Segmentation of prostate cancer is highly variable even among radiologists, and 3D modeling for clinical use must be performed with caution. Further work to develop a methodology to maximize segmentation accuracy is needed.
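For reference, every agreement metric reported above can be derived from the pixelwise confusion counts between a rater's mask and the radiologist ground truth; a compact NumPy sketch (no guards against degenerate masks):

```python
import numpy as np

def agreement_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Pixelwise agreement between a rater's mask and a reference mask."""
    p, t = pred.astype(bool), truth.astype(bool)
    tp = float(np.sum(p & t)); fp = float(np.sum(p & ~t))
    fn = float(np.sum(~p & t)); tn = float(np.sum(~p & ~t))
    return dict(
        dice=2 * tp / (2 * tp + fp + fn),
        jaccard=tp / (tp + fp + fn),
        sensitivity=tp / (tp + fn),
        specificity=tn / (tn + fp),
        mcc=(tp * tn - fp * fn) / np.sqrt(
            (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
    )
```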
Collapse
Affiliation(s)
- Michael Y Chen
- Science and Engineering Faculty, Queensland University of Technology, Brisbane, Queensland, Australia; Redcliffe Hospital, Metro North Hospital and Health Service, Herston, Queensland, Australia; School of Medicine, University of Queensland, Brisbane, Queensland, Australia; Herston Biofabrication Institute, Metro North Hospital and Health Service, Brisbane, Australia
| | - Maria A Woodruff
- Science and Engineering Faculty, Queensland University of Technology, Brisbane, Queensland, Australia
| | - Prokar Dasgupta
- King's College London, Guy's Hospital, London, United Kingdom
| | - Nicholas J Rukin
- Science and Engineering Faculty, Queensland University of Technology, Brisbane, Queensland, Australia; Redcliffe Hospital, Metro North Hospital and Health Service, Herston, Queensland, Australia; School of Medicine, University of Queensland, Brisbane, Queensland, Australia; Herston Biofabrication Institute, Metro North Hospital and Health Service, Brisbane, Australia
| |
Collapse
|
33
|
Girum KB, Lalande A, Hussain R, Créhange G. A deep learning method for real-time intraoperative US image segmentation in prostate brachytherapy. Int J Comput Assist Radiol Surg 2020; 15:1467-1476. [PMID: 32691302 DOI: 10.1007/s11548-020-02231-x] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2019] [Accepted: 07/08/2020] [Indexed: 01/28/2023]
Abstract
PURPOSE This paper addresses the detection of the clinical target volume (CTV) in transrectal ultrasound (TRUS) image-guided intraoperative permanent prostate brachytherapy. Developing a robust and automatic method to detect the CTV on intraoperative TRUS images is clinically important for faster and more reproducible interventions that can benefit both the clinical workflow and patient health. METHODS We present a multi-task deep learning method for automatic prostate CTV boundary detection in intraoperative TRUS images that leverages both low-level and high-level (prior shape) information. Our method includes a channel-wise feature calibration strategy for low-level feature extraction and learning-based prior knowledge modeling for prostate CTV shape reconstruction. It employs CTV shape reconstruction from automatically sampled boundary surface coordinates (pseudo-landmarks) to detect the low-contrast and noisy regions across the prostate boundary, while being less biased by shadowing, inherent speckle, and artifact signals from the needle and implanted radioactive seeds. RESULTS The proposed method was evaluated on a clinical database of 145 patients who underwent permanent prostate brachytherapy under TRUS guidance. Our method achieved a mean accuracy of [Formula: see text] and a mean surface distance error of [Formula: see text]. Extensive ablation and comparison studies show that our method outperformed previous deep learning-based methods by more than 7% in Dice similarity coefficient and reduced the 3D Hausdorff distance error by 6.9 mm. CONCLUSION Our study demonstrates the potential of shape-model-based deep learning methods for efficient and accurate CTV segmentation in ultrasound-guided interventions. Moreover, learning both low-level features and prior shape knowledge with channel-wise feature calibration can significantly improve the performance of deep learning methods in medical image segmentation.
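'Channel-wise feature calibration' belongs to the same family as squeeze-and-excitation gating; a generic PyTorch block of that kind is sketched below as an analogy, not as the authors' exact module.

```python
import torch
import torch.nn as nn

class ChannelCalibration(nn.Module):
    """Squeeze-and-excitation-style gate: global-pool each channel,
    squeeze through a bottleneck, and rescale the feature map."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)      # per-channel recalibration weights
```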
Collapse
Affiliation(s)
- Kibrom Berihu Girum
- ImViA Laboratory, University of Burgundy, Batiment I3M, 64b rue Sully, 21000 Dijon, France; Radiation Oncology Department, CGFL, Dijon, France.
| | - Alain Lalande
- ImViA Laboratory, University of Burgundy, Batiment I3M, 64b rue Sully, 21000 Dijon, France; Medical Imaging Department, CHU Dijon, Dijon, France
| | - Raabid Hussain
- ImViA Laboratory, University of Burgundy, Batiment I3M, 64b rue sully, 21000, Dijon, France
| | - Gilles Créhange
- ImViA Laboratory, University of Burgundy, Batiment I3M, 64b rue Sully, 21000 Dijon, France; Radiation Oncology Department, CGFL, Dijon, France
| |
Collapse
|
34
|
Borkovkina S, Camino A, Janpongsri W, Sarunic MV, Jian Y. Real-time retinal layer segmentation of OCT volumes with GPU accelerated inferencing using a compressed, low-latency neural network. BIOMEDICAL OPTICS EXPRESS 2020; 11:3968-3984. [PMID: 33014579 PMCID: PMC7510892 DOI: 10.1364/boe.395279] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Revised: 06/18/2020] [Accepted: 06/18/2020] [Indexed: 05/18/2023]
Abstract
Segmentation of retinal layers in optical coherence tomography (OCT) is an essential step in OCT image analysis for screening, diagnosis, and assessment of retinal disease progression. Real-time segmentation together with high-speed OCT volume acquisition allows rendering of en face OCT of arbitrary retinal layers, which can be used to increase the yield rate of high-quality scans, provide real-time feedback during image-guided surgeries, and compensate for aberrations in adaptive optics (AO) OCT without using wavefront sensors. We demonstrate here unprecedented real-time OCT segmentation of eight retinal layer boundaries achieved through three levels of optimization: 1) a modified, low-complexity neural network structure, 2) an innovative scheme of neural network compression with TensorRT, and 3) specialized GPU hardware to accelerate computation. Inferencing with the compressed network U-NetRT took 3.5 ms, a 21-fold speedup over conventional U-Net inference without reducing accuracy. The latency of the entire pipeline from data acquisition to inferencing was only 41 ms, enabled by parallelized batch processing. The system and method allow real-time updating of en face OCT and OCTA visualizations of arbitrary retinal layers and plexuses in continuous-mode scanning. To the best of our knowledge, our work is the first demonstration of an ophthalmic imager with embedded artificial intelligence (AI) providing real-time feedback.
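The TensorRT compression step itself is vendor-specific and not shown; the customary first step, exporting the trained PyTorch network to ONNX for a GPU inference engine, looks roughly like the following (the stand-in model and input shape are placeholders only).

```python
import torch
import torch.nn as nn

# Stand-in for a trained layer-segmentation network (illustrative only).
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 9, 1))       # e.g., 8 boundaries + background
model.eval()

dummy = torch.randn(1, 1, 512, 512)             # one B-scan-sized input
torch.onnx.export(model, dummy, "segmenter.onnx",
                  input_names=["bscan"], output_names=["layers"],
                  opset_version=13)
```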
Collapse
Affiliation(s)
| | - Acner Camino
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
| | - Worawee Janpongsri
- Department of Engineering Science, Simon Fraser University, Burnaby, Canada
| | - Marinko V. Sarunic
- Department of Engineering Science, Simon Fraser University, Burnaby, Canada
| | - Yifan Jian
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 27239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
| |
Collapse
|
35
|
Carton FX, Chabanas M, Le Lann F, Noble JH. Automatic segmentation of brain tumor resections in intraoperative ultrasound images using U-Net. J Med Imaging (Bellingham) 2020; 7:031503. [PMID: 32090137 PMCID: PMC7026519 DOI: 10.1117/1.jmi.7.3.031503] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2019] [Accepted: 01/17/2020] [Indexed: 11/14/2022] Open
Abstract
To compensate for the intraoperative brain tissue deformation, computer-assisted intervention methods have been used to register preoperative magnetic resonance images with intraoperative images. In order to model the deformation due to tissue resection, the resection cavity needs to be segmented in intraoperative images. We present an automatic method to segment the resection cavity in intraoperative ultrasound (iUS) images. We trained and evaluated two-dimensional (2-D) and three-dimensional (3-D) U-Net networks on two datasets of 37 and 13 cases that contain images acquired from different ultrasound systems. The best overall performing method was the 3-D network, which resulted in a 0.72 mean and 0.88 median Dice score over the whole dataset. The 2-D network also had good results with less computation time, with a median Dice score over 0.8. We also evaluated the sensitivity of network performance to training and testing with images from different ultrasound systems and image field of view. In this application, we found specialized networks to be more accurate for processing similar images than a general network trained with all the data. Overall, promising results were obtained for both datasets using specialized networks. This motivates further studies with additional clinical data, to enable training and validation of a clinically viable deep-learning model for automated delineation of the tumor resection cavity in iUS images.
Collapse
Affiliation(s)
- François-Xavier Carton
- University of Grenoble Alpes, CNRS, Grenoble INP, TIMC-IMAG, Grenoble, France
- Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
| | - Matthieu Chabanas
- University of Grenoble Alpes, CNRS, Grenoble INP, TIMC-IMAG, Grenoble, France
- Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
| | - Florian Le Lann
- Grenoble Alpes University Hospital, Department of Neurosurgery, Grenoble, France
| | - Jack H. Noble
- Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
| |
Collapse
|
36
|
Orlando N, Gillies DJ, Gyacskov I, Romagnoli C, D’Souza D, Fenster A. Automatic prostate segmentation using deep learning on clinically diverse 3D transrectal ultrasound images. Med Phys 2020; 47:2413-2426. [DOI: 10.1002/mp.14134] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2019] [Revised: 02/10/2020] [Accepted: 02/21/2020] [Indexed: 02/04/2023] Open
Affiliation(s)
- Nathan Orlando
- Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada
- Robarts Research Institute, Western University, London, ON N6A 3K7, Canada
| | - Derek J. Gillies
- Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada
- Robarts Research Institute, Western University, London, ON N6A 3K7, Canada
| | - Igor Gyacskov
- Robarts Research Institute, Western University, London, ON N6A 3K7, Canada
| | - Cesare Romagnoli
- Department of Medical Imaging, Western University, London, ON N6A 3K7, Canada
- London Health Sciences Centre, London, ON N6A 5W9, Canada
| | - David D’Souza
- London Health Sciences Centre, London, ON N6A 5W9, Canada
- Department of Oncology, Western University, London, ON N6A 3K7, Canada
| | - Aaron Fenster
- Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada
- Robarts Research Institute, Western University, London, ON N6A 3K7, Canada
- Department of Medical Imaging, Western University, London, ON N6A 3K7, Canada
- Department of Oncology, Western University, London, ON N6A 3K7, Canada
| |
Collapse
|
37
|
Du B, Wang J, Zheng H, Xiao C, Fang S, Lu M, Mao R. A novel transcranial ultrasound imaging method with diverging wave transmission and deep learning approach. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 186:105308. [PMID: 31978869 DOI: 10.1016/j.cmpb.2019.105308] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/21/2019] [Revised: 12/23/2019] [Accepted: 12/29/2019] [Indexed: 06/10/2023]
Abstract
Real-time brain transcranial ultrasound imaging is extremely intriguing because of its numerous applications. However, the skull causes phase distortion and amplitude attenuation of ultrasound signals due to its density: the speed of sound is significantly different in bone tissue than in soft tissue. In this study, we propose an ultrafast transcranial ultrasound imaging technique with diverging wave (DW) transmission and a deep learning approach to achieve a large field of view with high resolution and real-time brain ultrasound imaging. DW transmission provides a frame rate of several kilohertz and a large field of view that is suitable for human brain imaging via a small acoustic window. However, it suffers from poor image quality because the diverging waves are all unfocused. Here, we adopted adaptive beamforming algorithms to improve both the image contrast and the lateral resolution. Both simulated and in situ experiments with a human skull resulted in significant image improvements. However, the skull still introduces a wavefront offset and distortion, which degrades the image quality even when adaptive beamforming methods are used. Thus, we also employed a U-Net neural network to detect the contour and position of the skull directly from the acquired RF signal matrix. This approach avoids the need for beamforming, image reconstruction, and image segmentation, making it more suitable for clinical use.
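For context, images from unfocused diverging-wave transmissions are reconstructed in receive by delay-and-sum beamforming; a bare-bones per-pixel sketch, assuming a virtual source a distance `virt_z` behind the array and t = 0 at the array plane (geometry and names are illustrative, not the authors' pipeline):

```python
import numpy as np

def das_pixel(rf, elem_x, px, pz, virt_z, c=1540.0, fs=40e6):
    """Delay-and-sum value at image point (px, pz) for a diverging wave
    from a virtual source virt_z meters behind the array (virt_z > 0).
    rf: (n_elements, n_samples) RF data; elem_x: element x positions (m)."""
    t_tx = (np.hypot(px, pz + virt_z) - virt_z) / c    # virtual source -> pixel
    t_rx = np.hypot(px - elem_x, pz) / c               # pixel -> each element
    idx = np.clip(np.round((t_tx + t_rx) * fs).astype(int),
                  0, rf.shape[1] - 1)
    return rf[np.arange(rf.shape[0]), idx].sum()       # coherent sum
```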
Collapse
Affiliation(s)
- Bin Du
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, 518060, China
| | - Jinyan Wang
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, 518060, China
| | - Haoteng Zheng
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, 518060, China
| | - Chenhui Xiao
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, 518060, China
| | - Siyuan Fang
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, 518060, China
| | - Minhua Lu
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, 518060, China.
| | - Rui Mao
- Guangdong Province Engineering Center of China-made High Performance Data Computing System, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
| |
Collapse
|
38
|
Abstract
Artificial intelligence (AI) - the ability of a machine to perform cognitive tasks to achieve a particular goal based on provided data - is revolutionizing and reshaping our health-care systems. The current availability of ever-increasing computational power, highly developed pattern recognition algorithms and advanced image processing software working at very high speeds has led to the emergence of computer-based systems that are trained to perform complex tasks in bioinformatics, medical imaging and medical robotics. Accessibility to 'big data' enables the 'cognitive' computer to scan billions of bits of unstructured information, extract the relevant information and recognize complex patterns with increasing confidence. Computer-based decision-support systems based on machine learning (ML) have the potential to revolutionize medicine by performing complex tasks that are currently assigned to specialists to improve diagnostic accuracy, increase efficiency of throughputs, improve clinical workflow, decrease human resource costs and improve treatment choices. These characteristics could be especially helpful in the management of prostate cancer, with growing applications in diagnostic imaging, surgical interventions, skills training and assessment, digital pathology and genomics. Medicine must adapt to this changing world, and urologists, oncologists, radiologists and pathologists, as high-volume users of imaging and pathology, need to understand this burgeoning science and acknowledge that the development of highly accurate AI-based decision-support applications of ML will require collaboration between data scientists, computer researchers and engineers.
Collapse
|
39
|
Vercauteren T, Unberath M, Padoy N, Navab N. CAI4CAI: The Rise of Contextual Artificial Intelligence in Computer Assisted Interventions. PROCEEDINGS OF THE IEEE. INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS 2020; 108:198-214. [PMID: 31920208 PMCID: PMC6952279 DOI: 10.1109/jproc.2019.2946993] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/01/2019] [Revised: 09/12/2019] [Accepted: 10/04/2019] [Indexed: 05/10/2023]
Abstract
Data-driven computational approaches have evolved to enable extraction of information from medical images with a reliability, accuracy and speed which is already transforming their interpretation and exploitation in clinical practice. While similar benefits are longed for in the field of interventional imaging, this ambition is challenged by a much higher heterogeneity. Clinical workflows within interventional suites and operating theatres are extremely complex and typically rely on poorly integrated intra-operative devices, sensors, and support infrastructures. Taking stock of some of the most exciting developments in machine learning and artificial intelligence for computer assisted interventions, we highlight the crucial need to take context and human factors into account in order to address these challenges. Contextual artificial intelligence for computer assisted intervention, or CAI4CAI, arises as an emerging opportunity feeding into the broader field of surgical data science. Central challenges being addressed in CAI4CAI include how to integrate the ensemble of prior knowledge and instantaneous sensory information from experts, sensors and actuators; how to create and communicate a faithful and actionable shared representation of the surgery among a mixed human-AI actor team; how to design interventional systems and associated cognitive shared control schemes for online uncertainty-aware collaborative decision making ultimately producing more precise and reliable interventions.
Collapse
Affiliation(s)
- Tom Vercauteren
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London WC2R 2LS, U.K.
| | - Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
| | - Nicolas Padoy
- ICube institute, CNRS, IHU Strasbourg, University of Strasbourg, 67081 Strasbourg, France
| | - Nassir Navab
- Fakultät für Informatik, Technische Universität München, 80333 Munich, Germany
| |
Collapse
|
40
|
Wang Y, Dou H, Hu X, Zhu L, Yang X, Xu M, Qin J, Heng PA, Wang T, Ni D. Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2768-2778. [PMID: 31021793 DOI: 10.1109/tmi.2019.2913184] [Citation(s) in RCA: 78] [Impact Index Per Article: 15.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Automatic prostate segmentation in transrectal ultrasound (TRUS) images is of essential importance for image-guided prostate interventions and treatment planning. However, developing such automatic solutions remains very challenging due to the missing/ambiguous boundary and inhomogeneous intensity distribution of the prostate in TRUS, as well as the large variability in prostate shapes. This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in TRUS by fully exploiting the complementary information encoded in different layers of the convolutional neural network (CNN). Our attention module utilizes the attention mechanism to selectively leverage the multi-level features integrated from different layers to refine the features at each individual layer, suppressing the non-prostate noise at shallow layers of the CNN and increasing more prostate details into features at deep layers. Experimental results on challenging 3D TRUS volumes show that our method attains satisfactory segmentation performance. The proposed attention mechanism is a general strategy to aggregate multi-level deep features and has the potential to be used for other medical image segmentation tasks. The code is publicly available at https://github.com/wulalago/DAF3D.
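The module's core idea, learning per-position weights that decide how much multi-level context each layer's features should absorb, can be sketched generically in PyTorch; this is a simplified analogue, and the authors' full implementation is available at the linked repository.

```python
import torch
import torch.nn as nn

class AttentiveFusion(nn.Module):
    """Refine one layer's features with multi-level context through a
    learned gate, so each position selects the context it needs."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, layer_feat, multilevel_feat):
        gate = self.attn(torch.cat([layer_feat, multilevel_feat], dim=1))
        return layer_feat + gate * multilevel_feat
```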
Collapse
|
41
|
Lei Y, Tian S, He X, Wang T, Wang B, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. Ultrasound prostate segmentation based on multidirectional deeply supervised V-Net. Med Phys 2019; 46:3194-3206. [PMID: 31074513 PMCID: PMC6625925 DOI: 10.1002/mp.13577] [Citation(s) in RCA: 68] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2018] [Revised: 04/14/2019] [Accepted: 05/01/2019] [Indexed: 01/09/2023] Open
Abstract
PURPOSE Transrectal ultrasound (TRUS) is a versatile and real-time imaging modality that is commonly used in image-guided prostate cancer interventions (e.g., biopsy and brachytherapy). Accurate segmentation of the prostate is key to biopsy needle placement, brachytherapy treatment planning, and motion management. Manual segmentation during these interventions is time-consuming and subject to inter- and intraobserver variation. To address these drawbacks, we aimed to develop a deep learning-based method that integrates deep supervision into a three-dimensional (3D) patch-based V-Net for prostate segmentation. METHODS AND MATERIALS We developed a multidirectional deep-learning-based method to automatically segment the prostate for ultrasound-guided radiation therapy. A 3D supervision mechanism is integrated into the V-Net stages to deal with the optimization difficulties of training a deep network with limited training data. We combine a binary cross-entropy (BCE) loss and a batch-based Dice loss into a stage-wise hybrid loss function for deeply supervised training. During the segmentation stage, patches extracted from a newly acquired ultrasound image are fed to the trained network, which adaptively labels the prostate tissue. The final segmented prostate volume is reconstructed using patch fusion and further refined through contour refinement processing. RESULTS Forty-four patients' TRUS images were used to test our segmentation method. Our segmentation results were compared with manually segmented contours (ground truth). The mean prostate volume Dice similarity coefficient (DSC), Hausdorff distance (HD), mean surface distance (MSD), and residual mean surface distance (RMSD) were 0.92 ± 0.03, 3.94 ± 1.55 mm, 0.60 ± 0.23 mm, and 0.90 ± 0.38 mm, respectively. CONCLUSION We developed a novel deeply supervised deep learning-based approach with reliable contour refinement to automatically segment the prostate in TRUS, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for diagnostic and therapeutic applications in prostate cancer.
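A compact PyTorch sketch of a stage-wise hybrid BCE-plus-Dice loss of the kind described follows; the relative weighting of terms and stages is an assumption, and each stage's output is taken to match the target's shape.

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss on sigmoid probabilities."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

def deep_supervision_loss(stage_logits, target, stage_weights=None):
    """Sum a hybrid BCE + Dice loss over the supervised network stages."""
    if stage_weights is None:
        stage_weights = [1.0] * len(stage_logits)
    total = 0.0
    for w, logits in zip(stage_weights, stage_logits):
        total = total + w * (
            F.binary_cross_entropy_with_logits(logits, target)
            + dice_loss(logits, target))
    return total
```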
Collapse
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Xiuxiu He
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Bo Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Ashesh B. Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| |
Collapse
|
42
|
Shahedi M, Halicek M, Dormer JD, Schuster DM, Fei B. Deep learning-based three-dimensional segmentation of the prostate on computed tomography images. J Med Imaging (Bellingham) 2019; 6:025003. [PMID: 31065570 DOI: 10.1117/1.jmi.6.2.025003] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2019] [Accepted: 04/04/2019] [Indexed: 11/14/2022] Open
Abstract
Segmentation of the prostate in computed tomography (CT) is used for planning and guidance of prostate treatment procedures. However, due to the low soft-tissue contrast of the images, manual delineation of the prostate on CT is a time-consuming task with high interobserver variability. We developed an automatic, three-dimensional (3-D) prostate segmentation algorithm based on a customized U-Net architecture. Our dataset contained 92 3-D abdominal CT scans from 92 patients, of which 69 images were used for training and validation and the remaining for testing the convolutional neural network model. Compared to manual segmentation by an expert radiologist, our method achieved 83% ± 6% for Dice similarity coefficient (DSC), 2.3 ± 0.6 mm for mean absolute distance (MAD), and 1.9 ± 4.0 cm³ for signed volume difference (ΔV). The average recorded interexpert difference measured on the same test dataset was 92% (DSC), 1.1 mm (MAD), and 2.1 cm³ (ΔV). The proposed algorithm is fast, accurate, and robust for 3-D segmentation of the prostate on CT images.
Collapse
Affiliation(s)
- Maysam Shahedi
- University of Texas at Dallas, Department of Bioengineering, Dallas, Texas, United States
| | - Martin Halicek
- University of Texas at Dallas, Department of Bioengineering, Dallas, Texas, United States; Emory University and Georgia Institute of Technology, Department of Biomedical Engineering, Atlanta, Georgia, United States
| | - James D Dormer
- University of Texas at Dallas, Department of Bioengineering, Dallas, Texas, United States
| | - David M Schuster
- Emory University School of Medicine, Department of Radiology and Imaging Science, Atlanta, Georgia, United States
| | - Baowei Fei
- University of Texas at Dallas, Department of Bioengineering, Dallas, Texas, United States; University of Texas Southwestern Medical Center, Advanced Imaging Research Center, Dallas, Texas, United States; University of Texas Southwestern Medical Center, Department of Radiology, Dallas, Texas, United States
| |
Collapse
|
43
|
van Sloun RJG, Wildeboer RR, Mannaerts CK, Postema AW, Gayet M, Beerlage HP, Salomon G, Wijkstra H, Mischi M. Deep Learning for Real-time, Automatic, and Scanner-adapted Prostate (Zone) Segmentation of Transrectal Ultrasound, for Example, Magnetic Resonance Imaging-transrectal Ultrasound Fusion Prostate Biopsy. Eur Urol Focus 2019; 7:78-85. [PMID: 31028016 DOI: 10.1016/j.euf.2019.04.009] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2019] [Revised: 03/25/2019] [Accepted: 04/10/2019] [Indexed: 02/06/2023]
Abstract
BACKGROUND Although recent advances in multiparametric magnetic resonance imaging (MRI) led to an increase in MRI-transrectal ultrasound (TRUS) fusion prostate biopsies, these are time consuming, laborious, and costly. Introduction of a deep-learning approach could improve prostate segmentation. OBJECTIVE To exploit deep learning to perform automatic, real-time prostate (zone) segmentation on TRUS images from different scanners. DESIGN, SETTING, AND PARTICIPANTS Three datasets with TRUS images were collected at different institutions, using an iU22 (Philips Healthcare, Bothell, WA, USA), a Pro Focus 2202a (BK Medical), and an Aixplorer (SuperSonic Imagine, Aix-en-Provence, France) ultrasound scanner. The datasets contained 436 images from 181 men. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS Manual delineations from an expert panel were used as ground truth. The (zonal) segmentation performance was evaluated in terms of the pixel-wise accuracy, Jaccard index, and Hausdorff distance. RESULTS AND LIMITATIONS The developed deep-learning approach was demonstrated to significantly improve prostate segmentation compared with a conventional automated technique, reaching median accuracy of 98% (95% confidence interval 95-99%), a Jaccard index of 0.93 (0.80-0.96), and a Hausdorff distance of 3.0 (1.3-8.7) mm. Zonal segmentation yielded pixel-wise accuracy of 97% (95-99%) and 98% (96-99%) for the peripheral and transition zones, respectively. Supervised domain adaptation resulted in retention of high performance when applied to images from different ultrasound scanners (p > 0.05). Moreover, the algorithm's assessment of its own segmentation performance showed a strong correlation with the actual segmentation performance (Pearson's correlation 0.72, p < 0.001), indicating that possible incorrect segmentations can be identified swiftly. CONCLUSIONS Fusion-guided prostate biopsies, targeting suspicious lesions on MRI using TRUS, are increasingly performed. The requirement for (semi)manual prostate delineation places a substantial burden on clinicians. Deep learning provides a means for fast and accurate (zonal) prostate segmentation of TRUS images that translates to different scanners. PATIENT SUMMARY Artificial intelligence for automatic delineation of the prostate on ultrasound was shown to be reliable and applicable to different scanners. This method can, for example, be applied to speed up, and possibly improve, guided prostate biopsies using magnetic resonance imaging-transrectal ultrasound fusion.
Collapse
Affiliation(s)
- Ruud J G van Sloun
- Laboratory of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands.
| | - Rogier R Wildeboer
- Laboratory of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
| | - Christophe K Mannaerts
- Department of Urology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
| | - Arnoud W Postema
- Department of Urology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
| | - Maudy Gayet
- Department of Urology, Jeroen Bosch Hospital, 's-Hertogenbosch, The Netherlands
| | - Harrie P Beerlage
- Laboratory of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Department of Urology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
| | - Georg Salomon
- Martini Klinik-Prostate Cancer Center, University Hospital Hamburg Eppendorf, Hamburg, Germany
| | - Hessel Wijkstra
- Laboratory of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Department of Urology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
| | - Massimo Mischi
- Laboratory of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
| |
Collapse
|
44
|
Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Z Med Phys 2018; 29:102-127. [PMID: 30553609 DOI: 10.1016/j.zemedi.2018.11.002] [Citation(s) in RCA: 698] [Impact Index Per Article: 116.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2018] [Revised: 11/19/2018] [Accepted: 11/21/2018] [Indexed: 02/06/2023]
Abstract
What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009 when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast expanding field we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting and perhaps contributing to the field of deep learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related medical imaging.
Collapse
Affiliation(s)
- Alexander Selvikvåg Lundervold
- Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Department of Computing, Mathematics and Physics, Western Norway University of Applied Sciences, Norway.
| | - Arvid Lundervold
- Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Neuroinformatics and Image Analysis Laboratory, Department of Biomedicine, University of Bergen, Norway; Department of Health and Functioning, Western Norway University of Applied Sciences, Norway.
| |
Collapse
|
45
|
Ghavami N, Hu Y, Bonmati E, Rodell R, Gibson E, Moore C, Barratt D. Integration of spatial information in convolutional neural networks for automatic segmentation of intraoperative transrectal ultrasound images. J Med Imaging (Bellingham) 2018; 6:011003. [PMID: 30840715 PMCID: PMC6102407 DOI: 10.1117/1.jmi.6.1.011003] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2018] [Accepted: 07/30/2018] [Indexed: 12/04/2022] Open
Abstract
Image guidance systems that register scans of the prostate obtained using transrectal ultrasound (TRUS) and magnetic resonance imaging are becoming increasingly popular as a means of enabling tumor-targeted prostate cancer biopsy and treatment. However, intraoperative segmentation of TRUS images to define the three-dimensional (3-D) geometry of the prostate remains a necessary task in existing guidance systems, one that often requires significant manual interaction and is subject to interoperator variability. Automating this step would therefore lead to more acceptable clinical workflows and greater standardization between different operators and hospitals. In this work, a convolutional neural network (CNN) for automatically segmenting the prostate in two-dimensional (2-D) TRUS slices of a 3-D TRUS volume was developed and tested. The network was designed to incorporate 3-D spatial information by taking as input not only the slice to be segmented but also one or more of its neighboring TRUS slices. The accuracy of the CNN was evaluated on data from a cohort of 109 patients who had undergone TRUS-guided targeted biopsy (a total of 4034 2-D slices). Segmentation accuracy was measured by calculating 2-D and 3-D Dice similarity coefficients, on the 2-D images and corresponding 3-D volumes respectively, as well as 2-D boundary distances, using a 10-fold patient-level cross-validation experiment. However, incorporating neighboring slices did not improve segmentation performance in five out of six experiments, in which the number of neighboring slices on either side was varied from one to three. The up-sampling shortcuts reduced the overall training time of the network to 161 min, compared with 253 min without this architectural addition.
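To make the neighbor-slice idea concrete, here is a minimal sketch of such a network, written under stated assumptions rather than as the authors' exact architecture: a U-Net-style layout is assumed, the class name and layer sizes are hypothetical, and plain bilinear upsampling stands in for the "up-sampling shortcuts" mentioned in the abstract. The key point it illustrates is that 3-D context enters a 2-D network through the channel dimension: the slice to segment plus n neighbors on either side are stacked as 2n+1 input channels.

```python
# Minimal sketch (not the authors' architecture): a 2-D segmentation CNN
# whose input channels are the centre TRUS slice plus its neighbours.
import torch
import torch.nn as nn

class NeighbourSliceSegNet(nn.Module):
    def __init__(self, neighbours_per_side: int = 1):
        super().__init__()
        in_ch = 2 * neighbours_per_side + 1  # centre slice + neighbours
        self.enc1 = self._block(in_ch, 32)
        self.enc2 = self._block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = self._block(64, 128)
        # Bilinear upsampling instead of learned transposed convolutions;
        # an assumed reading of the abstract's "up-sampling shortcuts".
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec2 = self._block(128 + 64, 64)
        self.dec1 = self._block(64 + 32, 32)
        self.head = nn.Conv2d(32, 1, kernel_size=1)  # prostate-vs-background logits

    @staticmethod
    def _block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):  # x: (B, 2n+1, H, W)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))
        return self.head(d1)  # per-pixel logits

# Example: one neighbouring slice on each side -> 3 input channels.
model = NeighbourSliceSegNet(neighbours_per_side=1)
logits = model(torch.randn(2, 3, 128, 128))  # -> (2, 1, 128, 128)
```

The Dice similarity coefficient used for evaluation is 2|A∩B| / (|A| + |B|), computed over the pixels of a 2-D slice or the voxels of the reconstructed 3-D volume, respectively.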
Collapse
Affiliation(s)
- Nooshin Ghavami
- University College London, UCL Center for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, London, United Kingdom; University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
| | - Yipeng Hu
- University College London, UCL Center for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, London, United Kingdom; University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
| | - Ester Bonmati
- University College London, UCL Center for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, London, United Kingdom; University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
| | - Rachael Rodell
- University College London, UCL Center for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, London, United Kingdom; University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
| | - Eli Gibson
- University College London, UCL Center for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, London, United Kingdom; University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
| | - Caroline Moore
- University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom; University College London, Division of Surgery and Interventional Science, London, United Kingdom; University College London Hospitals NHS Foundation Trust, Department of Urology, London, United Kingdom
| | - Dean Barratt
- University College London, UCL Center for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, London, United Kingdom; University College London, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, United Kingdom
| |
Collapse
|
46
|
Biermann M, Reisæter LR. Automated Analysis of Gray-Scale Ultrasound Images of Thyroid Nodules (“Radiomics”) May Outperform Image Interpretation by Less Experienced Thyroid Radiologists. Clin Thyroidol 2018; 30:332-336. [DOI: 10.1089/ct.2018;30.332-336] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Affiliation(s)
| | - Lars R. Reisæter
- Department of Radiology, Bergen University Hospital, and Institute of Clinical Medicine, University of Bergen, Bergen, Norway
| |
Collapse
|