1. Johnson LA, Harmon SA, Yilmaz EC, Lin Y, Belue MJ, Merriman KM, Lay NS, Sanford TH, Sarma KV, Arnold CW, Xu Z, Roth HR, Yang D, Tetreault J, Xu D, Patel KR, Gurram S, Wood BJ, Citrin DE, Pinto PA, Choyke PL, Turkbey B. Automated prostate gland segmentation in challenging clinical cases: comparison of three artificial intelligence methods. Abdom Radiol (NY) 2024; 49:1545-1556. [PMID: 38512516] [DOI: 10.1007/s00261-024-04242-7]
Abstract
OBJECTIVE Automated methods for prostate segmentation on MRI are typically developed under ideal scanning and anatomical conditions. This study evaluates three different prostate segmentation AI algorithms in a challenging population of patients with prior treatments, variable anatomic characteristics, complex clinical history, or atypical MRI acquisition parameters. MATERIALS AND METHODS A single-institution retrospective database was queried for the following conditions at prostate MRI: prior prostate-specific oncologic treatment, transurethral resection of the prostate (TURP), abdominoperineal resection (APR), hip prosthesis (HP), diversity of prostate volumes (large ≥ 150 cc, small ≤ 25 cc), whole-gland tumor burden, magnet strength, noted poor quality, and various scanners (outside/vendors). Final inclusion criteria required availability of an axial T2-weighted (T2W) sequence and a corresponding prostate organ segmentation from an expert radiologist. Three previously developed algorithms were evaluated: (1) a deep learning (DL)-based model, (2) a commercially available shape-based model, and (3) a federated DL-based model. The Dice Similarity Coefficient (DSC) was calculated against the expert segmentation. DSC by model and scan factors was evaluated with the Wilcoxon signed-rank test and a linear mixed-effects (LMER) model. RESULTS 683 scans (651 patients) met inclusion criteria (mean prostate volume 60.1 cc [range 9.05-329 cc]). Overall DSC scores for models 1, 2, and 3 were 0.916 (0.707-0.971), 0.873 (0-0.997), and 0.894 (0.025-0.961), respectively, with the DL-based models demonstrating significantly higher performance (p < 0.01). In sub-group analysis by factors, Model 1 outperformed Model 2 (all p < 0.05) and Model 3 (all p < 0.001). Performance of all models was negatively impacted by prostate volume and poor signal quality (p < 0.01). Shape-based factors influenced the DL models (p < 0.001), while signal factors influenced all models (p < 0.001).
CONCLUSION Factors affecting anatomical and signal conditions of the prostate gland can adversely impact both DL and non-deep learning-based segmentation models.
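The overlap metric reported throughout this study can be illustrated with a short, generic sketch: the Dice Similarity Coefficient on binary masks represented as voxel-index sets. The masks below are toy values, not the study's data.

```python
# Generic Dice Similarity Coefficient (DSC) sketch; toy voxel sets only.

def dice_coefficient(pred, truth):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks given as voxel-index sets."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Toy example: 3 of 4 predicted voxels overlap a 4-voxel reference mask.
expert = {(0, 0), (0, 1), (1, 0), (1, 1)}
model = {(0, 0), (0, 1), (1, 0), (2, 2)}
print(round(dice_coefficient(model, expert), 3))  # → 0.75
```

A DSC of 1.0 indicates identical masks; the study's reported values (for example, 0.916 for Model 1) are averages of this statistic over all scans.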
Affiliation(s)
- Latrice A Johnson, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Stephanie A Harmon, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Enis C Yilmaz, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Yue Lin, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Mason J Belue, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Katie M Merriman, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Nathan S Lay, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Karthik V Sarma, Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA, USA
- Corey W Arnold, Department of Radiology, University of California, Los Angeles, Los Angeles, CA, USA
- Ziyue Xu, NVIDIA Corporation, Santa Clara, CA, USA
- Dong Yang, NVIDIA Corporation, Santa Clara, CA, USA
- Daguang Xu, NVIDIA Corporation, Santa Clara, CA, USA
- Krishnan R Patel, Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Sandeep Gurram, Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Bradford J Wood, Center for Interventional Oncology, National Cancer Institute, NIH, Bethesda, MD, USA; Department of Radiology, Clinical Center, NIH, Bethesda, MD, USA
- Deborah E Citrin, Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Peter A Pinto, Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Peter L Choyke, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Baris Turkbey, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr., MSC 1182, Building 10, Room B3B85, Bethesda, MD, 20892, USA
2. Parida PK, Dora L, Swain M, Agrawal S, Panda R. Data science methodologies in smart healthcare: a review. Health and Technology 2022. [DOI: 10.1007/s12553-022-00648-9]
3. Aldoj N, Biavati F, Dewey M, Hennemuth A, Asbach P, Sack I. Fully automated quantification of in vivo viscoelasticity of prostate zones using magnetic resonance elastography with Dense U-net segmentation. Sci Rep 2022; 12:2001. [PMID: 35132102] [PMCID: PMC8821548] [DOI: 10.1038/s41598-022-05878-5]
Abstract
Magnetic resonance elastography (MRE) for measuring viscoelasticity heavily depends on proper tissue segmentation, especially in heterogeneous organs such as the prostate. Using trained network-based image segmentation, we investigated if MRE data suffice to extract anatomical and viscoelastic information for automatic tabulation of zonal mechanical properties of the prostate. Overall, 40 patients with benign prostatic hyperplasia (BPH) or prostate cancer (PCa) were examined with three magnetic resonance imaging (MRI) sequences: T2-weighted MRI (T2w), diffusion-weighted imaging (DWI), and MRE-based tomoelastography, yielding six independent sets of imaging data per patient (T2w, DWI, apparent diffusion coefficient, MRE magnitude, shear wave speed, and loss angle maps). Combinations of these data were used to train Dense U-nets with manually segmented masks of the entire prostate gland (PG), central zone (CZ), and peripheral zone (PZ) in 30 patients and to validate them in 10 patients. Dice score (DS), sensitivity, specificity, and Hausdorff distance were determined. We found that segmentation based on MRE magnitude maps alone (DS, PG: 0.93 ± 0.04, CZ: 0.95 ± 0.03, PZ: 0.77 ± 0.05) was more accurate than magnitude maps combined with T2w and DWI_b (DS, PG: 0.91 ± 0.04, CZ: 0.91 ± 0.06, PZ: 0.63 ± 0.16) or T2w alone (DS, PG: 0.92 ± 0.03, CZ: 0.91 ± 0.04, PZ: 0.65 ± 0.08). Automatically tabulated MRE values were not different from ground-truth values (P>0.05). In conclusion, MRE combined with Dense U-net segmentation allows tabulation of quantitative imaging markers without manual analysis and independent of other MRI sequences and can thus contribute to PCa detection and classification.
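The Hausdorff distance reported alongside the Dice score above measures the worst-case boundary disagreement between two segmentations. A minimal generic sketch, with invented 2D contour points standing in for segmentation boundaries:

```python
# Symmetric Hausdorff distance between two point sets; toy contours only.
from math import dist

def hausdorff(a, b):
    """Max over both directions of the farthest nearest-neighbour distance."""
    d_ab = max(min(dist(p, q) for q in b) for p in a)
    d_ba = max(min(dist(p, q) for q in a) for p in b)
    return max(d_ab, d_ba)

contour_a = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
contour_b = [(0.0, 0.0), (1.0, 0.0), (1.0, 3.0)]
print(hausdorff(contour_a, contour_b))  # → 2.0
```

Unlike Dice, which rewards volumetric overlap, the Hausdorff distance is sensitive to single outlying boundary points, which is why the two metrics are usually reported together.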
Affiliation(s)
- Nader Aldoj, Department of Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Federico Biavati, Department of Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Marc Dewey, Department of Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany; DKTK (German Cancer Consortium), Partner Site Berlin, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany
- Anja Hennemuth, Institute of Computer-assisted Cardiovascular Medicine, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Patrick Asbach, Department of Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Ingolf Sack, Department of Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
4. Motamed S, Rogalla P, Khalvati F. RANDGAN: Randomized generative adversarial network for detection of COVID-19 in chest X-ray. Sci Rep 2021; 11:8602. [PMID: 33883609] [PMCID: PMC8060427] [DOI: 10.1038/s41598-021-87994-2]
Abstract
COVID-19 spread across the globe at an immense rate and has left healthcare systems unable to diagnose and test patients at the needed rate. Studies have shown promising results for detection of COVID-19 from viral and bacterial pneumonia in chest X-rays. Automation of COVID-19 testing using medical images can speed up the testing of patients where healthcare systems lack sufficient numbers of reverse-transcription polymerase chain reaction tests. Supervised deep learning models such as convolutional neural networks need enough labeled data for all classes to correctly learn the task of detection. Gathering labeled data is a cumbersome task that requires time and resources, which could further strain healthcare systems and radiologists at the early stages of a pandemic such as COVID-19. In this study, we propose a randomized generative adversarial network (RANDGAN) that detects images of an unknown class (COVID-19) from known and labelled classes (Normal and Viral Pneumonia) without the need for labels and training data from the unknown class of images (COVID-19). We used the largest publicly available COVID-19 chest X-ray dataset, COVIDx, which comprises Normal, Pneumonia, and COVID-19 images from multiple public databases. In this work, we use transfer learning to segment the lungs in the COVIDx dataset. Next, we show why segmentation of the region of interest (lungs) is vital to correctly learning the task of classification, specifically in datasets that contain images from different sources, as is the case for the COVIDx dataset. Finally, we show improved detection of COVID-19 cases using our generative model (RANDGAN) compared to conventional generative adversarial networks for anomaly detection in medical images, improving the area under the ROC curve from 0.71 to 0.77.
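The area under the ROC curve quoted above (0.71 → 0.77) equals the probability that a randomly chosen positive case receives a higher anomaly score than a randomly chosen negative case (the Mann-Whitney formulation). A minimal sketch with invented scores, not values from the paper:

```python
# ROC-AUC as the pairwise win rate of positive over negative scores; toy data.

def roc_auc(pos_scores, neg_scores):
    """AUC = P(score_pos > score_neg), counting ties as half a win."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

covid_scores = [0.9, 0.8, 0.4]   # hypothetical anomaly scores, positive cases
normal_scores = [0.3, 0.5, 0.2]  # hypothetical anomaly scores, negative cases
print(round(roc_auc(covid_scores, normal_scores), 3))  # → 0.889
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which puts the reported gain from 0.71 to 0.77 in context.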
Affiliation(s)
- Saman Motamed, Institute of Medical Science, University of Toronto, Toronto, ON, Canada; Department of Diagnostic Imaging, Neurosciences and Mental Health, The Hospital for Sick Children, University of Toronto, Toronto, ON, Canada
- Farzad Khalvati, Institute of Medical Science, University of Toronto, Toronto, ON, Canada; Department of Diagnostic Imaging, Neurosciences and Mental Health, The Hospital for Sick Children, University of Toronto, Toronto, ON, Canada; Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, ON, Canada
5. A 3D-2D Hybrid U-Net Convolutional Neural Network Approach to Prostate Organ Segmentation of Multiparametric MRI. AJR Am J Roentgenol 2020; 216:111-116. [PMID: 32812797] [DOI: 10.2214/ajr.19.22168]
Abstract
OBJECTIVE Prostate cancer is the most commonly diagnosed cancer in men in the United States with more than 200,000 new cases in 2018. Multiparametric MRI (mpMRI) is increasingly used for prostate cancer evaluation. Prostate organ segmentation is an essential step of surgical planning for prostate fusion biopsies. Deep learning convolutional neural networks (CNNs) are the predominant method of machine learning for medical image recognition. In this study, we describe a deep learning approach, a subset of artificial intelligence, for automatic localization and segmentation of prostates from mpMRI. MATERIALS AND METHODS This retrospective study included patients who underwent prostate MRI and ultrasound-MRI fusion transrectal biopsy between September 2014 and December 2016. Axial T2-weighted images were manually segmented by two abdominal radiologists, which served as ground truth. These manually segmented images were used for training on a customized hybrid 3D-2D U-Net CNN architecture in a fivefold cross-validation paradigm for neural network training and validation. The Dice score, a measure of overlap between manually segmented and automatically derived segmentations, and Pearson linear correlation coefficient of prostate volume were used for statistical evaluation. RESULTS The CNN was trained on 299 MRI examinations (total number of MR images = 7774) of 287 patients. The customized hybrid 3D-2D U-Net had a mean Dice score of 0.898 (range, 0.890-0.908) and a Pearson correlation coefficient for prostate volume of 0.974. CONCLUSION A deep learning CNN can automatically segment the prostate organ from clinical MR images. Further studies should examine developing pattern recognition for lesion localization and quantification.
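Two evaluation devices named above, five-fold cross-validation splits and the Pearson correlation between manually and automatically derived prostate volumes, can be sketched generically. The volumes below are illustrative placeholders, not study data:

```python
# Five-fold index splitting and Pearson correlation; toy volumes in cc.
from math import sqrt

def five_folds(n):
    """Indices for a simple five-fold cross-validation split of n cases."""
    return [list(range(i, n, 5)) for i in range(5)]

def pearson(x, y):
    """Pearson linear correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

manual_vols = [35.0, 48.0, 60.0, 72.0, 90.0]  # hypothetical expert volumes
auto_vols = [33.0, 50.0, 58.0, 75.0, 88.0]    # hypothetical CNN volumes
print(five_folds(10))
print(round(pearson(manual_vols, auto_vols), 3))  # → 0.993
```

A Pearson coefficient near 1 (the study reports 0.974) indicates the automated volumes track the manual ones almost linearly, even where voxel-level overlap is imperfect.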
6. Pesteie M, Abolmaesumi P, Rohling RN. Adaptive Augmentation of Medical Data Using Independently Conditional Variational Auto-Encoders. IEEE Trans Med Imaging 2019; 38:2807-2820. [PMID: 31059432] [DOI: 10.1109/tmi.2019.2914656]
Abstract
Current deep supervised learning methods typically require large amounts of labeled data for training. Since there is a significant cost associated with clinical data acquisition and labeling, medical datasets used for training these models are relatively small. In this paper, we aim to alleviate this limitation by proposing a variational generative model along with an effective data augmentation approach that uses the generative model to synthesize data. In our approach, the model learns the probability distribution of image data conditioned on a latent variable and the corresponding labels. The trained model can then be used to synthesize new images for data augmentation. We demonstrate the effectiveness of the approach on two independent clinical datasets consisting of ultrasound images of the spine and magnetic resonance images of the brain. For the spine dataset, a baseline and a residual model achieve an accuracy of 85% and 92%, respectively, on the image classification task using our method, compared to 78% and 83% with a conventional training approach. For the brain dataset, a baseline and a U-net network achieve a Dice coefficient of 84% and 88%, respectively, in tumor segmentation, compared to 80% and 83% with the conventional training approach.
7. Reda I, Khalil A, Elmogy M, Abou El-Fetouh A, Shalaby A, Abou El-Ghar M, Elmaghraby A, Ghazal M, El-Baz A. Deep Learning Role in Early Diagnosis of Prostate Cancer. Technol Cancer Res Treat 2019; 17:1533034618775530. [PMID: 29804518] [PMCID: PMC5972199] [DOI: 10.1177/1533034618775530]
Abstract
The objective of this work is to develop a computer-aided diagnostic system for early diagnosis of prostate cancer. The presented system integrates both clinical biomarkers (prostate-specific antigen) and features extracted from diffusion-weighted magnetic resonance imaging collected at multiple b values. The system performs three major processing steps. First, the prostate is delineated using a hybrid approach that combines a level-set model with nonnegative matrix factorization. Second, diffusion parameters are estimated and normalized: the apparent diffusion coefficients of the delineated prostate volumes at different b values are refined using a generalized Gaussian Markov random field model, and cumulative distribution functions of the processed apparent diffusion coefficients at multiple b values are constructed. In parallel, a K-nearest neighbor classifier is employed to transform the prostate-specific antigen results into diagnostic probabilities. Finally, those prostate-specific antigen-based probabilities are integrated, for better diagnostic accuracy, with the initial diagnostic probabilities obtained using stacked nonnegativity-constrained sparse autoencoders that employ the apparent diffusion coefficient cumulative distribution functions. Experiments conducted on 18 diffusion-weighted magnetic resonance imaging data sets achieved 94.4% diagnosis accuracy (sensitivity = 88.9% and specificity = 100%), which indicates the promise of the presented computer-aided diagnostic system.
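One step described above, a K-nearest-neighbor classifier turning a PSA value into a diagnostic probability, can be sketched as the fraction of malignant cases among the k nearest training examples. The training pairs and the 1D distance on PSA alone are fabricated for illustration; the paper's actual classifier and features are not specified here.

```python
# Toy KNN probability estimate from a single PSA feature; fabricated data.

def knn_probability(psa, training, k=3):
    """Fraction of the k nearest training cases (by |PSA difference|) labeled malignant."""
    neighbours = sorted(training, key=lambda t: abs(t[0] - psa))[:k]
    return sum(label for _, label in neighbours) / k

# (PSA ng/mL, label) with 1 = cancer, 0 = benign — invented toy pairs.
train = [(1.2, 0), (2.5, 0), (4.1, 0), (6.8, 1), (9.0, 1), (14.5, 1)]
print(round(knn_probability(7.5, train), 2))  # → 0.67
```

The resulting probability can then be fused with an image-based probability, which is the integration step the abstract describes.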
Affiliation(s)
- Islam Reda, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt; Department of Bioengineering, University of Louisville, Louisville, KY, USA
- Ashraf Khalil, Electrical and Computer Engineering Department, Abu Dhabi University, Abu Dhabi, United Arab Emirates
- Mohammed Elmogy, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt; Department of Bioengineering, University of Louisville, Louisville, KY, USA
- Ahmed Shalaby, Department of Bioengineering, University of Louisville, Louisville, KY, USA
- Adel Elmaghraby, Department of Computer Engineering and Computer Science, University of Louisville, Louisville, KY, USA
- Mohammed Ghazal, Electrical and Computer Engineering Department, Abu Dhabi University, Abu Dhabi, United Arab Emirates
- Ayman El-Baz, Department of Bioengineering, University of Louisville, Louisville, KY, USA
8. Mazurowski MA, Buda M, Saha A, Bashir MR. Deep learning in radiology: An overview of the concepts and a survey of the state of the art with focus on MRI. J Magn Reson Imaging 2019; 49:939-954. [PMID: 30575178] [PMCID: PMC6483404] [DOI: 10.1002/jmri.26534]
Abstract
Deep learning is a branch of artificial intelligence where networks of simple interconnected units are used to extract patterns from data in order to solve complex problems. Deep-learning algorithms have shown groundbreaking performance in a variety of sophisticated tasks, especially those related to images. They have often matched or exceeded human performance. Since the medical field of radiology mainly relies on extracting useful information from images, it is a very natural application area for deep learning, and research in this area has rapidly grown in recent years. In this article, we discuss the general context of radiology and opportunities for application of deep-learning algorithms. We also introduce basic concepts of deep learning, including convolutional neural networks. Then, we present a survey of the research in deep learning applied to radiology. We organize the studies by the types of specific tasks that they attempt to solve and review a broad range of deep-learning algorithms being utilized. Finally, we briefly discuss opportunities and challenges for incorporating deep learning in the radiology practice of the future. Level of Evidence: 3. Technical Efficacy: Stage 1.
Affiliation(s)
- Maciej A. Mazurowski, Department of Radiology, Duke University, Durham, NC; Department of Electrical and Computer Engineering, Duke University, Durham, NC; Duke Medical Physics Program, Duke University, Durham, NC
- Mateusz Buda, Department of Radiology, Duke University, Durham, NC
- Mustafa R. Bashir, Department of Radiology, Duke University, Durham, NC; Center for Advanced Magnetic Resonance Development, Duke University, Durham, NC
9. Tan L, Liang A, Li L, Liu W, Kang H, Chen C. Automatic prostate segmentation based on fusion between deep network and variational methods. J X-Ray Sci Technol 2019; 27:821-837. [PMID: 31403960] [DOI: 10.3233/xst-190524]
Abstract
BACKGROUND Segmentation of the prostate from magnetic resonance images (MRI) is a critical process for guiding prostate puncture and biopsy. Currently, the best results are obtained with convolutional neural networks (CNNs). However, challenges remain when applying CNNs to prostate segmentation, such as data distribution issues caused by inconsistent intensity levels and vague boundaries in MRI. OBJECTIVE To segment the prostate gland from an MRI dataset containing prostate images of limited resolution and quality. METHODS We propose and apply a global histogram matching approach to bring the intensity distribution of the MRI dataset closer to uniformity. To capture the real boundaries and improve segmentation accuracy, we employ a module of variational models. RESULTS Across seven evaluation metrics, our proposed fusion approach improved on the state-of-the-art V-net model: increases in the Dice Coefficient (11.2%), Jaccard Coefficient (13.7%), Volumetric Similarity (12.3%), Adjusted Rand Index (11.1%), and Area under the ROC Curve (11.6%), and reductions in the Mean Hausdorff Distance (16.1%) and Mahalanobis Distance (2.8%). The 3D reconstruction also validates the advantages of our proposed framework, especially in terms of smoothness, uniformity, and accuracy. In addition, selected examples of 2D visualization show that our segmentation results are closer to the real boundaries of the prostate and better represent prostate shapes. CONCLUSIONS Our proposed approach achieves significant performance improvements compared with existing methods based on the original CNN or pure variational models.
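The global histogram matching step can be sketched as a rank-matching transform: each input intensity is replaced by the reference intensity at the corresponding rank. This is a simplified 1D exact-match variant with made-up intensities; a real MRI pipeline would match binned cumulative distributions per volume.

```python
# Rank-based histogram matching sketch on 1D intensity lists; toy values.

def histogram_match(image, reference):
    """Map image intensities onto the reference distribution by sorted rank."""
    ref_sorted = sorted(reference)
    order = sorted(range(len(image)), key=lambda i: image[i])
    matched = [0] * len(image)
    for rank, idx in enumerate(order):
        # the rank-th darkest input pixel takes the corresponding reference intensity
        matched[idx] = ref_sorted[rank * len(reference) // len(image)]
    return matched

src = [10, 200, 50, 120]        # invented input intensities
ref = [0, 64, 128, 192, 255]    # invented reference distribution
print(histogram_match(src, ref))  # → [0, 192, 64, 128]
```

Applying the same reference distribution across a whole dataset is what pushes its overall intensity distribution "closer to uniformity", as the abstract puts it.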
Affiliation(s)
- Lu Tan, School of Electrical Engineering, Computing and Mathematical Sciences (Computing Discipline), Curtin University, Bentley, Western Australia, Australia
- Antoni Liang, School of Electrical Engineering, Computing and Mathematical Sciences (Computing Discipline), Curtin University, Bentley, Western Australia, Australia
- Ling Li, School of Electrical Engineering, Computing and Mathematical Sciences (Computing Discipline), Curtin University, Bentley, Western Australia, Australia
- Wanquan Liu, School of Electrical Engineering, Computing and Mathematical Sciences (Computing Discipline), Curtin University, Bentley, Western Australia, Australia
- Hanwen Kang, Department of Mechanical and Aerospace Engineering, Monash University, Clayton, VIC, Australia
- Chao Chen, Department of Mechanical and Aerospace Engineering, Monash University, Clayton, VIC, Australia