1
Magoulianitis V, Yang J, Yang Y, Xue J, Kaneko M, Cacciamani G, Abreu A, Duddalwar V, Kuo CCJ, Gill IS, Nikias C. PCa-RadHop: A transparent and lightweight feed-forward method for clinically significant prostate cancer segmentation. Comput Med Imaging Graph 2024; 116:102408. [PMID: 38908295] [DOI: 10.1016/j.compmedimag.2024.102408] [Received: 01/26/2024] [Revised: 05/30/2024] [Accepted: 05/31/2024]
Abstract
Prostate cancer is one of the most frequently occurring cancers in men, with a low survival rate if not diagnosed early. PI-RADS reading has a high false positive rate, thus increasing the incurred diagnostic costs and patient discomfort. Deep learning (DL) models achieve high segmentation performance, although they require a large model size and complexity. Also, DL models lack feature interpretability and are perceived as "black boxes" in the medical field. The PCa-RadHop pipeline is proposed in this work, aiming to provide a more transparent feature extraction process using a linear model. It adopts the recently introduced Green Learning (GL) paradigm, which offers a small model size and low complexity. PCa-RadHop consists of two stages: stage-1 extracts data-driven radiomics features from the bi-parametric Magnetic Resonance Imaging (bp-MRI) input and predicts an initial heatmap. To reduce the false positive rate, a subsequent stage-2 is introduced to refine the predictions by including more contextual information and radiomics features from each already detected Region of Interest (ROI). Experiments on the largest publicly available dataset, PI-CAI, show a competitive performance standing of the proposed method among other DL models, achieving an area under the curve (AUC) of 0.807 on a cohort of 1,000 patients. Moreover, PCa-RadHop maintains an orders-of-magnitude smaller model size and complexity.
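The two-stage idea described in this abstract (a transparent stage-1 scorer producing a heatmap, followed by ROI-level refinement with local context) can be sketched in a few lines. This is an illustrative toy under stated assumptions, not the authors' PCa-RadHop code: the function names, the logistic linear scorer, and the neighbourhood-mean refinement are invented stand-ins for the paper's Green Learning features.

```python
import numpy as np

def stage1_heatmap(volume, weights):
    """Stage-1 sketch: a linear model maps per-voxel features to scores
    (standing in for the data-driven radiomics features of the paper)."""
    feats = volume.reshape(-1, volume.shape[-1])      # (n_voxels, n_features)
    scores = 1.0 / (1.0 + np.exp(-feats @ weights))   # logistic squashing
    return scores.reshape(volume.shape[:-1])

def stage2_refine(heatmap, roi_centers, radius=2):
    """Stage-2 sketch: re-score each detected ROI using local context,
    here simply the mean heatmap value in a small neighbourhood."""
    refined = {}
    for r, c in roi_centers:
        patch = heatmap[max(r - radius, 0):r + radius + 1,
                        max(c - radius, 0):c + radius + 1]
        refined[(r, c)] = float(patch.mean())
    return refined

rng = np.random.default_rng(0)
vol = rng.normal(size=(8, 8, 3))                      # toy 8x8 slice, 3 features/voxel
hm = stage1_heatmap(vol, weights=np.array([0.5, -0.2, 0.1]))
refined = stage2_refine(hm, [(2, 2), (5, 6)])
```

The point of the second pass is that each candidate is re-judged with more surrounding evidence than a single voxel score carries, which is how the pipeline attacks the false-positive rate.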
Affiliation(s)
- Vasileios Magoulianitis
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Jiaxin Yang
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Yijing Yang
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Jintang Xue
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Masatomo Kaneko
- Department of Urology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA
- Giovanni Cacciamani
- Department of Urology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA
- Andre Abreu
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Vinay Duddalwar
- Department of Urology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA; Department of Radiology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA
- C-C Jay Kuo
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Inderbir S Gill
- Department of Urology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA
- Chrysostomos Nikias
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
2
Kou W, Rey C, Marshall H, Chiu B. Interactive Cascaded Network for Prostate Cancer Segmentation from Multimodality MRI with Automated Quality Assessment. Bioengineering (Basel) 2024; 11:796. [PMID: 39199754] [PMCID: PMC11351867] [DOI: 10.3390/bioengineering11080796] [Received: 06/22/2024] [Revised: 07/17/2024] [Accepted: 07/30/2024]
Abstract
The accurate segmentation of prostate cancer (PCa) from multiparametric MRI is crucial in clinical practice for guiding biopsy and treatment planning. Existing automated methods often lack the necessary accuracy and robustness in localizing PCa, whereas interactive segmentation methods, although more accurate, require user intervention on each input image, thereby limiting the cost-effectiveness of the segmentation workflow. Our innovative framework addresses the limitations of current methods by combining a coarse segmentation network, a rejection network, and an interactive deep network known as Segment Anything Model (SAM). The coarse segmentation network automatically generates initial segmentation results, which are evaluated by the rejection network to estimate their quality. Low-quality results are flagged for user interaction, with the user providing a region of interest (ROI) enclosing the lesions, whereas for high-quality results, ROIs are cropped from the automatic segmentation. Both manually and automatically defined ROIs are fed into SAM to produce the final fine segmentation. This approach significantly reduces the annotation burden and achieves substantial improvements by flagging approximately 20% of the images with the lowest quality scores for manual annotation. With only half of the images manually annotated, the final segmentation accuracy is statistically indistinguishable from that achieved using full manual annotation. Although this paper focuses on prostate lesion segmentation from multimodality MRI, the framework can be adapted to other medical image segmentation applications to improve segmentation efficiency while maintaining high accuracy standards.
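The routing step at the heart of this framework (flag the lowest-quality fraction of automatic results for manual ROIs, keep automatic ROIs for the rest) can be sketched as follows. This is a generic illustration assuming scalar quality scores; the function name and threshold logic are invented and do not reproduce the paper's rejection network.

```python
def route_by_quality(quality_scores, flag_fraction=0.2):
    """Send the lowest-scoring fraction of automatic segmentations to
    manual ROI annotation; the rest keep automatically cropped ROIs.
    Returns (manual_indices, automatic_indices)."""
    n = len(quality_scores)
    n_flag = max(1, round(n * flag_fraction))
    order = sorted(range(n), key=lambda i: quality_scores[i])  # worst first
    manual = set(order[:n_flag])
    automatic = [i for i in range(n) if i not in manual]
    return sorted(manual), automatic

# toy example: 5 cases, the single worst case (20%) is flagged
manual, automatic = route_by_quality([0.9, 0.2, 0.7, 0.95, 0.4])
```

Either branch ends in the same place, an ROI prompt fed to the interactive model, which is what lets the pipeline trade annotation effort against accuracy with a single threshold.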
Affiliation(s)
- Weixuan Kou
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong
- Cristian Rey
- Schulich School of Medicine & Dentistry, Western University, London, ON N6A 5C1, Canada
- Harry Marshall
- Department of Radiology, Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Bernard Chiu
- Department of Physics & Computer Science, Wilfrid Laurier University, Waterloo, ON N2L 3C5, Canada
3
Li Y, El Habib Daho M, Conze PH, Zeghlache R, Le Boité H, Tadayoni R, Cochener B, Lamard M, Quellec G. A review of deep learning-based information fusion techniques for multimodal medical image classification. Comput Biol Med 2024; 177:108635. [PMID: 38796881] [DOI: 10.1016/j.compbiomed.2024.108635] [Received: 10/05/2023] [Revised: 03/18/2024] [Accepted: 05/18/2024]
Abstract
Multimodal medical imaging plays a pivotal role in clinical diagnosis and research, as it combines information from various imaging modalities to provide a more comprehensive understanding of the underlying pathology. Recently, deep learning-based multimodal fusion techniques have emerged as powerful tools for improving medical image classification. This review offers a thorough analysis of the developments in deep learning-based multimodal fusion for medical classification tasks. We explore the complementary relationships among prevalent clinical modalities and outline three main fusion schemes for multimodal classification networks: input fusion, intermediate fusion (encompassing single-level fusion, hierarchical fusion, and attention-based fusion), and output fusion. By evaluating the performance of these fusion techniques, we provide insight into the suitability of different network architectures for various multimodal fusion scenarios and application domains. Furthermore, we delve into challenges related to network architecture selection, the handling of incomplete multimodal data, and the potential limitations of multimodal fusion. Finally, we spotlight the promising future of Transformer-based multimodal fusion techniques and give recommendations for future research in this rapidly evolving field.
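The three fusion schemes outlined in the review can be contrasted in a few lines of code. This is a deliberately tiny, generic illustration with made-up stand-in encoders and classifier heads, not any specific reviewed architecture; only the placement of the fusion point is the point of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
mod_a = rng.normal(size=(4, 16))   # toy features for 4 cases, modality A
mod_b = rng.normal(size=(4, 16))   # same 4 cases, modality B

def encode(x, out_dim=8):
    """Stand-in per-branch feature extractor (fixed linear map + tanh)."""
    w = np.ones((x.shape[1], out_dim)) / x.shape[1]
    return np.tanh(x @ w)

def classify(z):
    """Stand-in classification head producing one probability per case."""
    return 1.0 / (1.0 + np.exp(-z.sum(axis=1)))

# input fusion: concatenate raw modalities, then one shared network
p_input = classify(encode(np.concatenate([mod_a, mod_b], axis=1)))

# intermediate fusion: encode each modality separately, fuse the features
p_intermediate = classify(np.concatenate([encode(mod_a), encode(mod_b)], axis=1))

# output fusion: fully separate pipelines, average the decisions
p_output = (classify(encode(mod_a)) + classify(encode(mod_b))) / 2
```

Moving the fusion point later trades shared low-level representation learning for per-modality specialization, which is the design axis the review's taxonomy captures.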
Affiliation(s)
- Yihao Li
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Mostafa El Habib Daho
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Rachid Zeghlache
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Hugo Le Boité
- Sorbonne University, Paris, France; Ophthalmology Department, Lariboisière Hospital, AP-HP, Paris, France
- Ramin Tadayoni
- Ophthalmology Department, Lariboisière Hospital, AP-HP, Paris, France; Paris Cité University, Paris, France
- Béatrice Cochener
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France; Ophthalmology Department, CHRU Brest, Brest, France
- Mathieu Lamard
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
4
Talyshinskii A, Hameed BMZ, Ravinder PP, Naik N, Randhawa P, Shah M, Rai BP, Tokas T, Somani BK. Catalyzing Precision Medicine: Artificial Intelligence Advancements in Prostate Cancer Diagnosis and Management. Cancers (Basel) 2024; 16:1809. [PMID: 38791888] [PMCID: PMC11119252] [DOI: 10.3390/cancers16101809] [Received: 03/11/2024] [Revised: 04/29/2024] [Accepted: 05/07/2024]
Abstract
BACKGROUND The aim was to analyze the current state of deep learning (DL)-based prostate cancer (PCa) diagnosis with a focus on magnetic resonance (MR) prostate reconstruction; PCa detection/stratification/reconstruction; positron emission tomography/computed tomography (PET/CT); androgen deprivation therapy (ADT); prostate biopsy; associated challenges and their clinical implications. METHODS A search of the PubMed database was conducted based on the inclusion and exclusion criteria for the use of DL methods within the abovementioned areas. RESULTS A total of 784 articles were found, of which 64 were included. Reconstruction of the prostate, the detection and stratification of prostate cancer, the reconstruction of prostate cancer, and diagnosis on PET/CT, ADT, and biopsy were analyzed in 21, 22, 6, 7, 2, and 6 studies, respectively. Among studies describing DL use for MR-based purposes, datasets acquired at 3 T, 1.5 T, and mixed 3/1.5 T field strengths were used in 18/19/5, 0/1/0, and 3/2/1 studies, respectively. Six of the seven studies analyzing DL for PET/CT diagnosis used data from a single institution. Among the radiotracers, [68Ga]Ga-PSMA-11, [18F]DCFPyl, and [18F]PSMA-1007 were used in 5, 1, and 1 study, respectively. Only two studies that analyzed DL in the context of ADT met the inclusion criteria. Both were performed with a single-institution dataset with only manual labeling of training data. The studies analyzing DL for prostate biopsy were performed with single- and multi-institutional datasets. TeUS, TRUS, and MRI were used as input modalities in two, three, and one study, respectively. CONCLUSION DL models in prostate cancer diagnosis show promise but are not yet ready for clinical use due to variability in methods, labels, and evaluation criteria. Conducting additional research while acknowledging all the limitations outlined is crucial for reinforcing the utility and effectiveness of DL-based models in clinical settings.
Affiliation(s)
- Ali Talyshinskii
- Department of Urology and Andrology, Astana Medical University, Astana 010000, Kazakhstan
- Prajwal P. Ravinder
- Department of Urology, Kasturba Medical College, Mangaluru, Manipal Academy of Higher Education, Manipal 576104, India
- Nithesh Naik
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Princy Randhawa
- Department of Mechatronics, Manipal University Jaipur, Jaipur 303007, India
- Milap Shah
- Department of Urology, Aarogyam Hospital, Ahmedabad 380014, India
- Bhavan Prasad Rai
- Department of Urology, Freeman Hospital, Newcastle upon Tyne NE7 7DN, UK
- Theodoros Tokas
- Department of Urology, Medical School, University General Hospital of Heraklion, University of Crete, 14122 Heraklion, Greece
- Bhaskar K. Somani
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Department of Urology, University Hospital Southampton NHS Trust, Southampton SO16 6YD, UK
5
Weißer C, Netzer N, Görtz M, Schütz V, Hielscher T, Schwab C, Hohenfellner M, Schlemmer HP, Maier-Hein KH, Bonekamp D. Weakly Supervised MRI Slice-Level Deep Learning Classification of Prostate Cancer Approximates Full Voxel- and Slice-Level Annotation: Effect of Increasing Training Set Size. J Magn Reson Imaging 2024; 59:1409-1422. [PMID: 37504495] [DOI: 10.1002/jmri.28891] [Received: 03/20/2023] [Revised: 06/16/2023] [Accepted: 06/16/2023]
Abstract
BACKGROUND Weakly supervised learning promises reduced annotation effort while maintaining performance. PURPOSE To compare weakly supervised training with full slice-wise annotated training of a deep convolutional classification network (CNN) for prostate cancer (PC). STUDY TYPE Retrospective. SUBJECTS One thousand four hundred eighty-nine consecutive institutional prostate MRI examinations from men with suspicion for PC (65 ± 8 years) between January 2015 and November 2020 were split into training (N = 794, enriched with 204 PROSTATEx examinations) and test set (N = 695). FIELD STRENGTH/SEQUENCE 1.5 and 3T, T2-weighted turbo-spin-echo and diffusion-weighted echo-planar imaging. ASSESSMENT Histopathological ground truth was provided by targeted and extended systematic biopsy. Reference training was performed using slice-level annotation (SLA) and compared to iterative training utilizing patient-level annotations (PLAs) with supervised feedback of CNN estimates into the next training iteration at three incremental training set sizes (N = 200, 500, 998). Model performance was assessed by comparing specificity at fixed sensitivity of 0.97 [254/262], emulating PI-RADS ≥ 3 decisions, and 0.88-0.90 [231-236/262], emulating PI-RADS ≥ 4 decisions. STATISTICAL TESTS Receiver operating characteristic (ROC) curves and areas under the curve (AUC) were compared using the DeLong and Obuchowski tests. Sensitivity and specificity were compared using the McNemar test. The statistical significance threshold was P = 0.05. RESULTS Test set (N = 695) ROC-AUC performance of SLA (trained with 200/500/998 exams) was 0.75/0.80/0.83, respectively. PLA achieved a lower ROC-AUC of 0.64/0.72/0.78. Both increased performance significantly with increasing training set size. ROC-AUC for SLA at 500 exams was comparable to PLA at 998 exams (P = 0.28). ROC-AUC was significantly different between SLA and PLA at the same training set sizes; however, the ROC-AUC difference decreased significantly from 200 to 998 training exams. Emulating PI-RADS ≥ 3 decisions, the difference between PLA specificity of 0.12 [51/433] and SLA specificity of 0.13 [55/433] became undetectable (P = 1.0) at 998 exams. Emulating PI-RADS ≥ 4 decisions, at 998 exams, SLA specificity of 0.51 [221/433] remained higher than PLA specificity at 0.39 [170/433]. However, PLA specificity at 998 exams became comparable to SLA specificity of 0.37 [159/433] at 200 exams (P = 0.70). DATA CONCLUSION Weakly supervised training of a classification CNN using patient-level-only annotation had lower performance compared to training with slice-wise annotations, but improved significantly faster with additional training data. EVIDENCE LEVEL 3. TECHNICAL EFFICACY Stage 2.
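The fixed-sensitivity operating points used in this study (specificity at sensitivity 0.97 or 0.88-0.90, emulating PI-RADS cut-offs) can be computed with a short helper. This is a generic sketch, assuming a threshold chosen on the same scored cases, and is not the study's evaluation code.

```python
import math

def specificity_at_sensitivity(scores, labels, target_sens):
    """Choose the highest threshold at which the fraction of positives
    scoring >= threshold meets target_sens, then report specificity
    (the fraction of negatives scoring below that threshold)."""
    pos = sorted((s for s, y in zip(scores, labels) if y == 1), reverse=True)
    n_needed = math.ceil(target_sens * len(pos))
    thresh = pos[n_needed - 1]              # n_needed positives sit at or above it
    neg = [s for s, y in zip(scores, labels) if y == 0]
    spec = sum(s < thresh for s in neg) / len(neg)
    return thresh, spec

# toy example: 4 positives and 4 negatives
scores = [0.9, 0.8, 0.7, 0.2, 0.6, 0.3, 0.1, 0.05]
labels = [1,   1,   1,   1,   0,   0,   0,   0]
thresh, spec = specificity_at_sensitivity(scores, labels, target_sens=1.0)
```

Fixing sensitivity first mirrors clinical practice: the cost of a missed cancer dominates, so models are compared by how many negatives they can clear at an agreed detection rate.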
Affiliation(s)
- Cedric Weißer
- Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Heidelberg University Medical School, Heidelberg, Germany
- Nils Netzer
- Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Heidelberg University Medical School, Heidelberg, Germany
- Magdalena Görtz
- Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Junior Clinical Cooperation Unit, Multiparametric Methods for Early Detection of Prostate Cancer, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Viktoria Schütz
- Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Thomas Hielscher
- Division of Biostatistics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Constantin Schwab
- Institute of Pathology, University of Heidelberg Medical Center, Heidelberg, Germany
- Markus Hohenfellner
- Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Heinz-Peter Schlemmer
- Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- National Center for Tumor Diseases (NCT) Heidelberg, Heidelberg, Germany
- German Cancer Consortium (DKTK), Germany
- Klaus H Maier-Hein
- National Center for Tumor Diseases (NCT) Heidelberg, Heidelberg, Germany
- Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- David Bonekamp
- Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Heidelberg University Medical School, Heidelberg, Germany
- National Center for Tumor Diseases (NCT) Heidelberg, Heidelberg, Germany
- German Cancer Consortium (DKTK), Germany
6
Khanfari H, Mehranfar S, Cheki M, Mohammadi Sadr M, Moniri S, Heydarheydari S, Rezaeijo SM. Exploring the efficacy of multi-flavored feature extraction with radiomics and deep features for prostate cancer grading on mpMRI. BMC Med Imaging 2023; 23:195. [PMID: 37993801] [PMCID: PMC10664625] [DOI: 10.1186/s12880-023-01140-0] [Received: 03/02/2023] [Accepted: 10/26/2023]
Abstract
BACKGROUND The purpose of this study is to investigate the use of radiomics and deep features obtained from multiparametric magnetic resonance imaging (mpMRI) for grading prostate cancer. We propose a novel approach called multi-flavored feature extraction or tensor, which combines four mpMRI images using eight different fusion techniques to create 52 images or datasets for each patient. We evaluate the effectiveness of this approach in grading prostate cancer and compare it to traditional methods. METHODS We used the PROSTATEx-2 dataset consisting of 111 patients' images from T2W-transverse, T2W-sagittal, DWI, and ADC images. We used eight fusion techniques to merge T2W, DWI, and ADC images, namely Laplacian Pyramid, Ratio of the low-pass pyramid, Discrete Wavelet Transform, Dual-Tree Complex Wavelet Transform, Curvelet Transform, Wavelet Fusion, Weighted Fusion, and Principal Component Analysis. Prostate cancer images were manually segmented, and radiomics features were extracted using the Pyradiomics library in Python. We also used an Autoencoder for deep feature extraction. We used five different feature sets to train the classifiers: all radiomics features, all deep features, radiomics features linked with PCA, deep features linked with PCA, and a combination of radiomics and deep features. We processed the data, including balancing, standardization, PCA, correlation, and Least Absolute Shrinkage and Selection Operator (LASSO) regression. Finally, we used nine classifiers to classify different Gleason grades. RESULTS Our results show that the SVM classifier with deep features linked with PCA achieved the most promising results, with an AUC of 0.94 and a balanced accuracy of 0.79. Logistic regression performed best when using only the deep features, with an AUC of 0.93 and balanced accuracy of 0.76. Gaussian Naive Bayes had lower performance compared to other classifiers, while KNN achieved high performance using deep features linked with PCA. Random Forest performed well with the combination of deep features and radiomics features, achieving an AUC of 0.94 and balanced accuracy of 0.76. The Voting classifiers showed higher performance when using only the deep features, with Voting 2 achieving the highest performance, with an AUC of 0.95 and balanced accuracy of 0.78. CONCLUSION Our study concludes that the proposed multi-flavored feature extraction or tensor approach using radiomics and deep features can be an effective method for grading prostate cancer. Our findings suggest that deep features may be more effective than radiomics features alone in accurately classifying prostate cancer.
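The "features linked with PCA" step that recurs throughout this abstract can be sketched with a plain SVD-based projection over concatenated radiomics and deep features. The data shapes and component count below are illustrative assumptions; this is not the paper's pipeline or its settings.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Centre the feature matrix and project it onto the top principal
    components, computed via singular value decomposition."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)  # rows of vt = components
    return Xc @ vt[:n_components].T

rng = np.random.default_rng(2)
radiomics = rng.normal(size=(20, 30))   # 20 lesions x 30 handcrafted features
deep = rng.normal(size=(20, 64))        # 20 lesions x 64 autoencoder features
combined = np.concatenate([radiomics, deep], axis=1)
reduced = pca_reduce(combined, n_components=5)
```

Reducing the concatenated feature block this way keeps the downstream classifiers (SVM, logistic regression, and so on) from overfitting when features far outnumber patients, which is the usual motivation for the PCA step.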
Affiliation(s)
- Hasan Khanfari
- Department of Mechanical Engineering, Petroleum University of Technology, Ahvaz, Iran
- Saeed Mehranfar
- Department of Electrical Engineering, Amirkabir University of Technology, Tehran, Iran
- Mohsen Cheki
- Department of Medical Imaging and Radiation Sciences, Faculty of Paramedicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
- Mahmoud Mohammadi Sadr
- Department of Medical Physics, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Samir Moniri
- Department of Medical Imaging and Radiation Sciences, Faculty of Paramedicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
- Sahel Heydarheydari
- Department of Medical Imaging and Radiation Sciences, Faculty of Paramedicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
- Seyed Masoud Rezaeijo
- Department of Medical Physics, Faculty of Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
- Cancer Research Center, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
7
He M, Cao Y, Chi C, Yang X, Ramin R, Wang S, Yang G, Mukhtorov O, Zhang L, Kazantsev A, Enikeev M, Hu K. Research progress on deep learning in magnetic resonance imaging-based diagnosis and treatment of prostate cancer: a review on the current status and perspectives. Front Oncol 2023; 13:1189370. [PMID: 37546423] [PMCID: PMC10400334] [DOI: 10.3389/fonc.2023.1189370] [Received: 03/19/2023] [Accepted: 05/30/2023]
Abstract
Multiparametric magnetic resonance imaging (mpMRI) has emerged as a first-line screening and diagnostic tool for prostate cancer, aiding in treatment selection and noninvasive radiotherapy guidance. However, the manual interpretation of MRI data is challenging and time-consuming, which may impact sensitivity and specificity. With recent technological advances, artificial intelligence (AI) in the form of computer-aided diagnosis (CAD) based on MRI data has been applied to prostate cancer diagnosis and treatment. Among AI techniques, deep learning involving convolutional neural networks contributes to the detection, segmentation, scoring, grading, and prognostic evaluation of prostate cancer. CAD systems offer automatic operation, rapid processing, and high accuracy, incorporating multiple sequences of multiparametric MRI data of the prostate gland into the deep learning model. Thus, they have become a research direction of great interest, especially in smart healthcare. This review highlights the current progress of deep learning technology in MRI-based diagnosis and treatment of prostate cancer. The key elements of deep learning-based MRI image processing in CAD systems and radiotherapy of prostate cancer are briefly described, making them understandable not only for radiologists but also for general physicians without specialized imaging interpretation training. Deep learning technology enables lesion identification, detection, and segmentation; grading and scoring of prostate cancer; and prediction of postoperative recurrence and prognostic outcomes. The diagnostic accuracy of deep learning can be improved by optimizing models and algorithms, expanding medical database resources, and combining multi-omics data with comprehensive analysis of various morphological data. Deep learning has the potential to become the key diagnostic method in prostate cancer diagnosis and treatment in the future.
Affiliation(s)
- Mingze He
- Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Yu Cao
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Changliang Chi
- Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
- Xinyi Yang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Rzayev Ramin
- Department of Radiology, The Second University Clinic, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Shuowen Wang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Guodong Yang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Otabek Mukhtorov
- Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I. Avenue Mira, Kostroma, Russia
- Liqun Zhang
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, Liaoning, China
- Anton Kazantsev
- Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I. Avenue Mira, Kostroma, Russia
- Mikhail Enikeev
- Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Kebang Hu
- Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
8
Kumar GV, Bellary MI, Reddy TB. Prostate cancer classification with MRI using Taylor-Bird Squirrel Optimization based Deep Recurrent Neural Network. The Imaging Science Journal 2023. [DOI: 10.1080/13682199.2023.2165242]
Affiliation(s)
- Goddumarri Vijay Kumar
- Dept. of Computer Science and Technology, Sri Krishnadevaraya University, Ananthapuram, A.P., India
- Mohammed Ismail Bellary
- Department of Artificial Intelligence & Machine Learning, P.A. College of Engineering, Mangalore, Affiliated to Visvesvaraya Technological University, Belagavi, K.A., India
- Thota Bhaskara Reddy
- Dept. of Computer Science and Technology, Sri Krishnadevaraya University, Ananthapuram, A.P., India
9
Buvaneswari B, Vijayaraj J, Satheesh Kumar B. Histopathological image-based breast cancer detection employing 3D-convolutional neural network feature extraction and Stochastic Diffusion Kernel Recursive Neural Networks classification. The Imaging Science Journal 2023. [DOI: 10.1080/13682199.2022.2161148]
Affiliation(s)
- B. Buvaneswari
- Department of Information Technology, Panimalar Engineering College, Chennai, India
- J. Vijayaraj
- Department of Artificial Intelligence and Data Science, Easwari Engineering College, Chennai, India
- B. Satheesh Kumar
- Department of Computer Science and Engineering, School of Computing Science and Engineering, Galgotias University, Greater Noida, India
10
Belue MJ, Harmon SA, Lay NS, Daryanani A, Phelps TE, Choyke PL, Turkbey B. The Low Rate of Adherence to Checklist for Artificial Intelligence in Medical Imaging Criteria Among Published Prostate MRI Artificial Intelligence Algorithms. J Am Coll Radiol 2023; 20:134-145. [PMID: 35922018] [PMCID: PMC9887098] [DOI: 10.1016/j.jacr.2022.05.022] [Received: 01/08/2022] [Revised: 05/13/2022] [Accepted: 05/18/2022]
Abstract
OBJECTIVE To determine the rigor, generalizability, and reproducibility of published classification and detection artificial intelligence (AI) models for prostate cancer (PCa) on MRI using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) guidelines, a 42-item checklist that is considered a measure of best practice for presenting and reviewing medical imaging AI research. MATERIALS AND METHODS This review searched English literature for studies proposing PCa AI detection and classification models on MRI. Each study was evaluated with the CLAIM checklist. The additional outcomes for which data were sought included measures of AI model performance (eg, area under the curve [AUC], sensitivity, specificity, free-response operating characteristic curves), training and validation and testing group sample size, AI approach, detection versus classification AI, public data set utilization, MRI sequences used, and definition of gold standard for ground truth. The percentage of CLAIM checklist fulfillment was used to stratify studies into quartiles. Wilcoxon's rank-sum test was used for pair-wise comparisons. RESULTS In all, 75 studies were identified, and 53 studies qualified for analysis. The original CLAIM items that most studies did not fulfill include item 12 (77% no): de-identification methods; item 13 (68% no): handling missing data; item 15 (47% no): rationale for choosing ground truth reference standard; item 18 (55% no): measurements of inter- and intrareader variability; item 31 (60% no): inclusion of validated interpretability maps; and item 37 (92% no): inclusion of failure analysis to elucidate AI model weaknesses. Comparing AUC against percentage-CLAIM-fulfillment quartile revealed a significant difference in mean AUC between quartile 1 and quartile 2 (0.78 versus 0.86, P = .034) and between quartile 1 and quartile 4 (0.78 versus 0.89, P = .003). Based on additional information and outcome metrics gathered in this study, additional measures of best practice are defined. These new items include disclosure of public dataset usage, ground truth definition in comparison to other referenced works in the defined task, and sample size power calculation. CONCLUSION A large proportion of AI studies do not fulfill key items in the CLAIM guidelines within their methods and results sections. The percentage of CLAIM checklist fulfillment is weakly associated with improved AI model performance. Additions or supplementations to CLAIM are recommended to improve publishing standards and aid reviewers in determining study rigor.
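The fulfillment-percentage and quartile stratification used in this study can be reproduced mechanically. The yes/no/n-a encoding, the example item answers, and the rank-based quartile rule below are assumptions for illustration, not the authors' scoring script.

```python
def claim_fulfillment(answers):
    """Fraction of applicable checklist items answered 'yes';
    'answers' maps a CLAIM item number to 'yes', 'no', or 'n/a'
    (hypothetical encoding)."""
    applicable = [v for v in answers.values() if v != "n/a"]
    return sum(v == "yes" for v in applicable) / len(applicable)

def quartile(cohort_pcts, p):
    """1-based quartile of fulfillment percentage p within a cohort,
    by rank of p among the cohort's values."""
    below = sum(q < p for q in cohort_pcts)
    return min(4, 4 * below // len(cohort_pcts) + 1)

# hypothetical partial scoring for one study (item numbers from the abstract)
study = {12: "no", 13: "no", 15: "yes", 18: "yes", 31: "no", 37: "n/a"}
pct = claim_fulfillment(study)   # 2 of 5 applicable items fulfilled
```

Stratifying by fulfillment percentage rather than raw item counts keeps studies with different numbers of applicable items comparable.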
Affiliation(s)
- Mason J Belue: Medical Research Scholars Program Fellow, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Stephanie A Harmon: Staff Scientist, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Nathan S Lay: Staff Scientist, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Asha Daryanani: Intramural Research Training Program Fellow, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Tim E Phelps: Postdoctoral Fellow, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Peter L Choyke: Artificial Intelligence Resource, Chief of Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Baris Turkbey: Senior Clinician/Director, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
11
Zhao L, Bao J, Qiao X, Jin P, Ji Y, Li Z, Zhang J, Su Y, Ji L, Shen J, Zhang Y, Niu L, Xie W, Hu C, Shen H, Wang X, Liu J, Tian J. Predicting clinically significant prostate cancer with a deep learning approach: a multicentre retrospective study. Eur J Nucl Med Mol Imaging 2023; 50:727-741. [PMID: 36409317 PMCID: PMC9852176 DOI: 10.1007/s00259-022-06036-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2022] [Accepted: 11/06/2022] [Indexed: 11/22/2022]
Abstract
PURPOSE This study aimed to develop deep learning (DL) models based on multicentre biparametric magnetic resonance imaging (bpMRI) for the diagnosis of clinically significant prostate cancer (csPCa) and to compare their performance with that of Prostate Imaging Reporting and Data System (PI-RADS) assessment by expert radiologists based on multiparametric MRI (mpMRI). METHODS We included 1861 consecutive male patients who underwent mpMRI followed by radical prostatectomy or biopsy at seven hospitals. These patients were divided into a training cohort (1216 patients from three hospitals) and external validation cohorts (645 patients from four hospitals). PI-RADS assessment was performed by expert radiologists. We developed DL models for classification between benign and malignant lesions (DL-BM) and between csPCa and non-csPCa (DL-CS). An integrated model combining PI-RADS and the DL-CS model, abbreviated as PIDL-CS, was also developed. The performances of the DL models and PIDL-CS were compared with that of PI-RADS. RESULTS In each external validation cohort, the area under the receiver operating characteristic curve (AUC) values of the DL-BM and DL-CS models were not significantly different from that of PI-RADS (P > 0.05), whereas the AUC of PIDL-CS was superior to that of PI-RADS (P < 0.05), except in one external validation cohort (P > 0.05). The specificity of PIDL-CS for the detection of csPCa was much higher than that of PI-RADS (P < 0.05). CONCLUSION Our proposed DL models are a potential non-invasive auxiliary tool for predicting csPCa. Furthermore, PIDL-CS greatly increased the specificity of csPCa detection compared with PI-RADS assessment by expert radiologists, potentially reducing unnecessary biopsies and helping radiologists achieve a precise diagnosis of csPCa.
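The AUC reported throughout these studies is a standard library call away; a minimal sketch with invented labels and scores (not study data):

```python
# Toy example of the AUC metric compared across the DL models and PI-RADS;
# labels and probabilities are invented for illustration.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 0, 1]                    # 1 = csPCa on pathology
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.3, 0.7]  # model probabilities
auc = roc_auc_score(y_true, y_score)
print(f"AUC = {auc:.4f}")  # 0.9375
```

The AUC equals the probability that a randomly chosen csPCa case is scored higher than a randomly chosen non-csPCa case; here 15 of the 16 positive-negative pairs are ordered correctly.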
Affiliation(s)
- Litao Zhao: School of Engineering Medicine, Beihang University, Beijing 100191, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of China, Beijing 100191, China; School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China
- Jie Bao: Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou 215006, Jiangsu, China
- Xiaomeng Qiao: Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou 215006, Jiangsu, China
- Pengfei Jin: Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou 215006, Jiangsu, China
- Yanting Ji: Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou 215006, Jiangsu, China; Department of Radiology, The Affiliated Zhangjiagang Hospital of Soochow University, Zhangjiagang 215638, Jiangsu, China
- Zhenkai Li: Department of Radiology, Suzhou Kowloon Hospital, Shanghai Jiaotong University School of Medicine, Suzhou 215028, Jiangsu, China
- Ji Zhang: Department of Radiology, The People’s Hospital of Taizhou, Taizhou 225399, Jiangsu, China
- Yueting Su: Department of Radiology, The People’s Hospital of Taizhou, Taizhou 225399, Jiangsu, China
- Libiao Ji: Department of Radiology, Changshu No.1 People’s Hospital, Changshu 215501, Jiangsu, China
- Junkang Shen: Department of Radiology, The Second Affiliated Hospital of Soochow University, Suzhou 215004, Jiangsu, China
- Yueyue Zhang: Department of Radiology, The Second Affiliated Hospital of Soochow University, Suzhou 215004, Jiangsu, China
- Lei Niu: Department of Radiology, The People’s Hospital of Suqian, Suqian 223812, Jiangsu, China
- Wanfang Xie: School of Engineering Medicine, Beihang University, Beijing 100191, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of China, Beijing 100191, China; School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China
- Chunhong Hu: Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou 215006, Jiangsu, China
- Hailin Shen: Department of Radiology, Suzhou Kowloon Hospital, Shanghai Jiaotong University School of Medicine, Suzhou 215028, Jiangsu, China
- Ximing Wang: Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou 215006, Jiangsu, China
- Jiangang Liu: School of Engineering Medicine, Beihang University, Beijing 100191, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of China, Beijing 100191, China
- Jie Tian: School of Engineering Medicine, Beihang University, Beijing 100191, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of China, Beijing 100191, China
12
Fang L, Wang X. Multi-input Unet model based on the integrated block and the aggregation connection for MRI brain tumor segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104027] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
13
Primary Open-Angle Glaucoma Diagnosis From Optic Disc Photographs Using a Siamese Network. OPHTHALMOLOGY SCIENCE 2022; 2:100209. [PMID: 36531584 PMCID: PMC9754976 DOI: 10.1016/j.xops.2022.100209] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 08/01/2022] [Accepted: 08/05/2022] [Indexed: 11/20/2022]
Abstract
Purpose Primary open-angle glaucoma (POAG) is one of the leading causes of irreversible blindness in the United States and worldwide. Although deep learning methods have been proposed to diagnose POAG, these methods have all used a single image as input. In contrast, glaucoma specialists typically compare a follow-up image with the baseline image to diagnose incident glaucoma. To simulate this process, we proposed a Siamese neural network, POAGNet, to detect POAG from optic disc photographs. Design The POAGNet, an algorithm for glaucoma diagnosis, was developed using optic disc photographs. Participants The POAGNet was trained and evaluated on 2 data sets: (1) 37 339 optic disc photographs from 1636 Ocular Hypertension Treatment Study (OHTS) participants and (2) 3684 optic disc photographs from the Sequential fundus Images for Glaucoma (SIG) data set. Gold standard labels were obtained using reading center grades. Methods We proposed a Siamese network model, POAGNet, to simulate the clinical process of identifying POAG from optic disc photographs. The POAGNet consists of 2 side outputs for deep supervision and uses convolution to measure the similarity between the 2 networks. Main Outcome Measures The main outcome measures were the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. Results In POAG diagnosis, extensive experiments showed that POAGNet performed better than the best state-of-the-art model on the OHTS test set (area under the curve [AUC] 0.9587 versus 0.8750). It also outperformed the baseline models on the SIG test set (AUC 0.7518 versus 0.6434). To assess the transferability of POAGNet, we also validated the impact of cross-data set variability on our model. The model trained on OHTS achieved an AUC of 0.7490 on SIG, comparable to the previous model trained on the same data set. When trained on the combination of SIG and OHTS, our model achieved a higher AUC than the single-data set model (0.8165 versus 0.7518). These results demonstrate the relative generalizability of POAGNet. Conclusions By simulating the clinical grading process, POAGNet demonstrated high accuracy in POAG diagnosis. These results highlight the potential of deep learning to assist and enhance clinical POAG diagnosis. The POAGNet is publicly available at https://github.com/bionlplab/poagnet.
14
Lu X, Zhang S, Liu Z, Liu S, Huang J, Kong G, Li M, Liang Y, Cui Y, Yang C, Zhao S. Ultrasonographic pathological grading of prostate cancer using automatic region-based Gleason grading network. Comput Med Imaging Graph 2022; 102:102125. [PMID: 36257091 DOI: 10.1016/j.compmedimag.2022.102125] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Revised: 08/26/2022] [Accepted: 09/20/2022] [Indexed: 11/05/2022]
Abstract
The Gleason scoring system is a reliable method for quantifying the aggressiveness of prostate cancer, providing an important reference for clinical assessment of therapeutic strategies. However, to the best of our knowledge, no study has addressed the pathological grading of prostate cancer from single ultrasound images. In this work, a novel Automatic Region-based Gleason Grading (ARGG) network for prostate cancer based on deep learning is proposed. ARGG consists of two stages: (1) a region labeling object detection (RLOD) network labels the prostate cancer lesion region; (2) a Gleason grading network (GNet) performs pathological grading of prostate ultrasound images. In RLOD, a new feature fusion structure, the Skip-connected Feature Pyramid Network (CFPN), is proposed as an auxiliary branch for extracting features and enhancing the fusion of high-level and low-level features, which helps to detect small lesions and extract image detail. In GNet, we designed a synchronized pulse enhancement module (SPEM) based on pulse-coupled neural networks to enhance the RLOD detection results, which are used as training samples; the enhanced results and the originals are then fed into a channel attention classification network (CACN), which introduces an attention mechanism to benefit the prediction of cancer grade. Experiments on a dataset of prostate ultrasound images collected from hospitals show that the proposed Gleason grading model outperforms manual diagnosis by physicians, with a precision of 0.830. In addition, we evaluated the lesion detection performance of RLOD, which achieves a mean Dice metric of 0.815.
Affiliation(s)
- Xu Lu: Guangdong Polytechnic Normal University, Guangzhou 510665, China; Pazhou Lab, Guangzhou 510330, China
- Shulian Zhang: Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Zhiyong Liu: Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Shaopeng Liu: Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Jun Huang: Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Guoquan Kong: Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Mingzhu Li: Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Yinying Liang: Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Yunneng Cui: Department of Radiology, Foshan Maternity and Children's Healthcare Hospital Affiliated to Southern Medical University, Foshan 528000, China
- Chuan Yang: Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Shen Zhao: Department of Artificial Intelligence, Sun Yat-sen University, Guangzhou 510006, China
15
Shao L, Liu Z, Liu J, Yan Y, Sun K, Liu X, Lu J, Tian J. Patient-level grading prediction of prostate cancer from mp-MRI via GMINet. Comput Biol Med 2022; 150:106168. [PMID: 36240594 DOI: 10.1016/j.compbiomed.2022.106168] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 09/21/2022] [Accepted: 10/01/2022] [Indexed: 11/03/2022]
Abstract
Magnetic resonance imaging (MRI) is considered the best imaging modality for non-invasive observation of prostate cancer. However, existing quantitative analysis methods still face challenges in patient-level prediction, including accuracy, interpretability, context understanding, dependence on tumor delineation, and multi-sequence fusion. We therefore propose a topological graph-guided multi-instance network (GMINet) to capture the global contextual information of multi-parametric MRI (mp-MRI) for patient-level prediction. We integrate visual information from multi-slice MRI with slice-to-slice correlations for a more complete context, and propose a novel attention-flowing strategy to fuse the different MRI-based network branches for mp-MRI. Our method achieves state-of-the-art performance for prostate cancer on a multi-center dataset (N = 478) and a public dataset (N = 204). The five-class Grade Group accuracy is 81.1 ± 1.8% (multi-center dataset) and the area under the curve for detecting clinically significant prostate cancer is 0.801 ± 0.018 (public dataset), both on the test sets of five-fold cross-validation. The model also achieves tumor detection through attention analysis, which improves its interpretability. The method holds promise for further improving the predictive accuracy of MRI in the diagnosis and treatment of prostate cancer.
Affiliation(s)
- Lizhi Shao: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing 100190, China
- Zhenyu Liu: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Jiangang Liu: Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, Beijing 100191, China
- Ye Yan: Department of Urology, Peking University Third Hospital, Beijing 100191, China
- Kai Sun: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing 100190, China
- Xiangyu Liu: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing 100190, China
- Jian Lu: Department of Urology, Peking University Third Hospital, Beijing 100191, China
- Jie Tian: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing 100190, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, Beijing 100191, China
16
Hassan T, Shafay M, Hassan B, Akram MU, ElBaz A, Werghi N. Knowledge distillation driven instance segmentation for grading prostate cancer. Comput Biol Med 2022; 150:106124. [PMID: 36208597 DOI: 10.1016/j.compbiomed.2022.106124] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Revised: 08/29/2022] [Accepted: 09/17/2022] [Indexed: 11/27/2022]
Abstract
Prostate cancer (PCa) is one of the deadliest cancers in men, and identifying cancerous tissue patterns at an early stage can assist clinicians in treating PCa spread in a timely manner. Many researchers have developed deep learning systems for mass screening of PCa. These systems, however, are commonly trained on well-annotated datasets in order to produce accurate results. Obtaining such data for training is often time- and resource-demanding in clinical settings and can compromise screening performance. To address these limitations, we present a novel knowledge distillation-based instance segmentation scheme that allows conventional semantic segmentation models to perform instance-aware segmentation, extracting stroma, benign, and cancerous prostate tissue from whole slide images (WSI) with incremental few-shot training. The extracted tissues are then used to compute majority and minority Gleason scores, which in turn grade the PCa according to clinical standards. The proposed scheme has been thoroughly tested on two datasets containing around 10,516 and 11,000 WSI scans, respectively. Across the two datasets, it outperforms state-of-the-art methods by 2.01% and 4.45% in mean IoU score for identifying prostate tissues, and by 10.73% and 11.42% in F1 score for grading PCa according to clinical standards. Furthermore, in a blinded experiment with a panel of expert pathologists, the proposed scheme achieved statistically significant Pearson correlations of 0.9192 and 0.8984 with the clinicians' grading.
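The agreement statistic reported in that blinded experiment is a Pearson correlation between two grade vectors; a hedged sketch with hypothetical grades (not the study's data):

```python
# Hypothetical model-vs-pathologist Gleason grade vectors, for illustration only.
from scipy.stats import pearsonr

model_grades = [1, 2, 2, 3, 4, 5, 3, 4]
pathologist_grades = [1, 2, 3, 3, 4, 5, 2, 4]

# Pearson r measures linear agreement; p tests r against zero correlation.
r, p = pearsonr(model_grades, pathologist_grades)
print(f"Pearson r = {r:.4f}")
```

An r near 1, as the study reports, means the model's grades rise and fall almost in lockstep with the pathologists'.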
Affiliation(s)
- Taimur Hassan: KUCARS and C2PS, Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi 127788, United Arab Emirates; Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad 44000, Pakistan
- Muhammad Shafay: KUCARS and C2PS, Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi 127788, United Arab Emirates
- Bilal Hassan: KUCARS and C2PS, Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi 127788, United Arab Emirates; School of Automation Science and Electrical Engineering, Beihang University (BUAA), Beijing 100191, China
- Muhammad Usman Akram: Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad 44000, Pakistan
- Ayman ElBaz: Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA
- Naoufel Werghi: KUCARS and C2PS, Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi 127788, United Arab Emirates
17
Alfano R, Bauman GS, Gomez JA, Gaed M, Moussa M, Chin J, Pautler S, Ward AD. Prostate cancer classification using radiomics and machine learning on mp-MRI validated using co-registered histology. Eur J Radiol 2022; 156:110494. [PMID: 36095953 DOI: 10.1016/j.ejrad.2022.110494] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Revised: 07/04/2022] [Accepted: 08/16/2022] [Indexed: 11/21/2022]
Abstract
BACKGROUND Multi-parametric magnetic resonance imaging (mp-MRI) is emerging as a useful tool for prostate cancer (PCa) detection but currently has unaddressed limitations. Computer-aided diagnosis (CAD) systems have been developed to address these needs, but many of the approaches used to generate and validate the models have inherent biases. METHOD All clinically significant PCa on histology was mapped to mp-MRI using a previously validated registration algorithm. Shape- and size-matched non-PCa regions were selected using a proposed sampling algorithm to eliminate biases toward shape and size. Further analysis assessed biases related to inter-zonal variability. RESULTS A 5-feature Naïve-Bayes classifier produced an area under the receiver operating characteristic curve (AUC) of 0.80, validated using leave-one-patient-out cross-validation. As mean inter-class area mismatch increased, median AUC trended higher, positively biasing classifiers toward inflated AUCs. Classifiers were invariant to differences in shape between PCa and non-PCa lesions (AUC: 0.82 vs 0.82). Performance of models trained and tested only in the peripheral zone was lower than in the central gland (AUC: 0.75 vs 0.95). CONCLUSION We developed a radiomics-based machine learning system to classify PCa vs non-PCa tissue on mp-MRI, validated on accurately co-registered mid-gland histology with a measured target registration error. Potential biases in model development were interrogated to provide considerations for future work in this area.
Affiliation(s)
- Ryan Alfano: Baines Imaging Research Laboratory, 790 Commissioners Rd E, London, ON N6A 5W9, Canada; Lawson Health Research Institute, 750 Base Line Rd E, London, ON N6C 2R5, Canada; Western University, Department of Medical Biophysics, 1151 Richmond St., London, ON N6A 3K7, Canada
- Glenn S Bauman: Western University, Department of Medical Biophysics, 1151 Richmond St., London, ON N6A 3K7, Canada; Western University, Department of Oncology, 1151 Richmond St., London, ON N6A 3K7, Canada
- Jose A Gomez: Western University, Department of Pathology and Laboratory Medicine, 1151 Richmond St., London, ON N6A 3K7, Canada
- Mena Gaed: Western University, Department of Pathology and Laboratory Medicine, 1151 Richmond St., London, ON N6A 3K7, Canada
- Madeleine Moussa: Western University, Department of Pathology and Laboratory Medicine, 1151 Richmond St., London, ON N6A 3K7, Canada
- Joseph Chin: Western University, Department of Surgery, 1151 Richmond St., London, ON N6A 3K7, Canada; Western University, Department of Oncology, 1151 Richmond St., London, ON N6A 3K7, Canada
- Stephen Pautler: Western University, Department of Surgery, 1151 Richmond St., London, ON N6A 3K7, Canada; Western University, Department of Oncology, 1151 Richmond St., London, ON N6A 3K7, Canada
- Aaron D Ward: Baines Imaging Research Laboratory, 790 Commissioners Rd E, London, ON N6A 5W9, Canada; Lawson Health Research Institute, 750 Base Line Rd E, London, ON N6C 2R5, Canada; Western University, Department of Medical Biophysics, 1151 Richmond St., London, ON N6A 3K7, Canada; Western University, Department of Oncology, 1151 Richmond St., London, ON N6A 3K7, Canada
18
Zhu L, Gao G, Zhu Y, Han C, Liu X, Li D, Liu W, Wang X, Zhang J, Zhang X, Wang X. Fully automated detection and localization of clinically significant prostate cancer on MR images using a cascaded convolutional neural network. Front Oncol 2022; 12:958065. [PMID: 36249048 PMCID: PMC9558117 DOI: 10.3389/fonc.2022.958065] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 09/12/2022] [Indexed: 11/13/2022] Open
Abstract
Purpose To develop a cascaded deep learning model trained with apparent diffusion coefficient (ADC) and T2-weighted imaging (T2WI) for fully automated detection and localization of clinically significant prostate cancer (csPCa). Methods This retrospective study included 347 consecutive patients (235 csPCa, 112 non-csPCa) with high-quality prostate MRI data, who were randomly assigned to training, validation, and test sets. The ground truth was obtained by manual csPCa lesion segmentation according to pathological results. The proposed cascaded model, based on Res-UNet, takes prostate MR images (T2WI+ADC or ADC alone) as input and automatically segments the whole prostate gland, the anatomic zones, and the csPCa region step by step. Model performance was evaluated and compared with PI-RADS (version 2.1) assessment using sensitivity, specificity, accuracy, and the Dice similarity coefficient (DSC) in the held-out test set. Results In the test set, the per-lesion sensitivities of the biparametric (ADC + T2WI) model, the ADC model, and PI-RADS assessment were 95.5% (84/88), 94.3% (83/88), and 94.3% (83/88), respectively (all p > 0.05). The mean DSCs for csPCa lesions were 0.64 ± 0.24 and 0.66 ± 0.23 for the biparametric and ADC models, respectively. The sensitivity, specificity, and accuracy of the biparametric model were 95.6% (108/113), 91.5% (665/727), and 92.0% (773/840) per sextant, and 98.6% (68/69), 64.8% (46/71), and 81.4% (114/140) per patient. The biparametric model performed similarly to PI-RADS assessment (p > 0.05) and had higher per-sextant specificity than the ADC model (86.8% [631/727], p < 0.001). Conclusion The cascaded deep learning model trained with ADC and T2WI achieves good performance for automated csPCa detection and localization.
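The Dice similarity coefficient (DSC) used above to score lesion overlap has a simple closed form, 2|A ∩ B| / (|A| + |B|); a minimal sketch on tiny hypothetical masks (not study data):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    # eps guards against division by zero when both masks are empty.
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Tiny hypothetical masks (1 = csPCa voxel), invented for illustration.
pred = np.array([[0, 1, 1],
                 [0, 1, 0]])
truth = np.array([[0, 1, 0],
                  [0, 1, 1]])
print(f"DSC = {dice_coefficient(pred, truth):.3f}")  # 0.667
```

A DSC of 1 means perfect overlap and 0 means none, so the reported 0.64 to 0.66 indicates substantial but imperfect lesion agreement.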
Affiliation(s)
- Lina Zhu: Department of Radiology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Ge Gao: Department of Radiology, Peking University First Hospital, Beijing, China
- Yi Zhu: Department of Clinical & Technical Support, Philips Healthcare, Beijing, China
- Chao Han: Department of Radiology, Peking University First Hospital, Beijing, China
- Xiang Liu: Department of Radiology, Peking University First Hospital, Beijing, China
- Derun Li: Department of Urology, Peking University First Hospital, Beijing, China
- Weipeng Liu: Department of Development and Research, Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Xiangpeng Wang: Department of Development and Research, Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Jingyuan Zhang: Department of Development and Research, Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Xiaodong Zhang: Department of Radiology, Peking University First Hospital, Beijing, China
- Xiaoying Wang: Department of Radiology, Peking University First Hospital, Beijing, China (corresponding author)
19
Saliency Transfer Learning and Central-Cropping Network for Prostate Cancer Classification. Neural Process Lett 2022. [DOI: 10.1007/s11063-022-10999-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
20
Lin M, Hou B, Liu L, Gordon M, Kass M, Wang F, Van Tassel SH, Peng Y. Automated diagnosing primary open-angle glaucoma from fundus image by simulating human's grading with deep learning. Sci Rep 2022; 12:14080. [PMID: 35982106 PMCID: PMC9388536 DOI: 10.1038/s41598-022-17753-4] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Accepted: 07/30/2022] [Indexed: 11/09/2022] Open
Abstract
Primary open-angle glaucoma (POAG) is a leading cause of irreversible blindness worldwide. Although deep learning methods have been proposed to diagnose POAG, it remains challenging to develop a robust and explainable algorithm that automatically facilitates downstream diagnostic tasks. In this study, we present an automated classification algorithm, GlaucomaNet, to identify POAG using variable fundus photographs from different populations and settings. GlaucomaNet consists of two convolutional neural networks that simulate the human grading process: learning the discriminative features and fusing the features for grading. We evaluated GlaucomaNet on two datasets: Ocular Hypertension Treatment Study (OHTS) participants and the Large-scale Attention-based Glaucoma (LAG) dataset. GlaucomaNet achieved AUCs of 0.904 and 0.997 for POAG diagnosis on the OHTS and LAG datasets, respectively. An ensemble of network architectures further improved diagnostic accuracy. By simulating the human grading process, GlaucomaNet demonstrated high accuracy with increased transparency in POAG diagnosis (comprehensiveness scores of 97% and 36%). These methods also address two well-known challenges in the field: the need for increased image data diversity and the heavy reliance on perimetry for POAG diagnosis. These results highlight the potential of deep learning to assist and enhance clinical POAG diagnosis. GlaucomaNet is publicly available at https://github.com/bionlplab/GlaucomaNet.
Affiliation(s)
- Mingquan Lin: Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
- Bojian Hou: Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
- Lei Liu: Institute for Public Health, Washington University School of Medicine, St. Louis, MO, USA
- Mae Gordon: Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO, USA
- Michael Kass: Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO, USA
- Fei Wang: Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
- Yifan Peng: Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
21
Li C, Li W, Liu C, Zheng H, Cai J, Wang S. Artificial intelligence in multi-parametric magnetic resonance imaging: A review. Med Phys 2022; 49:e1024-e1054. [PMID: 35980348 DOI: 10.1002/mp.15936] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 08/01/2022] [Accepted: 08/04/2022] [Indexed: 11/06/2022] Open
Abstract
Multi-parametric magnetic resonance imaging (mpMRI) is an indispensable tool in the clinical workflow for the diagnosis and treatment planning of various diseases. Machine learning-based artificial intelligence (AI) methods, especially those adopting deep learning, have been extensively employed for mpMRI image classification, segmentation, registration, detection, reconstruction, and super-resolution. The availability of increasing computational power and fast-improving AI algorithms has empowered numerous computer-based systems that apply mpMRI to disease diagnosis, imaging-guided radiotherapy, prediction of patient risk and overall survival time, and the development of advanced quantitative imaging technology for magnetic resonance fingerprinting. However, the wide clinical application of these systems is still limited by a number of factors, including robustness, reliability, and interpretability. This survey aims to provide an overview for new researchers in the field as well as radiologists, with the hope that they can understand the general concepts, main application scenarios, and remaining challenges of AI in mpMRI.
Affiliation(s)
- Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Wen Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Chenyang Liu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Peng Cheng Laboratory, Shenzhen, 518066, China
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China

22
Dwivedi DK, Jagannathan NR. Emerging MR methods for improved diagnosis of prostate cancer by multiparametric MRI. MAGMA 2022; 35:587-608. [PMID: 35867236] [DOI: 10.1007/s10334-022-01031-5]
Abstract
Current challenges of using serum prostate-specific antigen (PSA) level-based screening, such as the increased false positive rate, inability to detect clinically significant prostate cancer (PCa) with random biopsy, multifocality in PCa, and the molecular heterogeneity of PCa, can be addressed by integrating advanced multiparametric MR imaging (mpMRI) approaches into the diagnostic workup of PCa. The standard method for diagnosing PCa is a transrectal ultrasonography (TRUS)-guided systematic prostate biopsy, but it suffers from sampling errors and frequently fails to detect clinically significant PCa. mpMRI not only increases the detection of clinically significant PCa, but it also helps to reduce unnecessary biopsies because of its high negative predictive value. Furthermore, non-Cartesian image acquisition and compressed sensing have resulted in faster MR acquisition with improved signal-to-noise ratio, which can be used in quantitative MRI methods such as dynamic contrast-enhanced (DCE)-MRI. With the growing emphasis on the role of pre-biopsy mpMRI in the evaluation of PCa, there is an increased demand for innovative MRI methods that can improve PCa grading, detect clinically significant PCa, and guide biopsies. To meet these demands, in addition to routine T1-weighted, T2-weighted, DCE-MRI, diffusion MRI, and MR spectroscopy, several new MR methods, such as restriction spectrum imaging; vascular, extracellular, and restricted diffusion for cytometry in tumors (VERDICT); hybrid multi-dimensional MRI; luminal water imaging; and MR fingerprinting, have been developed for better characterization of the disease. Further, given the increasing interest in combining MR data with clinical and genomic data, radiomics and radiogenomics approaches are gaining attention.
These large datasets can also be utilized to develop computer-aided diagnostic tools, including automatic segmentation and detection of clinically significant PCa using machine learning methods.
Collapse
Affiliation(s)
- Durgesh Kumar Dwivedi
- Department of Radiodiagnosis, King George Medical University, Lucknow, UP, 226 003, India.
- Naranamangalam R Jagannathan
- Department of Radiology, Chettinad Hospital and Research Institute, Chettinad Academy of Research and Education, Kelambakkam, TN, 603 103, India.
- Department of Radiology, Sri Ramachandra Institute of Higher Education and Research, Chennai, TN, 600 116, India.
- Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, TN, 600 036, India.

23
Huang W, Wang X, Huang Y, Lin F, Tang X. Multi-parametric Magnetic Resonance Imaging Fusion for Automatic Classification of Prostate Cancer. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:471-474. [PMID: 36085623] [DOI: 10.1109/embc48229.2022.9871334]
Abstract
Computer-aided diagnosis (CAD) of prostate cancer (PCa) using multi-parametric magnetic resonance imaging (mp-MRI) has recently gained great research interest. In this work, a fully automatic CAD pipeline for PCa using mp-MRI data is presented. To fully exploit the mp-MRI data, we systematically investigate three multi-modal medical image fusion strategies in convolutional neural networks, namely input-level fusion, feature-level fusion, and decision-level fusion. Extensive experiments are conducted on two datasets with different PCa-related diagnostic tasks. We identify a pipeline that works best overall for both diagnostic tasks, two important components of which are stacking three adjacent slices as the input and performing decision-level fusion with specific loss weights. Clinical relevance: This work provides a practical method for automated diagnosis of PCa based on multi-parametric MRI.
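The decision-level fusion the abstract highlights can be sketched as a weighted average of per-sequence class probabilities. The following is an illustrative NumPy sketch, not the paper's implementation; the function name and the example weights are assumptions.

```python
import numpy as np

def decision_level_fusion(probs_per_modality, weights=None):
    """Fuse per-modality class probabilities by a weighted average.

    probs_per_modality: list of (n_classes,) probability vectors, one per
    mp-MRI sequence (e.g. T2W, ADC, DWI). weights: optional per-modality
    weights (uniform if omitted); normalized so the fused vector sums to 1.
    """
    probs = np.asarray(probs_per_modality, dtype=float)
    if weights is None:
        weights = np.ones(len(probs))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return weights @ probs  # (n_classes,) fused probabilities

# Hypothetical branch outputs for a binary task (non-significant vs significant)
fused = decision_level_fusion(
    [np.array([0.7, 0.3]), np.array([0.4, 0.6]), np.array([0.2, 0.8])],
    weights=[0.2, 0.3, 0.5],
)  # -> [0.36, 0.64]
```

In the paper the weighting is realized through per-branch loss weights during training; at inference a plain or weighted average of branch decisions plays the same role.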
24
Estimation of the Prostate Volume from Abdominal Ultrasound Images by Image-Patch Voting. Appl Sci (Basel) 2022. [DOI: 10.3390/app12031390]
Abstract
Estimation of the prostate volume with ultrasound offers many advantages, such as portability, low cost, harmlessness, and suitability for real-time operation. Abdominal ultrasound (AUS) is a practical procedure that deserves more attention in automated prostate-volume-estimation studies. As experts usually consider automatic end-to-end volume-estimation procedures to be non-transparent and uninterpretable systems, we proposed an expert-in-the-loop automatic system that follows the classical prostate-volume-estimation procedure. Our system directly estimates the diameter parameters of the standard ellipsoid formula to produce the prostate volume. To obtain the diameters, our system detects four diameter endpoints from the transverse and two diameter endpoints from the sagittal AUS images, as defined by the classical procedure. These endpoints are estimated using a new image-patch voting method to address characteristic problems of AUS images. We formed a novel prostate AUS dataset from 305 patients with both transverse and sagittal planes; the dataset includes MRI images for 75 of these patients, and at least one expert manually marked all the data. Extensive experiments performed on this dataset showed that the proposed system's estimates fell within the range of the experts' volume estimations, and that our system can be used in clinical practice.
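The "standard ellipsoid formula" referenced above is V = (π/6)·d₁·d₂·d₃. A minimal sketch of the final volume computation, assuming the endpoints have already been detected; the function names are mine, not the paper's:

```python
import math

def diameter(endpoint_a, endpoint_b):
    """Euclidean distance (in cm) between two detected endpoints."""
    return math.dist(endpoint_a, endpoint_b)

def ellipsoid_volume(d1_cm, d2_cm, d3_cm):
    """Standard ellipsoid formula for clinical prostate volumetry:
    V = (pi / 6) * d1 * d2 * d3, diameters in cm -> volume in mL."""
    return math.pi / 6 * d1_cm * d2_cm * d3_cm

# Two diameters come from the transverse view, one from the sagittal view
d = diameter((0.0, 0.0), (3.0, 4.0))  # 5.0 cm
v = ellipsoid_volume(5.0, 4.0, 4.5)   # about 47.1 mL
```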
25
Duran A, Dussert G, Rouvière O, Jaouen T, Jodoin PM, Lartizien C. ProstAttention-Net: a deep attention model for prostate cancer segmentation by aggressiveness in MRI scans. Med Image Anal 2022; 77:102347. [DOI: 10.1016/j.media.2021.102347]
26
Brunese L, Brunese MC, Carbone M, Ciccone V, Mercaldo F, Santone A. Automatic PI-RADS assignment by means of formal methods. Radiol Med 2021; 127:83-89. [PMID: 34822102] [DOI: 10.1007/s11547-021-01431-y]
Abstract
INTRODUCTION AND OBJECTIVES The Prostate Imaging Reporting and Data System (PI-RADS) version 2 has emerged as the standard in prostate magnetic resonance imaging examination. PI-RADS scores are assigned by radiologists and indicate the likelihood of a clinically significant cancer. The aim of this paper is to propose a methodology to automatically mark a magnetic resonance image with its related PI-RADS score. MATERIALS AND METHODS We collected a dataset from two different institutions composed of DWI ADC MRI for 91 patients, marked by expert radiologists with different PI-RADS scores. A formal model is generated starting from a prostate magnetic resonance image, and a set of properties related to the different PI-RADS scores is formulated with the help of expert radiologists and pathologists. RESULTS Our methodology relies on the adoption of formal methods and radiomic features; in the experimental analysis, we obtained a specificity and sensitivity equal to 1. CONCLUSIONS The proposed methodology is able to assign the PI-RADS score by analyzing prostate magnetic resonance imaging with very high accuracy.
Affiliation(s)
- Luca Brunese
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
- Maria Chiara Brunese
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
- Mattia Carbone
- Dipartimento Diagnostico per Immagini U.O.C. di Radiologia, Ospedale San Giovanni di Dio e Ruggi d'Aragona, Salerno, Italy
- Vincenzo Ciccone
- Dipartimento Diagnostico per Immagini U.O.C. di Radiologia, Ospedale San Giovanni di Dio e Ruggi d'Aragona, Salerno, Italy
- Francesco Mercaldo
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy.
- Antonella Santone
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy

27
A Combined Radiomics and Machine Learning Approach to Distinguish Clinically Significant Prostate Lesions on a Publicly Available MRI Dataset. J Imaging 2021; 7:215. [PMID: 34677301] [PMCID: PMC8540196] [DOI: 10.3390/jimaging7100215]
Abstract
Although prostate cancer is one of the most common causes of mortality and morbidity in advancing-age males, early diagnosis improves prognosis and modifies the therapy of choice. The aim of this study was to evaluate a combined radiomics and machine learning approach on a publicly available dataset in order to distinguish clinically significant from clinically non-significant prostate lesions. A total of 299 prostate lesions were included in the analysis. A univariate statistical analysis was performed to prove the goodness of the 60 extracted radiomic features in distinguishing prostate lesions. Then, a 10-fold cross-validation was used to train and test several models and the evaluation metrics were calculated; finally, a hold-out was performed and a wrapper feature selection was applied. The employed algorithms were Naïve Bayes, K-nearest neighbour, and several tree-based ones. The tree-based algorithms achieved the highest evaluation metrics, with accuracies over 80% and areas under the receiver-operating-characteristic curve below 0.80. Combined machine learning algorithms and radiomics based on clinical, routine, multiparametric magnetic resonance imaging were demonstrated to be a useful tool in prostate cancer stratification.
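The 10-fold cross-validation described above amounts to partitioning the 299 lesions into ten shuffled folds and rotating the held-out fold. A stdlib-only sketch of the fold construction, under the assumption of a simple shuffled split; the wrapper feature selection and the tree-based models themselves are not shown:

```python
import random

def k_fold_indices(n_samples, k=10, seed=0):
    """Split sample indices into k shuffled folds and yield
    (train_indices, test_indices) pairs, one pair per fold."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

# 299 lesions, as in the study above
splits = list(k_fold_indices(299, k=10))
```

Each lesion appears in exactly one test fold, so the evaluation metrics are averaged over predictions made on held-out data only.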
28
Finck T, Schinz D, Grundl L, Eisawy R, Yigitsoy M, Moosbauer J, Pfister F, Wiestler B. Automated Pathology Detection and Patient Triage in Routinely Acquired Head Computed Tomography Scans. Invest Radiol 2021; 56:571-578. [PMID: 33813571] [DOI: 10.1097/rli.0000000000000775]
Abstract
OBJECTIVES Anomaly detection systems can potentially uncover the entire spectrum of pathologies through deviations from a learned norm, meaningfully supporting the radiologist's workflow. We aim to report on the utility of a weakly supervised machine learning (ML) tool to detect pathologies in head computed tomography (CT) and adequately triage patients in an unselected patient cohort. MATERIALS AND METHODS All patients having undergone a head CT at a tertiary care hospital in March 2020 were eligible for retrospective analysis. Only the first scan of each patient was included. Anomaly detection was performed using a weakly supervised ML technique. Anomalous findings were displayed at the voxel level and pooled to an anomaly score ranging from 0 to 1. Thresholds for this score classified patients into three classes: "normal," "pathological," or "inconclusive." Expert-validated radiological reports with multiclass pathology labels were considered as ground truth. Test assessment was performed with receiver operating characteristic analysis; inconclusive results were pooled with "pathological" predictions for accuracy measurements. External validity was tested in a publicly available external data set (CQ500). RESULTS During the investigation period, 297 patients were referred for head CT, of whom 248 could be included. Definite ratings into normal/pathological were feasible in 167 patients (67.3%); 81 scans (32.7%) remained inconclusive. The area under the curve to differentiate normal from pathological scans was 0.95 (95% confidence interval, 0.92-0.98) for the study data set and 0.87 (95% confidence interval, 0.81-0.94) in external validation. The negative predictive value to exclude pathology if a scan was classified as "normal" was 100% (25/25), and the positive predictive value was 97.6% (137/141). Sensitivity and specificity were 100% and 86%, respectively. In patients with inconclusive ratings, pathologies were found in 26 (63%) of 41 cases.
CONCLUSIONS Our study provides the first clinical evaluation of a weakly supervised anomaly detection system for brain imaging. In an unselected, consecutive patient cohort, definite classification into normal/diseased was feasible in approximately two thirds of scans, along with excellent diagnostic accuracy and a perfect negative predictive value for excluding pathology. Moreover, anomaly heat maps provide important guidance toward pathology interpretation, also in cases with inconclusive ratings.
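The triage logic described in the abstract reduces to two thresholds on the pooled anomaly score, with inconclusive cases later pooled into the pathological class for accuracy measurements. A minimal sketch; the cut-off values 0.2 and 0.6 are placeholders, not the study's actual operating points:

```python
def triage(anomaly_score, low=0.2, high=0.6):
    """Map a pooled anomaly score in [0, 1] to one of three classes.
    The thresholds here are illustrative placeholders."""
    if anomaly_score < low:
        return "normal"
    if anomaly_score >= high:
        return "pathological"
    return "inconclusive"

def binarize(label):
    """Pool 'inconclusive' with 'pathological', as done for the
    accuracy measurements in the study."""
    return "normal" if label == "normal" else "pathological"
```

Raising `low` makes the "normal" call more conservative (higher negative predictive value) at the cost of more inconclusive or pathological flags.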
Affiliation(s)
- Tom Finck
- From the Department of Diagnostic and Interventional Neuroradiology, Klinikum Rechts der Isar, Technische Universität München
- David Schinz
- From the Department of Diagnostic and Interventional Neuroradiology, Klinikum Rechts der Isar, Technische Universität München
- Lioba Grundl
- From the Department of Diagnostic and Interventional Neuroradiology, Klinikum Rechts der Isar, Technische Universität München
- Benedikt Wiestler
- From the Department of Diagnostic and Interventional Neuroradiology, Klinikum Rechts der Isar, Technische Universität München

29
Wong T, Schieda N, Sathiadoss P, Haroon M, Abreu-Gomez J, Ukwatta E. Fully automated detection of prostate transition zone tumors on T2-weighted and apparent diffusion coefficient (ADC) map MR images using U-Net ensemble. Med Phys 2021; 48:6889-6900. [PMID: 34418108] [DOI: 10.1002/mp.15181]
Abstract
PURPOSE Accurate detection of transition zone (TZ) prostate cancer (PCa) on magnetic resonance imaging (MRI) remains challenging using clinical subjective assessment due to overlap between PCa and benign prostatic hyperplasia (BPH). The objective of this paper is to describe a deep-learning-based framework for fully automated detection of PCa in the TZ using T2-weighted (T2W) and apparent diffusion coefficient (ADC) map MR images. METHODS This was a single-center, IRB-approved, cross-sectional study of men undergoing 3T MRI on two systems. The dataset consisted of 196 patients (103 with and 93 without clinically significant [Grade Group 2 or higher] TZ PCa) to train and test our proposed methodology, with an additional 168 patients with peripheral zone PCa used only for training. We proposed an ensemble of classifiers in which multiple U-Net-based models are designed for prediction of TZ PCa location on ADC map MR images, with initial automated segmentation of the prostate to guide detection. We compared the accuracy of ADC alone to T2W and combined ADC+T2W MRI as input images, and investigated improvements using ensembles over their constituent models with different sources of diversity in the individual models, including hyperparameter configuration, loss function, and model architecture. RESULTS Our developed algorithm reported sensitivity and precision of 0.829 and 0.617 in 56 test cases, 31 with TZ PCa and 25 without clinically significant TZ tumors. Patient-wise classification accuracy had an area under the receiver operator characteristic curve (AUROC) of 0.974. Single U-Net models using ADC alone (sensitivity 0.829, precision 0.534) outperformed assessment using T2W (sensitivity 0.086, precision 0.081) and combined ADC+T2W (sensitivity 0.687, precision 0.489).
While the ensemble of U-Nets with varying hyperparameters demonstrated the highest performance, all ensembles improved PCa detection compared to individual models, with sensitivities and precisions close to the collective best of the constituent models. CONCLUSION We describe a deep-learning-based method for fully automated TZ PCa detection using ADC map MR images that outperformed assessment by T2W and ADC+T2W.
Affiliation(s)
- Timothy Wong
- School of Engineering, University of Guelph, Guelph, ON, Canada
- Nicola Schieda
- Department of Radiology, University of Ottawa, Ottawa, ON, Canada
- Paul Sathiadoss
- Department of Radiology, University of Ottawa, Ottawa, ON, Canada
- Mohammad Haroon
- Department of Radiology, University of Ottawa, Ottawa, ON, Canada
- Jorge Abreu-Gomez
- Joint Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Eranga Ukwatta
- School of Engineering, University of Guelph, Guelph, ON, Canada

30
Li W, Liang Y, Zhang X, Liu C, He L, Miao L, Sun W. A deep learning approach to automatic gingivitis screening based on classification and localization in RGB photos. Sci Rep 2021; 11:16831. [PMID: 34413332] [PMCID: PMC8376991] [DOI: 10.1038/s41598-021-96091-3]
Abstract
Routine dental visits are the most common approach to detecting gingivitis. However, such diagnosis can be unavailable in areas with limited medical resources and is costly for low-income populations. This study proposes to screen for the existence of gingivitis and its irritants, i.e., dental calculus and soft deposits, from oral photos with a novel multi-task learning convolutional neural network (CNN) model. The study can be meaningful for promoting public dental health, since it sheds light on a cost-effective and ubiquitous solution for the early detection of dental issues. With 625 patients included in this study, the classification area under the curve (AUC) for detecting gingivitis, dental calculus, and soft deposits was 87.11%, 80.11%, and 78.57%, respectively. Meanwhile, according to our experiments, the model can also localize the three types of findings on oral photos with moderate accuracy, which enables the model to explain the screening results. Compared with general-purpose CNNs, our model significantly outperformed them on both classification and localization tasks, which indicates the effectiveness of multi-task learning for dental disease detection. Overall, the study shows the potential of deep learning for enabling the screening of dental diseases among large populations.
Affiliation(s)
- Wen Li
- Department of Endodontics, Nanjing Stomatological Hospital, Medical School of Nanjing University, No.30 Zhongyang Road, Xuanwu District, Nanjing, Jiangsu, People's Republic of China
- Yuan Liang
- University of California, Los Angeles, USA
- Xuan Zhang
- Department of Periodontics, Nanjing Stomatological Hospital, Medical School of Nanjing University, No.30 Zhongyang Road, Xuanwu District, Nanjing, Jiangsu, People's Republic of China
- Chao Liu
- Department of Orthodontics, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, People's Republic of China
- Lei He
- University of California, Los Angeles, USA
- Leiying Miao
- Department of Endodontics, Nanjing Stomatological Hospital, Medical School of Nanjing University, No.30 Zhongyang Road, Xuanwu District, Nanjing, Jiangsu, People's Republic of China.
- Weibin Sun
- Department of Periodontics, Nanjing Stomatological Hospital, Medical School of Nanjing University, No.30 Zhongyang Road, Xuanwu District, Nanjing, Jiangsu, People's Republic of China.

31
Challenges in the Use of Artificial Intelligence for Prostate Cancer Diagnosis from Multiparametric Imaging Data. Cancers (Basel) 2021; 13:3944. [PMID: 34439099] [PMCID: PMC8391234] [DOI: 10.3390/cancers13163944]
Abstract
Simple Summary: Prostate cancer is one of the main threats to men's health. Its accurate diagnosis is crucial to properly treat patients depending on the cancer's level of aggressiveness. Tumor risk-stratification is still a challenging task due to the difficulties encountered during the reading of multi-parametric magnetic resonance images. Artificial intelligence models may help radiologists in staging the aggressiveness of equivocal lesions, reducing inter-observer variability and evaluation time. However, these algorithms need many high-quality images to work efficiently, bringing up overfitting and the lack of standardization and reproducibility as emerging issues to be addressed. This study attempts to illustrate the state of the art of current research on artificial intelligence methods to stratify prostate cancer for its clinical significance, suggesting how widespread use of public databases could be a possible solution to these issues. Abstract: Many efforts have been carried out toward the standardization of multiparametric magnetic resonance (mp-MR) image evaluation to detect prostate cancer (PCa), and specifically to differentiate levels of aggressiveness, a crucial aspect for clinical decision-making. The Prostate Imaging-Reporting and Data System (PI-RADS) has contributed notably to this aim. Nevertheless, as pointed out by the European Association of Urology (EAU 2020), PI-RADS still has limitations, mainly due to the moderate inter-reader reproducibility of mp-MRI. In recent years, many aspects of cancer diagnosis have taken advantage of artificial intelligence (AI), such as detection, segmentation of organs and/or lesions, and characterization. Here we focus on AI as a potentially important tool for standardization and reproducibility in the characterization of PCa by mp-MRI.
AI includes machine learning and deep learning techniques that have been shown to be successful in classifying mp-MR images, with performance similar to that of radiologists. Nevertheless, they perform differently depending on the acquisition system and protocol used. Moreover, these methods need a large number of samples that cover most of the variability in lesion appearance and zone to avoid overfitting. The use of publicly available datasets could improve AI performance and achieve a higher level of generalizability by exploiting large numbers of cases and a wide range of variability in the images. Here we explore the promise and advantages, as well as the pitfalls and warnings, outlined in some recent studies that attempted to classify clinically significant PCa and indolent lesions using AI methods. Specifically, we focus on the overfitting issue due to the scarcity of data and the lack of standardization and reproducibility in every step of mp-MR image acquisition and classifier implementation. Finally, we point out that a solution can be found in the use of publicly available datasets, whose usage has already been promoted by some important initiatives. Our future perspective is that AI models may become reliable tools for clinicians in PCa diagnosis, reducing inter-observer variability and evaluation time.
32
Cao R, Zhong X, Afshari S, Felker E, Suvannarerg V, Tubtawee T, Vangala S, Scalzo F, Raman S, Sung K. Performance of Deep Learning and Genitourinary Radiologists in Detection of Prostate Cancer Using 3-T Multiparametric Magnetic Resonance Imaging. J Magn Reson Imaging 2021; 54:474-483. [PMID: 33709532] [PMCID: PMC8812258] [DOI: 10.1002/jmri.27595]
Abstract
BACKGROUND Several deep learning-based techniques have been developed for prostate cancer (PCa) detection using multiparametric magnetic resonance imaging (mpMRI), but few of them have been rigorously evaluated relative to radiologists' performance or whole-mount histopathology (WMHP). PURPOSE To compare the performance of a previously proposed deep learning algorithm, FocalNet, and expert radiologists in the detection of PCa on mpMRI with WMHP as the reference. STUDY TYPE Retrospective, single-center study. SUBJECTS A total of 553 patients (development cohort: 427 patients; evaluation cohort: 126 patients) who underwent 3-T mpMRI prior to radical prostatectomy from October 2010 to February 2018. FIELD STRENGTH/SEQUENCE 3-T, T2-weighted imaging and diffusion-weighted imaging. ASSESSMENT FocalNet was trained on the development cohort to predict PCa locations by detection points, with a confidence value for each point, on the evaluation cohort. Four fellowship-trained genitourinary (GU) radiologists independently evaluated the evaluation cohort to detect suspicious PCa foci, annotate detection point locations, and assign a five-point suspicion score (1: least suspicious, 5: most suspicious) for each annotated detection point. The PCa detection performance of FocalNet and the radiologists was evaluated by the lesion detection sensitivity vs. the number of false-positive detections at different thresholds on suspicion scores. Clinically significant lesions: Gleason Group (GG) ≥ 2 or pathological size ≥ 10 mm. Index lesions: the highest GG and the largest pathological size (secondary). STATISTICAL TESTS Bootstrap hypothesis test for the detection sensitivity between radiologists and FocalNet. RESULTS For the overall differential detection sensitivity, FocalNet was 5.1% and 4.7% below the radiologists for clinically significant and index lesions, respectively; however, the differences were not statistically significant (P = 0.413 and P = 0.282, respectively).
DATA CONCLUSION FocalNet achieved slightly lower, but not statistically significantly different, PCa detection performance compared with GU radiologists. Compared with radiologists, FocalNet demonstrated similar detection performance in a highly sensitive setting (suspicion score ≥ 1) or a highly specific setting (suspicion score = 5), but lower performance in between. LEVEL OF EVIDENCE 3. TECHNICAL EFFICACY Stage 2.
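The evaluation described above trades lesion detection sensitivity against false positives by sweeping a threshold on the suspicion score. A simplified sketch of the per-threshold sensitivity computation; it assumes detection points have already been matched to ground-truth lesions, and all names are illustrative rather than FocalNet's actual evaluation code:

```python
def sensitivity_at_threshold(detections, n_true_lesions, threshold):
    """Fraction of true lesions detected when only detections whose
    suspicion score >= threshold are kept.

    detections: (score, matched_lesion_id) pairs, where the id is None
    for a false-positive detection point.
    """
    kept = {lesion_id for score, lesion_id in detections
            if score >= threshold and lesion_id is not None}
    return len(kept) / n_true_lesions

# Toy example: 3 true lesions ("A", "B", "C"), one false positive (None)
dets = [(5, "A"), (3, "B"), (2, None), (1, "C")]
high_specificity = sensitivity_at_threshold(dets, 3, threshold=5)  # 1/3
high_sensitivity = sensitivity_at_threshold(dets, 3, threshold=1)  # 3/3
```

Sweeping the threshold from 5 down to 1 traces out the sensitivity vs. false-positive curve on which FocalNet and the radiologists were compared.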
Affiliation(s)
- Ruiming Cao
- Department of Bioengineering, UC Berkeley, Berkeley, California, USA
- Xinran Zhong
- Department of Radiation Oncology, UT Southwestern, Dallas, Texas, USA
- Sohrab Afshari
- Department of Radiology, UCLA, Los Angeles, California, USA
- Ely Felker
- Department of Radiology, UCLA, Los Angeles, California, USA
- Voraparee Suvannarerg
- Department of Radiology, UCLA, Los Angeles, California, USA
- Department of Radiology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Teeravut Tubtawee
- Department of Radiology, UCLA, Los Angeles, California, USA
- Department of Radiology, Faculty of Medicine, Prince of Songkla University, Songkhla, Thailand
- Sitaram Vangala
- Department of Medicine Statistics Core, UCLA, Los Angeles, California, USA
- Fabien Scalzo
- Department of Neurology, UCLA, Los Angeles, California, USA
- Steven Raman
- Department of Radiology, UCLA, Los Angeles, California, USA
- Kyunghyun Sung
- Department of Radiology, UCLA, Los Angeles, California, USA

33
Yu H, Yang LT, Zhang Q, Armstrong D, Deen MJ. Convolutional neural networks for medical image analysis: State-of-the-art, comparisons, improvement and perspectives. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.04.157]
34
Twilt JJ, van Leeuwen KG, Huisman HJ, Fütterer JJ, de Rooij M. Artificial Intelligence Based Algorithms for Prostate Cancer Classification and Detection on Magnetic Resonance Imaging: A Narrative Review. Diagnostics (Basel) 2021; 11:959. [PMID: 34073627] [PMCID: PMC8229869] [DOI: 10.3390/diagnostics11060959]
Abstract
Due to the upfront role of magnetic resonance imaging (MRI) in prostate cancer (PCa) diagnosis, a multitude of artificial intelligence (AI) applications have been suggested to aid in the diagnosis and detection of PCa. In this review, we provide an overview of the current field, including studies published between 2018 and February 2021, describing AI algorithms for (1) lesion classification and (2) lesion detection for PCa. Our evaluation of the 59 included studies showed that most research has been conducted on the task of PCa lesion classification (66%), followed by PCa lesion detection (34%). Studies showed large heterogeneity in cohort sizes, ranging from 18 to 499 patients (median = 162), combined with different approaches for performance validation. Furthermore, 85% of the studies reported on stand-alone diagnostic accuracy, whereas 15% demonstrated the impact of AI on diagnostic thinking efficacy, indicating limited proof of the clinical utility of PCa AI applications. In order to introduce AI into the clinical workflow of PCa assessment, the robustness and generalizability of AI applications need to be further validated through external validation and clinical workflow experiments.
|
35
|
Precise Identification of Prostate Cancer from DWI Using Transfer Learning. Sensors (Basel) 2021; 21:3664. [PMID: 34070290 PMCID: PMC8197382 DOI: 10.3390/s21113664] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/27/2021] [Revised: 05/17/2021] [Accepted: 05/18/2021] [Indexed: 12/23/2022]
Abstract
Background and Objective: The use of computer-aided detection (CAD) systems can help radiologists make objective decisions and reduce the dependence on invasive techniques. In this study, a CAD system that detects and identifies prostate cancer from diffusion-weighted imaging (DWI) is developed. Methods: The proposed system first uses non-negative matrix factorization (NMF) to integrate three different types of features for the accurate segmentation of prostate regions. Then, discriminatory features in the form of apparent diffusion coefficient (ADC) volumes are estimated from the segmented regions. The ADC maps that constitute these volumes are labeled by a radiologist to identify the ADC maps with malignant or benign tumors. Finally, transfer learning is used to fine-tune two different previously-trained convolutional neural network (CNN) models (AlexNet and VGGNet) for detecting and identifying prostate cancer. Results: Multiple experiments were conducted to evaluate the accuracy of different CNN models using DWI datasets acquired at nine distinct b-values that included both high and low b-values. The average accuracy of AlexNet at the nine b-values was 89.2±1.5% with average sensitivity and specificity of 87.5±2.3% and 90.9±1.9%. These results improved with the use of the deeper CNN model (VGGNet). The average accuracy of VGGNet was 91.2±1.3% with sensitivity and specificity of 91.7±1.7% and 90.1±2.8%. Conclusions: The results of the conducted experiments emphasize the feasibility and accuracy of the developed system and the improvement of this accuracy using the deeper CNN.
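The transfer-learning step described above keeps a pre-trained feature extractor fixed and retrains only the classification head on the new task. A minimal sketch of that idea, using synthetic stand-in features (this is illustrative only, not the authors' AlexNet/VGGNet pipeline; the data and dimensions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for features produced by a frozen, pre-trained backbone
# (e.g. the penultimate layer of AlexNet); here just synthetic 64-D vectors.
X = np.vstack([rng.normal(0.0, 1.0, (50, 64)),    # "benign" class
               rng.normal(0.8, 1.0, (50, 64))])   # "malignant" class
y = np.array([0] * 50 + [1] * 50)

# Fine-tune only the head: logistic regression trained by gradient descent.
w, b, lr = np.zeros(64), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    grad_w = X.T @ (p - y) / len(y)          # gradient of mean cross-entropy
    grad_b = (p - y).mean()
    w -= lr * grad_w
    b -= lr * grad_b

acc = ((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

In the paper's setting the frozen backbone would supply real ADC-derived features, and the head would be the replaced final layers of the CNN.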
|
36
|
Shao L, Liu Z, Yan Y, Liu J, Ye X, Xia H, Zhu X, Zhang Y, Zhang Z, Chen H, He W, Liu C, Lu M, Huang Y, Sun K, Zhou X, Yang G, Lu J, Tian J. Patient-level Prediction of Multi-classification Task at Prostate MRI based on End-to-End Framework learning from Diagnostic Logic of Radiologists. IEEE Trans Biomed Eng 2021; 68:3690-3700. [PMID: 34014820 DOI: 10.1109/tbme.2021.3082176] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Grade groups (GGs) derived from Gleason scores (Gs) are the most critical indicator in the clinical diagnosis and treatment of prostate cancer. End-to-end methods for stratifying the patient-level pathological appearance of prostate cancer (PCa) on magnetic resonance imaging (MRI) are in high demand for clinical decision-making. Existing methods typically employ a statistical method for integrating slice-level results into a patient-level result, which ignores the asymmetric use of ground truth (GT) and overall optimization. Therefore, more domain knowledge (e.g., the diagnostic logic of radiologists) needs to be incorporated into the design of the framework. The patient-level GT needs to be logically assigned to each slice of an MRI scan to achieve joint optimization between slice-level analysis and patient-level decision-making. In this paper, we propose a framework (PCa-GGNet-v2) that learns from radiologists to capture signs in a separate two-dimensional (2-D) space of MRI and further associates them for the overall decision, where all steps are optimized jointly in an end-to-end trainable way. In the training phase, patient-level prediction is transferred from weak supervision to supervision with GT. An association route records the attentional slice for reweighting the loss of MRI slices and for interpretability. We evaluate our method on an in-house multi-center dataset (N=570) and PROSTATEx (N=204), which yield a five-class classification accuracy over 80% and a patient-level AUC of 0.804, respectively. Our method achieves state-of-the-art performance for the patient-level multi-classification task toward personalized medicine.
|
37
|
A Review of Explainable Deep Learning Cancer Detection Models in Medical Imaging. Appl Sci (Basel) 2021. [DOI: 10.3390/app11104573] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Deep learning has demonstrated remarkable accuracy in analyzing images for cancer detection tasks in recent years. The accuracy achieved rivals that of radiologists and is suitable for implementation as a clinical tool. However, a significant problem is that these models are black-box algorithms and therefore intrinsically unexplainable. This creates a barrier to clinical implementation due to the lack of trust and transparency characteristic of black-box algorithms. Additionally, recent regulations prevent the deployment of unexplainable models in clinical settings, which further demonstrates the need for explainability. To mitigate these concerns, recent studies have attempted to overcome these issues by modifying deep learning architectures or providing after-the-fact explanations. A review of the deep learning explanation literature focused on cancer detection using MR images is presented here. The gap between what clinicians deem explainable and what current methods provide is discussed, and suggestions for closing this gap are provided.
|
38
|
Mehrtash A, Kapur T, Tempany CM, Abolmaesumi P, Wells WM. Prostate cancer diagnosis with sparse biopsy data and in presence of location uncertainty. Proc IEEE Int Symp Biomed Imaging 2021; 2021:443-447. [PMID: 36225596 PMCID: PMC9552971 DOI: 10.1109/isbi48211.2021.9433892] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Prostate cancer is the second most prevalent cancer in men worldwide. Deep neural networks have been successfully applied for prostate cancer diagnosis in magnetic resonance images (MRI). Pathology results from biopsy procedures are often used as ground truth to train such systems. There are several sources of noise in creating ground truth from biopsy data including sampling and registration errors. We propose: 1) A fully convolutional neural network (FCN) to produce cancer probability maps across the whole prostate gland in MRI; 2) A Gaussian weighted loss function to train the FCN with sparse biopsy locations; 3) A probabilistic framework to model biopsy location uncertainty and adjust cancer probability given the deep model predictions. We assess the proposed method on 325 biopsy locations from 203 patients. We observe that the proposed loss improves the area under the receiver operating characteristic curve and the biopsy location adjustment improves the sensitivity of the models.
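The Gaussian weighted loss described above can be pictured as per-pixel cross-entropy down-weighted by distance from the biopsy core, so a sparse label only supervises its neighborhood. A minimal sketch reconstructed from the abstract (not the authors' implementation; the function name and the choice of σ are assumptions):

```python
import numpy as np

def gaussian_weighted_bce(pred, target, center, sigma=2.0, eps=1e-7):
    """Binary cross-entropy weighted by a Gaussian confidence map centered
    at the biopsy location. pred, target: (H, W); center: (row, col)."""
    h, w = pred.shape
    rows, cols = np.mgrid[0:h, 0:w]
    dist2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    weights = np.exp(-dist2 / (2.0 * sigma ** 2))   # 1 at the core, ~0 far away
    p = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(p) + (1 - target) * np.log(1 - p))
    return (weights * bce).sum() / weights.sum()

# Toy example: a prediction that is correct at the biopsy site scores lower
target = np.zeros((8, 8)); target[4, 4] = 1.0
good = np.full((8, 8), 0.1); good[4, 4] = 0.9
bad = np.full((8, 8), 0.1)
print(gaussian_weighted_bce(good, target, (4, 4)) <
      gaussian_weighted_bce(bad, target, (4, 4)))  # → True
```

The weighting reflects the registration and sampling uncertainty of biopsy-derived labels: pixels far from the sampled core contribute little to the loss.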
Affiliation(s)
- Alireza Mehrtash: ECE Department, University of British Columbia, Vancouver, BC; Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Tina Kapur: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Clare M Tempany: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- William M Wells: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
|
39
|
Rossi A, Hosseinzadeh M, Bianchini M, Scarselli F, Huisman H. Multi-Modal Siamese Network for Diagnostically Similar Lesion Retrieval in Prostate MRI. IEEE Trans Med Imaging 2021; 40:986-995. [PMID: 33296302 DOI: 10.1109/tmi.2020.3043641] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Multi-parametric prostate MRI (mpMRI) is a powerful tool to diagnose prostate cancer, though difficult to interpret even for experienced radiologists. A common radiological procedure is to compare a magnetic resonance image with similarly diagnosed cases. To assist the radiological image interpretation process, computerized Content-Based Image Retrieval systems (CBIRs) can therefore be employed to improve the reporting workflow and increase its accuracy. In this article, we propose a new, supervised siamese deep learning architecture able to handle multi-modal and multi-view MR images with similar PIRADS score. An experimental comparison with well-established deep learning-based CBIRs (namely standard siamese networks and autoencoders) showed significantly improved performance with respect to both diagnostic (ROC-AUC), and information retrieval metrics (Precision-Recall, Discounted Cumulative Gain and Mean Average Precision). Finally, the new proposed multi-view siamese network is general in design, facilitating a broad use in diagnostic medical imaging retrieval.
|
40
|
Chen J, Wan Z, Zhang J, Li W, Chen Y, Li Y, Duan Y. Medical image segmentation and reconstruction of prostate tumor based on 3D AlexNet. Comput Methods Programs Biomed 2021; 200:105878. [PMID: 33308904 DOI: 10.1016/j.cmpb.2020.105878] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/24/2020] [Accepted: 11/22/2020] [Indexed: 06/12/2023]
Abstract
BACKGROUND Prostate cancer is a disease with a high tumor incidence in men. Because of its long latency and insidious course, early diagnosis is difficult, and imaging-based diagnosis is especially challenging. In clinical practice, manual segmentation by medical experts is mainly used, which is time-consuming, labor-intensive, and relies heavily on the experience and ability of the expert. Rapid, accurate, and repeatable segmentation of the prostate area therefore remains a challenging problem, motivating the exploration of automated segmentation of prostate images based on a 3D AlexNet network. METHOD Taking magnetic resonance images of prostate cancer as the starting point, three-dimensional data are introduced into a deep convolutional neural network. This paper proposes a 3D AlexNet method for the automatic segmentation of prostate cancer magnetic resonance images and compares its performance against the general-purpose networks ResNet-50 and Inception-V4. RESULTS Based on training samples from magnetic resonance images of 500 prostate cancer patients, a 3D AlexNet with a simple structure and excellent performance was established through adaptive improvement of the classic AlexNet. The accuracy was 0.921, the specificity 0.896, the sensitivity 0.902, and the area under the receiver operating characteristic curve (AUC) 0.964. The mean absolute distance (MAD) between the segmentation result and the medical experts' gold standard was 0.356 mm, the Hausdorff distance (HD) was 1.024 mm, and the Dice similarity coefficient was 0.9768. CONCLUSION The improved 3D AlexNet can automatically complete the structured segmentation of prostate magnetic resonance images. Compared with traditional and deep segmentation methods, the 3D AlexNet network is superior in training time, parameter count, and network performance evaluation, demonstrating the effectiveness of this method.
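The boundary metrics this abstract reports (MAD and Hausdorff distance) are computed from two sets of contour points. A minimal numpy sketch of the standard definitions (illustrative, with toy boundaries, not taken from this paper):

```python
import numpy as np

def directed_hausdorff(a, b):
    """Max over points in a of the distance to the nearest point in b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return d.min(axis=1).max()

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two boundary point sets."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

def mean_absolute_distance(a, b):
    """Symmetrized average nearest-neighbour distance (one common MAD definition)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy boundaries: a unit square's corners vs. the same corners shifted by 0.1
a = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
b = a + np.array([0.1, 0.0])
print(hausdorff(a, b), mean_absolute_distance(a, b))
```

HD is sensitive to the single worst boundary point, while MAD averages over the whole contour, which is why papers usually report both.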
Affiliation(s)
- Jun Chen: Department of Urology, The Second Affiliated Hospital of Zhejiang Chinese Medical University, No. 318 Chaowang Road, Gongshu District, Hangzhou 310005, China
- Zhechao Wan: Department of Urology, Zhuji Central Hospital, No. 98 Zhugong Road, Jiyang Street, Zhuji City 311800, Zhejiang Province, China
- Jiacheng Zhang: The 2nd Clinical Medical College, Zhejiang Chinese Medical University, 548 Bin Wen Road, Hangzhou 310053, China
- Wenhua Li: Department of Radiology, Xinhua Hospital affiliated to Shanghai Jiao Tong University School of Medicine, 1665 Kong Jiang Road, Shanghai 200092, China
- Yanbing Chen: Computer Application Technology, School of Applied Sciences, Macao Polytechnic Institute, Macao SAR 999078, China
- Yuebing Li: Department of Anaesthesiology, The Second Affiliated Hospital of Zhejiang Chinese Medical University, No. 318 Chaowang Road, Gongshu District, Hangzhou 310005, China
- Yue Duan: Department of Urology, The Second Affiliated Hospital of Zhejiang Chinese Medical University, No. 318 Chaowang Road, Gongshu District, Hangzhou 310005, China
|
41
|
Shao W, Banh L, Kunder CA, Fan RE, Soerensen SJC, Wang JB, Teslovich NC, Madhuripan N, Jawahar A, Ghanouni P, Brooks JD, Sonn GA, Rusu M. ProsRegNet: A deep learning framework for registration of MRI and histopathology images of the prostate. Med Image Anal 2021; 68:101919. [PMID: 33385701 PMCID: PMC7856244 DOI: 10.1016/j.media.2020.101919] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2020] [Revised: 11/18/2020] [Accepted: 11/23/2020] [Indexed: 12/21/2022]
Abstract
Magnetic resonance imaging (MRI) is an increasingly important tool for the diagnosis and treatment of prostate cancer. However, interpretation of MRI suffers from high inter-observer variability across radiologists, thereby contributing to missed clinically significant cancers, overdiagnosed low-risk cancers, and frequent false positives. Interpretation of MRI could be greatly improved by providing radiologists with an answer key that clearly shows cancer locations on MRI. Registration of histopathology images from patients who had radical prostatectomy to pre-operative MRI allows such mapping of ground truth cancer labels onto MRI. However, traditional MRI-histopathology registration approaches are computationally expensive and require careful choices of the cost function and registration hyperparameters. This paper presents ProsRegNet, a deep learning-based pipeline to accelerate and simplify MRI-histopathology image registration in prostate cancer. Our pipeline consists of image preprocessing, estimation of affine and deformable transformations by deep neural networks, and mapping cancer labels from histopathology images onto MRI using estimated transformations. We trained our neural network using MR and histopathology images of 99 patients from our internal cohort (Cohort 1) and evaluated its performance using 53 patients from three different cohorts (an additional 12 from Cohort 1 and 41 from two public cohorts). Results show that our deep learning pipeline has achieved more accurate registration results and is at least 20 times faster than a state-of-the-art registration algorithm. This important advance will provide radiologists with highly accurate prostate MRI answer keys, thereby facilitating improvements in the detection of prostate cancer on MRI. Our code is freely available at https://github.com/pimed//ProsRegNet.
Affiliation(s)
- Wei Shao: Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Linda Banh: Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
- Richard E Fan: Department of Urology, Stanford University, Stanford, CA 94305, USA
- Jeffrey B Wang: School of Medicine, Stanford University, Stanford, CA 94305, USA
- Nikhil Madhuripan: Department of Radiology, University of Colorado, Aurora, CO 80045, USA
- Pejman Ghanouni: Department of Radiology, Stanford University, Stanford, CA 94305, USA
- James D Brooks: Department of Urology, Stanford University, Stanford, CA 94305, USA
- Geoffrey A Sonn: Department of Radiology and Department of Urology, Stanford University, Stanford, CA 94305, USA
- Mirabela Rusu: Department of Radiology, Stanford University, Stanford, CA 94305, USA
|
42
|
Zhang J, Shi Y, Sun J, Wang L, Zhou L, Gao Y, Shen D. Interactive medical image segmentation via a point-based interaction. Artif Intell Med 2020; 111:101998. [PMID: 33461691 DOI: 10.1016/j.artmed.2020.101998] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Revised: 10/05/2020] [Accepted: 11/23/2020] [Indexed: 11/20/2022]
Abstract
Due to low tissue contrast, irregular shape, and large location variance, segmenting objects from different medical imaging modalities (e.g., CT, MR) is considered an important yet challenging task. In this paper, a novel method is presented for interactive medical image segmentation with the following merits. (1) Its design is fundamentally different from previous pure patch-based and image-based segmentation methods. It is observed that during delineation, the physician repeatedly checks intensities from inside the object to outside the object to determine the boundary, which indicates that comparison in an inside-out manner is extremely important. Thus, the method innovatively models the segmentation task as learning the representation of bi-directional sequential patches, starting from (or ending in) the given central point of the object. This is realized by the proposed ConvRNN network embedded with a gated memory propagation unit. (2) Unlike previous interactive methods (requiring a bounding box or seed points), the proposed method only asks the physician to click on the rough central point of the object before segmentation, which simultaneously enhances performance and reduces segmentation time. (3) The method is utilized in a multi-level framework for better performance. It has been systematically evaluated on three different segmentation tasks, including CT kidney tumor, MR prostate, and the PROMISE12 challenge, showing promising results compared with state-of-the-art methods.
Affiliation(s)
- Jian Zhang: State Key Laboratory for Novel Software Technology, Nanjing University, China
- Yinghuan Shi: State Key Laboratory for Novel Software Technology, Nanjing University, China; National Institute of Healthcare Data Science, Nanjing University, China
- Jinquan Sun: State Key Laboratory for Novel Software Technology, Nanjing University, China
- Lei Wang: School of Computing and Information Technology, University of Wollongong, Australia
- Luping Zhou: School of Electrical and Information Engineering, University of Sydney, Australia
- Yang Gao: State Key Laboratory for Novel Software Technology, Nanjing University, China; National Institute of Healthcare Data Science, Nanjing University, China
- Dinggang Shen: School of Biomedical Engineering, ShanghaiTech University, China; Shanghai United Imaging Intelligence Co., Ltd., China; Department of Artificial Intelligence, Korea University, Republic of Korea
|
43
|
Schelb P, Tavakoli AA, Tubtawee T, Hielscher T, Radtke JP, Görtz M, Schütz V, Kuder TA, Schimmöller L, Stenzinger A, Hohenfellner M, Schlemmer HP, Bonekamp D. Comparison of Prostate MRI Lesion Segmentation Agreement Between Multiple Radiologists and a Fully Automatic Deep Learning System. Rofo 2020; 193:559-573. [PMID: 33212541 DOI: 10.1055/a-1290-8070] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
PURPOSE A recently developed deep learning model (U-Net) approximated the clinical performance of radiologists in the prediction of clinically significant prostate cancer (sPC) from prostate MRI. Here, we compare the agreement between lesion segmentations by U-Net with manual lesion segmentations performed by different radiologists. MATERIALS AND METHODS 165 patients with suspicion for sPC underwent targeted and systematic fusion biopsy following 3 Tesla multiparametric MRI (mpMRI). Five sets of segmentations were generated retrospectively: segmentations of clinical lesions, independent segmentations by three radiologists, and fully automated bi-parametric U-Net segmentations. Per-lesion agreement was calculated for each rater by averaging Dice coefficients with all overlapping lesions from other raters. Agreement was compared using descriptive statistics and linear mixed models. RESULTS The mean Dice coefficient for manual segmentations showed only moderate agreement at 0.48-0.52, reflecting the difficult visual task of determining the outline of otherwise jointly detected lesions. U-net segmentations were significantly smaller than manual segmentations (p < 0.0001) and exhibited a lower mean Dice coefficient of 0.22, which was significantly lower compared to manual segmentations (all p < 0.0001). These differences remained after correction for lesion size and were unaffected between sPC and non-sPC lesions and between peripheral and transition zone lesions. CONCLUSION Knowledge of the order of agreement of manual segmentations of different radiologists is important to set the expectation value for artificial intelligence (AI) systems in the task of prostate MRI lesion segmentation. Perfect agreement (Dice coefficient of one) should not be expected for AI. 
Lower Dice coefficients of U-Net compared to manual segmentations are only partially explained by smaller segmentation sizes and may result from a focus on the lesion core and a small relative lesion center shift. Although it is primarily important that AI detects sPC correctly, the Dice coefficient for overlapping lesions from multiple raters can be used as a secondary measure for segmentation quality in future studies. KEY POINTS · Intermediate human Dice coefficients reflect the difficulty of outlining jointly detected lesions. · Lower Dice coefficients of deep learning motivate further research to approximate human perception. · Comparable predictive performance of deep learning appears independent of Dice agreement. · Dice agreement independent of significant cancer presence indicates indistinguishability of some benign imaging findings. · Improving DWI to T2 registration may improve the observed U-Net Dice coefficients. CITATION FORMAT · Schelb P, Tavakoli AA, Tubtawee T et al. Comparison of Prostate MRI Lesion Segmentation Agreement Between Multiple Radiologists and a Fully Automatic Deep Learning System. Fortschr Röntgenstr 2021; 193: 559-573.
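The per-lesion agreement measure in this study is the Dice similarity coefficient; it is simple to compute from two binary rater masks. A minimal sketch with toy data (not study data):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 4x4 lesion masks from two raters
r1 = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
r2 = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(dice_coefficient(r1, r2))  # → 6/7 ≈ 0.857
```

Because the denominator contains both mask sizes, systematically smaller segmentations (as observed for U-Net here) depress the Dice coefficient even when the lesion core is correctly localized.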
Affiliation(s)
- Patrick Schelb: Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Teeravut Tubtawee: Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Thomas Hielscher: Division of Biostatistics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Jan-Philipp Radtke: Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Magdalena Görtz: Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Viktoria Schütz: Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Tristan Anselm Kuder: Division of Medical Physics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Lars Schimmöller: Department of Diagnostic and Interventional Radiology, Medical Faculty, University Dusseldorf, Dusseldorf, Germany
- Albrecht Stenzinger: Institute of Pathology, University of Heidelberg Medical Center, Heidelberg, Germany
- Markus Hohenfellner: Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- David Bonekamp: Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
|
44
|
Yu H, Zhang X. Synthesis of Prostate MR Images for Classification Using Capsule Network-Based GAN Model. Sensors (Basel) 2020; 20:E5736. [PMID: 33050243 PMCID: PMC7601698 DOI: 10.3390/s20205736] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Revised: 09/30/2020] [Accepted: 10/07/2020] [Indexed: 01/22/2023]
Abstract
Prostate cancer remains a major health concern among elderly men. Deep learning is a state-of-the-art technique for MR image-based prostate cancer diagnosis, but one of the major bottlenecks is the severe lack of annotated MR images. Traditional and Generative Adversarial Network (GAN)-based data augmentation methods cannot ensure the quality and diversity of generated training samples. In this paper, we propose a novel GAN model for the synthesis of MR images, exploiting its powerful ability to model complex data distributions. The proposed model is designed based on the architecture of the deep convolutional GAN. To learn a more equivariant representation of images that is robust to changes in the pose and spatial relationship of objects, a capsule network replaces the CNN used in the discriminator of a regular GAN. Meanwhile, the least-squares loss is adopted for both the generator and the discriminator to address the vanishing gradient problem of the sigmoid cross-entropy loss in a regular GAN. Extensive experiments are conducted on simulated and real MR images. The results demonstrate that the proposed capsule network-based GAN can generate more realistic and higher-quality MR images than the compared GANs. Quantitative comparisons show that, among all evaluated models, the proposed GAN generally achieves the smallest Kullback-Leibler divergence for the image generation task and provides the best classification performance when introduced into a deep learning method for the image classification task.
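The least-squares (LSGAN) objectives mentioned in this abstract replace the sigmoid cross-entropy with quadratic targets, which keeps gradients alive for confidently misclassified samples. A minimal sketch of the two losses (illustrative, not the authors' code; the toy score arrays are assumptions):

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    """Least-squares discriminator loss: push scores on real images to 1, on fakes to 0."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    """Least-squares generator loss: push discriminator scores on fakes to 1."""
    return 0.5 * np.mean((d_fake - 1.0) ** 2)

# Scores the discriminator assigns to a batch of real and generated images
d_real = np.array([0.9, 0.8, 0.95])
d_fake = np.array([0.2, 0.1, 0.3])
print(lsgan_d_loss(d_real, d_fake), lsgan_g_loss(d_fake))
```

Unlike the saturating sigmoid cross-entropy, these quadratic penalties grow with the distance of the score from its target, which is the vanishing-gradient fix the abstract refers to.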
Affiliation(s)
- Houqiang Yu: Ministry of Education Key Laboratory of Molecular Biophysics, Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, No. 1037 Luoyu Road, Wuhan 430074, China; Department of Mathematics and Statistics, Hubei University of Science and Technology, No. 88 Xianning Road, Xianning 437000, China
- Xuming Zhang: Ministry of Education Key Laboratory of Molecular Biophysics, Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, No. 1037 Luoyu Road, Wuhan 430074, China
|
45
|
Shao Y, Wang J, Wodlinger B, Salcudean SE. Improving Prostate Cancer (PCa) Classification Performance by Using Three-Player Minimax Game to Reduce Data Source Heterogeneity. IEEE Trans Med Imaging 2020; 39:3148-3158. [PMID: 32305907 DOI: 10.1109/tmi.2020.2988198] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
PCa is a disease with a wide range of tissue patterns and this adds to its classification difficulty. Moreover, the data source heterogeneity, i.e. inconsistent data collected using different machines, under different conditions, by different operators, from patients of different ethnic groups, etc., further hinders the effectiveness of training a generalized PCa classifier. In this paper, for the first time, a Generative Adversarial Network (GAN)-based three-player minimax game framework is used to tackle data source heterogeneity and to improve PCa classification performance, where a proposed modified U-Net is used as the encoder. Our dataset consists of novel high-frequency ExactVu ultrasound (US) data collected from 693 patients at five data centers. Gleason Scores (GSs) are assigned to the 12 prostatic regions of each patient. Two classification tasks: benign vs. malignant and low- vs. high-grade, are conducted and the classification results of different prostatic regions are compared. For benign vs. malignant classification, the three-player minimax game framework achieves an Area Under the Receiver Operating Characteristic (AUC) of 93.4%, a sensitivity of 95.1% and a specificity of 87.7%, respectively, representing significant improvements of 5.0%, 3.9%, and 6.0% compared to those of using heterogeneous data, which confirms its effectiveness in terms of PCa classification.
|
46
|
Liu Q, Dou Q, Yu L, Heng PA. MS-Net: Multi-Site Network for Improving Prostate Segmentation With Heterogeneous MRI Data. IEEE Trans Med Imaging 2020; 39:2713-2724. [PMID: 32078543 DOI: 10.1109/tmi.2020.2974574] [Citation(s) in RCA: 84] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/17/2023]
Abstract
Automated prostate segmentation in MRI is highly demanded for computer-assisted diagnosis. Recently, a variety of deep learning methods have achieved remarkable progress in this task, usually relying on large amounts of training data. Due to the nature of scarcity for medical images, it is important to effectively aggregate data from multiple sites for robust model training, to alleviate the insufficiency of single-site samples. However, the prostate MRIs from different sites present heterogeneity due to the differences in scanners and imaging protocols, raising challenges for effective ways of aggregating multi-site data for network training. In this paper, we propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations, leveraging multiple sources of data. To compensate for the inter-site heterogeneity of different MRI datasets, we develop Domain-Specific Batch Normalization layers in the network backbone, enabling the network to estimate statistics and perform feature normalization for each site separately. Considering the difficulty of capturing the shared knowledge from multiple datasets, a novel learning paradigm, i.e., Multi-site-guided Knowledge Transfer, is proposed to enhance the kernels to extract more generic representations from multi-site data. Extensive experiments on three heterogeneous prostate MRI datasets demonstrate that our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
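The Domain-Specific Batch Normalization idea described above (shared convolution kernels, but separate normalization statistics and affine parameters per acquisition site) can be sketched in a few lines of numpy. This is a simplified illustration, not the MS-Net implementation (which, among other things, also tracks running statistics during training):

```python
import numpy as np

class DomainSpecificNorm:
    """Per-site feature normalization: backbone weights would be shared across
    sites, while each site keeps its own scale/shift parameters and uses its
    own batch statistics (a simplified, batch-statistics-only sketch)."""

    def __init__(self, num_features, num_sites, eps=1e-5):
        self.gamma = np.ones((num_sites, num_features))   # per-site scale
        self.beta = np.zeros((num_sites, num_features))   # per-site shift
        self.eps = eps

    def __call__(self, x, site):
        # x: (batch, num_features) activations from one acquisition site
        mu = x.mean(axis=0)
        var = x.var(axis=0)
        x_hat = (x - mu) / np.sqrt(var + self.eps)
        return self.gamma[site] * x_hat + self.beta[site]

rng = np.random.default_rng(0)
dsn = DomainSpecificNorm(num_features=4, num_sites=3)
# A batch from site 1 with site-specific intensity statistics
out = dsn(rng.normal(5.0, 2.0, (32, 4)), site=1)
print(out.mean(), out.std())   # approximately 0 and 1 after normalization
```

Routing each site through its own normalization branch prevents the statistics of one scanner or protocol from contaminating the others, which is the inter-site heterogeneity problem the paper targets.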
|
47
|
Hiremath A, Shiradkar R, Merisaari H, Prasanna P, Ettala O, Taimen P, Aronen HJ, Boström PJ, Jambor I, Madabhushi A. Test-retest repeatability of a deep learning architecture in detecting and segmenting clinically significant prostate cancer on apparent diffusion coefficient (ADC) maps. Eur Radiol 2020; 31:379-391. [PMID: 32700021 DOI: 10.1007/s00330-020-07065-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Revised: 05/22/2020] [Accepted: 07/02/2020] [Indexed: 12/16/2022]
Abstract
OBJECTIVES To evaluate short-term test-retest repeatability of a deep learning architecture (U-Net) in slice- and lesion-level detection and segmentation of clinically significant prostate cancer (csPCa: Gleason grade group > 1) using diffusion-weighted imaging fitted with monoexponential function, ADCm. METHODS One hundred twelve patients with prostate cancer (PCa) underwent 2 prostate MRI examinations on the same day. PCa areas were annotated using whole mount prostatectomy sections. Two U-Net-based convolutional neural networks were trained on three different ADCm b value settings for (a) slice- and (b) lesion-level detection and (c) segmentation of csPCa. Short-term test-retest repeatability was estimated using intra-class correlation coefficient (ICC(3,1)), proportionate agreement, and dice similarity coefficient (DSC). A 3-fold cross-validation was performed on training set (N = 78 patients) and evaluated for performance and repeatability on testing data (N = 34 patients). RESULTS For the three ADCm b value settings, repeatability of mean ADCm of csPCa lesions was ICC(3,1) = 0.86-0.98. Two CNNs with U-Net-based architecture demonstrated ICC(3,1) in the range of 0.80-0.83, agreement of 66-72%, and DSC of 0.68-0.72 for slice- and lesion-level detection and segmentation of csPCa. Bland-Altman plots suggest that there is no systematic bias in agreement between inter-scan ground truth segmentation repeatability and segmentation repeatability of the networks. CONCLUSIONS For the three ADCm b value settings, two CNNs with U-Net-based architecture were repeatable for the problem of detection of csPCa at the slice-level. The network repeatability in segmenting csPCa lesions is affected by inter-scan variability and ground truth segmentation repeatability and may thus improve with better inter-scan reproducibility. 
KEY POINTS • For the three ADCm b value settings, two CNNs with U-Net-based architecture were repeatable for the problem of detection of csPCa at the slice-level. • The network repeatability in segmenting csPCa lesions is affected by inter-scan variability and ground truth segmentation repeatability and may thus improve with better inter-scan reproducibility.
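The repeatability metrics named in this abstract, the Dice similarity coefficient (DSC) and ICC(3,1), are standard quantities. A minimal sketch of both, not taken from the paper's code and using synthetic inputs purely for illustration (ICC(3,1) here is the two-way mixed-effects, single-measure, consistency form from a two-way ANOVA without replication):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient (DSC) between two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    overlap = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * overlap / total if total else 1.0

def icc_3_1(test, retest):
    """ICC(3,1): two-way mixed effects, single measure, consistency,
    for paired test-retest measurements (one value per subject per scan)."""
    data = np.column_stack([test, retest]).astype(float)
    n, k = data.shape                                         # subjects x scans
    grand = data.mean()
    ss_total = ((data - grand) ** 2).sum()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()    # between scans
    ss_err = ss_total - ss_rows - ss_cols                     # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

For example, two masks overlapping in 2 of their 3 and 2 foreground voxels give DSC = 2·2/(3+2) = 0.8, and identical test-retest vectors give ICC(3,1) = 1.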
Affiliation(s)
- Amogh Hiremath: Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Rakesh Shiradkar: Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Harri Merisaari: Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA; Department of Diagnostic Radiology, University of Turku, Turku, Finland
- Prateek Prasanna: Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA; Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- Otto Ettala: Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
- Pekka Taimen: Institute of Biomedicine, Department of Pathology, University of Turku and Turku University Hospital, Turku, Finland
- Hannu J Aronen: Medical Imaging Centre of Southwest Finland, Turku University Hospital, Turku, Finland
- Peter J Boström: Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
- Ivan Jambor: Department of Diagnostic Radiology, University of Turku, Turku, Finland; Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Anant Madabhushi: Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA; Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH, USA
48
Wildeboer RR, van Sloun RJG, Wijkstra H, Mischi M. Artificial intelligence in multiparametric prostate cancer imaging with focus on deep-learning methods. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 189:105316. [PMID: 31951873 DOI: 10.1016/j.cmpb.2020.105316] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/03/2019] [Revised: 12/09/2019] [Accepted: 01/04/2020] [Indexed: 05/16/2023]
Abstract
Prostate cancer is today the most typical example of a pathology whose diagnosis requires multiparametric imaging, a strategy in which multiple imaging techniques are combined to reach an acceptable diagnostic performance. However, reviewing, weighing, and coupling multiple images not only places an additional burden on the radiologist but also complicates the diagnostic process. Prostate cancer imaging has therefore been an important target for the development of computer-aided diagnosis (CAD) tools. In this survey, we discuss the advances in CAD for prostate cancer over the last decades, with special attention to the deep-learning techniques designed in the last few years. Moreover, we elaborate on and compare the methods employed to deliver the CAD output to the operator for further medical decision-making.
Affiliation(s)
- Rogier R Wildeboer: Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands
- Ruud J G van Sloun: Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands
- Hessel Wijkstra: Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands; Department of Urology, Amsterdam University Medical Centers, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, the Netherlands
- Massimo Mischi: Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands
49
Bardis MD, Houshyar R, Chang PD, Ushinsky A, Glavis-Bloom J, Chahine C, Bui TL, Rupasinghe M, Filippi CG, Chow DS. Applications of Artificial Intelligence to Prostate Multiparametric MRI (mpMRI): Current and Emerging Trends. Cancers (Basel) 2020; 12:E1204. [PMID: 32403240 PMCID: PMC7281682 DOI: 10.3390/cancers12051204] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Revised: 05/02/2020] [Accepted: 05/08/2020] [Indexed: 01/13/2023] Open
Abstract
Prostate carcinoma is one of the most prevalent cancers worldwide. Multiparametric magnetic resonance imaging (mpMRI) is a non-invasive tool that can improve prostate lesion detection, classification, and volume quantification. Machine learning (ML), a branch of artificial intelligence, can rapidly and accurately analyze mpMRI images. ML could provide better standardization and consistency in identifying prostate lesions and enhance prostate carcinoma management. This review summarizes ML applications to prostate mpMRI and focuses on prostate organ segmentation, lesion detection and segmentation, and lesion characterization. A literature search was conducted to find studies that have applied ML methods to prostate mpMRI. To date, prostate organ segmentation and volume approximation have been well executed using various ML techniques. Prostate lesion detection and segmentation are much more challenging tasks for ML and were attempted in several studies. They largely remain unsolved problems due to data scarcity and the limitations of current ML algorithms. By contrast, prostate lesion characterization has been successfully completed in several studies because of better data availability. Overall, ML is well situated to become a tool that enhances radiologists' accuracy and speed.
Affiliation(s)
- Michelle D. Bardis: Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Roozbeh Houshyar: Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Peter D. Chang: Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Alexander Ushinsky: Mallinckrodt Institute of Radiology, Washington University Saint Louis, St. Louis, MO 63110, USA
- Justin Glavis-Bloom: Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Chantal Chahine: Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Thanh-Lan Bui: Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Mark Rupasinghe: Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Daniel S. Chow: Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
50
Her EJ, Haworth A, Rowshanfarzad P, Ebert MA. Progress towards Patient-Specific, Spatially-Continuous Radiobiological Dose Prescription and Planning in Prostate Cancer IMRT: An Overview. Cancers (Basel) 2020; 12:E854. [PMID: 32244821 PMCID: PMC7226478 DOI: 10.3390/cancers12040854] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2020] [Revised: 03/12/2020] [Accepted: 03/27/2020] [Indexed: 01/30/2023] Open
Abstract
Advances in imaging have enabled the identification of prostate cancer foci, with an initial application to focal dose escalation using subvolumes created with image-intensity thresholds. Through quantitative imaging techniques, correlations between image parameters and tumour characteristics have been identified. Mathematical functions are typically used to relate image parameters to prescription dose to improve the clinical relevance of the resulting dose distribution. However, these relationships remain speculative or unvalidated. In contrast, the use of radiobiological models during treatment planning optimisation, termed biological optimisation, has the advantage of directly considering the biological effect of the resulting dose distribution. This has led to an increased interest in the accurate derivation of radiobiological parameters from quantitative imaging to inform the models. This article reviews the progress in treatment planning using image-informed tumour biology, from focal dose escalation to the current trend of individualised biological treatment planning using image-derived radiobiological parameters, with the focus on prostate intensity-modulated radiotherapy (IMRT).
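The biological optimisation this review describes relies on radiobiological models; a common choice is tumour control probability (TCP) under a Poisson response with linear-quadratic cell survival. A minimal sketch, not taken from the review, with all parameter values (alpha, beta, clonogen number) chosen purely for illustration:

```python
import math

def tcp_poisson(n_fractions, dose_per_fraction, alpha, beta, n_clonogens):
    """Poisson TCP with linear-quadratic survival:
    per-fraction surviving fraction exp(-(alpha*d + beta*d^2)),
    TCP = exp(-N0 * SF_total) for N0 initial clonogens."""
    d = dose_per_fraction
    sf_total = math.exp(-n_fractions * (alpha * d + beta * d ** 2))
    return math.exp(-n_clonogens * sf_total)
```

As expected from the model, TCP lies strictly between 0 and 1 and rises with the number of delivered fractions at a fixed dose per fraction; image-derived parameter maps would replace the scalar alpha, beta, and clonogen density in a voxel-wise version.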
Affiliation(s)
- Emily Jungmin Her: Department of Physics, University of Western Australia, Crawley, WA 6009, Australia
- Annette Haworth: Institute of Medical Physics, University of Sydney, Camperdown, NSW 2050, Australia
- Pejman Rowshanfarzad: Department of Physics, University of Western Australia, Crawley, WA 6009, Australia
- Martin A. Ebert: Department of Physics, University of Western Australia, Crawley, WA 6009, Australia; Department of Radiation Oncology, Sir Charles Gairdner Hospital, Nedlands, WA 6009, Australia; 5D Clinics, Claremont, WA 6010, Australia