1. Lu Y, Yuan R, Su Y, Liang Z, Huang H, Leng Q, Yang A, Xiao X, Lai Z, Zhang Y. Biparametric MRI-based radiomics for noninvasive discrimination of benign prostatic hyperplasia nodules (BPH) and prostate cancer nodules: a bi-centric retrospective cohort study. Sci Rep 2025;15:654. [PMID: 39753878; PMCID: PMC11698716; DOI: 10.1038/s41598-024-84908-w]
Abstract
To investigate the potential of an MRI-based radiomic model in distinguishing malignant prostate cancer (PCa) nodules from benign prostatic hyperplasia (BPH) nodules, as well as determining the incremental value of radiomic features over clinical variables such as prostate-specific antigen (PSA) level and Prostate Imaging Reporting and Data System (PI-RADS) score. A retrospective analysis was performed on a total of 251 patients (training cohort, n = 119; internal validation cohort, n = 52; and external validation cohort, n = 80) with prostatic nodules who underwent biparametric MRI at two hospitals between January 2018 and December 2020. A total of 1130 radiomic features were extracted from each MRI sequence, including shape-based features, gray-level histogram-based features, texture features, and wavelet features. The clinical model was constructed using logistic regression analysis. Radiomic models were created by comparing seven machine learning classifiers. The useful clinical variables and the radiomic signature were integrated to develop the combined model. Model performance was assessed by receiver operating characteristic curve, calibration curve, decision curve analysis (DCA), and clinical impact curve (CIC). The ratio of free PSA to total PSA, PSA density, peripheral zone volume, and PI-RADS score were independent determinants of malignancy. The clinical model based on these factors achieved an AUC of 0.814 (95% CI: 0.763-0.865) and 0.791 (95% CI: 0.742-0.840) in the internal and external validation cohorts, respectively. The clinical-radiomic nomogram yielded the highest accuracy, with an AUC of 0.925 (95% CI: 0.894-0.956) and 0.872 (95% CI: 0.837-0.907) in the internal and external validation cohorts, respectively. DCA and CIC further confirmed the clinical usefulness of the nomogram. Biparametric MRI-based radiomics thus has the potential to noninvasively discriminate between BPH and malignant PCa nodules, outperforming screening strategies based on PSA and PI-RADS alone.
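Two of the independent clinical determinants named above are simple derived quantities. As an illustrative sketch (not code from the study; the patient values below are hypothetical), PSA density and the free-to-total PSA ratio can be computed as:

```python
def psa_density(total_psa_ng_ml: float, prostate_volume_cc: float) -> float:
    """PSA density: total serum PSA divided by prostate volume (ng/mL per cc)."""
    return total_psa_ng_ml / prostate_volume_cc

def free_to_total_psa_ratio(free_psa_ng_ml: float, total_psa_ng_ml: float) -> float:
    """Fraction of circulating PSA that is unbound ('free')."""
    return free_psa_ng_ml / total_psa_ng_ml

# Hypothetical patient: total PSA 8 ng/mL, free PSA 1 ng/mL, 40 cc gland
print(psa_density(8.0, 40.0))             # 0.2
print(free_to_total_psa_ratio(1.0, 8.0))  # 0.125
```

Higher PSA density and lower free-to-total ratio both favor malignancy, which is why the combined model uses them alongside PI-RADS score.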
Affiliation(s)
- Yangbai Lu, Runqiang Yuan, Hongxing Huang, Qu Leng: Department of Urology, Zhongshan City People's Hospital, No. 2, Sunwen East Road, Shiqi District, Zhongshan, 528403, Guangdong, China
- Yun Su, Zhaoqi Lai: Department of Radiology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, No. 107, Yanjiang West Road, Guangzhou, 510120, China
- Zhiying Liang: Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, No. 651, Dongfeng East Road, Guangzhou, 510060, China
- Ang Yang, Xuehong Xiao, Yongxin Zhang: Department of MRI, Zhongshan City People's Hospital, No. 2, Sunwen East Road, Shiqi District, Zhongshan, 528403, Guangdong, China
2. Xu Y, Quan R, Xu W, Huang Y, Chen X, Liu F. Advances in Medical Image Segmentation: A Comprehensive Review of Traditional, Deep Learning and Hybrid Approaches. Bioengineering (Basel) 2024;11:1034. [PMID: 39451409; PMCID: PMC11505408; DOI: 10.3390/bioengineering11101034]
Abstract
Medical image segmentation plays a critical role in accurate diagnosis and treatment planning, enabling precise analysis across a wide range of clinical tasks. This review begins by offering a comprehensive overview of traditional segmentation techniques, including thresholding, edge-based methods, region-based approaches, clustering, and graph-based segmentation. While these methods are computationally efficient and interpretable, they often face significant challenges when applied to complex, noisy, or variable medical images. The central focus of this review is the transformative impact of deep learning on medical image segmentation. We delve into prominent deep learning architectures such as Convolutional Neural Networks (CNNs), Fully Convolutional Networks (FCNs), U-Net, Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and Autoencoders (AEs). Each architecture is analyzed in terms of its structural foundation and specific application to medical image segmentation, illustrating how these models have enhanced segmentation accuracy across various clinical contexts. Finally, the review examines the integration of deep learning with traditional segmentation methods, addressing the limitations of both approaches. These hybrid strategies offer improved segmentation performance, particularly in challenging scenarios involving weak edges, noise, or inconsistent intensities. By synthesizing recent advancements, this review provides a detailed resource for researchers and practitioners, offering valuable insights into the current landscape and future directions of medical image segmentation.
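The thresholding family that opens the review can be illustrated with Otsu's method, which scans candidate intensity cutoffs and keeps the one maximizing between-class variance. This is a generic NumPy sketch for illustration, not tied to any particular paper in the review:

```python
import numpy as np

def otsu_threshold(image: np.ndarray) -> float:
    """Return the intensity threshold that maximizes between-class variance."""
    hist, bin_edges = np.histogram(image.ravel(), bins=256)
    hist = hist.astype(float)
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    w0 = np.cumsum(hist)                 # pixels at or below each candidate cutoff
    w1 = hist.sum() - w0                 # pixels above it
    cum_mean = np.cumsum(hist * bin_centers)
    mu0 = cum_mean / np.where(w0 == 0, 1, w0)                 # background mean
    mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 == 0, 1, w1)  # foreground mean
    between_var = w0 * w1 * (mu0 - mu1) ** 2
    return float(bin_centers[np.argmax(between_var)])

# Bimodal toy "image": two clear intensity populations
img = np.concatenate([np.zeros(100), np.full(100, 10.0)])
t = otsu_threshold(img)
mask = img > t   # binary segmentation
```

As the review notes, such intensity-only rules are fast and interpretable but break down on noisy or inhomogeneous medical images, which motivates the deep learning and hybrid methods that follow.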
Affiliation(s)
- Yan Xu, Rixiang Quan, Weiting Xu, Fengyuan Liu: School of Electrical, Electronic and Mechanical Engineering, University of Bristol, Bristol BS8 1QU, UK
- Yi Huang: Bristol Medical School, University of Bristol, Bristol BS8 1UD, UK
- Xiaolong Chen: Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, Nottingham NG7 2RD, UK
3. Correia ETDO, Baydoun A, Li Q, Costa DN, Bittencourt LK. Emerging and anticipated innovations in prostate cancer MRI and their impact on patient care. Abdom Radiol (NY) 2024;49:3696-3710. [PMID: 38877356; PMCID: PMC11390809; DOI: 10.1007/s00261-024-04423-4]
Abstract
Prostate cancer (PCa) remains the leading malignancy affecting men, with over 3 million men living with the disease and an estimated 288,000 new cases and almost 35,000 deaths in 2023 in the United States alone. Over the last few decades, imaging has been a cornerstone of PCa care, with a crucial role in the detection, staging, and assessment of PCa recurrence, and in guiding diagnostic and therapeutic interventions. To improve diagnostic accuracy and outcomes in PCa care, remarkable advancements have been made to different imaging modalities in recent years. This paper focuses on reviewing the main innovations in the field of PCa magnetic resonance imaging, including MRI protocols, MRI-guided procedural interventions, artificial intelligence algorithms, and positron emission tomography, which may impact PCa care in the future.
Affiliation(s)
- Atallah Baydoun: Department of Radiation Oncology, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Qiubai Li: Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Daniel N. Costa: Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Leonardo Kayat Bittencourt: Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, OH, USA; Department of Radiology, Case Western Reserve University, 11100 Euclid Ave, Cleveland, OH, 44106, USA
4. Zeng X, Puonti O, Sayeed A, Herisse R, Mora J, Evancic K, Varadarajan D, Balbastre Y, Costantini I, Scardigli M, Ramazzotti J, DiMeo D, Mazzamuto G, Pesce L, Brady N, Cheli F, Saverio Pavone F, Hof PR, Frost R, Augustinack J, van der Kouwe A, Eugenio Iglesias J, Fischl B. Segmentation of supragranular and infragranular layers in ultra-high-resolution 7T ex vivo MRI of the human cerebral cortex. Cereb Cortex 2024;34:bhae362. [PMID: 39264753; PMCID: PMC11391621; DOI: 10.1093/cercor/bhae362]
Abstract
Accurate labeling of specific layers in the human cerebral cortex is crucial for advancing our understanding of neurodevelopmental and neurodegenerative disorders. Building on recent advancements in ultra-high-resolution ex vivo MRI, we present a novel semi-supervised segmentation model capable of identifying supragranular and infragranular layers in ex vivo MRI with unprecedented precision. On a dataset consisting of 17 whole-hemisphere ex vivo scans at 120 µm, we propose a Multi-resolution U-Nets framework that integrates global and local structural information, achieving reliable segmentation maps of the entire hemisphere, with Dice scores over 0.8 for supra- and infragranular layers. This enables surface modeling, atlas construction, anomaly detection in disease states, and cross-modality validation while also paving the way for finer layer segmentation. Our approach offers a powerful tool for comprehensive neuroanatomical investigations and holds promise for advancing our mechanistic understanding of progression of neurodegenerative diseases.
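The Dice scores reported above are the standard overlap metric for segmentation quality: 2|A∩B| / (|A| + |B|) for a predicted mask A and a reference mask B. A minimal reference sketch (the toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks of the same shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:  # both masks empty: conventionally perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 1D masks: 3 predicted voxels, 4 true voxels, 3 overlapping -> 2*3/(3+4)
pred  = np.array([1, 1, 1, 0, 0, 0])
truth = np.array([1, 1, 1, 1, 0, 0])
print(round(dice_coefficient(pred, truth), 3))  # 0.857
```

A Dice score of 1.0 means identical masks, so the paper's values over 0.8 indicate strong voxelwise agreement with the reference layer labels.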
Affiliation(s)
- Xiangrui Zeng, Areej Sayeed, Rogeny Herisse, Jocelyn Mora, Kathryn Evancic, Divya Varadarajan, Yael Balbastre, Robert Frost, Jean Augustinack, André van der Kouwe, Juan Eugenio Iglesias, Bruce Fischl: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Boston, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA
- Oula Puonti: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Boston, MA 02129, USA; Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital, Blegdamsvej 9, 2100 København, Denmark
- Irene Costantini: National Institute of Optics (CNR-INO), National Research Council, Largo Enrico Fermi 6, 50125 Sesto Fiorentino, Italy; European Laboratory for Non-Linear Spectroscopy (LENS), Via Nello Carrara 1, 50019 Sesto Fiorentino, Italy; Department of Biology, University of Florence, P.za di San Marco 4, 50121 Firenze, Italy
- Marina Scardigli, Josephine Ramazzotti, Danila DiMeo, Luca Pesce, Niamh Brady, Franco Cheli: European Laboratory for Non-Linear Spectroscopy (LENS), Via Nello Carrara 1, 50019 Sesto Fiorentino, Italy
- Giacomo Mazzamuto, Francesco Saverio Pavone: National Institute of Optics (CNR-INO), National Research Council, Largo Enrico Fermi 6, 50125 Sesto Fiorentino, Italy; European Laboratory for Non-Linear Spectroscopy (LENS), Via Nello Carrara 1, 50019 Sesto Fiorentino, Italy; Department of Physics and Astronomy, University of Florence, P.za di San Marco 4, 50121 Firenze, Italy
- Patrick R. Hof: Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Pl, New York, NY 10029, USA
5. Du Q, Wang L, Chen H. A mixed Mamba U-net for prostate segmentation in MR images. Sci Rep 2024;14:19976. [PMID: 39198553; PMCID: PMC11358272; DOI: 10.1038/s41598-024-71045-7]
Abstract
The diagnosis of early prostate cancer depends on the accurate segmentation of prostate regions in magnetic resonance imaging (MRI). However, this segmentation task is challenging due to the particularities of prostate MR images themselves and the limitations of existing methods. To address these issues, we propose MM-UNet, a U-shaped encoder-decoder network based on Mamba and CNN, for prostate segmentation in MR images. Specifically, we first propose an adaptive feature fusion module based on channel attention guidance to achieve effective fusion between adjacent hierarchical features and suppress the interference of background noise. Secondly, we propose a global context-aware module based on Mamba, which has strong long-range modeling capabilities and linear complexity, to capture global context information in images. Finally, we propose a multi-scale anisotropic convolution module based on the principle of parallel multi-scale anisotropic convolution blocks and 3D convolution decomposition. Experimental results on two public prostate MR image segmentation datasets demonstrate that the proposed method outperforms competing models and achieves state-of-the-art prostate segmentation performance. In future work, we intend to enhance the model's robustness and extend its applicability to additional medical image segmentation tasks.
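To see why 3D convolution decomposition (one ingredient of the multi-scale anisotropic module) reduces cost, compare parameter counts for a dense k×k×k kernel against one common factorization: an in-plane k×k×1 convolution followed by a 1×1×k convolution along the slice axis. The channel sizes below are illustrative, not the paper's exact configuration:

```python
def full_conv3d_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a dense k x k x k 3D convolution (bias terms omitted)."""
    return c_in * c_out * k ** 3

def decomposed_conv3d_params(c_in: int, c_out: int, k: int) -> int:
    """Weights for a k x k x 1 conv followed by a 1 x 1 x k conv."""
    return c_in * c_out * k * k + c_out * c_out * k

c_in = c_out = 32
k = 3
print(full_conv3d_params(c_in, c_out, k))        # 27648
print(decomposed_conv3d_params(c_in, c_out, k))  # 12288
```

The factorized form also suits anisotropic MR volumes, where in-plane resolution is typically much finer than slice spacing, so treating the slice axis separately is a natural fit.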
Affiliation(s)
- Qiu Du, Luowu Wang, Hao Chen: Department of Urology, Hunan Provincial People's Hospital, The First Affiliated Hospital of Hunan Normal University, Changsha, 410005, People's Republic of China
6. Fassia MK, Balasubramanian A, Woo S, Vargas HA, Hricak H, Konukoglu E, Becker AS. Deep Learning Prostate MRI Segmentation Accuracy and Robustness: A Systematic Review. Radiol Artif Intell 2024;6:e230138. [PMID: 38568094; PMCID: PMC11294957; DOI: 10.1148/ryai.230138]
Abstract
Purpose To investigate the accuracy and robustness of prostate segmentation using deep learning across various training data sizes, MRI vendors, prostate zones, and testing methods relative to fellowship-trained diagnostic radiologists. Materials and Methods In this systematic review, Embase, PubMed, Scopus, and Web of Science databases were queried for English-language articles using keywords and related terms for prostate MRI segmentation and deep learning algorithms dated to July 31, 2022. A total of 691 articles from the search query were collected and subsequently filtered to 48 on the basis of predefined inclusion and exclusion criteria. Multiple characteristics were extracted from selected studies, such as deep learning algorithm performance, MRI vendor, and training dataset features. The primary outcome was comparison of mean Dice similarity coefficient (DSC) for prostate segmentation for deep learning algorithms versus diagnostic radiologists. Results Forty-eight studies were included. Most published deep learning algorithms for whole prostate gland segmentation (39 of 42 [93%]) had a DSC at or above expert level (DSC ≥ 0.86). The mean DSC was 0.79 ± 0.06 (SD) for peripheral zone, 0.87 ± 0.05 for transition zone, and 0.90 ± 0.04 for whole prostate gland segmentation. For selected studies that used one major MRI vendor, the mean DSCs of each were as follows: General Electric (three of 48 studies), 0.92 ± 0.03; Philips (four of 48 studies), 0.92 ± 0.02; and Siemens (six of 48 studies), 0.91 ± 0.03. Conclusion Deep learning algorithms for prostate MRI segmentation demonstrated accuracy similar to that of expert radiologists despite varying parameters; therefore, future research should shift toward evaluating segmentation robustness and patient outcomes across diverse clinical settings. Keywords: MRI, Genital/Reproductive, Prostate Segmentation, Deep Learning. Systematic review registration: osf.io/nxaev. © RSNA, 2024.
Affiliation(s)
- Mohammad-Kasim Fassia, Adithya Balasubramanian, Sungmin Woo, Hebert Alberto Vargas, Hedvig Hricak, Ender Konukoglu, Anton S. Becker: Departments of Radiology (M.K.F.) and Urology (A.B.), New York-Presbyterian Weill Cornell Medical Center, 525 E 68th St, New York, NY 10065-4870; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY (S.W., H.A.V., H.H., A.S.B.); Department of Biomedical Imaging, ETH Zurich, Zurich, Switzerland (E.K.)
7. Talyshinskii A, Hameed BMZ, Ravinder PP, Naik N, Randhawa P, Shah M, Rai BP, Tokas T, Somani BK. Catalyzing Precision Medicine: Artificial Intelligence Advancements in Prostate Cancer Diagnosis and Management. Cancers (Basel) 2024;16:1809. [PMID: 38791888; PMCID: PMC11119252; DOI: 10.3390/cancers16101809]
Abstract
BACKGROUND The aim was to analyze the current state of deep learning (DL)-based prostate cancer (PCa) diagnosis with a focus on magnetic resonance (MR) prostate reconstruction; PCa detection/stratification/reconstruction; positron emission tomography/computed tomography (PET/CT); androgen deprivation therapy (ADT); prostate biopsy; and the associated challenges and their clinical implications. METHODS A search of the PubMed database was conducted based on the inclusion and exclusion criteria for the use of DL methods within the abovementioned areas. RESULTS A total of 784 articles were found, of which 64 were included. Reconstruction of the prostate, detection and stratification of prostate cancer, reconstruction of prostate cancer, and diagnosis on PET/CT, ADT, and biopsy were analyzed in 21, 22, 6, 7, 2, and 6 studies, respectively. Among studies describing DL use for MR-based purposes, datasets acquired at 3 T, 1.5 T, and mixed 3 T/1.5 T field strengths were used in 18/19/5, 0/1/0, and 3/2/1 of the prostate reconstruction/PCa detection and stratification/PCa reconstruction studies, respectively. Six of the 7 studies analyzing DL for PET/CT diagnosis used data from a single institution. Among the radiotracers, [68Ga]Ga-PSMA-11, [18F]DCFPyl, and [18F]PSMA-1007 were used in 5, 1, and 1 study, respectively. Only two studies that analyzed DL in the context of ADT met the inclusion criteria; both were performed with a single-institution dataset with only manual labeling of training data. The studies analyzing DL for prostate biopsy were performed with single- and multi-institutional datasets; TeUS, TRUS, and MRI were used as input modalities in two, three, and one study, respectively. CONCLUSION DL models in prostate cancer diagnosis show promise but are not yet ready for clinical use due to variability in methods, labels, and evaluation criteria. Conducting additional research while acknowledging all the limitations outlined is crucial for reinforcing the utility and effectiveness of DL-based models in clinical settings.
Affiliation(s)
- Ali Talyshinskii: Department of Urology and Andrology, Astana Medical University, Astana 010000, Kazakhstan
- Prajwal P. Ravinder: Department of Urology, Kasturba Medical College, Mangaluru, Manipal Academy of Higher Education, Manipal 576104, India
- Nithesh Naik: Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Princy Randhawa: Department of Mechatronics, Manipal University Jaipur, Jaipur 303007, India
- Milap Shah: Department of Urology, Aarogyam Hospital, Ahmedabad 380014, India
- Bhavan Prasad Rai: Department of Urology, Freeman Hospital, Newcastle upon Tyne NE7 7DN, UK
- Theodoros Tokas: Department of Urology, Medical School, University General Hospital of Heraklion, University of Crete, 14122 Heraklion, Greece
- Bhaskar K. Somani: Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; Department of Urology, University Hospital Southampton NHS Trust, Southampton SO16 6YD, UK
8. Johnson LA, Harmon SA, Yilmaz EC, Lin Y, Belue MJ, Merriman KM, Lay NS, Sanford TH, Sarma KV, Arnold CW, Xu Z, Roth HR, Yang D, Tetreault J, Xu D, Patel KR, Gurram S, Wood BJ, Citrin DE, Pinto PA, Choyke PL, Turkbey B. Automated prostate gland segmentation in challenging clinical cases: comparison of three artificial intelligence methods. Abdom Radiol (NY) 2024;49:1545-1556. [PMID: 38512516; DOI: 10.1007/s00261-024-04242-7]
Abstract
OBJECTIVE Automated methods for prostate segmentation on MRI are typically developed under ideal scanning and anatomical conditions. This study evaluates three different prostate segmentation AI algorithms in a challenging population of patients with prior treatments, variable anatomic characteristics, complex clinical history, or atypical MRI acquisition parameters. MATERIALS AND METHODS A single-institution retrospective database was queried for the following conditions at prostate MRI: prior prostate-specific oncologic treatment, transurethral resection of the prostate (TURP), abdominal perineal resection (APR), hip prosthesis (HP), diversity of prostate volumes (large ≥ 150 cc, small ≤ 25 cc), whole gland tumor burden, magnet strength, noted poor quality, and various scanners (outside/vendors). Final inclusion criteria required availability of an axial T2-weighted (T2W) sequence and a corresponding prostate organ segmentation from an expert radiologist. Three previously developed algorithms were evaluated: (1) a deep learning (DL)-based model, (2) a commercially available shape-based model, and (3) a federated DL-based model. The Dice Similarity Coefficient (DSC) was calculated against the expert reference. DSC by model and scan factors was evaluated with the Wilcoxon signed-rank test and a linear mixed effects (LMER) model. RESULTS 683 scans (651 patients) met inclusion criteria (mean prostate volume 60.1 cc [9.05-329 cc]). Overall DSC scores for models 1, 2, and 3 were 0.916 (0.707-0.971), 0.873 (0-0.997), and 0.894 (0.025-0.961), respectively, with DL-based models demonstrating significantly higher performance (p < 0.01). In sub-group analysis by factors, Model 1 outperformed Model 2 (all p < 0.05) and Model 3 (all p < 0.001). Performance of all models was negatively impacted by prostate volume and poor signal quality (p < 0.01). Shape-based factors influenced DL models (p < 0.001) while signal factors influenced all (p < 0.001).
CONCLUSION Factors affecting anatomical and signal conditions of the prostate gland can adversely impact both DL and non-deep learning-based segmentation models.
Affiliation(s)
- Latrice A Johnson: Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Stephanie A Harmon: Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Enis C Yilmaz: Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Yue Lin: Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Mason J Belue: Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Katie M Merriman: Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Nathan S Lay: Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Karthik V Sarma: Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA, USA
- Corey W Arnold: Department of Radiology, University of California, Los Angeles, Los Angeles, CA, USA
- Ziyue Xu: NVIDIA Corporation, Santa Clara, CA, USA
- Dong Yang: NVIDIA Corporation, Santa Clara, CA, USA
- Daguang Xu: NVIDIA Corporation, Santa Clara, CA, USA
- Krishnan R Patel: Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Sandeep Gurram: Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Bradford J Wood: Center for Interventional Oncology, National Cancer Institute, NIH, Bethesda, MD, USA; Department of Radiology, Clinical Center, NIH, Bethesda, MD, USA
- Deborah E Citrin: Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Peter A Pinto: Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Peter L Choyke: Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Baris Turkbey: Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr., MSC 1182, Building 10, Room B3B85, Bethesda, MD, 20892, USA
9
Polymeri E, Johnsson ÅA, Enqvist O, Ulén J, Pettersson N, Nordström F, Kindblom J, Trägårdh E, Edenbrandt L, Kjölhede H. Artificial Intelligence-Based Organ Delineation for Radiation Treatment Planning of Prostate Cancer on Computed Tomography. Adv Radiat Oncol 2024; 9:101383. [PMID: 38495038 PMCID: PMC10943520 DOI: 10.1016/j.adro.2023.101383] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2023] [Accepted: 08/30/2023] [Indexed: 03/19/2024] Open
Abstract
Purpose Meticulous manual delineations of the prostate and the surrounding organs at risk are necessary for prostate cancer radiation therapy to avoid side effects to the latter. This process is time-consuming and hampered by inter- and intraobserver variability, all of which could be alleviated by artificial intelligence (AI). This study aimed to evaluate the performance of AI compared with manual organ delineations on computed tomography (CT) scans for radiation treatment planning. Methods and Materials Manual delineations of the prostate, urinary bladder, and rectum of 1530 patients with prostate cancer who received curative radiation therapy from 2006 to 2018 were included. Approximately 50% of those CT scans were used as a training set, 25% as a validation set, and 25% as a test set. Patients with hip prostheses were excluded because of metal artifacts. After training and fine-tuning with the validation set, automated delineations of the prostate and organs at risk were obtained for the test set. The Sørensen-Dice similarity coefficient, mean surface distance, and Hausdorff distance were used to evaluate the agreement between the manual and automated delineations. Results The median Sørensen-Dice similarity coefficient between the manual and AI delineations was 0.82, 0.95, and 0.88 for the prostate, urinary bladder, and rectum, respectively. The median mean surface distance and Hausdorff distance were 1.7 and 9.2 mm for the prostate, 0.7 and 6.7 mm for the urinary bladder, and 1.1 and 13.5 mm for the rectum, respectively. Conclusions Automated CT-based organ delineation for prostate cancer radiation treatment planning is feasible and shows good agreement with manually performed contouring.
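The mean surface distance and Hausdorff distance reported here are nearest-neighbour statistics between the two delineated surfaces. A small sketch using NumPy broadcasting on point sets (illustration only; real pipelines account for voxel spacing and sample the full surface):

```python
import numpy as np

def surface_distances(a_pts: np.ndarray, b_pts: np.ndarray):
    """Hausdorff distance and symmetric mean surface distance between
    two point sets sampled from delineated contours (same units, e.g. mm)."""
    # Pairwise Euclidean distances via broadcasting: shape (len(a), len(b))
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # nearest b-point for each a-point
    b_to_a = d.min(axis=0)  # nearest a-point for each b-point
    hausdorff = max(a_to_b.max(), b_to_a.max())
    mean_surface = (a_to_b.mean() + b_to_a.mean()) / 2.0
    return hausdorff, mean_surface

# Two unit squares (corner points only), shifted 1 mm along x
sq = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
hd, msd = surface_distances(sq, sq + [1.0, 0.0])  # hd = 1.0 mm, msd = 0.5 mm
```

The Hausdorff distance is driven by the single worst-matched point, which is why it is much larger than the mean surface distance in the results above.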
Affiliation(s)
- Eirini Polymeri: Department of Radiology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Radiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
- Åse A. Johnsson: Department of Radiology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Radiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
- Olof Enqvist: Department of Electrical Engineering, Region Västra Götaland, Chalmers University of Technology, Gothenburg, Sweden; Eigenvision AB, Malmö, Sweden
- Niclas Pettersson: Department of Medical Radiation Sciences, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Medical Physics and Biomedical Engineering, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
- Fredrik Nordström: Department of Medical Radiation Sciences, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Medical Physics and Biomedical Engineering, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
- Jon Kindblom: Department of Oncology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
- Elin Trägårdh: Department of Clinical Physiology and Nuclear Medicine, Lund University and Skåne University Hospital, Malmö, Sweden
- Lars Edenbrandt: Department of Molecular and Clinical Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Clinical Physiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
- Henrik Kjölhede: Department of Urology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Urology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
10
Molière S, Hamzaoui D, Granger B, Montagne S, Allera A, Ezziane M, Luzurier A, Quint R, Kalai M, Ayache N, Delingette H, Renard-Penna R. Reference standard for the evaluation of automatic segmentation algorithms: Quantification of inter observer variability of manual delineation of prostate contour on MRI. Diagn Interv Imaging 2024; 105:65-73. [PMID: 37822196 DOI: 10.1016/j.diii.2023.08.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Revised: 07/28/2023] [Accepted: 08/01/2023] [Indexed: 10/13/2023]
Abstract
PURPOSE The purpose of this study was to investigate inter-reader variability in manual prostate contour segmentation on magnetic resonance imaging (MRI) examinations and to determine the optimal number of readers required to establish a reliable reference standard. MATERIALS AND METHODS Seven radiologists with varying experience independently performed manual segmentation of the prostate contour (whole-gland [WG] and transition zone [TZ]) on 40 prostate MRI examinations obtained in 40 patients. Inter-reader variability in prostate contour delineations was estimated using standard metrics (Dice similarity coefficient [DSC], Hausdorff distance, and volume-based metrics). The impact of the number of readers (from two to seven) on segmentation variability was assessed using pairwise metrics (consistency) and metrics with respect to a reference segmentation (conformity), obtained either with majority voting or with the simultaneous truth and performance level estimation (STAPLE) algorithm. RESULTS The average DSC for two readers in pairwise comparison was 0.919 for WG and 0.876 for TZ. Variability decreased with the number of readers: the interquartile ranges of the DSC were 0.076 (WG) / 0.021 (TZ) for configurations with two readers, 0.005 (WG) / 0.012 (TZ) with three readers, and 0.002 (WG) / 0.0037 (TZ) with six readers. The interquartile range decreased slightly faster between two and three readers than between three and six readers. When using consensus methods, variability often reached its minimum with three readers (with STAPLE, DSC = 0.96 [range: 0.945-0.971] for WG and DSC = 0.94 [range: 0.912-0.957] for TZ), and the interquartile range was minimal for configurations with three readers. CONCLUSION The number of readers affects inter-reader variability, in terms of both inter-reader consistency and conformity to a reference. Three readers represent a tipping point in the evolution of variability, for both pairwise metrics and metrics computed with respect to a reference. Accordingly, three readers may be the optimal number for establishing reference segmentations for artificial intelligence applications.
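Of the two consensus strategies compared in this study, majority voting is the simpler; a sketch for binary masks follows (STAPLE, by contrast, is an expectation-maximization algorithm that also estimates each reader's sensitivity and specificity, and is not shown here):

```python
import numpy as np

def majority_vote(masks) -> np.ndarray:
    """Consensus reference from several readers' binary masks:
    a voxel is foreground if more than half of the readers marked it."""
    stack = np.stack([np.asarray(m).astype(bool) for m in masks])
    return stack.sum(axis=0) > (len(masks) / 2)

# Three readers disagreeing on a toy 4-voxel mask (votes per voxel: 3, 2, 1, 0)
r1 = np.array([1, 1, 0, 0])
r2 = np.array([1, 0, 1, 0])
r3 = np.array([1, 1, 0, 0])
consensus = majority_vote([r1, r2, r3])  # [True, True, False, False]
```

With an odd number of readers the vote is never tied, which is one practical argument for reader panels of three.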
Affiliation(s)
- Sébastien Molière: Department of Radiology, Hôpitaux Universitaire de Strasbourg, Hôpital de Hautepierre, 67200, Strasbourg, France; Breast and Thyroid Imaging Unit, Institut de Cancérologie Strasbourg Europe, 67200, Strasbourg, France; IGBMC, Institut de Génétique et de Biologie Moléculaire et Cellulaire, 67400, Illkirch, France
- Dimitri Hamzaoui: Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, 06902, Nice, France
- Benjamin Granger: Sorbonne Université, INSERM, Institut Pierre Louis d'Epidémiologie et de Santé Publique, IPLESP, AP-HP, Hôpital Pitié Salpêtrière, Département de Santé Publique, 75013, Paris, France
- Sarah Montagne: Department of Radiology, Hôpital Tenon, Assistance Publique-Hôpitaux de Paris, 75020, Paris, France; Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France; GRC N° 5, Oncotype-Uro, Sorbonne Université, 75020, Paris, France
- Alexandre Allera: Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Malek Ezziane: Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Anna Luzurier: Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Raphaelle Quint: Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Mehdi Kalai: Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Nicholas Ayache: Department of Radiology, Hôpitaux Universitaire de Strasbourg, Hôpital de Hautepierre, 67200, Strasbourg, France
- Hervé Delingette: Department of Radiology, Hôpitaux Universitaire de Strasbourg, Hôpital de Hautepierre, 67200, Strasbourg, France
- Raphaële Renard-Penna: Department of Radiology, Hôpital Tenon, Assistance Publique-Hôpitaux de Paris, 75020, Paris, France; Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France; GRC N° 5, Oncotype-Uro, Sorbonne Université, 75020, Paris, France
11
Kaneko M, Magoulianitis V, Ramacciotti LS, Raman A, Paralkar D, Chen A, Chu TN, Yang Y, Xue J, Yang J, Liu J, Jadvar DS, Gill K, Cacciamani GE, Nikias CL, Duddalwar V, Jay Kuo CC, Gill IS, Abreu AL. The Novel Green Learning Artificial Intelligence for Prostate Cancer Imaging: A Balanced Alternative to Deep Learning and Radiomics. Urol Clin North Am 2024; 51:1-13. [PMID: 37945095 DOI: 10.1016/j.ucl.2023.08.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2023]
Abstract
The application of artificial intelligence (AI) to prostate magnetic resonance imaging (MRI) has shown promising results. Several AI systems have been developed to automatically analyze prostate MRI for segmentation, cancer detection, and region-of-interest characterization, thereby assisting clinicians in their decision-making process. Deep learning, the current trend in imaging AI, has limitations, including a lack of transparency (the "black box" problem), large data-processing requirements, and excessive energy consumption. In this narrative review, the authors provide an overview of the recent advances in AI for prostate cancer diagnosis and introduce their next-generation AI model, Green Learning, as a promising solution.
Affiliation(s)
- Masatomo Kaneko: USC Institute of Urology and Catherine & Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; USC Institute of Urology, Center for Image-Guided Surgery, Focal Therapy and Artificial Intelligence for Prostate Cancer; Department of Urology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Vasileios Magoulianitis: Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Lorenzo Storino Ramacciotti: USC Institute of Urology and Catherine & Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; USC Institute of Urology, Center for Image-Guided Surgery, Focal Therapy and Artificial Intelligence for Prostate Cancer
- Alex Raman: Western University of Health Sciences, Pomona, CA, USA
- Divyangi Paralkar: USC Institute of Urology and Catherine & Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; USC Institute of Urology, Center for Image-Guided Surgery, Focal Therapy and Artificial Intelligence for Prostate Cancer
- Andrew Chen: USC Institute of Urology and Catherine & Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; USC Institute of Urology, Center for Image-Guided Surgery, Focal Therapy and Artificial Intelligence for Prostate Cancer
- Timothy N Chu: USC Institute of Urology and Catherine & Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; USC Institute of Urology, Center for Image-Guided Surgery, Focal Therapy and Artificial Intelligence for Prostate Cancer
- Yijing Yang: Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Jintang Xue: Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Jiaxin Yang: Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Jinyuan Liu: Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Donya S Jadvar: Dornsife School of Letters and Science, University of Southern California, Los Angeles, CA, USA
- Karanvir Gill: USC Institute of Urology and Catherine & Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; USC Institute of Urology, Center for Image-Guided Surgery, Focal Therapy and Artificial Intelligence for Prostate Cancer
- Giovanni E Cacciamani: USC Institute of Urology and Catherine & Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; USC Institute of Urology, Center for Image-Guided Surgery, Focal Therapy and Artificial Intelligence for Prostate Cancer; Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Chrysostomos L Nikias: Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Vinay Duddalwar: Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- C-C Jay Kuo: Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Inderbir S Gill: USC Institute of Urology and Catherine & Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Andre Luis Abreu: USC Institute of Urology and Catherine & Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; USC Institute of Urology, Center for Image-Guided Surgery, Focal Therapy and Artificial Intelligence for Prostate Cancer; Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
12
Chen X, Liu X, Wu Y, Wang Z, Wang SH. Research related to the diagnosis of prostate cancer based on machine learning medical images: A review. Int J Med Inform 2024; 181:105279. [PMID: 37977054 DOI: 10.1016/j.ijmedinf.2023.105279] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Revised: 09/06/2023] [Accepted: 10/29/2023] [Indexed: 11/19/2023]
Abstract
BACKGROUND Prostate cancer is currently the second most prevalent cancer among men. Accurate diagnosis of prostate cancer enables effective treatment and can greatly reduce mortality. The main medical imaging tools for screening prostate cancer are MRI, CT, and ultrasound. Over the past 20 years, these medical imaging methods have made great progress with machine learning; in particular, the rise of deep learning has led to wider application of artificial intelligence in image-assisted diagnosis of prostate cancer. METHOD This review collected studies on medical image processing of the prostate and prostate cancer on MR, CT, and ultrasound images through search engines such as Web of Science, PubMed, and Google Scholar, covering image pre-processing methods, segmentation of the prostate gland on medical images, registration between the prostate gland on images of different modalities, and detection of prostate cancer lesions. CONCLUSION These collated papers show that research on the diagnosis and staging of prostate cancer using machine learning and deep learning is still in its infancy. Most existing studies address the diagnosis of prostate cancer and classification of lesions, and accuracy remains modest, with the best results below 0.95. Studies on staging are fewer. The research focuses mainly on MR images, with much less work on CT and ultrasound images. DISCUSSION Machine learning and deep learning combined with medical imaging have broad application prospects for the diagnosis and staging of prostate cancer, but research in this area still has considerable room for development.
Affiliation(s)
- Xinyi Chen: School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Xiang Liu: School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Yuke Wu: School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Zhenglei Wang: Department of Medical Imaging, Shanghai Electric Power Hospital, Shanghai 201620, China
- Shuo Hong Wang: Department of Molecular and Cellular Biology and Center for Brain Science, Harvard University, Cambridge, MA 02138, USA
13
Jeganathan T, Salgues E, Schick U, Tissot V, Fournier G, Valéri A, Nguyen TA, Bourbonne V. Inter-Rater Variability of Prostate Lesion Segmentation on Multiparametric Prostate MRI. Biomedicines 2023; 11:3309. [PMID: 38137530 PMCID: PMC10741937 DOI: 10.3390/biomedicines11123309] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2023] [Revised: 12/10/2023] [Accepted: 12/12/2023] [Indexed: 12/24/2023] Open
Abstract
INTRODUCTION External radiotherapy is a major treatment for localized prostate cancer (PCa). Dose escalation to the whole prostate gland increases biochemical relapse-free survival but also acute and late toxicities. Dose escalation to the dominant index lesion (DIL) only is therefore of growing interest, but it requires robust delineation of the DIL. In this context, we aimed to evaluate the inter-observer variability of DIL delineation. MATERIAL AND METHODS Two junior radiologists and a senior radiation oncologist delineated DILs on 64 mpMRIs of patients with histologically confirmed PCa. For each mpMRI and each reader, eight individual DIL segmentations were delineated, blinded from one another, resulting from the individual analysis of the T2, apparent diffusion coefficient (ADC), b2000, and dynamic contrast enhanced (DCE) sequences, as well as the analysis of combined sequences (T2ADC, T2ADCb2000, T2ADCDCE, and T2ADCb2000DCE). Delineation variability was assessed using the DICE coefficient, Jaccard index, Hausdorff distance, and mean distance to agreement. RESULTS The T2, ADC, T2ADC, b2000, T2ADCb2000, T2ADCDCE, and T2ADCb2000DCE sequences obtained DICE coefficients of 0.51, 0.50, 0.54, 0.52, 0.54, 0.55, and 0.53, respectively, all significantly higher than that of the perfusion sequence alone (0.35, p < 0.001). Analysis of the other similarity metrics led to similar results. Tumor volume and PI-RADS classification were positively correlated with the DICE scores. CONCLUSION Our study showed that the contours of prostatic lesions were more reproducible on certain sequences but confirmed the great variability of prostatic contours, with a maximum DICE coefficient of 0.55 (joint analysis of the T2, ADC, and perfusion sequences).
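The DICE coefficient and Jaccard index reported together in this study are monotonically related for binary masks (J = D / (2 - D) and D = 2J / (1 + J)), so they rank sequences identically. A minimal sketch of the conversion (helper names are ours):

```python
def jaccard_from_dice(d: float) -> float:
    """Jaccard index (intersection-over-union) from a Dice coefficient."""
    return d / (2.0 - d)

def dice_from_jaccard(j: float) -> float:
    """Inverse relation: Dice coefficient from a Jaccard index."""
    return 2.0 * j / (1.0 + j)

# The best mean DICE reported above (0.55) corresponds to a Jaccard of ~0.38
j = jaccard_from_dice(0.55)
```

This is why studies often report both without the two metrics ever disagreeing on which configuration is best.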
Affiliation(s)
- Thibaut Jeganathan: Radiology Department, University Hospital, 29200 Brest, France
- Emile Salgues: Radiology Department, University Hospital, 29200 Brest, France
- Ulrike Schick: Radiation Oncology Department, University Hospital, 29200 Brest, France; INSERM, LaTIM UMR 1101, University of Western Brittany, 29238 Brest, France
- Valentin Tissot: Radiology Department, University Hospital, 29200 Brest, France
- Georges Fournier: INSERM, LaTIM UMR 1101, University of Western Brittany, 29238 Brest, France; Urology Department, University Hospital, 29200 Brest, France
- Antoine Valéri: INSERM, LaTIM UMR 1101, University of Western Brittany, 29238 Brest, France; Urology Department, University Hospital, 29200 Brest, France
- Truong-An Nguyen: INSERM, LaTIM UMR 1101, University of Western Brittany, 29238 Brest, France; Urology Department, University Hospital, 29200 Brest, France
- Vincent Bourbonne: Radiation Oncology Department, University Hospital, 29200 Brest, France; INSERM, LaTIM UMR 1101, University of Western Brittany, 29238 Brest, France
14
Zeng X, Puonti O, Sayeed A, Herisse R, Mora J, Evancic K, Varadarajan D, Balbastre Y, Costantini I, Scardigli M, Ramazzotti J, DiMeo D, Mazzamuto G, Pesce L, Brady N, Cheli F, Pavone FS, Hof PR, Frost R, Augustinack J, van der Kouwe A, Iglesias JE, Fischl B. Segmentation of supragranular and infragranular layers in ultra-high resolution 7T ex vivo MRI of the human cerebral cortex. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.12.06.570416. [PMID: 38106176 PMCID: PMC10723438 DOI: 10.1101/2023.12.06.570416] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/19/2023]
Abstract
Accurate labeling of specific layers in the human cerebral cortex is crucial for advancing our understanding of neurodevelopmental and neurodegenerative disorders. Leveraging recent advancements in ultra-high resolution ex vivo MRI, we present a novel semi-supervised segmentation model capable of identifying supragranular and infragranular layers in ex vivo MRI with unprecedented precision. On a dataset consisting of 17 whole-hemisphere ex vivo scans at 120 μm, we propose a multi-resolution U-Nets framework (MUS) that integrates global and local structural information, achieving reliable segmentation maps of the entire hemisphere, with Dice scores over 0.8 for the supra- and infragranular layers. This enables surface modeling, atlas construction, anomaly detection in disease states, and cross-modality validation, while also paving the way for finer layer segmentation. Our approach offers a powerful tool for comprehensive neuroanatomical investigations and holds promise for advancing our mechanistic understanding of the progression of neurodegenerative diseases.
Affiliation(s)
- Xiangrui Zeng: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Department of Radiology, Boston, MA, USA
- Oula Puonti: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital - Amager and Hvidovre, Copenhagen, Denmark
- Areej Sayeed: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Department of Radiology, Boston, MA, USA
- Rogeny Herisse: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Department of Radiology, Boston, MA, USA
- Jocelyn Mora: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Department of Radiology, Boston, MA, USA
- Kathryn Evancic: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Department of Radiology, Boston, MA, USA
- Divya Varadarajan: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Department of Radiology, Boston, MA, USA
- Yael Balbastre: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Department of Radiology, Boston, MA, USA
- Irene Costantini: National Research Council - National Institute of Optics (CNR-INO), Sesto Fiorentino, Italy; European Laboratory for Non-Linear Spectroscopy (LENS), Sesto Fiorentino, Italy; Department of Biology, University of Florence, Italy
- Marina Scardigli: European Laboratory for Non-Linear Spectroscopy (LENS), Sesto Fiorentino, Italy
- Danila DiMeo: European Laboratory for Non-Linear Spectroscopy (LENS), Sesto Fiorentino, Italy
- Giacomo Mazzamuto: National Research Council - National Institute of Optics (CNR-INO), Sesto Fiorentino, Italy; European Laboratory for Non-Linear Spectroscopy (LENS), Sesto Fiorentino, Italy; Department of Physics and Astronomy, University of Florence, Italy
- Luca Pesce: European Laboratory for Non-Linear Spectroscopy (LENS), Sesto Fiorentino, Italy
- Niamh Brady: European Laboratory for Non-Linear Spectroscopy (LENS), Sesto Fiorentino, Italy
- Franco Cheli: European Laboratory for Non-Linear Spectroscopy (LENS), Sesto Fiorentino, Italy
- Francesco Saverio Pavone: National Research Council - National Institute of Optics (CNR-INO), Sesto Fiorentino, Italy; European Laboratory for Non-Linear Spectroscopy (LENS), Sesto Fiorentino, Italy; Department of Physics and Astronomy, University of Florence, Italy
- Patrick R. Hof: Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Robert Frost: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Department of Radiology, Boston, MA, USA
- Jean Augustinack: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Department of Radiology, Boston, MA, USA
- André van der Kouwe: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Department of Radiology, Boston, MA, USA
- Juan Eugenio Iglesias: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Department of Radiology, Boston, MA, USA
- Bruce Fischl: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Department of Radiology, Boston, MA, USA
15
Rao S, Glavis-Bloom J, Bui TL, Afzali K, Bansal R, Carbone J, Fateri C, Roth B, Chan W, Kakish D, Cortes G, Wang P, Meraz J, Chantaduly C, Chow DS, Chang PD, Houshyar R. Artificial Intelligence for Improved Hepatosplenomegaly Diagnosis. Curr Probl Diagn Radiol 2023; 52:501-504. [PMID: 37277270 DOI: 10.1067/j.cpradiol.2023.05.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2023] [Revised: 04/14/2023] [Accepted: 05/08/2023] [Indexed: 06/07/2023]
Abstract
Hepatosplenomegaly is commonly diagnosed by radiologists based on single-dimension measurements and heuristic cut-offs. Volumetric measurements may be more accurate for diagnosing organ enlargement, and artificial intelligence techniques may be able to calculate liver and spleen volumes automatically and facilitate more accurate diagnosis. After IRB approval, two convolutional neural networks (CNNs) were developed to automatically segment the liver and spleen on a training dataset comprising 500 single-phase, contrast-enhanced CT abdomen and pelvis examinations. A separate dataset of ten thousand sequential examinations at a single institution was segmented with these CNNs. Performance was evaluated on a 1% subset and compared with manual segmentations using Sørensen-Dice coefficients and Pearson correlation coefficients. Radiologist reports were reviewed for diagnoses of hepatomegaly and splenomegaly and compared with the calculated volumes. Abnormal enlargement was defined as greater than 2 standard deviations above the mean. Median Dice coefficients for liver and spleen segmentation were 0.988 and 0.981, respectively. Pearson correlation coefficients of CNN-derived organ volume estimates against the gold-standard manual annotation were 0.999 for the liver and spleen (P < 0.001). Average liver volume was 1556.8 ± 498.7 cc and average spleen volume was 194.6 ± 123.0 cc. Because there were significant differences in average liver and spleen volumes between male and female patients, the volume thresholds for ground-truth determination of hepatomegaly and splenomegaly were determined separately for each sex. Radiologist classification of hepatomegaly was 65% sensitive and 91% specific, with a positive predictive value (PPV) of 23% and a negative predictive value (NPV) of 98%. Radiologist classification of splenomegaly was 68% sensitive and 97% specific, with a PPV of 50% and an NPV of 99%.
Convolutional neural networks can accurately segment the liver and spleen and may be helpful to improve radiologist accuracy in the diagnosis of hepatomegaly and splenomegaly.
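The sensitivity, specificity, PPV, and NPV figures reported above follow directly from confusion-matrix counts. A minimal sketch (the counts below are illustrative choices, not the study's data) shows how a modest PPV can coexist with high specificity when disease prevalence is low:

```python
# Hedged sketch: diagnostic metrics from raw confusion-matrix counts.
# The counts are illustrative, not taken from the study.

def diagnostic_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV, NPV from raw counts."""
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    ppv = tp / (tp + fp)           # precision among positive calls
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# With only 20 diseased cases out of 520, a 91%-specific reader still
# produces enough false positives to pull PPV down toward ~22%.
sens, spec, ppv, npv = diagnostic_metrics(tp=13, fp=45, fn=7, tn=455)
print(round(sens, 2), round(spec, 2), round(ppv, 2), round(npv, 2))
```

This low-prevalence effect is exactly the pattern the abstract reports for hepatomegaly (high specificity and NPV, low PPV).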
Affiliation(s)
- Sriram Rao
  - University of California, Irvine School of Medicine, Irvine, CA
- Justin Glavis-Bloom
  - Department of Radiological Sciences, University of California, Irvine Medical Center, Orange, CA
- Thanh-Lan Bui
  - Department of Radiological Sciences, University of California, Irvine Medical Center, Orange, CA
- Kasra Afzali
  - Department of Radiological Sciences, University of California, Irvine Medical Center, Orange, CA
- Riya Bansal
  - University of California, Irvine School of Medicine, Irvine, CA
- Joseph Carbone
  - Department of Radiological Sciences, University of California, Irvine Medical Center, Orange, CA
- Cameron Fateri
  - University of California, Irvine School of Medicine, Irvine, CA
- Bradley Roth
  - University of California, Irvine School of Medicine, Irvine, CA
- William Chan
  - University of California, Irvine School of Medicine, Irvine, CA
- David Kakish
  - Department of Radiological Sciences, University of California, Irvine Medical Center, Orange, CA
- Gillean Cortes
  - Department of Radiological Sciences, University of California, Irvine Medical Center, Orange, CA
- Peter Wang
  - Department of Radiological Sciences, University of California, Irvine Medical Center, Orange, CA
- Jeanette Meraz
  - Department of Radiological Sciences, University of California, Irvine Medical Center, Orange, CA
- Chanon Chantaduly
  - Department of Radiological Sciences, University of California, Irvine Medical Center, Orange, CA
- Dan S Chow
  - Department of Radiological Sciences, University of California, Irvine Medical Center, Orange, CA
- Peter D Chang
  - Department of Radiological Sciences, University of California, Irvine Medical Center, Orange, CA
- Roozbeh Houshyar
  - Department of Radiological Sciences, University of California, Irvine Medical Center, Orange, CA
|
16
|
Mulliez D, Poncelet E, Ferret L, Hoeffel C, Hamet B, Dang LA, Laurent N, Ramette G. Three-Dimensional Measurement of the Uterus on Magnetic Resonance Images: Development and Performance Analysis of an Automated Deep-Learning Tool. Diagnostics (Basel) 2023; 13:2662. [PMID: 37627920 PMCID: PMC10453745 DOI: 10.3390/diagnostics13162662] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2023] [Revised: 08/08/2023] [Accepted: 08/10/2023] [Indexed: 08/27/2023] Open
Abstract
Uterus measurements are useful for assessing both the treatment and follow-up of gynaecological patients. The aim of our study was to develop a deep learning (DL) tool for fully automated measurement of the three-dimensional size of the uterus on magnetic resonance imaging (MRI). In this single-centre retrospective study, 900 cases were included to train, validate, and test a VGG-16/VGG-11 convolutional neural network (CNN). The ground truth was manual measurement. The performance of the model was evaluated using the object keypoint similarity (OKS), the mean difference in millimetres, and the coefficient of determination R2. The OKS of our model was 0.92 (validation) and 0.96 (test). The average deviation and R2 coefficient between the AI measurements and the manual ones were, respectively, 3.9 mm and 0.93 for two-point length, 3.7 mm and 0.94 for three-point length, 2.6 mm and 0.93 for width, and 4.2 mm and 0.75 for thickness. The inter-radiologist variability was 1.4 mm. A three-dimensional automated measurement was obtained in 1.6 s. In conclusion, our model was able to locate the uterus on MRIs and place measurement points on it to obtain its three-dimensional measurement, with very good correlation compared to manual measurements.
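The OKS score used above can be sketched as the COCO-style object keypoint similarity, which decays smoothly with the distance between predicted and ground-truth points. The keypoints, object scale s, and per-keypoint constants k below are illustrative assumptions, not values from the study:

```python
import math

# Hedged sketch of object keypoint similarity (OKS), COCO-style:
# the mean of exp(-d_i^2 / (2 s^2 k_i^2)) over matched keypoints.
# All numeric values here are illustrative assumptions.

def oks(pred, truth, s, k):
    """Average per-keypoint similarity between predicted and true points."""
    scores = []
    for (px, py), (tx, ty), ki in zip(pred, truth, k):
        d2 = (px - tx) ** 2 + (py - ty) ** 2          # squared distance
        scores.append(math.exp(-d2 / (2 * s ** 2 * ki ** 2)))
    return sum(scores) / len(scores)

# A perfect prediction scores 1.0; small displacements decay smoothly.
truth = [(10.0, 20.0), (40.0, 20.0)]
print(oks(truth, truth, s=50.0, k=[0.05, 0.05]))  # 1.0
```

An OKS near 0.92-0.96, as reported, means the placed measurement points fall well within the tolerance set by the object scale.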
Affiliation(s)
- Daphné Mulliez
  - Service d’Imagerie de la Femme, Centre Hospitalier de Valenciennes, 59300 Valenciennes, France
- Edouard Poncelet
  - Service d’Imagerie de la Femme, Centre Hospitalier de Valenciennes, 59300 Valenciennes, France
- Laurie Ferret
  - Unité de Recherche Clinique, Centre Hospitalier de Valenciennes, 59300 Valenciennes, France
- Christine Hoeffel
  - Service de Radiologie, Hôpital Maison Blanche, Avenue du Général Koenig, 51092 Reims, France
- Blandine Hamet
  - Service d’Imagerie de la Femme, Centre Hospitalier de Valenciennes, 59300 Valenciennes, France
- Lan Anh Dang
  - Service d’Imagerie de la Femme, Centre Hospitalier de Valenciennes, 59300 Valenciennes, France
- Nicolas Laurent
  - Service d’Imagerie de la Femme, Centre Hospitalier de Valenciennes, 59300 Valenciennes, France
- Guillaume Ramette
  - Service d’Imagerie de la Femme, Centre Hospitalier de Valenciennes, 59300 Valenciennes, France
|
17
|
He M, Cao Y, Chi C, Yang X, Ramin R, Wang S, Yang G, Mukhtorov O, Zhang L, Kazantsev A, Enikeev M, Hu K. Research progress on deep learning in magnetic resonance imaging-based diagnosis and treatment of prostate cancer: a review on the current status and perspectives. Front Oncol 2023; 13:1189370. [PMID: 37546423 PMCID: PMC10400334 DOI: 10.3389/fonc.2023.1189370] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2023] [Accepted: 05/30/2023] [Indexed: 08/08/2023] Open
Abstract
Multiparametric magnetic resonance imaging (mpMRI) has emerged as a first-line screening and diagnostic tool for prostate cancer, aiding in treatment selection and noninvasive radiotherapy guidance. However, the manual interpretation of MRI data is challenging and time-consuming, which may impact sensitivity and specificity. With recent technological advances, artificial intelligence (AI) in the form of computer-aided diagnosis (CAD) based on MRI data has been applied to prostate cancer diagnosis and treatment. Among AI techniques, deep learning involving convolutional neural networks contributes to detection, segmentation, scoring, grading, and prognostic evaluation of prostate cancer. CAD systems have automatic operation, rapid processing, and accuracy, incorporating multiple sequences of multiparametric MRI data of the prostate gland into the deep learning model. Thus, they have become a research direction of great interest, especially in smart healthcare. This review highlights the current progress of deep learning technology in MRI-based diagnosis and treatment of prostate cancer. The key elements of deep learning-based MRI image processing in CAD systems and radiotherapy of prostate cancer are briefly described, making it understandable not only for radiologists but also for general physicians without specialized imaging interpretation training. Deep learning technology enables lesion identification, detection, and segmentation, grading and scoring of prostate cancer, and prediction of postoperative recurrence and prognostic outcomes. The diagnostic accuracy of deep learning can be improved by optimizing models and algorithms, expanding medical database resources, and combining multi-omics data and comprehensive analysis of various morphological data. Deep learning has the potential to become the key diagnostic method in prostate cancer diagnosis and treatment in the future.
Affiliation(s)
- Mingze He
  - Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Yu Cao
  - I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Changliang Chi
  - Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
- Xinyi Yang
  - I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Rzayev Ramin
  - Department of Radiology, The Second University Clinic, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Shuowen Wang
  - I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Guodong Yang
  - I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Otabek Mukhtorov
  - Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I., Avenue Mira, Kostroma, Russia
- Liqun Zhang
  - School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, Liaoning, China
- Anton Kazantsev
  - Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I., Avenue Mira, Kostroma, Russia
- Mikhail Enikeev
  - Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Kebang Hu
  - Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
|
18
|
Zhang H, Zhong X, Li G, Liu W, Liu J, Ji D, Li X, Wu J. BCU-Net: Bridging ConvNeXt and U-Net for medical image segmentation. Comput Biol Med 2023; 159:106960. [PMID: 37099973 DOI: 10.1016/j.compbiomed.2023.106960] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Revised: 04/12/2023] [Accepted: 04/17/2023] [Indexed: 04/28/2023]
Abstract
Medical image segmentation enables doctors to observe lesion regions better and make accurate diagnostic decisions. Single-branch models such as U-Net have achieved great progress in this field. However, the complementary local and global pathological semantics of heterogeneous neural networks have not yet been fully explored. The class-imbalance problem remains a serious issue. To alleviate these two problems, we propose a novel model called BCU-Net, which leverages the advantages of ConvNeXt in global interaction and U-Net in local processing. We propose a new multilabel recall loss (MRL) module to relieve the class imbalance problem and facilitate deep-level fusion of local and global pathological semantics between the two heterogeneous branches. Extensive experiments were conducted on six medical image datasets including retinal vessel and polyp images. The qualitative and quantitative results demonstrate the superiority and generalizability of BCU-Net. In particular, BCU-Net can handle diverse medical images with diverse resolutions. It has a flexible structure owing to its plug-and-play characteristics, which promotes its practicality.
Affiliation(s)
- Hongbin Zhang
  - School of Software, East China Jiaotong University, China
- Xiang Zhong
  - School of Software, East China Jiaotong University, China
- Guangli Li
  - School of Information Engineering, East China Jiaotong University, China
- Wei Liu
  - School of Software, East China Jiaotong University, China
- Jiawei Liu
  - School of Software, East China Jiaotong University, China
- Donghong Ji
  - School of Cyber Science and Engineering, Wuhan University, China
- Xiong Li
  - School of Software, East China Jiaotong University, China
- Jianguo Wu
  - The Second Affiliated Hospital of Nanchang University, China
|
19
|
Önder M, Evli C, Türk E, Kazan O, Bayrakdar İŞ, Çelik Ö, Costa ALF, Gomes JPP, Ogawa CM, Jagtap R, Orhan K. Deep-Learning-Based Automatic Segmentation of Parotid Gland on Computed Tomography Images. Diagnostics (Basel) 2023; 13:581. [PMID: 36832069 PMCID: PMC9955422 DOI: 10.3390/diagnostics13040581] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Revised: 01/23/2023] [Accepted: 02/02/2023] [Indexed: 02/08/2023] Open
Abstract
This study aims to develop an algorithm for the automatic segmentation of the parotid gland on CT images of the head and neck using the U-Net architecture, and to evaluate the model's performance. In this retrospective study, a total of 30 anonymized CT volumes of the head and neck were sliced into 931 axial images of the parotid glands. Ground-truth labeling was performed with the CranioCatch Annotation Tool (CranioCatch, Eskisehir, Turkey) by two oral and maxillofacial radiologists. The images were resized to 512 × 512 and split into training (80%), validation (10%), and testing (10%) subgroups. A deep convolutional neural network model was developed using the U-Net architecture. The automatic segmentation performance was evaluated in terms of the F1-score, precision, sensitivity, and area under the curve (AUC). A segmentation was considered successful when more than 50% of its pixels intersected the ground truth. The F1-score, precision, and sensitivity of the AI model in segmenting the parotid glands in the axial CT slices were found to be 1. The AUC value was 0.96. This study has shown that it is possible to use AI models based on deep learning to automatically segment the parotid gland on axial CT images.
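The AUC reported above has a simple rank-based interpretation: it is the probability that a randomly chosen positive sample scores higher than a randomly chosen negative one (the Mann-Whitney formulation). A minimal sketch with toy scores, not study data:

```python
# Hedged sketch: rank-based AUC (Mann-Whitney U interpretation).
# Scores below are toy values, not from the study.

def auc(pos_scores, neg_scores):
    """Fraction of positive/negative pairs ranked correctly; ties count half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# 4.5 correctly ordered pairs (one tie) out of 6 pairs.
print(auc([0.9, 0.8, 0.7], [0.6, 0.8]))  # 0.75
```

The quadratic pairwise loop is fine for illustration; production implementations rank the pooled scores instead.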
Affiliation(s)
- Merve Önder
  - Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara 06000, Turkey
- Cengiz Evli
  - Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara 06000, Turkey
- Ezgi Türk
  - Dentomaxillofacial Radiology, Oral and Dental Health Center, Hatay 31040, Turkey
- Orhan Kazan
  - Health Services Vocational School, Gazi University, Ankara 06560, Turkey
- İbrahim Şevki Bayrakdar
  - Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir 26040, Turkey
  - Eskisehir Osmangazi University Center of Research and Application for Computer-Aided Diagnosis and Treatment in Health, Eskişehir 26040, Turkey
  - Division of Oral and Maxillofacial Radiology, Department of Care Planning and Restorative Sciences, University of Mississippi Medical Center School of Dentistry, Jackson, MS 39216, USA
- Özer Çelik
  - Eskisehir Osmangazi University Center of Research and Application for Computer-Aided Diagnosis and Treatment in Health, Eskişehir 26040, Turkey
  - Department of Mathematics-Computer, Faculty of Science, Eskisehir Osmangazi University, Eskişehir 26040, Turkey
- Andre Luiz Ferreira Costa
  - Postgraduate Program in Dentistry, Cruzeiro do Sul University (UNICSUL), São Paulo 01506-000, SP, Brazil
- João Pedro Perez Gomes
  - Department of Stomatology, Division of General Pathology, School of Dentistry, University of São Paulo (USP), São Paulo 13560-970, SP, Brazil
- Celso Massahiro Ogawa
  - Postgraduate Program in Dentistry, Cruzeiro do Sul University (UNICSUL), São Paulo 01506-000, SP, Brazil
- Rohan Jagtap
  - Division of Oral and Maxillofacial Radiology, Department of Care Planning and Restorative Sciences, University of Mississippi Medical Center School of Dentistry, Jackson, MS 39216, USA
- Kaan Orhan
  - Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara 06000, Turkey
  - Department of Dental and Maxillofacial Radiodiagnostics, Medical University of Lublin, 20-093 Lublin, Poland
  - Ankara University Medical Design Application and Research Center (MEDITAM), Ankara 06000, Turkey
|
20
|
Li Y, Lin C, Zhang Y, Feng S, Huang M, Bai Z. Automatic segmentation of prostate MRI based on 3D pyramid pooling Unet. Med Phys 2023; 50:906-921. [PMID: 35923153 DOI: 10.1002/mp.15895] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2021] [Revised: 06/23/2022] [Accepted: 06/25/2022] [Indexed: 01/01/2023] Open
Abstract
PURPOSE Automatic segmentation of prostate magnetic resonance (MR) images is crucial for the diagnosis, evaluation, and prognosis of prostate diseases (including prostate cancer). In recent years, the mainstream segmentation method for the prostate has been converted to convolutional neural networks. However, owing to the complexity of the tissue structure in MR images and the limitations of existing methods in spatial context modeling, the segmentation performance should be improved further. METHODS In this study, we proposed a novel 3D pyramid pooling Unet that benefits from the pyramid pooling structure embedded in the skip connection (SC) and the deep supervision (DS) in the up-sampling of the 3D Unet. The parallel SC of the conventional 3D Unet network causes low-resolution information to be sent to the feature map repeatedly, resulting in blurred image features. To overcome the shortcomings of the conventional 3D Unet, we merge each decoder layer with the feature map of the same scale as the encoder and the smaller-scale feature map of the pyramid pooling encoder. This SC combines the low-level details and high-level semantics at two different levels of feature maps. In addition, pyramid pooling performs multifaceted feature extraction on each image behind the convolutional layer, and DS learns hierarchical representations from comprehensive aggregated feature maps, which can improve the accuracy of the task. RESULTS Experiments on 3D prostate MR images of 78 patients demonstrated that our results were highly correlated with expert manual segmentation. The average relative volume difference and Dice similarity coefficient of the prostate volume area were 2.32% and 91.03%, respectively. CONCLUSION Quantitative experiments demonstrate that, compared with other methods, the results of our method are highly consistent with the expert manual segmentation.
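The two headline figures above, the Dice similarity coefficient and the relative volume difference, can be sketched on toy voxel sets (illustrative data, not the paper's code):

```python
# Hedged sketch: Dice similarity coefficient and relative volume
# difference on toy segmentations stored as sets of voxel coordinates.

def dice(a, b):
    """2|A∩B| / (|A| + |B|) for voxel sets a, b."""
    return 2.0 * len(a & b) / (len(a) + len(b))

def relative_volume_difference(pred, truth):
    """|V_pred - V_truth| / V_truth, as a percentage."""
    return 100.0 * abs(len(pred) - len(truth)) / len(truth)

# A 4x4x4 cube of 64 voxels as ground truth; the prediction misses one.
truth = {(x, y, z) for x in range(4) for y in range(4) for z in range(4)}
pred = truth - {(0, 0, 0)}
print(round(dice(pred, truth), 4))                        # 0.9921
print(round(relative_volume_difference(pred, truth), 2))  # 1.56
```

Dice rewards overlap regardless of where errors occur, while the volume difference catches systematic over- or under-segmentation; reporting both, as the paper does, covers complementary failure modes.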
Affiliation(s)
- Yuchun Li
  - State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Cong Lin
  - State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
  - College of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang, China
- Yu Zhang
  - College of Computer Science and Technology, Hainan University, Haikou, China
- Siling Feng
  - State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Mengxing Huang
  - State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Zhiming Bai
  - Haikou Municipal People's Hospital and Central South University Xiangya Medical College Affiliated Hospital, Haikou, China
|
21
|
Haidey J, Low G, Wilson MP. Radiomics-based approaches outperform visual analysis for differentiating lipoma from atypical lipomatous tumors: a review. Skeletal Radiol 2022; 52:1089-1100. [PMID: 36385583 DOI: 10.1007/s00256-022-04232-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Revised: 11/02/2022] [Accepted: 11/06/2022] [Indexed: 11/17/2022]
Abstract
BACKGROUND Differentiating atypical lipomatous tumors (ALTs) and well-differentiated liposarcomas (WDLs) from benign lipomatous lesions is important for guiding clinical management, though conventional visual analysis of these lesions is challenging due to overlap of imaging features. Radiomics-based approaches may serve as a promising alternative and/or supplementary diagnostic approach to conventional imaging. PURPOSE The purpose of this study is to review the practice of radiomics-based imaging and to systematically evaluate the literature on radiomics applied to differentiating ALTs/WDLs from benign lipomas. REVIEW A background review of the radiomic workflow is provided, outlining the steps of image acquisition, segmentation, feature extraction, and model development. Subsequently, a systematic review of MEDLINE, EMBASE, Scopus, the Cochrane Library, and the grey literature was performed from inception to June 2022 to identify studies using radiomics for differentiating ALTs/WDLs from benign lipomas. Radiomic models were shown to outperform conventional analysis in all but one model, with a sensitivity ranging from 68 to 100% and a specificity ranging from 84 to 100%. However, current approaches rely on user input, and no studies used a fully automated method for segmentation, contributing to interobserver variability and decreasing time efficiency. CONCLUSION Radiomic models may show improved performance for differentiating ALTs/WDLs from benign lipomas compared to conventional analysis. However, considerable variability between radiomic approaches exists, and future studies evaluating a standardized radiomic model with a multi-institutional study design, and preferably fully automated segmentation software, are needed before clinical application can be more broadly considered.
Affiliation(s)
- Jordan Haidey
  - Department of Radiology and Diagnostic Imaging, University of Alberta, 2B2.41 WMC, 8440-112 Street NW, Edmonton, Alberta, T6G 2B7, Canada
- Gavin Low
  - Department of Radiology and Diagnostic Imaging, University of Alberta, 2B2.41 WMC, 8440-112 Street NW, Edmonton, Alberta, T6G 2B7, Canada
- Mitchell P Wilson
  - Department of Radiology and Diagnostic Imaging, University of Alberta, 2B2.41 WMC, 8440-112 Street NW, Edmonton, Alberta, T6G 2B7, Canada
|
22
|
Novel artificial intelligent transformer U-NET for better identification and management of prostate cancer. Mol Cell Biochem 2022; 478:1439-1445. [DOI: 10.1007/s11010-022-04600-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2022] [Accepted: 10/24/2022] [Indexed: 11/10/2022]
|
23
|
Belue MJ, Turkbey B. Tasks for artificial intelligence in prostate MRI. Eur Radiol Exp 2022; 6:33. [PMID: 35908102 PMCID: PMC9339059 DOI: 10.1186/s41747-022-00287-9] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2022] [Accepted: 05/18/2022] [Indexed: 11/17/2022] Open
Abstract
The advent of precision medicine, increasing clinical needs, and imaging availability, among many other factors in the prostate cancer diagnostic pathway, have engendered the utilization of artificial intelligence (AI). AI carries a vast number of potential applications in every step of the prostate cancer diagnostic pathway: classifying/improving prostate multiparametric magnetic resonance image quality, prostate segmentation, anatomically segmenting cancer-suspicious foci, detecting and differentiating clinically insignificant cancers from clinically significant cancers on a voxel level, and classifying entire lesions into Prostate Imaging Reporting and Data System categories/Gleason scores. Multiple studies in all these areas have shown promising results approximating the accuracy of radiologists. Despite this flourishing research, more prospective multicenter studies are needed to uncover the full impact and utility of AI on improving radiologist performance and clinical management of prostate cancer. In this narrative review, we aim to introduce emerging medical imaging AI paper quality metrics such as the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) and Field-Weighted Citation Impact (FWCI), and to examine some of the top AI models for segmentation, detection, and classification.
Affiliation(s)
- Mason J Belue
  - Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr., MSC 1182, Building 10, Room B3B85, Bethesda, MD, 20892-1088, USA
- Baris Turkbey
  - Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr., MSC 1182, Building 10, Room B3B85, Bethesda, MD, 20892-1088, USA
|
24
|
Adams LC, Makowski MR, Engel G, Rattunde M, Busch F, Asbach P, Niehues SM, Vinayahalingam S, van Ginneken B, Litjens G, Bressem KK. Prostate158 - An expert-annotated 3T MRI dataset and algorithm for prostate cancer detection. Comput Biol Med 2022; 148:105817. [PMID: 35841780 DOI: 10.1016/j.compbiomed.2022.105817] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2022] [Revised: 06/12/2022] [Accepted: 07/03/2022] [Indexed: 11/03/2022]
Abstract
BACKGROUND The development of deep learning (DL) models for prostate segmentation on magnetic resonance imaging (MRI) depends on expert-annotated data and reliable baselines, which are often not publicly available. This limits both reproducibility and comparability. METHODS Prostate158 consists of 158 expert annotated biparametric 3T prostate MRIs comprising T2w sequences and diffusion-weighted sequences with apparent diffusion coefficient maps. Two U-ResNets trained for segmentation of anatomy (central gland, peripheral zone) and suspicious lesions for prostate cancer (PCa) with a PI-RADS score of ≥4 served as baseline algorithms. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), the Hausdorff distance (HD), and the average surface distance (ASD). The Wilcoxon test with Bonferroni correction was used to evaluate differences in performance. The generalizability of the baseline model was assessed using the open datasets Medical Segmentation Decathlon and PROSTATEx. RESULTS Compared to Reader 1, the models achieved a DSC/HD/ASD of 0.88/18.3/2.2 for the central gland, 0.75/22.8/1.9 for the peripheral zone, and 0.45/36.7/17.4 for PCa. Compared with Reader 2, the DSC/HD/ASD were 0.88/17.5/2.6 for the central gland, 0.73/33.2/1.9 for the peripheral zone, and 0.4/39.5/19.1 for PCa. Interrater agreement measured in DSC/HD/ASD was 0.87/11.1/1.0 for the central gland, 0.75/15.8/0.74 for the peripheral zone, and 0.6/18.8/5.5 for PCa. Segmentation performances on the Medical Segmentation Decathlon and PROSTATEx were 0.82/22.5/3.4; 0.86/18.6/2.5 for the central gland, and 0.64/29.2/4.7; 0.71/26.3/2.2 for the peripheral zone. CONCLUSIONS We provide an openly accessible, expert-annotated 3T dataset of prostate MRI and a reproducible benchmark to foster the development of prostate segmentation algorithms.
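The Hausdorff distance (HD) reported alongside DSC and ASD above measures the worst-case boundary disagreement between two segmentations. A naive symmetric computation on small 2D point sets can be sketched as follows (toy contours, not the dataset's annotations):

```python
import math

# Hedged sketch: symmetric Hausdorff distance, computed naively on
# small 2D point sets. Points below are toy data.

def directed_hd(a, b):
    """Max over points of a of the distance to the nearest point of b."""
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance: worst-case boundary mismatch."""
    return max(directed_hd(a, b), directed_hd(b, a))

square = [(0, 0), (0, 1), (1, 0), (1, 1)]
shifted = [(0, 0), (0, 1), (1, 0), (1, 3)]   # one corner displaced
print(hausdorff(square, shifted))  # 2.0
```

Because HD is driven by the single worst point, a segmentation can score a high Dice yet a large HD when one stray region is mislabeled, which is why papers like this one report both.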
Affiliation(s)
- Lisa C Adams
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
  - Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Charitéplatz 1, 10117, Berlin, Germany
- Marcus R Makowski
  - Technical University of Munich, Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Ismaninger Str. 22, 81675, Munich, Germany
- Günther Engel
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
  - Institute for Diagnostic and Interventional Radiology, Georg-August University, Göttingen, Germany
- Maximilian Rattunde
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
- Felix Busch
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
- Patrick Asbach
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
- Stefan M Niehues
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
- Shankeeth Vinayahalingam
  - Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Nijmegen, GA, the Netherlands
- Geert Litjens
  - Radboud University Medical Center, Nijmegen, GA, the Netherlands
- Keno K Bressem
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
  - Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Charitéplatz 1, 10117, Berlin, Germany
|
25
|
Salvi M, De Santi B, Pop B, Bosco M, Giannini V, Regge D, Molinari F, Meiburger KM. Integration of Deep Learning and Active Shape Models for More Accurate Prostate Segmentation in 3D MR Images. J Imaging 2022; 8:133. [PMID: 35621897 PMCID: PMC9146644 DOI: 10.3390/jimaging8050133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 05/06/2022] [Accepted: 05/09/2022] [Indexed: 01/27/2023] Open
Abstract
Magnetic resonance imaging (MRI) has a growing role in the clinical workup of prostate cancer. However, manual three-dimensional (3D) segmentation of the prostate is a laborious and time-consuming task. In this scenario, the use of automated algorithms for prostate segmentation allows us to bypass the huge workload of physicians. In this work, we propose a fully automated hybrid approach for prostate gland segmentation in MR images using an initial segmentation of prostate volumes using a custom-made 3D deep network (VNet-T2), followed by refinement using an Active Shape Model (ASM). While the deep network focuses on three-dimensional spatial coherence of the shape, the ASM relies on local image information and this joint effort allows for improved segmentation of the organ contours. Our method is developed and tested on a dataset composed of T2-weighted (T2w) MRI prostatic volumes of 60 male patients. In the test set, the proposed method shows excellent segmentation performance, achieving a mean dice score and Hausdorff distance of 0.851 and 7.55 mm, respectively. In the future, this algorithm could serve as an enabling technology for the development of computer-aided systems for prostate cancer characterization in MR imaging.
Affiliation(s)
- Massimo Salvi
  - Biolab, PolitoBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Bruno De Santi
  - Multi-Modality Medical Imaging (M3I), Technical Medical Centre, University of Twente, PB217, 7500 AE Enschede, The Netherlands
- Bianca Pop
  - Biolab, PolitoBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Martino Bosco
  - Department of Pathology, Ospedale Michele e Pietro Ferrero, 12060 Verduno, Italy
- Valentina Giannini
  - Department of Surgical Sciences, University of Turin, 10126 Turin, Italy
  - Department of Radiology, Candiolo Cancer Institute, FPO-IRCCS, 10060 Candiolo, Italy
- Daniele Regge
  - Department of Surgical Sciences, University of Turin, 10126 Turin, Italy
  - Department of Radiology, Candiolo Cancer Institute, FPO-IRCCS, 10060 Candiolo, Italy
- Filippo Molinari
  - Biolab, PolitoBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Kristen M. Meiburger
  - Biolab, PolitoBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
26
Turkbey B, Haider MA. Deep learning-based artificial intelligence applications in prostate MRI: brief summary. Br J Radiol 2022; 95:20210563. [PMID: 34860562 PMCID: PMC8978238 DOI: 10.1259/bjr.20210563]
Abstract
Prostate cancer (PCa) is the most common cancer type in males in the Western world. MRI has an established role in the diagnosis of PCa through guiding biopsies. Because the MRI-guided PCa diagnosis pathway is a complex, multistep process, diagnostic performance varies widely. Developing artificial intelligence (AI) models using machine learning, particularly deep learning, has an expanding role in radiology. Specifically, for prostate MRI, several AI approaches have been described in the literature for prostate segmentation, lesion detection and classification, with the aim of improving diagnostic performance and interobserver agreement. In this review article, we summarize radiology applications of AI in prostate MRI.
Affiliation(s)
- Baris Turkbey
- Molecular Imaging Branch, NCI, NIH, Bethesda, MD, USA
27
Li H, Lee CH, Chia D, Lin Z, Huang W, Tan CH. Machine Learning in Prostate MRI for Prostate Cancer: Current Status and Future Opportunities. Diagnostics (Basel) 2022; 12:289. [PMID: 35204380 PMCID: PMC8870978 DOI: 10.3390/diagnostics12020289]
Abstract
Advances in our understanding of the role of magnetic resonance imaging (MRI) for the detection of prostate cancer have enabled its integration into clinical routines in the past two decades. The Prostate Imaging Reporting and Data System (PI-RADS) is an established imaging-based scoring system that scores the probability of clinically significant prostate cancer on MRI to guide management. Image fusion technology allows one to combine the superior soft tissue contrast resolution of MRI, with real-time anatomical depiction using ultrasound or computed tomography. This allows the accurate mapping of prostate cancer for targeted biopsy and treatment. Machine learning provides vast opportunities for automated organ and lesion depiction that could increase the reproducibility of PI-RADS categorisation, and improve co-registration across imaging modalities to enhance diagnostic and treatment methods that can then be individualised based on clinical risk of malignancy. In this article, we provide a comprehensive and contemporary review of advancements, and share insights into new opportunities in this field.
Affiliation(s)
- Huanye Li
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Chau Hung Lee
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- David Chia
- Department of Radiation Oncology, National University Cancer Institute (NUH), Singapore 119074, Singapore
- Zhiping Lin
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Weimin Huang
- Institute for Infocomm Research, A*Star, Singapore 138632, Singapore
- Cher Heng Tan
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 639798, Singapore
28
Sahli H, Ben Slama A, Labidi S. U-Net: A valuable encoder-decoder architecture for liver tumors segmentation in CT images. Journal of X-Ray Science and Technology 2022; 30:45-56. [PMID: 34806644 DOI: 10.3233/xst-210993]
Abstract
This study proposes a new predictive segmentation method for liver tumor detection using computed tomography (CT) liver images. In the medical imaging field, the precise localization of metastatic lesions after acquisition remains a persistent problem, both for diagnostic support and for treatment effectiveness. Improving the diagnostic process is therefore crucial to increasing the chances of successful management and therapeutic follow-up. The proposed procedure is a computerized approach based on an encoder-decoder structure that provides volumetric analysis of pathologic tumors. Specifically, we developed an automatic algorithm for liver tumor segmentation from metastasis CT images using the SegNet and U-Net architectures. In this study, we collected a dataset of 200 pathologically confirmed metastatic cancer cases. A total of 8,297 CT image slices from these cases were used to develop and optimize the proposed segmentation architecture. The model was trained and validated using 170 and 30 cases (85% and 15% of the CT image data), respectively. Study results demonstrate the strength of the proposed approach, which achieved strong segmentation performance: F1-score = 0.9573, recall = 0.9520, IoU = 0.9654, and binary cross-entropy = 0.0032 (p < 0.05). In comparison to state-of-the-art techniques, the proposed method yields higher precision in localizing metastatic tumors.
Affiliation(s)
- Hanene Sahli
- Laboratory of Signal Image and Energy Mastery (SIME), LR13ES03, University of Tunis, ENSIT, 1008, Tunis, Tunisia
- Amine Ben Slama
- Laboratory of Biophysics and Medical Technologies, LR13ES07, University of Tunis EL Manar, ISTMT, 1006, Tunis, Tunisia
- Salam Labidi
- Laboratory of Biophysics and Medical Technologies, LR13ES07, University of Tunis EL Manar, ISTMT, 1006, Tunis, Tunisia
29
An Optimized Approach for Prostate Image Segmentation Using K-Means Clustering Algorithm with Elbow Method. Computational Intelligence and Neuroscience 2021; 2021:4553832. [PMID: 34819951 PMCID: PMC8608531 DOI: 10.1155/2021/4553832]
Abstract
Prostate cancer is one of the most common cancers affecting men worldwide. Prostate-specific membrane antigen (PSMA), a type II membrane protein, is an extremely attractive target for imaging-based diagnosis of prostate cancer. Clinically, photodynamic therapy (PDT) is used as a noninvasive therapy in the treatment of several cancers and some other diseases. This paper aims to segment, cluster and analyze pixels of histological and near-infrared (NIR) prostate cancer images acquired with low-molecular-weight PSMA-targeting PDT agents. Such agents can provide image guidance for resection of prostate tumors and permit subsequent PDT to remove remaining or non-eradicable cancer cells. The color prostate image segmentation is accomplished using an optimized approach that combines the k-means clustering algorithm with the elbow method, which can yield better pixel clustering by automatically determining the best number of clusters. Cluster statistics and pixel ratios in the segmented images show the applicability of the proposed approach for determining the optimum number of clusters for prostate cancer analysis and diagnosis.
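The k-means-plus-elbow idea described above can be sketched compactly. The toy example below uses invented 1-D "intensity" data; a deterministic quantile initialization stands in for random seeding, and the elbow is picked as the point on the inertia curve farthest below the chord joining its endpoints, one common variant of the elbow criterion:

```python
import numpy as np

def kmeans(x, k, iters=50):
    """Lloyd's k-means with deterministic quantile initialization."""
    centers = np.quantile(x, np.linspace(0, 1, k), axis=0)
    for _ in range(iters):
        labels = ((x[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([x[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    inertia = ((x - centers[labels]) ** 2).sum()  # within-cluster sum of squares
    return labels, float(inertia)

def elbow_k(x, kmax=8):
    """Pick the k whose (k, inertia) point lies farthest from the chord
    joining the first and last points of the inertia curve."""
    ks = np.arange(1, kmax + 1)
    inertias = np.array([kmeans(x, k)[1] for k in ks])
    dk, di = kmax - 1, inertias[-1] - inertias[0]  # chord direction
    cross = np.abs((ks - 1) * di - (inertias - inertias[0]) * dk)
    return int(ks[cross.argmax()])

# synthetic 1-D "pixel intensities" with three well-separated clusters
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(m, 0.05, 200) for m in (0.1, 0.5, 0.9)])[:, None]
print(elbow_k(x))   # → 3
```

The inertia always decreases as k grows, so the elbow heuristic looks for the point where further clusters stop paying off.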
30
Liu X, Sun Z, Han C, Cui Y, Huang J, Wang X, Zhang X, Wang X. Development and validation of the 3D U-Net algorithm for segmentation of pelvic lymph nodes on diffusion-weighted images. BMC Med Imaging 2021; 21:170. [PMID: 34774001 PMCID: PMC8590773 DOI: 10.1186/s12880-021-00703-3]
Abstract
Background: The 3D U-Net model has been shown to perform well in automatic organ segmentation. The aim of this study was to evaluate the feasibility of the 3D U-Net algorithm for automated detection and segmentation of lymph nodes (LNs) on pelvic diffusion-weighted imaging (DWI) images. Methods: A total of 393 DWI images of patients suspected of having prostate cancer (PCa) between January 2019 and December 2020 were collected for model development. Seventy-seven DWI images from another group of PCa patients imaged between January 2021 and April 2021 were collected for temporal validation. Segmentation performance was assessed using the Dice score, positive predictive value (PPV), true positive rate (TPR), volumetric similarity (VS), Hausdorff distance (HD), average distance (AVD), and Mahalanobis distance (MHD), with manual annotation of pelvic LNs as the reference. The accuracy with which suspicious metastatic LNs (short diameter > 0.8 cm) were detected was evaluated using the area under the curve (AUC) at the patient level, and the precision, recall, and F1-score were determined at the lesion level. The consistency of LN staging on a hold-out test dataset between the model and a radiologist was assessed using Cohen's kappa coefficient. Results: In the testing set used for model development, the Dice score, TPR, PPV, VS, HD, AVD and MHD values for the segmentation of suspicious LNs were 0.85, 0.82, 0.80, 0.86, 2.02 mm, 2.01 mm, and 1.54 mm, respectively. The precision, recall, and F1-score for the detection of suspicious LNs were 0.97, 0.98 and 0.97, respectively. In the temporal validation dataset, the AUC of the model for identifying PCa patients with suspicious LNs was 0.963 (95% CI: 0.892–0.993). High consistency of LN staging (kappa = 0.922) was achieved between the model and an expert radiologist. Conclusion: The 3D U-Net algorithm can accurately detect and segment pelvic LNs based on DWI images.
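Cohen's kappa, used above to compare model and radiologist staging, corrects raw agreement for the agreement expected by chance from the two raters' marginal frequencies. A small self-contained sketch (the toy labels are invented, not the study's data):

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the chance agreement from marginal frequencies."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_o = np.mean(r1 == r2)                       # observed agreement
    cats = np.union1d(r1, r2)
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# toy N0/N1 stagings from a "model" and a "radiologist"
model = [0, 0, 1, 1]
reader = [0, 1, 1, 1]
print(cohens_kappa(model, reader))   # → 0.5
```

Here the two raters agree on 3 of 4 cases (p_o = 0.75), but half of that agreement is expected by chance (p_e = 0.5), so kappa is 0.5 rather than 0.75.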
Affiliation(s)
- Xiang Liu
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Zhaonan Sun
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Chao Han
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Yingpu Cui
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Jiahao Huang
- Beijing Smart Tree Medical Technology Co. Ltd., No.24, Huangsi Street, Xicheng District, Beijing, 100011, China
- Xiangpeng Wang
- Beijing Smart Tree Medical Technology Co. Ltd., No.24, Huangsi Street, Xicheng District, Beijing, 100011, China
- Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Xiaoying Wang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
31
Bardis M, Houshyar R, Chantaduly C, Tran-Harding K, Ushinsky A, Chahine C, Rupasinghe M, Chow D, Chang P. Segmentation of the Prostate Transition Zone and Peripheral Zone on MR Images with Deep Learning. Radiol Imaging Cancer 2021; 3:e200024. [PMID: 33929265 DOI: 10.1148/rycan.2021200024]
Abstract
Purpose: To develop a deep learning model to delineate the transition zone (TZ) and peripheral zone (PZ) of the prostate on MR images. Materials and Methods: This retrospective study comprised patients who underwent multiparametric prostate MRI and MRI/transrectal US fusion biopsy between January 2013 and May 2016. A board-certified abdominal radiologist manually segmented the prostate, TZ, and PZ on the entire data set. Included accessions were split into 60% training, 20% validation, and 20% test data sets for model development. Three convolutional neural networks with a U-Net architecture were trained for automatic recognition of the prostate organ, TZ, and PZ. Segmentation performance was assessed using Dice scores and Pearson correlation coefficients. Results: A total of 242 patients were included (242 MR images; 6292 total images). Models for prostate organ, TZ, and PZ segmentation were trained and validated. On the test data set, the mean Dice score for prostate organ segmentation was 0.940 (interquartile range, 0.930-0.961), with a Pearson correlation coefficient for volume of 0.981 (95% CI: 0.966, 0.989); for TZ segmentation, the mean Dice score was 0.910 (interquartile range, 0.894-0.938), with a Pearson correlation coefficient for volume of 0.992 (95% CI: 0.985, 0.995); for PZ segmentation, the mean Dice score was 0.774 (interquartile range, 0.727-0.832), with a Pearson correlation coefficient for volume of 0.927 (95% CI: 0.870, 0.957). Conclusion: Deep learning with an architecture composed of three U-Nets can accurately segment the prostate, TZ, and PZ. Keywords: MRI, Genital/Reproductive, Prostate, Neural Networks.
Affiliation(s)
- Michelle Bardis, Roozbeh Houshyar, Chanon Chantaduly, Karen Tran-Harding, Alexander Ushinsky, Chantal Chahine, Mark Rupasinghe, Daniel Chow, Peter Chang
- From the Department of Radiological Sciences, University of California, Irvine, 101 The City Drive South, Building 55, Suite 201, Orange, CA 92868 (M.B., R.H., K.T.H., C. Chahine, M.R.); Center for Artificial Intelligence in Diagnostic Medicine, University of California, Irvine, Irvine, Calif (C. Chantaduly, D.C., P.C.); and Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Mo (A.U.)
32
Prostate Cancer Radiogenomics-From Imaging to Molecular Characterization. Int J Mol Sci 2021; 22:9971. [PMID: 34576134 PMCID: PMC8465891 DOI: 10.3390/ijms22189971]
Abstract
Radiomics and genomics represent two of the most promising fields of cancer research, designed to improve the risk stratification and disease management of patients with prostate cancer (PCa). Radiomics involves the conversion of imaging data into quantitative features using manual or automated algorithms, enhancing existing data through mathematical analysis; this could increase the clinical value in PCa management. When features are extracted from imaging methods such as magnetic resonance imaging (MRI), analysis using machine learning and artificial intelligence could help make the best clinical decisions. Genomics information can be explained or decoded by radiomics. The development of these methodologies can create more efficient predictive models and better characterize the molecular features of PCa. Additionally, the identification of new imaging biomarkers can overcome the known heterogeneity of PCa through noninvasive radiological assessment of the whole organ. In the future, validation of recent findings in large, randomized cohorts of PCa patients could establish the role of radiogenomics. Briefly, we aimed to review the current literature of quantitative and qualitative results from well-designed studies on the diagnosis, treatment, and follow-up of prostate cancer, based on radiomics, genomics and radiogenomics research.
33
Kurata Y, Nishio M, Moribata Y, Kido A, Himoto Y, Otani S, Fujimoto K, Yakami M, Minamiguchi S, Mandai M, Nakamoto Y. Automatic segmentation of uterine endometrial cancer on multi-sequence MRI using a convolutional neural network. Sci Rep 2021; 11:14440. [PMID: 34262088 PMCID: PMC8280152 DOI: 10.1038/s41598-021-93792-7]
Abstract
Endometrial cancer (EC) is the most common gynecological tumor in developed countries, and preoperative risk stratification is essential for personalized medicine. There have been several radiomics studies for noninvasive risk stratification of EC using MRI. Although tumor segmentation is usually necessary for these studies, manual segmentation is not only labor-intensive but may also be subjective. Our study therefore aimed to perform automatic segmentation of EC on MRI with a convolutional neural network. The effects of the input image sequences and batch size on segmentation performance were also investigated. Of 200 patients with EC, 180 were used to train the modified U-Net model and 20 to test segmentation performance and the robustness of automatically extracted radiomics features. Using multi-sequence images and a larger batch size was effective for improving segmentation accuracy. The mean Dice similarity coefficient, sensitivity, and positive predictive value of our model for the test set were 0.806, 0.816, and 0.834, respectively. The robustness of automatically extracted first-order and shape-based features was high (median ICC = 0.86 and 0.96, respectively). Other high-order features presented moderate to high robustness (median ICC = 0.57-0.93). Our model could automatically segment EC on MRI and extract radiomics features with high reliability.
Affiliation(s)
- Yasuhisa Kurata
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Mizuho Nishio
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe, 650-0017, Japan
- Yusaku Moribata
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Preemptive Medicine and Lifestyle-Related Disease Research Center, Kyoto University Hospital, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Aki Kido
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Yuki Himoto
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Satoshi Otani
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Koji Fujimoto
- Department of Real World Data Research and Development, Graduate School of Medicine, Kyoto University, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Masahiro Yakami
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Preemptive Medicine and Lifestyle-Related Disease Research Center, Kyoto University Hospital, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Sachiko Minamiguchi
- Department of Diagnostic Pathology, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Masaki Mandai
- Department of Gynecology and Obstetrics, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Yuji Nakamoto
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
34
Abstract
PURPOSE OF REVIEW: Artificial intelligence has become popular in medical applications, specifically as a clinical support tool for computer-aided diagnosis. These tools are typically applied to medical data (e.g., images, molecular data, clinical variables) and use statistical and machine-learning methods to measure model performance. In this review, we summarize and discuss the most recent radiomic pipelines used for clinical analysis. RECENT FINDINGS: Currently, cancer management benefits from artificial intelligence in only limited ways, mostly through computer-aided diagnosis that avoids biopsy analysis, which carries additional risks and costs. Most artificial intelligence tools are based on imaging features, known as radiomic analysis, that can be refined into predictive models from noninvasively acquired imaging data. This review explores the progress of artificial intelligence-based radiomic tools for clinical applications, with a brief description of the necessary technical steps. We describe new radiomic approaches based on deep-learning techniques and explain how these models (deep radiomic analysis) can benefit from deep convolutional neural networks and be applied to limited data sets. SUMMARY: Further investigations are recommended to involve deep learning in radiomic models, with additional validation steps on various cancer types.
Affiliation(s)
- Ahmad Chaddad
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin, China
- Yousef Katib
- Department of Radiology, Taibah University, Al-Madinah, Saudi Arabia
- Lama Hassan
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin, China
35
Abstract
PURPOSE OF REVIEW: The purpose of this review was to identify the most recent lines of research focusing on the application of artificial intelligence (AI) to the diagnosis and staging of prostate cancer (PCa) with imaging. RECENT FINDINGS: The majority of studies focused on improving the interpretation of bi-parametric and multiparametric magnetic resonance imaging, and on planning image-guided biopsy. These initial studies showed that AI methods based on convolutional neural networks could achieve a diagnostic performance close to that of radiologists. In addition, these methods could improve segmentation and reduce inter-reader variability. Methods based on both clinical and imaging findings could help identify high-grade PCa and more aggressive disease, thus guiding treatment decisions. Though these initial results are promising, only a few studies have addressed the repeatability and reproducibility of the investigated AI tools. Furthermore, large-scale validation studies are missing, and no diagnostic phase III or higher studies proving improved outcomes in clinical decision making have been conducted. SUMMARY: AI techniques have the potential to significantly improve and simplify the diagnosis, risk stratification and staging of PCa. Larger studies with a focus on quality standards are needed to allow widespread introduction of AI into clinical practice.
Affiliation(s)
- Pascal A T Baltzer
- Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
36
Artificial Intelligence and Machine Learning in Prostate Cancer Patient Management-Current Trends and Future Perspectives. Diagnostics (Basel) 2021; 11:354. [PMID: 33672608 PMCID: PMC7924061 DOI: 10.3390/diagnostics11020354]
Abstract
Artificial intelligence (AI) is the field of computer science that aims to build smart devices performing tasks that currently require human intelligence. Through machine learning (ML) and, in particular, deep learning (DL), computers are taught to learn by example, something human beings do naturally. AI is revolutionizing healthcare. Digital pathology is increasingly assisted by AI, helping researchers analyze larger data sets and deliver faster, more accurate diagnoses of prostate cancer lesions. When applied to diagnostic imaging, AI has shown excellent accuracy in the detection of prostate lesions as well as in the prediction of patient outcomes in terms of survival and treatment response. The enormous quantity of data coming from the prostate tumor genome requires the fast, reliable and accurate computing power provided by machine-learning algorithms. Radiotherapy is an essential part of the treatment of prostate cancer, and it is often difficult to predict its toxicity for individual patients. Artificial intelligence could play a future role in predicting how a patient will react to therapy side effects, providing doctors with better insights on how to plan radiotherapy treatment. The extension of the capabilities of surgical robots toward more autonomous tasks will allow them to use information from the surgical field, recognize issues and implement the proper actions without the need for human intervention.
37
Magnetic Resonance Imaging Based Radiomic Models of Prostate Cancer: A Narrative Review. Cancers (Basel) 2021; 13:552. [PMID: 33535569 PMCID: PMC7867056 DOI: 10.3390/cancers13030552]
Abstract
Simple Summary The increasing interest in implementing artificial intelligence in radiomic models has occurred alongside advancement in the tools used for computer-aided diagnosis. Such tools typically apply both statistical and machine learning methodologies to assess the various modalities used in medical image analysis. Specific to prostate cancer, the radiomics pipeline has multiple facets that are amenable to improvement. This review discusses the steps of a magnetic resonance imaging based radiomics pipeline. Present successes, existing opportunities for refinement, and the most pertinent pending steps leading to clinical validation are highlighted. Abstract The management of prostate cancer (PCa) is dependent on biomarkers of biological aggression. This includes an invasive biopsy to facilitate a histopathological assessment of the tumor’s grade. This review explores the technical processes of applying magnetic resonance imaging based radiomic models to the evaluation of PCa. By exploring how a deep radiomics approach further optimizes the prediction of a PCa’s grade group, it will be clear how this integration of artificial intelligence mitigates existing major technological challenges faced by a traditional radiomic model: image acquisition, small data sets, image processing, labeling/segmentation, informative features, predicting molecular features and incorporating predictive models. Other potential impacts of artificial intelligence on the personalized treatment of PCa will also be discussed. The role of deep radiomics analysis-a deep texture analysis, which extracts features from convolutional neural networks layers, will be highlighted. Existing clinical work and upcoming clinical trials will be reviewed, directing investigators to pertinent future directions in the field. 
For future progress to result in clinical translation, the field will likely require multi-institutional collaboration in producing prospectively populated and expertly labeled imaging libraries.
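As an illustration of the "informative features" step of the radiomics pipeline described above, the following minimal sketch (a hypothetical example, not taken from the review) computes a few first-order, gray-level-histogram features from a masked region of interest using NumPy; the feature set and bin count are illustrative assumptions:

```python
import numpy as np

def first_order_features(image, mask, n_bins=32):
    """Compute simple first-order radiomic features from a masked ROI.

    This is an illustrative sketch; production pipelines use dedicated
    radiomics libraries with standardized feature definitions.
    """
    roi = image[mask > 0].astype(float)
    hist, _ = np.histogram(roi, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins before taking logarithms
    return {
        "mean": roi.mean(),
        "std": roi.std(),
        "skewness": ((roi - roi.mean()) ** 3).mean() / (roi.std() ** 3 + 1e-12),
        "entropy": -(p * np.log2(p)).sum(),
    }

# Toy 2D "slice" with a square ROI standing in for a segmented lesion
rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(64, 64))
msk = np.zeros((64, 64), dtype=int)
msk[16:48, 16:48] = 1
feats = first_order_features(img, msk)
```

In a real pipeline, such features would be computed per MRI sequence and per segmented lesion, then fed to a classifier alongside shape, texture, and wavelet features.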
|
38
|
Saunders SL, Leng E, Spilseth B, Wasserman N, Metzger GJ, Bolan PJ. Training Convolutional Networks for Prostate Segmentation With Limited Data. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2021; 9:109214-109223. [PMID: 34527506 PMCID: PMC8438764 DOI: 10.1109/access.2021.3100585] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
Multi-zonal segmentation is a critical component of computer-aided diagnostic systems for detecting and staging prostate cancer. Previously, convolutional neural networks such as the U-Net have been used to produce fully automatic multi-zonal prostate segmentation on magnetic resonance images (MRIs) with performance comparable to human experts, but these often require large amounts of manually segmented training data to produce acceptable results. For institutions that have limited amounts of labeled MRI exams, it is not clear how much data is needed to train a segmentation model, or which training strategy should be used to maximize the value of the available data. This work compares how the strategies of transfer learning and aggregated training using publicly available external data can improve segmentation performance on internal, site-specific prostate MR images, and evaluates how performance varies with the amount of internal data used for training. Cross-training experiments were performed to show that differences between internal and external data were impactful. Using a standard U-Net architecture, optimizations were performed to select between 2D and 3D variants and to determine the depth of fine-tuning required for optimal transfer learning. With the optimized architecture, the performance of transfer learning and aggregated training was compared for a range of 5-40 internal datasets. The results show that both strategies consistently improved performance and produced segmentation results comparable to those of human experts with approximately 20 site-specific MRI datasets. These findings can help guide the development of site-specific prostate segmentation models for both clinical and research applications.
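Segmentation quality in studies like this one is typically scored with the Dice similarity coefficient; the abstract does not name the metric, so its use here is an assumption, but the formula itself is standard. A minimal sketch on binary masks:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example: two 4x4 squares offset by one pixel (16 voxels each, 9 overlap)
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
score = dice_coefficient(a, b)  # 2 * 9 / (16 + 16) = 0.5625
```

In practice the score would be computed per zone (e.g., peripheral zone and transition zone separately) and compared against inter-expert agreement to judge whether a model trained on ~20 site-specific exams is adequate.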
Affiliation(s)
- Sara L Saunders
- Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA
| | - Ethan Leng
- Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA
| | - Benjamin Spilseth
- Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA
| | - Neil Wasserman
- Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA
| | - Gregory J Metzger
- Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
| | - Patrick J Bolan
- Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
| |
|