1
Horasan A, Güneş A. Advancing Prostate Cancer Diagnosis: A Deep Learning Approach for Enhanced Detection in MRI Images. Diagnostics (Basel) 2024; 14:1871. [PMID: 39272656; PMCID: PMC11393904; DOI: 10.3390/diagnostics14171871]
Abstract
Prostate cancer remains a leading cause of mortality among men globally, necessitating advancements in diagnostic methodologies to improve detection and treatment outcomes. Magnetic resonance imaging (MRI) has emerged as a crucial technique for the detection of prostate cancer, with current research focusing on the integration of deep learning frameworks to refine this diagnostic process. This study employs a comprehensive approach using multiple deep learning models, including a three-dimensional (3D) convolutional neural network, a residual network, and an Inception network, to enhance the accuracy and robustness of prostate cancer detection. By leveraging the complementary strengths of these models through an ensemble method with soft voting, the study aims to achieve superior diagnostic performance. The proposed methodology demonstrates state-of-the-art results, with the ensemble model achieving an overall accuracy of 91.3%, a sensitivity of 90.2%, a specificity of 92.1%, a precision of 89.8%, and an F1 score of 90.0% when applied to MRI images from the SPIE-AAPM-NCI PROSTATEx dataset. Evaluation of the models involved meticulous pre-processing, data augmentation, and the use of advanced deep learning architectures to analyze whole MRI slices and volumes. The findings highlight the potential of an ensemble approach to significantly improve prostate cancer diagnostics, offering a robust and precise tool for clinical applications.
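The soft-voting ensemble described above can be sketched in a few lines: the class probabilities produced by the individual networks are averaged and the argmax taken. The probability values below are illustrative, not taken from the study.

```python
import numpy as np

def soft_vote(prob_maps):
    """Average class probabilities from several models (soft voting)."""
    return np.mean(prob_maps, axis=0)

# Illustrative [benign, malignant] probabilities from three hypothetical
# models (3D CNN, residual network, Inception network) for one case.
p_3dcnn = np.array([0.30, 0.70])
p_resnet = np.array([0.40, 0.60])
p_inception = np.array([0.20, 0.80])

avg = soft_vote([p_3dcnn, p_resnet, p_inception])
prediction = int(np.argmax(avg))  # 1 -> malignant
```

Soft voting keeps the calibration information that hard (majority) voting discards, which is why it tends to help when the member models disagree with different confidence levels.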
Affiliation(s)
- Alparslan Horasan
- Computer Engineering Department, Istanbul Aydin University, 34150 Istanbul, Turkey
- Ali Güneş
- Computer Engineering Department, Istanbul Aydin University, 34150 Istanbul, Turkey
2
Morelli L, Paganelli C, Marvaso G, Parrella G, Annunziata S, Vicini MG, Zaffaroni M, Pepa M, Summers PE, De Cobelli O, Petralia G, Jereczek-Fossa BA, Baroni G. Addressing intra- and inter-institution variability of a radiomic framework based on Apparent Diffusion Coefficient in prostate cancer. Med Phys 2024. [PMID: 39172115; DOI: 10.1002/mp.17355]
Abstract
BACKGROUND Prostate cancer (PCa) is a highly heterogeneous disease, making tailored treatment approaches challenging. Magnetic resonance imaging (MRI), notably diffusion-weighted imaging (DWI) and the derived apparent diffusion coefficient (ADC) maps, plays a crucial role in PCa characterization. In this context, radiomics is a promising approach for extracting quantitative insights from MRI data. However, the sensitivity of radiomic features to MRI settings, encompassing DWI protocols and multicenter variations, requires the development of robust and generalizable models. PURPOSE To develop a comprehensive radiomics framework for noninvasive PCa characterization using ADC maps, focusing on identifying imaging biomarkers that are reliable against intra- and inter-institution variations. MATERIALS AND METHODS Two patient cohorts were employed in the study: an internal cohort (118 PCa patients) used for both training (75%) and hold-out testing (25%), and an external cohort (50 PCa patients) for independent testing. DWI images were acquired with three different DWI protocols on two different MRI scanners: two protocols on a 1.5-T scanner for the internal cohort, and one protocol on a 3-T scanner for the external cohort. One hundred and seven radiomic features (shape, first-order, and texture) were extracted from ADC maps of the whole prostate gland. To address variations in DWI protocols and multicenter variability, a dedicated pipeline including two-way ANOVA, sequential feature selection (SFS), and ComBat feature harmonization was implemented. Mann-Whitney U-tests (α = 0.05) were performed to find statistically significant features distinguishing patients with different tumor characteristics in terms of Gleason score (GS) and T-stage. Support vector machine models were then developed to predict GS and T-stage, and performance was assessed through the area under the receiver operating characteristic curve (AUC).
RESULTS Downstream of the ANOVA, two subsets of 38 and 41 features stable against DWI protocol were identified for GS and T-stage, respectively. Among these, SFS revealed the most predictive features, yielding an AUC of 0.75 (GS) and 0.70 (T-stage) in the hold-out test. Employing ComBat harmonization improved the external-test performance of the GS model, raising the AUC from 0.72 to 0.78. CONCLUSION By incorporating stable features with a harmonization procedure and validating the model on an external dataset, model robustness and generalizability were assessed, highlighting the potential of ADC and radiomics for PCa characterization.
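A note on the statistics used above: the Mann-Whitney U statistic and the ROC AUC are equivalent up to normalization (AUC = U / (n_pos · n_neg)), which is why the same ranking information underlies both the feature tests and the reported AUCs. A minimal numpy sketch with made-up model scores:

```python
import numpy as np

def auc_from_ranks(scores_pos, scores_neg):
    """AUC as the normalised Mann-Whitney U statistic: the probability that
    a randomly chosen positive case scores higher than a randomly chosen
    negative case (ties count 0.5)."""
    scores_pos = np.asarray(scores_pos, float)
    scores_neg = np.asarray(scores_neg, float)
    wins = (scores_pos[:, None] > scores_neg[None, :]).sum()
    ties = (scores_pos[:, None] == scores_neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Hypothetical radiomic-model scores for high- vs low-Gleason-score patients
auc = auc_from_ranks([0.9, 0.8, 0.6], [0.7, 0.4, 0.3])  # 8 of 9 pairs ranked correctly
```

In practice one would use `scipy.stats.mannwhitneyu` or `sklearn.metrics.roc_auc_score`; the pairwise form above just makes the equivalence explicit.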
Affiliation(s)
- Letizia Morelli
- Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Milan, Italy
- Chiara Paganelli
- Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Milan, Italy
- Giulia Marvaso
- Department of Radiation Oncology, European Institute of Oncology (IEO), Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Giovanni Parrella
- Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Milan, Italy
- Simone Annunziata
- Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Milan, Italy
- Maria Giulia Vicini
- Department of Radiation Oncology, European Institute of Oncology (IEO), Milan, Italy
- Mattia Zaffaroni
- Department of Radiation Oncology, European Institute of Oncology (IEO), Milan, Italy
- Matteo Pepa
- Department of Radiation Oncology, European Institute of Oncology (IEO), Milan, Italy
- Ottavio De Cobelli
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Department of Urology, European Institute of Oncology (IEO), Milan, Italy
- Giuseppe Petralia
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Department of Radiology, European Institute of Oncology (IEO), Milan, Italy
- Barbara Alicja Jereczek-Fossa
- Department of Radiation Oncology, European Institute of Oncology (IEO), Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Guido Baroni
- Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Milan, Italy
3
Fassia MK, Balasubramanian A, Woo S, Vargas HA, Hricak H, Konukoglu E, Becker AS. Deep Learning Prostate MRI Segmentation Accuracy and Robustness: A Systematic Review. Radiol Artif Intell 2024; 6:e230138. [PMID: 38568094; PMCID: PMC11294957; DOI: 10.1148/ryai.230138]
Abstract
Purpose To investigate the accuracy and robustness of prostate segmentation using deep learning across various training data sizes, MRI vendors, prostate zones, and testing methods relative to fellowship-trained diagnostic radiologists. Materials and Methods In this systematic review, the Embase, PubMed, Scopus, and Web of Science databases were queried for English-language articles published up to July 31, 2022, using keywords and related terms for prostate MRI segmentation and deep learning algorithms. A total of 691 articles from the search query were collected and subsequently filtered to 48 on the basis of predefined inclusion and exclusion criteria. Multiple characteristics were extracted from selected studies, such as deep learning algorithm performance, MRI vendor, and training dataset features. The primary outcome was comparison of the mean Dice similarity coefficient (DSC) for prostate segmentation for deep learning algorithms versus diagnostic radiologists. Results Forty-eight studies were included. Most published deep learning algorithms for whole prostate gland segmentation (39 of 42 [93%]) had a DSC at or above expert level (DSC ≥ 0.86). The mean DSC was 0.79 ± 0.06 (SD) for peripheral zone, 0.87 ± 0.05 for transition zone, and 0.90 ± 0.04 for whole prostate gland segmentation. For studies that used one major MRI vendor, the mean DSCs were as follows: General Electric (three of 48 studies), 0.92 ± 0.03; Philips (four of 48 studies), 0.92 ± 0.02; and Siemens (six of 48 studies), 0.91 ± 0.03. Conclusion Deep learning algorithms for prostate MRI segmentation demonstrated accuracy similar to that of expert radiologists despite varying parameters; therefore, future research should shift toward evaluating segmentation robustness and patient outcomes across diverse clinical settings. Keywords: MRI, Genital/Reproductive, Prostate Segmentation, Deep Learning. Systematic review registration link: osf.io/nxaev © RSNA, 2024.
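The Dice similarity coefficient used as the primary outcome above compares a predicted mask with ground truth as twice the overlap divided by the sum of the two mask sizes. A toy numpy sketch (the masks are fabricated for illustration):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) on binary masks."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 4x4 masks: the prediction recovers 2 of 4 ground-truth voxels
# and adds one false-positive voxel.
truth = np.zeros((4, 4), int); truth[1:3, 1:3] = 1
pred = np.zeros((4, 4), int);  pred[1:3, 1:2] = 1; pred[3, 3] = 1
d = dice(pred, truth)  # 2*2 / (3 + 4) ≈ 0.571
```

A DSC of 0.86, the expert-level threshold cited above, thus means the overlap is 86% of the average mask size; DSC penalizes both missed and spurious voxels symmetrically.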
Affiliation(s)
- Mohammad-Kasim Fassia, Adithya Balasubramanian, Sungmin Woo, Hebert Alberto Vargas, Hedvig Hricak, Ender Konukoglu, Anton S. Becker
- From the Departments of Radiology (M.K.F.) and Urology (A.B.), New York-Presbyterian Weill Cornell Medical Center, 525 E 68th St, New York, NY 10065-4870; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY (S.W., H.A.V., H.H., A.S.B.); and Department of Biomedical Imaging, ETH Zurich, Zurich, Switzerland (E.K.)
4
Talyshinskii A, Hameed BMZ, Ravinder PP, Naik N, Randhawa P, Shah M, Rai BP, Tokas T, Somani BK. Catalyzing Precision Medicine: Artificial Intelligence Advancements in Prostate Cancer Diagnosis and Management. Cancers (Basel) 2024; 16:1809. [PMID: 38791888; PMCID: PMC11119252; DOI: 10.3390/cancers16101809]
Abstract
BACKGROUND The aim was to analyze the current state of deep learning (DL)-based prostate cancer (PCa) diagnosis with a focus on magnetic resonance (MR) prostate reconstruction; PCa detection/stratification/reconstruction; positron emission tomography/computed tomography (PET/CT); androgen deprivation therapy (ADT); prostate biopsy; and the associated challenges and their clinical implications. METHODS A search of the PubMed database was conducted based on inclusion and exclusion criteria for the use of DL methods within the abovementioned areas. RESULTS A total of 784 articles were found, of which 64 were included. Reconstruction of the prostate, detection and stratification of prostate cancer, reconstruction of prostate cancer, and diagnosis on PET/CT, ADT, and biopsy were analyzed in 21, 22, 6, 7, 2, and 6 studies, respectively. Among the studies describing DL use for MR-based purposes, datasets acquired at 3 T, at 1.5 T, and at both field strengths were used in 18/19/5, 0/1/0, and 3/2/1 of the reconstruction/detection-stratification/PCa-reconstruction studies, respectively. Six of the seven studies analyzing DL for PET/CT diagnosis used data from a single institution. Among the radiotracers, [68Ga]Ga-PSMA-11, [18F]DCFPyL, and [18F]PSMA-1007 were used in 5, 1, and 1 study, respectively. Only two studies that analyzed DL in the context of ADT met the inclusion criteria; both were performed with a single-institution dataset with only manual labeling of training data. The studies analyzing DL for prostate biopsy were performed with single- and multi-institutional datasets, with TeUS, TRUS, and MRI used as input modalities in two, three, and one study, respectively. CONCLUSION DL models in prostate cancer diagnosis show promise but are not yet ready for clinical use due to variability in methods, labels, and evaluation criteria. Conducting additional research while acknowledging all the limitations outlined is crucial for reinforcing the utility and effectiveness of DL-based models in clinical settings.
Affiliation(s)
- Ali Talyshinskii
- Department of Urology and Andrology, Astana Medical University, Astana 010000, Kazakhstan
- Prajwal P. Ravinder
- Department of Urology, Kasturba Medical College, Mangaluru, Manipal Academy of Higher Education, Manipal 576104, India
- Nithesh Naik
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Princy Randhawa
- Department of Mechatronics, Manipal University Jaipur, Jaipur 303007, India
- Milap Shah
- Department of Urology, Aarogyam Hospital, Ahmedabad 380014, India
- Bhavan Prasad Rai
- Department of Urology, Freeman Hospital, Newcastle upon Tyne NE7 7DN, UK
- Theodoros Tokas
- Department of Urology, Medical School, University General Hospital of Heraklion, University of Crete, 14122 Heraklion, Greece
- Bhaskar K. Somani
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Department of Urology, University Hospital Southampton NHS Trust, Southampton SO16 6YD, UK
5
Ilesanmi AE, Ilesanmi TO, Ajayi BO. Reviewing 3D convolutional neural network approaches for medical image segmentation. Heliyon 2024; 10:e27398. [PMID: 38496891; PMCID: PMC10944240; DOI: 10.1016/j.heliyon.2024.e27398]
Abstract
Background Convolutional neural networks (CNNs) assume pivotal roles in aiding clinicians in diagnosis and treatment decisions. The rapid evolution of imaging technology has established three-dimensional (3D) CNNs as a formidable framework for delineating organs and anomalies in medical images. The prominence of 3D CNN frameworks is steadily growing within medical image segmentation and classification. Thus, our proposition entails a comprehensive review, encapsulating diverse 3D CNN algorithms for the segmentation of medical image anomalies and organs. Methods This study systematically presents an exhaustive review of recent 3D CNN methodologies. Rigorous screening of abstracts and titles was carried out to establish their relevance. Research papers disseminated across academic repositories were meticulously chosen, analyzed, and appraised against specific criteria. Insights into the realm of anomalies and organ segmentation were derived, encompassing details such as network architecture and achieved accuracies. Results This paper offers an all-encompassing analysis, unveiling the prevailing trends in 3D CNN segmentation. In-depth elucidations encompass essential insights, constraints, observations, and avenues for future exploration. A discerning examination indicates the preponderance of the encoder-decoder network in segmentation tasks. The encoder-decoder framework affords a coherent methodology for the segmentation of medical images. Conclusion The findings of this study are poised to find application in clinical diagnosis and therapeutic interventions. Despite inherent limitations, CNN algorithms showcase commendable accuracy levels, solidifying their potential in medical image segmentation and classification endeavors.
Affiliation(s)
- Ademola E. Ilesanmi
- University of Pennsylvania, 3710 Hamilton Walk, 6th Floor, Philadelphia, PA 19104, United States
- Babatunde O. Ajayi
- National Astronomical Research Institute of Thailand, Chiang Mai 50180, Thailand
6
Wang H, Hu Z, Jiang D, Lin R, Zhao C, Zhao X, Zhou Y, Zhu Y, Zeng H, Liang D, Liao J, Li Z. Predicting Antiseizure Medication Treatment in Children with Rare Tuberous Sclerosis Complex-Related Epilepsy Using Deep Learning. AJNR Am J Neuroradiol 2023; 44:1373-1383. [PMID: 38081677; PMCID: PMC10714846; DOI: 10.3174/ajnr.a8053]
Abstract
BACKGROUND AND PURPOSE Tuberous sclerosis complex disease is a rare, multisystem genetic disease, but appropriate drug treatment allows many pediatric patients to have positive outcomes. The purpose of this study was to predict the effectiveness of antiseizure medication treatment in children with tuberous sclerosis complex-related epilepsy. MATERIALS AND METHODS We conducted a retrospective study involving 300 children with tuberous sclerosis complex-related epilepsy. The study included the analysis of clinical data and T2WI and FLAIR images. The clinical data consisted of sex, age of onset, age at imaging, infantile spasms, and antiseizure medication numbers. To forecast antiseizure medication treatment, we developed a multitechnique deep learning method called WAE-Net. This method used multicontrast MR imaging and clinical data. The T2WI and FLAIR images were combined as FLAIR3 to enhance the contrast between tuberous sclerosis complex lesions and normal brain tissues. We trained a clinical data-based model using a fully connected network with the above-mentioned variables. After that, a weighted-average ensemble network built from the ResNet3D architecture was created as the final model. RESULTS The experiments showed that age of onset, age at imaging, infantile spasms, and antiseizure medication numbers were significantly different between the 2 drug-treatment outcomes (P < .05). The hybrid technique of FLAIR3 could accurately localize tuberous sclerosis complex lesions, and the proposed method achieved the best performance (area under the curve = 0.908 and accuracy of 0.847) in the testing cohort among the compared methods. CONCLUSIONS The proposed method could predict antiseizure medication treatment of children with rare tuberous sclerosis complex-related epilepsy and could be a strong baseline for future studies.
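The FLAIR3 idea above fuses T2WI and FLAIR so that lesions conspicuous on both contrasts are enhanced relative to normal tissue. The sketch below uses a simple voxelwise product of min-max-normalized volumes as a stand-in; the exact weighting of the FLAIR3 combination used in the study is not reproduced here.

```python
import numpy as np

def normalize(img):
    """Min-max normalise an image volume to [0, 1]."""
    img = np.asarray(img, float)
    rng = img.max() - img.min()
    return (img - img.min()) / rng if rng > 0 else np.zeros_like(img)

def combine_t2_flair(t2, flair):
    """Illustrative T2WI/FLAIR fusion: voxelwise product of the normalised
    volumes, so only voxels bright on BOTH contrasts stay bright.
    (Not the study's exact FLAIR3 formula.)"""
    return normalize(t2) * normalize(flair)

# Tiny 2x2 "volumes": only the bottom-right voxel is bright on both inputs
t2 = np.array([[0.0, 2.0], [4.0, 8.0]])
flair = np.array([[0.0, 8.0], [2.0, 4.0]])
fused = combine_t2_flair(t2, flair)
```

The multiplicative fusion suppresses voxels that are bright on only one contrast (e.g. CSF on T2), which is the behavior the abstract attributes to FLAIR3 for lesion/normal-tissue contrast.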
Affiliation(s)
- Haifeng Wang, Zhanqi Hu, Dian Jiang, Rongbo Lin, Cailei Zhao, Xia Zhao, Yihang Zhou, Yanjie Zhu, Hongwu Zeng, Dong Liang, Jianxiang Liao, Zhicheng Li
- From the Research Center for Medical Artificial Intelligence (H.W., D.J., Y. Zhou, D.L., Z.L.), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Shenzhen College of Advanced Technology (H.W., D.J., Y. Zhu, D.L., Z.L.), University of Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Department of Neurology (Z.H., R.L., X.Z., J.L.), Shenzhen Children's Hospital, Shenzhen, Guangdong, China
- Department of Pediatric Neurology (Z.H.), Boston Children's Hospital, Boston, Massachusetts
- Department of Radiology (C.Z., H.Z.), Shenzhen Children's Hospital, Shenzhen, Guangdong, China
- Research Department (Y. Zhou), Hong Kong Sanatorium and Hospital, Hong Kong, China
- Paul C. Lauterbur Research Center for Biomedical Imaging (Y. Zhu, D.L.), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
7
Weng X, Song F, Tang M, Wang K, Zhang Y, Miao Y, Chan LWC, Lei P, Hu Z, Yang F. MDM-U-Net: A novel network for renal cancer structure segmentation. Comput Med Imaging Graph 2023; 109:102301. [PMID: 37738774; DOI: 10.1016/j.compmedimag.2023.102301]
Abstract
Accurate segmentation of the renal cancer structure, including the kidney, renal tumors, veins, and arteries, has great clinical significance, which can assist clinicians in diagnosing and treating renal cancer. For accurate segmentation of the renal cancer structure in contrast-enhanced computed tomography (CT) images, we proposed a novel encoder-decoder structure segmentation network named MDM-U-Net comprising a multi-scale anisotropic convolution block, dual activation attention block, and multi-scale deep supervision mechanism. The multi-scale anisotropic convolution block was used to improve the feature extraction ability of the network, the dual activation attention block as a channel-wise mechanism was used to guide the network to exploit important information, and the multi-scale deep supervision mechanism was used to supervise the layers of the decoder part for improving segmentation performance. In this study, we developed a feasible and generalizable MDM-U-Net model for renal cancer structure segmentation, trained the model from the public KiPA22 dataset, and tested it on the KiPA22 dataset and an in-house dataset. For the KiPA22 dataset, our method ranked first in renal cancer structure segmentation, achieving state-of-the-art (SOTA) performance in terms of 6 of 12 evaluation metrics (3 metrics per structure). For the in-house dataset, our method achieves SOTA performance in terms of 9 of 12 evaluation metrics (3 metrics per structure), demonstrating its superiority and generalization ability over the compared networks in renal structure segmentation from contrast-enhanced CT scans.
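Multi-scale deep supervision, as used in the decoder above, attaches a loss head to several decoder resolutions and sums the per-scale losses so that intermediate layers receive a direct training signal. A hedged numpy sketch using a soft Dice loss (the scale weights and loss choice are illustrative, not the paper's):

```python
import numpy as np

def dice_loss(pred, truth, eps=1e-6):
    """Soft Dice loss on probability maps: 1 - 2|P∩T| / (|P| + |T|)."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    inter = (pred * truth).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

def deep_supervision_loss(preds_per_scale, truths_per_scale, weights):
    """Multi-scale deep supervision: weighted sum of the loss computed at
    each decoder resolution (ground truth is downsampled per scale)."""
    return sum(w * dice_loss(p, t)
               for w, p, t in zip(weights, preds_per_scale, truths_per_scale))

# Two hypothetical decoder scales with perfect predictions -> loss ~ 0
t_full = np.array([[1.0, 0.0], [0.0, 1.0]])
t_half = np.array([[1.0]])
loss = deep_supervision_loss([t_full, t_half], [t_full, t_half], [0.7, 0.3])
```

In a real network each scale's prediction comes from an auxiliary segmentation head, and the weights typically decay toward the coarser scales.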
Affiliation(s)
- Xin Weng
- School of Biology & Engineering (School of Modern Industry for Health and Medicine), Guizhou Medical University, Guiyang, Guizhou, China
- Fasong Song
- Department of Radiology, The Affiliated Hospital of Guizhou Medical University, Guiyang, Guizhou, China
- Maowen Tang
- Department of Radiology, The Affiliated Hospital of Guizhou Medical University, Guiyang, Guizhou, China
- Kansui Wang
- Department of Radiology, The First Affiliated Hospital of Guizhou University of Traditional Chinese Medicine, Guiyang, Guizhou, China
- Yusui Zhang
- Department of Radiology, The First Affiliated Hospital of Guizhou University of Traditional Chinese Medicine, Guiyang, Guizhou, China
- Yuehong Miao
- School of Biology & Engineering (School of Modern Industry for Health and Medicine), Guizhou Medical University, Guiyang, Guizhou, China
- Lawrence Wing-Chi Chan
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Pinggui Lei
- Department of Radiology, The Affiliated Hospital of Guizhou Medical University, Guiyang, Guizhou, China
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Zuquan Hu
- School of Biology & Engineering (School of Modern Industry for Health and Medicine), Guizhou Medical University, Guiyang, Guizhou, China
- Immune Cells and Antibody Engineering Research Center in University of Guizhou Province, Key Laboratory of Biology and Medical Engineering, Guizhou Medical University, Guiyang, Guizhou, China
- Fan Yang
- School of Biology & Engineering (School of Modern Industry for Health and Medicine), Guizhou Medical University, Guiyang, Guizhou, China
8
Song G, Zhou J, Wang K, Yao D, Chen S, Shi Y. Segmentation of multi-regional skeletal muscle in abdominal CT image for cirrhotic sarcopenia diagnosis. Front Neurosci 2023; 17:1203823. [PMID: 37360174; PMCID: PMC10289291; DOI: 10.3389/fnins.2023.1203823]
Abstract
Background Sarcopenia is generally diagnosed from the total area of skeletal muscle in the axial CT slice at the third lumbar (L3) vertebra. However, in patients with severe liver cirrhosis the total skeletal muscle area cannot be measured accurately because the abdominal muscles are compressed, which affects the diagnosis of sarcopenia. Purpose This study proposes a novel lumbar skeletal muscle network to automatically segment multi-regional skeletal muscle from CT images, and explores the relationship between cirrhotic sarcopenia and each skeletal muscle region. Methods This study improves a residual-enhanced 2.5D U-Net by exploiting the skeletal muscle characteristics of different spatial regions. Specifically, a 3D texture attention enhancement block is proposed to tackle blurred edges with similar intensities and poor segmentation between different skeletal muscle regions; it uses skeletal muscle shape and muscle-fibre texture to spatially constrain the integrity of each skeletal muscle region and alleviate the difficulty of identifying muscle boundaries in axial slices. Subsequently, a 3D encoding branch is constructed in conjunction with the 2.5D U-Net, which segments the lumbar skeletal muscle in multiple L3-related axial CT slices into four regions. Furthermore, the diagnostic cut-off values of the L3 skeletal muscle index (L3SMI) are investigated for identifying cirrhotic sarcopenia in the four muscle regions segmented from CT images of 98 patients with liver cirrhosis. Results Our method is evaluated on 317 CT images using five-fold cross-validation. For the four skeletal muscle regions segmented in the images from the independent test set, the avg. DSC is 0.937 and the avg. surface distance is 0.558 mm.
For sarcopenia diagnosis in 98 patients with liver cirrhosis, the cut-off values of Rectus Abdominis, Right Psoas, Left Psoas, and Paravertebral are 16.67, 4.14, 3.76, and 13.20 cm2/m2 in females, and 22.51, 5.84, 6.10, and 17.28 cm2/m2 in males, respectively. Conclusion The proposed method can segment four skeletal muscle regions related to the L3 vertebra with high accuracy. Furthermore, the analysis shows that the Rectus Abdominis region can be used to assist in the diagnosis of sarcopenia when the total muscle is not available.
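The diagnostic rule above reduces to computing a region-wise skeletal muscle index (area divided by height squared) and comparing it with the sex-specific cut-off. The sketch below uses the Rectus Abdominis cut-offs reported in the abstract; the patient values are hypothetical.

```python
def smi(area_cm2, height_m):
    """Skeletal muscle index: muscle area normalised by height squared (cm^2/m^2)."""
    return area_cm2 / (height_m ** 2)

# Sex-specific Rectus Abdominis cut-offs reported above (cm^2/m^2)
CUTOFF_RECTUS_ABDOMINIS = {"female": 16.67, "male": 22.51}

def sarcopenic_by_rectus_abdominis(area_cm2, height_m, sex):
    """Flag sarcopenia when the regional SMI falls below the sex-specific cut-off."""
    return smi(area_cm2, height_m) < CUTOFF_RECTUS_ABDOMINIS[sex]

# Hypothetical patient: 1.60 m female with a 38 cm^2 rectus abdominis area
# SMI = 38 / 1.60^2 ≈ 14.84 < 16.67 -> flagged
flag = sarcopenic_by_rectus_abdominis(38.0, 1.60, "female")
```

The same pattern applies to the other three regions (Right Psoas, Left Psoas, Paravertebral) with their respective cut-offs from the abstract.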
Collapse
Affiliation(s)
- Genshen Song
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Ji Zhou
- Department of Gastroenterology and Hepatology, Zhongshan Hospital, Fudan University, Shanghai, China
- Kang Wang
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Demin Yao
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Shiyao Chen
- Department of Gastroenterology and Hepatology, Zhongshan Hospital, Fudan University, Shanghai, China
- Yonghong Shi
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Academy for Engineering & Technology, Fudan University, Shanghai, China
9
A two-stage CNN method for MRI image segmentation of prostate with lesion. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104610] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
10
Wu C, Montagne S, Hamzaoui D, Ayache N, Delingette H, Renard-Penna R. Automatic segmentation of prostate zonal anatomy on MRI: a systematic review of the literature. Insights Imaging 2022; 13:202. [PMID: 36543901 PMCID: PMC9772373 DOI: 10.1186/s13244-022-01340-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Accepted: 11/27/2022] [Indexed: 12/24/2022] Open
Abstract
OBJECTIVES Accurate zonal segmentation of prostate boundaries on MRI is a critical prerequisite for automated prostate cancer detection based on PI-RADS. Many articles have been published describing deep learning methods offering great promise for fast and accurate segmentation of prostate zonal anatomy. The objective of this review was to provide a detailed analysis and comparison of the applicability and efficiency of published methods for automatic segmentation of prostate zonal anatomy by systematically reviewing the current literature. METHODS A systematic search following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was conducted up to June 30, 2021, using the PubMed, ScienceDirect, Web of Science and EMBase databases. Risk of bias and applicability were assessed using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) criteria, adjusted with the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). RESULTS A total of 458 articles were identified, and 33 were included and reviewed. Only 2 articles had a low risk of bias for all four QUADAS-2 domains. In the remainder, insufficient detail about database constitution and segmentation protocol provided sources of bias (inclusion criteria, MRI acquisition, ground truth). Eighteen different types of terminology for prostate zone segmentation were found, whereas 4 anatomic zones are described on MRI. Only 2 authors used a blinded reading, and 4 assessed inter-observer variability. CONCLUSIONS Our review identified numerous methodological flaws and underlined biases that precluded us from performing a quantitative analysis for this review. This implies low robustness and low applicability in clinical practice of the evaluated methods. At present, there is no consensus on quality criteria for database constitution and zonal segmentation methodology.
Affiliation(s)
- Carine Wu
- Sorbonne Université, Paris, France
- Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020, Paris, France
- Sarah Montagne
- Sorbonne Université, Paris, France
- Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020, Paris, France
- Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- GRC N° 5, Oncotype-Uro, Sorbonne Université, Paris, France
- Dimitri Hamzaoui
- Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, Nice, France
- Nicholas Ayache
- Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, Nice, France
- Hervé Delingette
- Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, Nice, France
- Raphaële Renard-Penna
- Sorbonne Université, Paris, France
- Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020, Paris, France
- Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- GRC N° 5, Oncotype-Uro, Sorbonne Université, Paris, France
11
Adams LC, Makowski MR, Engel G, Rattunde M, Busch F, Asbach P, Niehues SM, Vinayahalingam S, van Ginneken B, Litjens G, Bressem KK. Dataset of prostate MRI annotated for anatomical zones and cancer. Data Brief 2022; 45:108739. [DOI: 10.1016/j.dib.2022.108739] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 11/03/2022] [Accepted: 11/04/2022] [Indexed: 11/11/2022] Open
12
Adams LC, Makowski MR, Engel G, Rattunde M, Busch F, Asbach P, Niehues SM, Vinayahalingam S, van Ginneken B, Litjens G, Bressem KK. Prostate158 - An expert-annotated 3T MRI dataset and algorithm for prostate cancer detection. Comput Biol Med 2022; 148:105817. [PMID: 35841780 DOI: 10.1016/j.compbiomed.2022.105817] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2022] [Revised: 06/12/2022] [Accepted: 07/03/2022] [Indexed: 11/03/2022]
Abstract
BACKGROUND The development of deep learning (DL) models for prostate segmentation on magnetic resonance imaging (MRI) depends on expert-annotated data and reliable baselines, which are often not publicly available. This limits both reproducibility and comparability. METHODS Prostate158 consists of 158 expert-annotated biparametric 3T prostate MRIs comprising T2w sequences and diffusion-weighted sequences with apparent diffusion coefficient maps. Two U-ResNets trained for segmentation of anatomy (central gland, peripheral zone) and suspicious lesions for prostate cancer (PCa) with a PI-RADS score of ≥4 served as baseline algorithms. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), the Hausdorff distance (HD), and the average surface distance (ASD). The Wilcoxon test with Bonferroni correction was used to evaluate differences in performance. The generalizability of the baseline model was assessed using the open datasets Medical Segmentation Decathlon and PROSTATEx. RESULTS Compared to Reader 1, the models achieved a DSC/HD/ASD of 0.88/18.3/2.2 for the central gland, 0.75/22.8/1.9 for the peripheral zone, and 0.45/36.7/17.4 for PCa. Compared with Reader 2, the DSC/HD/ASD were 0.88/17.5/2.6 for the central gland, 0.73/33.2/1.9 for the peripheral zone, and 0.4/39.5/19.1 for PCa. Interrater agreement measured in DSC/HD/ASD was 0.87/11.1/1.0 for the central gland, 0.75/15.8/0.74 for the peripheral zone, and 0.6/18.8/5.5 for PCa. Segmentation performances on the Medical Segmentation Decathlon and PROSTATEx were 0.82/22.5/3.4 and 0.86/18.6/2.5 for the central gland, and 0.64/29.2/4.7 and 0.71/26.3/2.2 for the peripheral zone, respectively. CONCLUSIONS We provide an openly accessible, expert-annotated 3T dataset of prostate MRI and a reproducible benchmark to foster the development of prostate segmentation algorithms.
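The Dice similarity coefficient used above to score segmentation overlap can be sketched generically as follows; this is a minimal illustration over flat binary masks, not the authors' evaluation code.

```python
# Generic Dice similarity coefficient (DSC) between two binary masks:
# DSC = 2|A ∩ B| / (|A| + |B|). Masks are flat 0/1 sequences here;
# a real pipeline would compute this over 3D label volumes.

def dice_coefficient(pred, truth) -> float:
    """DSC over two equal-length binary (0/1) sequences."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks agree perfectly.
    return 2.0 * intersection / total if total else 1.0

pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0]
# |A ∩ B| = 2, |A| = 3, |B| = 3 → DSC = 4/6 ≈ 0.667
print(round(dice_coefficient(pred, truth), 3))  # → 0.667
```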
Affiliation(s)
- Lisa C Adams
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Charitéplatz 1, 10117, Berlin, Germany
- Marcus R Makowski
- Technical University of Munich, Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Ismaninger Str. 22, 81675, Munich, Germany
- Günther Engel
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany; Institute for Diagnostic and Interventional Radiology, Georg-August University, Göttingen, Germany
- Maximilian Rattunde
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
- Felix Busch
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
- Patrick Asbach
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
- Stefan M Niehues
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
- Shankeeth Vinayahalingam
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Nijmegen, GA, the Netherlands
- Geert Litjens
- Radboud University Medical Center, Nijmegen, GA, the Netherlands
- Keno K Bressem
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Charitéplatz 1, 10117, Berlin, Germany
13
Salvi M, De Santi B, Pop B, Bosco M, Giannini V, Regge D, Molinari F, Meiburger KM. Integration of Deep Learning and Active Shape Models for More Accurate Prostate Segmentation in 3D MR Images. J Imaging 2022; 8:133. [PMID: 35621897 PMCID: PMC9146644 DOI: 10.3390/jimaging8050133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 05/06/2022] [Accepted: 05/09/2022] [Indexed: 01/27/2023] Open
Abstract
Magnetic resonance imaging (MRI) has a growing role in the clinical workup of prostate cancer. However, manual three-dimensional (3D) segmentation of the prostate is a laborious and time-consuming task. In this scenario, automated algorithms for prostate segmentation can relieve physicians of a substantial workload. In this work, we propose a fully automated hybrid approach for prostate gland segmentation in MR images: an initial segmentation of the prostate volume by a custom-made 3D deep network (VNet-T2) is refined with an Active Shape Model (ASM). While the deep network focuses on the three-dimensional spatial coherence of the shape, the ASM relies on local image information, and this joint effort allows for improved segmentation of the organ contours. Our method is developed and tested on a dataset composed of T2-weighted (T2w) MRI prostatic volumes of 60 male patients. On the test set, the proposed method shows excellent segmentation performance, achieving a mean Dice score and Hausdorff distance of 0.851 and 7.55 mm, respectively. In the future, this algorithm could serve as an enabling technology for the development of computer-aided systems for prostate cancer characterization in MR imaging.
Affiliation(s)
- Massimo Salvi
- Biolab, PolitoBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Bruno De Santi
- Multi-Modality Medical Imaging (M3I), Technical Medical Centre, University of Twente, PB217, 7500 AE Enschede, The Netherlands
- Bianca Pop
- Biolab, PolitoBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Martino Bosco
- Department of Pathology, Ospedale Michele e Pietro Ferrero, 12060 Verduno, Italy
- Valentina Giannini
- Department of Surgical Sciences, University of Turin, 10126 Turin, Italy
- Department of Radiology, Candiolo Cancer Institute, FPO-IRCCS, 10060 Candiolo, Italy
- Daniele Regge
- Department of Surgical Sciences, University of Turin, 10126 Turin, Italy
- Department of Radiology, Candiolo Cancer Institute, FPO-IRCCS, 10060 Candiolo, Italy
- Filippo Molinari
- Biolab, PolitoBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Kristen M. Meiburger
- Biolab, PolitoBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
14
Cho HH, Kim CK, Park H. Overview of radiomics in prostate imaging and future directions. Br J Radiol 2022; 95:20210539. [PMID: 34797688 PMCID: PMC8978251 DOI: 10.1259/bjr.20210539] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022] Open
Abstract
Recent advancements in imaging technology and analysis methods have led to an analytic framework known as radiomics. This framework extracts comprehensive high-dimensional features from imaging data and performs data mining to build analytical models for improved decision support. Its features span many categories, including texture and shape; thus, it can provide abundant information for precision medicine. Many studies of prostate radiomics have shown promising results in the assessment of pathological features, prediction of treatment response, and stratification of risk groups. Herein, we aimed to provide a general overview of radiomics procedures, discuss technical issues, explain various clinical applications, and suggest future research directions, especially for prostate imaging.
Affiliation(s)
- Hwan-Ho Cho
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Korea
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Korea
- Chan Kyo Kim
- Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Hyunjin Park
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Korea
- School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, Korea
15
Hamzaoui D, Montagne S, Renard-Penna R, Ayache N, Delingette H. Automatic zonal segmentation of the prostate from 2D and 3D T2-weighted MRI and evaluation for clinical use. J Med Imaging (Bellingham) 2022; 9:024001. [PMID: 35300345 PMCID: PMC8920492 DOI: 10.1117/1.jmi.9.2.024001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 02/23/2022] [Indexed: 11/14/2022] Open
Abstract
Purpose: An accurate zonal segmentation of the prostate is required for prostate cancer (PCa) management with MRI. Approach: The aim of this work is to present UFNet, a deep learning-based method for automatic zonal segmentation of the prostate from T2-weighted (T2w) MRI. It takes into account the image anisotropy, includes both spatial and channelwise attention mechanisms, and uses loss functions to enforce the prostate partition. The method was applied to a private multicentric three-dimensional T2w MRI dataset and to the public two-dimensional T2w MRI dataset ProstateX. To assess the model performance, the structures segmented by the algorithm on the private dataset were compared with those obtained by seven radiologists of various experience levels. Results: On the private dataset, we obtained a Dice score (DSC) of 93.90 ± 2.85 for the whole gland (WG), 91.00 ± 4.34 for the transition zone (TZ), and 79.08 ± 7.08 for the peripheral zone (PZ). Results were significantly better than those of the other networks compared (p-value < 0.05). On ProstateX, we obtained a DSC of 90.90 ± 2.94 for WG, 86.84 ± 4.33 for TZ, and 78.40 ± 7.31 for PZ. These results are similar to state-of-the-art results and, on the private dataset, are coherent with those obtained by radiologists. Zonal locations and sectorial positions of lesions annotated by radiologists were also preserved. Conclusions: Deep learning-based methods can provide an accurate zonal segmentation of the prostate leading to a consistent zonal location and sectorial position of lesions, and therefore can be used as a helping tool for PCa diagnosis.
Affiliation(s)
- Dimitri Hamzaoui
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
- Sarah Montagne
- Sorbonne Université, Radiology Department, CHU La Pitié Salpétrière/Tenon, Paris, France
- Raphaële Renard-Penna
- Sorbonne Université, Radiology Department, CHU La Pitié Salpétrière/Tenon, Paris, France
- Nicholas Ayache
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
- Hervé Delingette
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
16
Li H, Lee CH, Chia D, Lin Z, Huang W, Tan CH. Machine Learning in Prostate MRI for Prostate Cancer: Current Status and Future Opportunities. Diagnostics (Basel) 2022; 12:diagnostics12020289. [PMID: 35204380 PMCID: PMC8870978 DOI: 10.3390/diagnostics12020289] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Revised: 12/31/2021] [Accepted: 01/14/2022] [Indexed: 02/04/2023] Open
Abstract
Advances in our understanding of the role of magnetic resonance imaging (MRI) for the detection of prostate cancer have enabled its integration into clinical routines in the past two decades. The Prostate Imaging Reporting and Data System (PI-RADS) is an established imaging-based scoring system that scores the probability of clinically significant prostate cancer on MRI to guide management. Image fusion technology allows one to combine the superior soft-tissue contrast resolution of MRI with real-time anatomical depiction using ultrasound or computed tomography. This allows the accurate mapping of prostate cancer for targeted biopsy and treatment. Machine learning provides vast opportunities for automated organ and lesion depiction that could increase the reproducibility of PI-RADS categorisation and improve co-registration across imaging modalities, enhancing diagnostic and treatment methods that can then be individualised based on the clinical risk of malignancy. In this article, we provide a comprehensive and contemporary review of advancements, and share insights into new opportunities in this field.
Affiliation(s)
- Huanye Li
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Chau Hung Lee
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- David Chia
- Department of Radiation Oncology, National University Cancer Institute (NUH), Singapore 119074, Singapore
- Zhiping Lin
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Weimin Huang
- Institute for Infocomm Research, A*Star, Singapore 138632, Singapore
- Cher Heng Tan
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 639798, Singapore
17
Prostate Segmentation via Dynamic Fusion Model. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2022. [DOI: 10.1007/s13369-021-06502-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
18
Deep Learning-Based Post-Processing of Real-Time MRI to Assess and Quantify Dynamic Wrist Movement in Health and Disease. Diagnostics (Basel) 2021; 11:diagnostics11061077. [PMID: 34208361 PMCID: PMC8231139 DOI: 10.3390/diagnostics11061077] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Revised: 06/06/2021] [Accepted: 06/09/2021] [Indexed: 12/20/2022] Open
Abstract
While morphologic magnetic resonance imaging (MRI) is the imaging modality of choice for the evaluation of ligamentous wrist injuries, it is merely static and incapable of diagnosing dynamic wrist instability. Based on real-time MRI and algorithm-based image post-processing using convolutional neural networks (CNNs), this study aims to develop and validate an automatic technique to quantify wrist movement. A total of 56 bilateral wrists (28 healthy volunteers) were imaged during continuous and alternating maximum ulnar and radial abduction. Following CNN-based automatic segmentations of carpal bone contours, scapholunate and lunotriquetral gap widths were quantified based on dedicated algorithms and as a function of wrist position. Automatic segmentations were in excellent agreement with manual reference segmentations performed by two radiologists, as indicated by Dice similarity coefficients of 0.96 ± 0.02 and consistent, unskewed Bland–Altman plots. Clinical applicability of the framework was assessed in a patient with a diagnosed scapholunate ligament injury, in whom considerable increases in scapholunate gap widths across the range of motion were found. In conclusion, the combination of real-time wrist MRI and the present framework provides a powerful diagnostic tool for the dynamic assessment of wrist function and, if confirmed in clinical trials, of dynamic carpal instability that may elude static assessment using clinical-standard imaging modalities.
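The gap-width measurement described above can be illustrated with a minimal sketch that takes the gap between two segmented bones as the smallest distance between their contour points; the paper's dedicated algorithms are more involved, and all names and values here are hypothetical.

```python
# Illustrative sketch: approximate the gap between two segmented carpal
# bones as the minimum Euclidean distance between their contour points.
# Contours are lists of (x, y) tuples; names and values are hypothetical.
import math

def min_gap_width(contour_a, contour_b) -> float:
    """Smallest point-to-point distance between two bone contours."""
    return min(
        math.dist(p, q)
        for p in contour_a
        for q in contour_b
    )

# Toy contours: closest points are (1, 0) and (3, 0), two units apart.
bone_a = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
bone_b = [(3.0, 0.0), (4.0, 0.0)]
print(min_gap_width(bone_a, bone_b))  # → 2.0
```

In a real pipeline this would be evaluated per frame of the real-time acquisition, yielding gap width as a function of wrist position.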
19
Meyer A, Mehrtash A, Rak M, Bashkanov O, Langbein B, Ziaei A, Kibel AS, Tempany CM, Hansen C, Tokuda J. Domain adaptation for segmentation of critical structures for prostate cancer therapy. Sci Rep 2021; 11:11480. [PMID: 34075061 PMCID: PMC8169882 DOI: 10.1038/s41598-021-90294-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Accepted: 05/04/2021] [Indexed: 11/23/2022] Open
Abstract
Preoperative assessment of the proximity of critical structures to the tumors is crucial in avoiding unnecessary damage during prostate cancer treatment. A patient-specific 3D anatomical model of those structures, namely the neurovascular bundles (NVB) and the external urethral sphincters (EUS), can enable physicians to perform such assessments intuitively. As a crucial step to generate a patient-specific anatomical model from preoperative MRI in a clinical routine, we propose a multi-class automatic segmentation based on an anisotropic convolutional network. Our specific challenge is to train the network model on a unique source dataset only available at a single clinical site and deploy it to another target site without sharing the original images or labels. As network models trained on data from a single source suffer from quality loss due to the domain shift, we propose a semi-supervised domain adaptation (DA) method to refine the model's performance in the target domain. Our DA method combines transfer learning (TL) and uncertainty-guided self-learning based on deep ensembles. Experiments on the segmentation of the prostate, NVB, and EUS show significant performance gains with the combination of those techniques compared to pure TL and the combination of TL with simple self-learning ([Formula: see text] for all structures using a Wilcoxon signed-rank test). Results on a different task and data (pancreas CT segmentation) demonstrate our method's generic application capabilities. Our method has the advantage that it does not require any further data from the source domain, unlike the majority of recent domain adaptation strategies. This makes our method suitable for clinical applications, where the sharing of patient data is restricted.
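A minimal sketch of the uncertainty-guided self-learning step described above, assuming each ensemble member outputs per-pixel foreground probabilities; the variance threshold and the list-based interface are assumptions for illustration, not the authors' implementation.

```python
# Sketch of uncertainty-guided pseudo-labelling with a deep ensemble:
# average the member probabilities per pixel, and keep only pixels whose
# ensemble variance is below a threshold as pseudo-labels for
# self-training. The 0.05 threshold and flat pixel lists are assumptions.

def pseudo_labels(member_probs, max_variance=0.05):
    """member_probs: list of per-member probability lists (same length).
    Returns (label, keep) per pixel: label is the thresholded ensemble
    mean; keep flags pixels confident enough to self-train on."""
    n = len(member_probs)
    out = []
    for pixel in zip(*member_probs):
        mean = sum(pixel) / n
        var = sum((p - mean) ** 2 for p in pixel) / n
        out.append((1 if mean >= 0.5 else 0, var <= max_variance))
    return out

# Three ensemble members: they agree on pixel 0, disagree on pixel 1,
# so only pixel 0 is retained as a pseudo-label.
probs = [[0.9, 0.2], [0.95, 0.8], [0.92, 0.5]]
print(pseudo_labels(probs))  # → [(1, True), (1, False)]
```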
Affiliation(s)
- Anneke Meyer
- Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Alireza Mehrtash
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Marko Rak
- Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Oleksii Bashkanov
- Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Bjoern Langbein
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Alireza Ziaei
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Adam S Kibel
- Division of Urology, Department of Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Clare M Tempany
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Christian Hansen
- Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Junichi Tokuda
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA