1
Ilesanmi AE, Ilesanmi TO, Ajayi BO. Reviewing 3D convolutional neural network approaches for medical image segmentation. Heliyon 2024; 10:e27398. PMID: 38496891; PMCID: PMC10944240; DOI: 10.1016/j.heliyon.2024.e27398.
Abstract
Background Convolutional neural networks (CNNs) assume pivotal roles in aiding clinicians in diagnosis and treatment decisions. The rapid evolution of imaging technology has established three-dimensional (3D) CNNs as a formidable framework for delineating organs and anomalies in medical images. The prominence of 3D CNN frameworks is steadily growing within medical image segmentation and classification. Thus, our proposition entails a comprehensive review, encapsulating diverse 3D CNN algorithms for the segmentation of medical image anomalies and organs. Methods This study systematically presents an exhaustive review of recent 3D CNN methodologies. Rigorous screening of abstracts and titles was carried out to establish their relevance. Research papers disseminated across academic repositories were meticulously chosen, analyzed, and appraised against specific criteria. Insights into the realm of anomalies and organ segmentation were derived, encompassing details such as network architecture and achieved accuracies. Results This paper offers an all-encompassing analysis, unveiling the prevailing trends in 3D CNN segmentation. In-depth elucidations encompass essential insights, constraints, observations, and avenues for future exploration. A discerning examination indicates the preponderance of the encoder-decoder network in segmentation tasks. The encoder-decoder framework affords a coherent methodology for the segmentation of medical images. Conclusion The findings of this study are poised to find application in clinical diagnosis and therapeutic interventions. Despite inherent limitations, CNN algorithms showcase commendable accuracy levels, solidifying their potential in medical image segmentation and classification endeavors.
Affiliation(s)
- Ademola E. Ilesanmi
- University of Pennsylvania, 3710 Hamilton Walk, 6th Floor, Philadelphia, PA, 19104, United States
- Babatunde O. Ajayi
- National Astronomical Research Institute of Thailand, Chiang Mai 50180, Thailand
2
Xu L, Zhang G, Zhang D, Zhang J, Zhang X, Bai X, Chen L, Peng Q, Jin R, Mao L, Li X, Jin Z, Sun H. Development and clinical utility analysis of a prostate zonal segmentation model on T2-weighted imaging: a multicenter study. Insights Imaging 2023; 14:44. PMID: 36928683; PMCID: PMC10020392; DOI: 10.1186/s13244-023-01394-w.
Abstract
OBJECTIVES To automatically segment the prostate central gland (CG) and peripheral zone (PZ) on T2-weighted imaging using deep learning, and to assess the model's clinical utility by comparing it with a radiologist's annotation and analyzing relevant influencing factors, especially the prostate zonal volume. METHODS A 3D U-Net-based model was trained with 223 patients from one institution and tested using one internal testing group (n = 93) and two external testing datasets, including one public dataset (ETDpub, n = 141) and one private dataset from two centers (ETDpri, n = 59). The Dice similarity coefficients (DSCs), 95th percentile Hausdorff distance (95HD), and average boundary distance (ABD) were calculated to evaluate the model's performance and further compared with a junior radiologist's performance on ETDpub. To investigate factors influencing the model performance, patients' clinical characteristics, prostate morphology, and image parameters in ETDpri were collected and analyzed using beta regression. RESULTS The DSCs in the internal testing group, ETDpub, and ETDpri were 0.909, 0.889, and 0.869 for CG, and 0.844, 0.755, and 0.764 for PZ, respectively. The mean 95HD and ABD were less than 7.0 and 1.3 for both zones. The U-Net model outperformed the junior radiologist, having a higher DSC (0.769 vs. 0.706) and a higher intraclass correlation coefficient for volume estimation in PZ (0.836 vs. 0.668). CG volume and magnetic resonance (MR) vendor were significant influencing factors for CG and PZ segmentation. CONCLUSIONS The 3D U-Net model showed good performance for CG and PZ auto-segmentation in all the testing groups and outperformed the junior radiologist for PZ segmentation. The model performance was susceptible to prostate morphology and MR scanner parameters.
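Several entries in this list report the Dice similarity coefficient (DSC) as their primary overlap metric. As a reminder of what is being computed, here is a minimal NumPy sketch; the function name and signature are ours for illustration, not taken from the cited study:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|), in [0, 1], higher is better."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection) / (pred.sum() + truth.sum() + eps)
```

The same quantity extends unchanged to 3D volumes, since the masks are flattened by the sums.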
Affiliation(s)
- Lili Xu
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China; National Center for Quality Control of Radiology, Beijing, China
- Gumuyang Zhang
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China
- Daming Zhang
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China
- Jiahui Zhang
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China
- Xiaoxiao Zhang
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China
- Xin Bai
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China
- Li Chen
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China
- Qianyu Peng
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China
- Ru Jin
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China
- Li Mao
- AI Lab, Deepwise Healthcare, Beijing, China
- Xiuli Li
- AI Lab, Deepwise Healthcare, Beijing, China
- Zhengyu Jin
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China; National Center for Quality Control of Radiology, Beijing, China
- Hao Sun
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China; National Center for Quality Control of Radiology, Beijing, China
3
Wu C, Montagne S, Hamzaoui D, Ayache N, Delingette H, Renard-Penna R. Automatic segmentation of prostate zonal anatomy on MRI: a systematic review of the literature. Insights Imaging 2022; 13:202. PMID: 36543901; PMCID: PMC9772373; DOI: 10.1186/s13244-022-01340-2.
Abstract
OBJECTIVES Accurate zonal segmentation of prostate boundaries on MRI is a critical prerequisite for automated prostate cancer detection based on PI-RADS. Many articles have been published describing deep learning methods offering great promise for fast and accurate segmentation of prostate zonal anatomy. The objective of this review was to provide a detailed analysis and comparison of the applicability and efficiency of the published methods for automatic segmentation of prostate zonal anatomy by systematically reviewing the current literature. METHODS A systematic search following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was conducted up to June 30, 2021, using the PubMed, ScienceDirect, Web of Science and Embase databases. Risk of bias and applicability were assessed based on the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) criteria, adjusted with the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). RESULTS A total of 458 articles were identified, and 33 were included and reviewed. Only 2 articles had a low risk of bias for all four QUADAS-2 domains. In the remaining articles, insufficient detail about database constitution and segmentation protocol (inclusion criteria, MRI acquisition, ground truth) introduced sources of bias. Eighteen different types of terminology for prostate zone segmentation were found, while 4 anatomic zones are described on MRI. Only 2 authors used a blinded reading, and 4 assessed inter-observer variability. CONCLUSIONS Our review identified numerous methodological flaws and underlined biases, precluding us from performing a quantitative analysis for this review. This implies low robustness and low applicability in clinical practice of the evaluated methods. At present, there is no consensus on quality criteria for database constitution or zonal segmentation methodology.
Affiliation(s)
- Carine Wu
- Sorbonne Université, Paris, France
- Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020, Paris, France
- Sarah Montagne
- Sorbonne Université, Paris, France
- Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020, Paris, France
- Academic Department of Radiology, Hôpital Pitié-Salpêtrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- GRC N° 5, Oncotype-Uro, Sorbonne Université, Paris, France
- Dimitri Hamzaoui
- Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, Nice, France
- Nicholas Ayache
- Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, Nice, France
- Hervé Delingette
- Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, Nice, France
- Raphaële Renard-Penna
- Sorbonne Université, Paris, France
- Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020, Paris, France
- Academic Department of Radiology, Hôpital Pitié-Salpêtrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- GRC N° 5, Oncotype-Uro, Sorbonne Université, Paris, France
4
Liu Y, Zhu Y, Wang W, Zheng B, Qin X, Wang P. Multi-scale discriminative network for prostate cancer lesion segmentation in multiparametric MR images. Med Phys 2022; 49:7001-7015. PMID: 35851482; DOI: 10.1002/mp.15861.
Abstract
PURPOSE The accurate and reliable segmentation of prostate cancer (PCa) lesions using multiparametric magnetic resonance imaging (mpMRI) sequences is crucial to the image-guided intervention and treatment of prostate disease. For PCa lesion segmentation, it is essential to reliably combine local and global information to retain the features of small targets at multiple scales. Therefore, this study proposes a multi-scale segmentation network with a cascading pyramid convolution module (CPCM) and a double-input channel attention module (DCAM) for the automated and accurate segmentation of PCa lesions using mpMRI. METHODS First, the region of interest was extracted from the data by clipping, to enlarge the target region and reduce background noise interference. Next, four CPCMs with large convolution kernels in their skip connection paths were designed to improve the feature extraction capability of the network for small targets; at the same time, a convolution decomposition was applied to reduce the computational complexity. Finally, the DCAM was adopted in the decoder to provide bottom-up semantic discriminative guidance; it uses the semantic information of the network's deep features to guide the shallow output toward features with a higher discriminant ability. A residual refinement module (RRM) was also designed to strengthen the recognition ability at each stage; the feature maps of the skip connection and the decoder all pass through the RRM. RESULTS For the Initiative for Collaborative Computer Vision Benchmarking (I2CVB) dataset, our proposed model achieved a Dice similarity coefficient (DSC) of 79.31% and an average boundary distance (ABD) of 4.15 mm. For the Prostate Multiparametric MRI (PROMM) dataset, our method improved the DSC to 82.11% and obtained an ABD of 3.64 mm. CONCLUSIONS The experimental results on two different mpMRI prostate datasets demonstrate that our model is more accurate and reliable on small targets. In addition, it outperforms other state-of-the-art methods.
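The region-of-interest clipping step described in the Methods (cropping to the target to reduce background interference) amounts to a padded bounding-box crop. The sketch below is our own illustrative reconstruction under that reading, not the authors' code; the function name, margin parameter, and use of a rough foreground mask are assumptions:

```python
import numpy as np

def crop_to_roi(volume: np.ndarray, mask: np.ndarray, margin: int = 8):
    """Crop a volume to the bounding box of a (rough) foreground mask,
    padded by `margin` voxels per side. Assumes the mask is non-empty."""
    coords = np.argwhere(mask > 0)                      # indices of foreground voxels
    lo = np.maximum(coords.min(axis=0) - margin, 0)     # clamp to the volume bounds
    hi = np.minimum(coords.max(axis=0) + 1 + margin, mask.shape)
    slices = tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))
    return volume[slices], slices
```

Returning the slice tuple as well lets a predicted mask be pasted back into the original coordinate frame after segmentation.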
Affiliation(s)
- Yatong Liu
- School of Information Science and Technology, East China University of Science and Technology, Shanghai, P. R. China
- Yu Zhu
- School of Information Science and Technology, East China University of Science and Technology, Shanghai, P. R. China
- Shanghai Engineering Research Center of Internet of Things for Respiratory Medicine, Shanghai, P. R. China
- Wei Wang
- Department of Radiology, Tongji Hospital, Tongji University School of Medicine, Shanghai, P. R. China
- Bingbing Zheng
- School of Information Science and Technology, East China University of Science and Technology, Shanghai, P. R. China
- Xiangxiang Qin
- School of Information Science and Technology, East China University of Science and Technology, Shanghai, P. R. China
- Peijun Wang
- Department of Radiology, Tongji Hospital, Tongji University School of Medicine, Shanghai, P. R. China
5
Kumaraswamy AK, Patil CM. Automatic prostate segmentation of magnetic resonance imaging using Res-Net. MAGMA 2022; 35:621-630. PMID: 34890013; DOI: 10.1007/s10334-021-00979-0.
Abstract
OBJECTIVES Segmenting the prostate from magnetic resonance images plays an important role in prostate cancer diagnosis and in evaluating the treatment response. However, the lack of a clear prostate boundary, the heterogeneity of prostate tissue, the large variety of prostate shapes and the scarcity of annotated training data make automatic segmentation a very challenging task. In this work, we propose a novel two-stage segmentation method to automatically segment the prostate and support accurate, reproducible results on a multisite, multivendor dataset. The proposed method combines U-Net with residual blocks. METHODS The method comprises a two-stage neural network: the first stage is a 2D U-Net used to find the approximate location of the prostate; the second is a combination of U-Net and Res-Net used for accurate segmentation of the prostate. The network was trained on 116 patient datasets from three publicly available data sources; 80% of the data was used for training, 10% for validation, and 10% for testing. The commonly used segmentation evaluation metrics Dice similarity coefficient (DSC), sensitivity, and specificity were used for quantitative evaluation of the network. RESULTS With the proposed method, an average DSC of 93.8%, sensitivity of 94.6% and specificity of 99.3% were achieved on the test datasets. CONCLUSIONS Our experimental results show that segmentation accuracy can be improved significantly using a two-stage neural network.
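The sensitivity and specificity reported here are the standard voxel-wise definitions (true-positive rate over the prostate, true-negative rate over the background). A minimal sketch, with a function name of our own choosing rather than anything from the paper:

```python
import numpy as np

def sensitivity_specificity(pred: np.ndarray, truth: np.ndarray):
    """Voxel-wise sensitivity = TP/(TP+FN) and specificity = TN/(TN+FP)
    for two binary masks of the same shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    return tp / (tp + fn), tn / (tn + fp)
```

Note that for organ segmentation the background dominates the volume, which is why specificity values such as the 99.3% above tend to run much higher than sensitivity.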
Affiliation(s)
- Asha Kuppe Kumaraswamy
- Department of Electronics and Communication, Vidyavardhaka College of Engineering, Mysuru, India
- Chandrashekar M Patil
- Department of Electronics and Communication, Vidyavardhaka College of Engineering, Mysuru, India
6
Pellicer-Valero OJ, Marenco Jiménez JL, Gonzalez-Perez V, Casanova Ramón-Borja JL, Martín García I, Barrios Benito M, Pelechano Gómez P, Rubio-Briones J, Rupérez MJ, Martín-Guerrero JD. Deep learning for fully automatic detection, segmentation, and Gleason grade estimation of prostate cancer in multiparametric magnetic resonance images. Sci Rep 2022; 12:2975. PMID: 35194056; PMCID: PMC8864013; DOI: 10.1038/s41598-022-06730-6.
Abstract
Although the emergence of multi-parametric magnetic resonance imaging (mpMRI) has had a profound impact on the diagnosis of prostate cancers (PCa), analyzing these images remains complex even for experts. This paper proposes a fully automatic system based on Deep Learning that performs localization, segmentation and Gleason grade group (GGG) estimation of PCa lesions from prostate mpMRIs. It uses 490 mpMRIs for training/validation and 75 for testing from two different datasets: ProstateX and the Valencian Oncology Institute Foundation (IVO). In the test set, it achieves an excellent lesion-level AUC/sensitivity/specificity for the GGG ≥ 2 significance criterion of 0.96/1.00/0.79 for the ProstateX dataset, and 0.95/1.00/0.80 for the IVO dataset. At a patient level, the results are 0.87/1.00/0.375 in ProstateX, and 0.91/1.00/0.762 in IVO. Furthermore, on the online ProstateX grand challenge, the model obtained an AUC of 0.85 (0.87 when trained only on the ProstateX data, tying with the original winner of the challenge). For expert comparison, the IVO radiologist's PI-RADS 4 sensitivity/specificity were 0.88/0.56 at a lesion level, and 0.85/0.58 at a patient level. The full code for the ProstateX-trained model is openly available at https://github.com/OscarPellicer/prostate_lesion_detection. We hope that this will represent a landmark for future research to use, compare and improve upon.
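The lesion- and patient-level AUCs quoted above are areas under the ROC curve, which equal the probability that a randomly chosen positive case outscores a randomly chosen negative one (the Mann-Whitney formulation). A minimal sketch of that formulation, with names of our own choosing:

```python
def roc_auc(scores, labels):
    """ROC AUC via pairwise comparison: the fraction of (positive, negative)
    pairs where the positive scores higher, counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The O(n²) pair loop is fine for lesion-level evaluation sets of this size; rank-based implementations do the same computation in O(n log n).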
Affiliation(s)
- Oscar J Pellicer-Valero
- Intelligent Data Analysis Laboratory, Department of Electronic Engineering, ETSE (Engineering School), Universitat de València (UV), Av. Universitat, sn, 46100, Burjassot, Valencia, Spain
- José L Marenco Jiménez
- Department of Urology, Fundación Instituto Valenciano de Oncología (FIVO), Beltrán Báguena, 8, 46009, Valencia, Spain
- Victor Gonzalez-Perez
- Department of Medical Physics, Fundación Instituto Valenciano de Oncología (FIVO), Beltrán Báguena, 8, 46009, Valencia, Spain
- Isabel Martín García
- Department of Radiodiagnosis, Fundación Instituto Valenciano de Oncología (FIVO), Beltrán Báguena, 8, 46009, Valencia, Spain
- María Barrios Benito
- Department of Radiodiagnosis, Fundación Instituto Valenciano de Oncología (FIVO), Beltrán Báguena, 8, 46009, Valencia, Spain
- Paula Pelechano Gómez
- Department of Radiodiagnosis, Fundación Instituto Valenciano de Oncología (FIVO), Beltrán Báguena, 8, 46009, Valencia, Spain
- José Rubio-Briones
- Department of Urology, Fundación Instituto Valenciano de Oncología (FIVO), Beltrán Báguena, 8, 46009, Valencia, Spain
- María José Rupérez
- Instituto de Ingeniería Mecánica y Biomecánica, Universitat Politècnica de València (UPV), Camino de Vera, sn, 46022, Valencia, Spain
- José D Martín-Guerrero
- Intelligent Data Analysis Laboratory, Department of Electronic Engineering, ETSE (Engineering School), Universitat de València (UV), Av. Universitat, sn, 46100, Burjassot, Valencia, Spain
8
Ma C, Xu Q, Wang X, Jin B, Zhang X, Wang Y, Zhang Y. Boundary-aware supervoxel-level iteratively refined interactive 3D image segmentation with multi-agent reinforcement learning. IEEE Trans Med Imaging 2021; 40:2563-2574. PMID: 33382649; DOI: 10.1109/TMI.2020.3048477.
Abstract
Interactive segmentation has recently been explored to effectively and efficiently harvest high-quality segmentation masks by iteratively incorporating user hints. While iterative in nature, most existing interactive segmentation methods tend to ignore the dynamics of successive interactions and take each interaction independently. We here propose to model iterative interactive image segmentation with a Markov decision process (MDP) and solve it with reinforcement learning (RL), where each voxel is treated as an agent. Considering the large exploration space for voxel-wise prediction and the dependence among neighboring voxels in segmentation tasks, multi-agent reinforcement learning is adopted, where the voxel-level policy is shared among agents. Considering that boundary voxels are more important for segmentation, we further introduce a boundary-aware reward, which consists of a global reward in the form of relative cross-entropy gain, to update the policy in a constrained direction, and a boundary reward in the form of relative weight, to emphasize the correctness of boundary predictions. To combine the advantages of different types of interactions, i.e., simple and efficient for point-clicking, and stable and robust for scribbles, we propose a supervoxel-clicking based interaction design. Experimental results on four benchmark datasets have shown that the proposed method significantly outperforms state-of-the-art methods, with the advantage of fewer interactions, higher accuracy, and enhanced robustness.
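The global reward described above, a relative cross-entropy gain between successive interaction rounds, can be sketched per voxel as follows. This is a simplified illustration under our own assumptions (per-voxel foreground probabilities, binary labels), not the authors' implementation:

```python
import numpy as np

def cross_entropy(prob, label, eps=1e-7):
    """Per-voxel binary cross-entropy between predicted probability and label."""
    prob = np.clip(prob, eps, 1 - eps)  # avoid log(0)
    return -(label * np.log(prob) + (1 - label) * np.log(1 - prob))

def relative_ce_gain_reward(prev_prob, curr_prob, label):
    """Per-voxel reward: positive where the new prediction moved closer to
    the ground truth than the previous round, negative where it moved away."""
    return cross_entropy(prev_prob, label) - cross_entropy(curr_prob, label)
```

Because the reward is the *decrease* in cross-entropy, each agent is pushed to improve on the previous round rather than merely to predict well in absolute terms.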