1. Fechter T, Sachpazidis I, Baltas D. The use of deep learning in interventional radiotherapy (brachytherapy): A review with a focus on open source and open data. Z Med Phys 2024;34:180-196. PMID: 36376203; PMCID: PMC11156786; DOI: 10.1016/j.zemedi.2022.10.005
Abstract
Deep learning has become one of the most important technologies in almost all medical fields, and it plays a particularly large role in areas related to medical imaging. In interventional radiotherapy (brachytherapy), however, deep learning is still at an early stage. In this review, we first investigate and scrutinise the role of deep learning in all processes of interventional radiotherapy and directly related fields, and we summarise the most recent developments. For better understanding, we provide explanations of key terms and of approaches to solving common deep learning problems. Reproducing the results of a deep learning algorithm requires both the source code and the training data to be available; a second focus of this work is therefore an analysis of the availability of open source code, open data and open models. Our analysis shows that deep learning already plays a major role in some areas of interventional radiotherapy but is still hardly present in others. Nevertheless, its impact is increasing with the years, partly self-propelled but also influenced by closely related fields. Open source code, data and models are growing in number but remain scarce and unevenly distributed among research groups. The reluctance to publish code, data and models limits reproducibility and restricts evaluation to mono-institutional datasets. We conclude that deep learning can positively change the workflow of interventional radiotherapy, but there is still room for improvement when it comes to reproducible results and standardised evaluation methods.
Affiliation(s)
- Tobias Fechter
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany.
- Ilias Sachpazidis
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Dimos Baltas
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
2. Ramacciotti LS, Hershenhouse JS, Mokhtar D, Paralkar D, Kaneko M, Eppler M, Gill K, Mogoulianitis V, Duddalwar V, Abreu AL, Gill I, Cacciamani GE. Comprehensive Assessment of MRI-based Artificial Intelligence Frameworks Performance in the Detection, Segmentation, and Classification of Prostate Lesions Using Open-Source Databases. Urol Clin North Am 2024;51:131-161. PMID: 37945098; DOI: 10.1016/j.ucl.2023.08.003
Abstract
Numerous MRI-based artificial intelligence (AI) frameworks have been designed for prostate cancer lesion detection, segmentation, and classification, motivated by the intrareader and interreader variability inherent to traditional interpretation. Open-source data sets have been released with the intention of providing freely available MRIs for the testing of diverse AI frameworks in automated or semiautomated tasks. Here, an in-depth assessment of the performance of MRI-based AI frameworks for detecting, segmenting, and classifying prostate lesions using open-source databases was performed. Among 17 data sets, 12 were specific to prostate cancer detection/classification, with 52 studies meeting the inclusion criteria.
Affiliation(s)
- Lorenzo Storino Ramacciotti
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Jacob S Hershenhouse
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Daniel Mokhtar
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Divyangi Paralkar
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Masatomo Kaneko
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Urology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Michael Eppler
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Karanvir Gill
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Vasileios Mogoulianitis
- Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Vinay Duddalwar
- Department of Radiology, University of Southern California, Los Angeles, CA, USA
- Andre L Abreu
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Radiology, University of Southern California, Los Angeles, CA, USA
- Inderbir Gill
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Giovanni E Cacciamani
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Radiology, University of Southern California, Los Angeles, CA, USA.
3. Gao S, Yang J, Chen D, Min X, Fan C, Zhang P, Wang Q, Li Z, Cai W. Noninvasive Prediction of Sperm Retrieval Using Diffusion Tensor Imaging in Patients with Nonobstructive Azoospermia. J Imaging 2023;9:182. PMID: 37754946; PMCID: PMC10532242; DOI: 10.3390/jimaging9090182
Abstract
Microdissection testicular sperm extraction (mTESE) is the first-line treatment plan for nonobstructive azoospermia (NOA). However, studies reported that the overall sperm retrieval rate (SRR) was 43% to 63% among men with NOA, implying that nearly half of the patients fail sperm retrieval. This study aimed to evaluate the diagnostic performance of parameters derived from diffusion tensor imaging (DTI) in predicting SRR in patients with NOA. Seventy patients diagnosed with NOA were enrolled and classified into two groups based on the outcome of sperm retrieval during mTESE: success (29 patients) and failure (41 patients). Scrotal magnetic resonance imaging was performed, and the DTI parameters, including mean diffusivity and fractional anisotropy, were analyzed between groups. The results showed that there was a significant difference in mean diffusivity values between the two groups, and the area under the curve for mean diffusivity was calculated as 0.865, with a sensitivity of 72.2% and a specificity of 97.5%. No statistically significant difference was observed in fractional anisotropy values and sex hormone levels between the two groups. This study demonstrated that the mean diffusivity value might serve as a useful noninvasive imaging marker for predicting the SRR of NOA patients undergoing mTESE.
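The headline numbers above (AUC 0.865, sensitivity 72.2%, specificity 97.5%) come from thresholding a single quantitative marker. As an illustration only, with invented values rather than the study's data or code, these metrics can be computed like this:

```python
# Illustrative sketch: sensitivity, specificity and rank-based AUC for a
# single imaging marker (e.g. mean diffusivity). Values below are made up.

def sensitivity_specificity(values, labels, threshold):
    """Classify a case as positive (label 1) when its value exceeds `threshold`."""
    tp = sum(1 for v, y in zip(values, labels) if v > threshold and y == 1)
    fn = sum(1 for v, y in zip(values, labels) if v <= threshold and y == 1)
    tn = sum(1 for v, y in zip(values, labels) if v <= threshold and y == 0)
    fp = sum(1 for v, y in zip(values, labels) if v > threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

def auc_rank(values, labels):
    """AUC as the probability that a positive case outranks a negative one
    (Mann-Whitney U formulation); ties count one half."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

In practice the threshold itself is chosen from the ROC curve (e.g. by the Youden index), which is why AUC is reported alongside the operating point.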
Affiliation(s)
- Sikang Gao
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China; (S.G.); (X.M.); (C.F.); (P.Z.); (Q.W.); (Z.L.)
- Jun Yang
- Department of Urology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Dong Chen
- Department of Pathology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Xiangde Min
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China; (S.G.); (X.M.); (C.F.); (P.Z.); (Q.W.); (Z.L.)
- Chanyuan Fan
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China; (S.G.); (X.M.); (C.F.); (P.Z.); (Q.W.); (Z.L.)
- Peipei Zhang
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China; (S.G.); (X.M.); (C.F.); (P.Z.); (Q.W.); (Z.L.)
- Qiuxia Wang
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China; (S.G.); (X.M.); (C.F.); (P.Z.); (Q.W.); (Z.L.)
- Zhen Li
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China; (S.G.); (X.M.); (C.F.); (P.Z.); (Q.W.); (Z.L.)
- Wei Cai
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China; (S.G.); (X.M.); (C.F.); (P.Z.); (Q.W.); (Z.L.)
4. Wang J, Li X, Cheng Y. Towards an extended EfficientNet-based U-Net framework for joint optic disc and cup segmentation in the fundus image. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104906
6. Estimation of the Prostate Volume from Abdominal Ultrasound Images by Image-Patch Voting. Appl Sci (Basel) 2022. DOI: 10.3390/app12031390
Abstract
Estimation of the prostate volume with ultrasound offers many advantages, such as portability, low cost, harmlessness, and suitability for real-time operation. Abdominal ultrasound (AUS) is a practical procedure that deserves more attention in automated prostate-volume-estimation studies. Because experts usually consider automatic end-to-end volume-estimation procedures non-transparent and uninterpretable, we proposed an expert-in-the-loop automatic system that follows the classical prostate-volume-estimation procedure. Our system directly estimates the diameter parameters of the standard ellipsoid formula to produce the prostate volume. To obtain the diameters, it detects four diameter endpoints from the transverse and two diameter endpoints from the sagittal AUS images, as defined by the classical procedure. These endpoints are estimated using a new image-patch voting method that addresses characteristic problems of AUS images. We formed a novel prostate AUS data set from 305 patients with both transverse and sagittal planes; the data set includes MRI images for 75 of these patients, and at least one expert manually marked all the data. Extensive experiments on this data set showed that the proposed system's estimates fell within the range of the experts' volume estimations, and that the system can be used in clinical practice.
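The "standard ellipsoid formula" whose diameter parameters the system estimates is, in common clinical usage, V = π/6 × the product of the three orthogonal diameters, each diameter being the distance between a detected endpoint pair. A minimal sketch under that assumption (not the authors' code; units assumed consistent, e.g. cm and mL):

```python
import math

def diameter(p, q):
    """Euclidean distance between two detected diameter endpoints."""
    return math.dist(p, q)

def ellipsoid_volume(d_transverse, d_anteroposterior, d_craniocaudal):
    """Standard (prolate) ellipsoid formula used for prostate volume:
    V = pi/6 * product of the three orthogonal diameters."""
    return math.pi / 6 * d_transverse * d_anteroposterior * d_craniocaudal
```

With the four transverse endpoints giving two diameters and the two sagittal endpoints giving the third, the volume follows directly from the six detected points.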
7. Brunese L, Brunese MC, Carbone M, Ciccone V, Mercaldo F, Santone A. Automatic PI-RADS assignment by means of formal methods. Radiol Med 2021;127:83-89. PMID: 34822102; DOI: 10.1007/s11547-021-01431-y
Abstract
INTRODUCTION AND OBJECTIVES: The Prostate Imaging Reporting and Data System (PI-RADS) version 2 has emerged as the standard in prostate magnetic resonance imaging examination. PI-RADS scores are assigned by radiologists and indicate the likelihood of a clinically significant cancer. The aim of this paper is to propose a methodology to automatically mark a magnetic resonance image with its related PI-RADS score. MATERIALS AND METHODS: We collected a dataset from two different institutions comprising DWI ADC MRI for 91 patients, marked by expert radiologists with different PI-RADS scores. A formal model is generated from a prostate magnetic resonance image, and a set of properties related to the different PI-RADS scores is formulated with the help of expert radiologists and pathologists. RESULTS: Our methodology relies on the adoption of formal methods and radiomic features; in the experimental analysis, we obtained a specificity and sensitivity equal to 1. CONCLUSIONS: The proposed methodology is able to assign the PI-RADS score by analyzing prostate magnetic resonance imaging with very high accuracy.
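The radiomic features such a methodology builds on can be illustrated with simple first-order statistics over a region's voxel intensities. This is a sketch only; the formal-methods model checking at the heart of the paper is not reproduced here, and the feature set is a generic stand-in:

```python
import math
from collections import Counter

def first_order_features(intensities):
    """Toy first-order radiomic features (mean, variance, Shannon entropy)
    computed from a list of voxel intensities of a region of interest."""
    n = len(intensities)
    mean = sum(intensities) / n
    variance = sum((v - mean) ** 2 for v in intensities) / n
    counts = Counter(intensities)
    entropy = -sum(c / n * math.log2(c / n) for c in counts.values())
    return {"mean": mean, "variance": variance, "entropy": entropy}
```

Real radiomics pipelines add shape and texture families (GLCM, GLRLM, etc.) and typically follow the IBSI feature definitions.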
Affiliation(s)
- Luca Brunese
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
- Maria Chiara Brunese
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
- Mattia Carbone
- Dipartimento Diagnostico per Immagini U.O.C. di Radiologia, Ospedale San Giovanni di Dio e Ruggi d'Aragona, Salerno, Italy
- Vincenzo Ciccone
- Dipartimento Diagnostico per Immagini U.O.C. di Radiologia, Ospedale San Giovanni di Dio e Ruggi d'Aragona, Salerno, Italy
- Francesco Mercaldo
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy.
- Antonella Santone
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
8. A Combined Radiomics and Machine Learning Approach to Distinguish Clinically Significant Prostate Lesions on a Publicly Available MRI Dataset. J Imaging 2021;7:215. PMID: 34677301; PMCID: PMC8540196; DOI: 10.3390/jimaging7100215
Abstract
Although prostate cancer is one of the most common causes of mortality and morbidity in advancing-age males, early diagnosis improves prognosis and modifies the therapy of choice. The aim of this study was the evaluation of a combined radiomics and machine learning approach on a publicly available dataset in order to distinguish clinically significant from clinically non-significant prostate lesions. A total of 299 prostate lesions were included in the analysis. A univariate statistical analysis was performed to prove the goodness of the 60 extracted radiomic features in distinguishing prostate lesions. Then, 10-fold cross-validation was used to train and test several models and the evaluation metrics were calculated; finally, a hold-out was performed and a wrapper feature selection was applied. The employed algorithms were Naïve Bayes, k-nearest neighbours and several tree-based ones. The tree-based algorithms achieved the highest evaluation metrics, with accuracies over 80% and areas under the receiver operating characteristic curve below 0.80. Combined machine learning algorithms and radiomics based on clinical, routine, multiparametric magnetic resonance imaging were demonstrated to be a useful tool in prostate cancer stratification.
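The evaluation protocol described above (k-fold cross-validation around a trained classifier) can be sketched with a deliberately simple stand-in. This is illustrative only, not the study's pipeline: the radiomic features, wrapper feature selection and tree ensembles are replaced by a one-feature threshold "stump":

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle indices 0..n-1 and slice them into k disjoint test folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def fit_stump(xs, ys):
    """Pick the threshold t (among training values) for which predicting
    'positive when x > t' makes the fewest training errors."""
    return max(xs, key=lambda t: sum((x > t) == bool(y) for x, y in zip(xs, ys)))

def cross_val_accuracy(xs, ys, k=10, seed=0):
    """Mean test accuracy over k folds: fit on k-1 folds, score the held-out fold."""
    accs = []
    for test in k_fold_indices(len(xs), k, seed):
        held_out = set(test)
        train = [i for i in range(len(xs)) if i not in held_out]
        t = fit_stump([xs[i] for i in train], [ys[i] for i in train])
        accs.append(sum((xs[i] > t) == bool(ys[i]) for i in test) / len(test))
    return sum(accs) / len(accs)
```

The key property the sketch preserves is that the threshold is always fitted on the training folds only, so the reported accuracy is an out-of-sample estimate.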
9. Du X, Wang X, Xu F, Zhang J, Huo Y, Ni G, Hao R, Liu J, Liu L. Morphological Components Detection for Super-Depth-of-Field Bio-Micrograph Based on Deep Learning. Microscopy (Oxf) 2021;71:50-59. PMID: 34417804; PMCID: PMC8799896; DOI: 10.1093/jmicro/dfab033
Abstract
As demand for routine clinical examinations rises sharply, efficiency and accuracy are the first priority. However, automatic classification and localization of cells in microscopic images acquired with a super-depth-of-field (SDoF) system remain a great challenge. In this paper, we propose an object detection algorithm for cells in SDoF micrographs based on the RetinaNet model. Compared with current mainstream algorithms, the mean average precision (mAP) is significantly improved: in experiments on leucorrhea samples and fecal samples, the mAP reaches 83.1% and 88.1%, respectively, an average increase of 10 percentage points. The proposed object detection model can be applied to feces and leucorrhea detection equipment, significantly improving detection efficiency and accuracy.
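Mean average precision is the mean of per-class average precision (AP). A sketch of the AP computation over confidence-ranked detections, illustrative only and not the authors' evaluation code:

```python
def average_precision(is_tp, n_gt):
    """Average precision from confidence-ranked detections.

    `is_tp[k]` says whether the k-th highest-confidence detection matched a
    previously unclaimed ground-truth object; `n_gt` is the number of ground
    truths. Area under the precision-recall curve is computed with all-point
    interpolation (precision made monotonically non-increasing), as in
    COCO-style evaluators. mAP is then the mean of AP over classes."""
    precisions, recalls = [], []
    tp = 0
    for k, hit in enumerate(is_tp, start=1):
        tp += hit
        precisions.append(tp / k)
        recalls.append(tp / n_gt)
    # precision envelope: best precision at or beyond each recall level
    for k in range(len(precisions) - 2, -1, -1):
        precisions[k] = max(precisions[k], precisions[k + 1])
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```

In a full detection benchmark, the TP/FP flags come from IoU matching between predicted and ground-truth boxes at a chosen threshold (e.g. 0.5).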
Affiliation(s)
- Xiaohui Du
- MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, No. 4, Section 2, North Jianshe Road, Chengdu 610054, China
- Xiangzhou Wang
- MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, No. 4, Section 2, North Jianshe Road, Chengdu 610054, China
- Fan Xu
- Department of Public Health, Chengdu Medical College, Chengdu, China
- Jing Zhang
- MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, No. 4, Section 2, North Jianshe Road, Chengdu 610054, China
- Yibo Huo
- MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, No. 4, Section 2, North Jianshe Road, Chengdu 610054, China
- Guangmin Ni
- MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, No. 4, Section 2, North Jianshe Road, Chengdu 610054, China
- Ruqian Hao
- MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, No. 4, Section 2, North Jianshe Road, Chengdu 610054, China
- Juanxiu Liu
- MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, No. 4, Section 2, North Jianshe Road, Chengdu 610054, China
- Lin Liu
- MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, No. 4, Section 2, North Jianshe Road, Chengdu 610054, China
10. Boundary Loss-Based 2.5D Fully Convolutional Neural Networks Approach for Segmentation: A Case Study of the Liver and Tumor on Computed Tomography. Algorithms 2021. DOI: 10.3390/a14050144
Abstract
Image segmentation plays an important role in the field of image processing, helping to understand images and recognize objects. However, most existing methods are often unable to effectively explore the spatial information in 3D image segmentation, and they neglect the information from the contours and boundaries of the observed objects. In addition, shape boundaries can help to locate the positions of the observed objects, but most existing loss functions neglect the information from the boundaries. To overcome these shortcomings, this paper presents a new cascaded 2.5D fully convolutional networks (FCNs) learning framework to segment 3D medical images. A new boundary loss that incorporates distance, area, and boundary information is also proposed, allowing the cascaded FCNs to learn more boundary and contour features from the 3D medical images. Moreover, an effective post-processing method is developed to further improve the segmentation accuracy. We verified the proposed method on the LiTS and 3DIRCADb datasets, which include the liver and tumors. The experimental results show that the performance of the proposed method is better than that of existing methods, with a Dice Per Case score of 74.5% for tumor segmentation, indicating the effectiveness of the proposed method.
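The Dice score quoted above, and the kind of boundary information such a loss exploits, can be illustrated on binary masks. This is a sketch only: an actual boundary loss is differentiable and operates on network probabilities and distance maps, not on hard masks:

```python
def dice(pred, target):
    """Dice coefficient between two binary masks (flattened sequences):
    2*|A ∩ B| / (|A| + |B|). Returns 1.0 for two empty masks by convention."""
    inter = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 if total == 0 else 2.0 * inter / total

def boundary(mask):
    """Foreground pixels of a 2D binary mask (list of lists) that have at
    least one 4-neighbour outside the foreground, i.e. the object contour."""
    h, w = len(mask), len(mask[0])
    edge = set()
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if not (0 <= a < h and 0 <= b < w) or not mask[a][b]:
                    edge.add((i, j))
                    break
    return edge
```

A boundary-aware loss penalizes disagreement concentrated on sets like `boundary(mask)`, which plain region overlap terms such as Dice weight no more than interior pixels.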
11. Magnetic Resonance Imaging Based Radiomic Models of Prostate Cancer: A Narrative Review. Cancers (Basel) 2021;13:552. PMID: 33535569; PMCID: PMC7867056; DOI: 10.3390/cancers13030552
Abstract
Simple Summary: The increasing interest in implementing artificial intelligence in radiomic models has occurred alongside advancement in the tools used for computer-aided diagnosis. Such tools typically apply both statistical and machine learning methodologies to assess the various modalities used in medical image analysis. Specific to prostate cancer, the radiomics pipeline has multiple facets that are amenable to improvement. This review discusses the steps of a magnetic resonance imaging based radiomics pipeline. Present successes, existing opportunities for refinement, and the most pertinent pending steps leading to clinical validation are highlighted.

The management of prostate cancer (PCa) is dependent on biomarkers of biological aggression. This includes an invasive biopsy to facilitate a histopathological assessment of the tumor's grade. This review explores the technical processes of applying magnetic resonance imaging based radiomic models to the evaluation of PCa. By exploring how a deep radiomics approach further optimizes the prediction of a PCa's grade group, it will be clear how this integration of artificial intelligence mitigates existing major technological challenges faced by a traditional radiomic model: image acquisition, small data sets, image processing, labeling/segmentation, informative features, predicting molecular features and incorporating predictive models. Other potential impacts of artificial intelligence on the personalized treatment of PCa will also be discussed. The role of deep radiomics analysis, a deep texture analysis that extracts features from convolutional neural network layers, will be highlighted. Existing clinical work and upcoming clinical trials will be reviewed, directing investigators to pertinent future directions in the field. For future progress to result in clinical translation, the field will likely require multi-institutional collaboration in producing prospectively populated and expertly labeled imaging libraries.
12. 3D multi-scale discriminative network with multi-directional edge loss for prostate zonal segmentation in bi-parametric MR images. Neurocomputing 2020. DOI: 10.1016/j.neucom.2020.07.116
13. Optimisation of 2D U-Net Model Components for Automatic Prostate Segmentation on MRI. Appl Sci (Basel) 2020. DOI: 10.3390/app10072601
Abstract
In this paper, we develop an optimised state-of-the-art 2D U-Net model by studying the effects of the individual deep learning model components on prostate segmentation performance. We found that, for upsampling, the combination of interpolation and convolution is better than the use of transposed convolution. Combining feature maps in each convolution block is only beneficial if a skip connection with concatenation is used. With respect to pooling, average pooling is better than strided convolution, max, RMS or L2 pooling. Introducing a batch normalisation layer before the activation layer gives a further performance improvement. The optimisation is based on a private dataset, as every image in it has a fixed 2D resolution and voxel size, which obviates the need for a resizing operation during data preparation. Non-enhancing data preprocessing was applied, and five-fold cross-validation was used to evaluate the fully automatic segmentation approach. We show that it outperforms the traditional methods previously applied to the private dataset, as well as other comparable state-of-the-art 2D models on the public PROMISE12 dataset.
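The finding that interpolation followed by convolution beats transposed convolution concerns the decoder's upsampling step. A sketch of the parameter-free half of that combination, nearest-neighbour 2x upsampling of a feature map (illustrative only; the subsequent learned convolution and all framework-level details are omitted):

```python
def upsample_nearest_2x(fm):
    """Nearest-neighbour 2x upsampling of a 2D feature map (list of lists).
    Each value is duplicated along both axes. In the decoder variant the
    paper prefers, this parameter-free step is followed by an ordinary
    convolution, instead of learning the upsampling with a transposed
    convolution (which can introduce checkerboard artefacts)."""
    out = []
    for row in fm:
        wide = [v for v in row for _ in range(2)]  # duplicate columns
        out.extend([wide, list(wide)])             # duplicate rows
    return out
```

Because the interpolation itself has no learnable weights, all learning happens in the following convolution, which sees an evenly covered grid rather than the stride-dependent overlap pattern of a transposed convolution.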