1
Magoulianitis V, Yang J, Yang Y, Xue J, Kaneko M, Cacciamani G, Abreu A, Duddalwar V, Kuo CCJ, Gill IS, Nikias C. PCa-RadHop: A transparent and lightweight feed-forward method for clinically significant prostate cancer segmentation. Comput Med Imaging Graph 2024; 116:102408. [PMID: 38908295; DOI: 10.1016/j.compmedimag.2024.102408]
Abstract
Prostate cancer is one of the most frequently occurring cancers in men, with a low survival rate if not diagnosed early. PI-RADS reading has a high false positive rate, which increases diagnostic costs and patient discomfort. Deep learning (DL) models achieve high segmentation performance but require a large model size and complexity. DL models also lack feature interpretability and are perceived as "black boxes" in the medical field. The PCa-RadHop pipeline proposed in this work aims to provide a more transparent feature extraction process using a linear model. It adopts the recently introduced Green Learning (GL) paradigm, which offers a small model size and low complexity. PCa-RadHop consists of two stages: stage-1 extracts data-driven radiomics features from the bi-parametric magnetic resonance imaging (bp-MRI) input and predicts an initial heatmap. To reduce the false positive rate, a subsequent stage-2 refines the predictions by including more contextual information and radiomics features from each already detected region of interest (ROI). Experiments on the largest publicly available dataset, PI-CAI, show that the proposed method performs competitively against other DL models, achieving an area under the curve (AUC) of 0.807 on a cohort of 1,000 patients. Moreover, PCa-RadHop maintains an orders-of-magnitude smaller model size and complexity.
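The headline figure in this abstract is a patient-level area under the ROC curve. As a minimal, self-contained sketch of how such an AUC is computed (synthetic labels and scores for illustration, not data from the paper), the rank-based formulation counts how often a positive patient outscores a negative one:

```python
import numpy as np

def roc_auc(labels, scores):
    """Rank-based AUC: probability that a random positive outscores a random negative."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # Count (positive, negative) pairs where the positive scores higher; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 0]          # 1 = csPCa patient (toy example)
scores = [0.9, 0.4, 0.35, 0.8, 0.1]  # model heatmap-derived patient scores (toy)
print(round(roc_auc(labels, scores), 3))  # prints: 0.833
```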
Affiliation(s)
- Vasileios Magoulianitis
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Jiaxin Yang
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Yijing Yang
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Jintang Xue
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Masatomo Kaneko
- Department of Urology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA
- Giovanni Cacciamani
- Department of Urology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA
- Andre Abreu
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Vinay Duddalwar
- Department of Urology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA; Department of Radiology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA
- C-C Jay Kuo
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Inderbir S Gill
- Department of Urology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA
- Chrysostomos Nikias
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
2
Gunashekar DD, Bielak L, Oerther B, Benndorf M, Nedelcu A, Hickey S, Zamboglou C, Grosu AL, Bock M. Comparison of data fusion strategies for automated prostate lesion detection using mpMRI correlated with whole mount histology. Radiat Oncol 2024; 19:96. [PMID: 39080735; PMCID: PMC11287985; DOI: 10.1186/s13014-024-02471-0]
Abstract
BACKGROUND In this work, we compare input-level, feature-level, and decision-level data fusion techniques for automatic detection of clinically significant prostate lesions (csPCa). METHODS Multiple deep learning CNN architectures were developed using the U-Net as the baseline. The CNNs use both multiparametric MRI images (T2W, ADC, and high b-value) and quantitative clinical data (prostate-specific antigen (PSA), PSA density (PSAD), prostate gland volume, and gross tumor volume (GTV)), or only mpMRI images (n = 118), as input. In addition, co-registered ground truth data from whole mount histopathology images (n = 22) were used as a test set for evaluation. RESULTS For early/intermediate/late fusion, the CNNs achieved a precision of 0.41/0.51/0.61, a recall of 0.18/0.22/0.25, an average precision of 0.13/0.19/0.27, and F scores of 0.55/0.67/0.76. The Dice-Sørensen coefficient (DSC) was used to evaluate the influence of combining mpMRI with parametric clinical data for csPCa detection. Against the ground truth, we compared the DSC of predictions from CNNs trained with mpMRI plus parametric clinical data and from CNNs trained with only mpMRI images as input, obtaining DSCs of 0.30/0.34/0.36 and 0.26/0.33/0.34, respectively. Additionally, we evaluated the influence of each mpMRI input channel on csPCa detection and obtained DSCs of 0.14/0.25/0.28. CONCLUSION The results show that the decision-level fusion network performs better for prostate lesion detection. Combining mpMRI data with quantitative clinical data does not show significant differences between these networks (p = 0.26/0.62/0.85). CNNs trained with all mpMRI data outperform CNNs with fewer input channels, which is consistent with current clinical protocols, where the same inputs are used for PI-RADS lesion scoring.
TRIAL REGISTRATION The trial was registered retrospectively at the German Register for Clinical Studies (DRKS) under proposal numbers 476/14 and 476/19.
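The three fusion levels compared above can be sketched schematically. This is a hedged illustration using plain NumPy stand-ins for the CNN branches (the paper's actual models are U-Net variants; the shapes, the `tiny_net` placeholder, and the toy PSA/PSAD values are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
mri = rng.random((3, 8, 8))        # stand-in for T2W/ADC/high-b image channels
clinical = np.array([6.5, 0.15])   # stand-in for PSA and PSA density

def tiny_net(x, out_dim=4):
    """Placeholder for a CNN branch: flatten + fixed random projection."""
    w = np.random.default_rng(x.size).random((x.size, out_dim))
    return x.ravel() @ w

# Input-level (early) fusion: clinical values broadcast as extra image channels.
clin_maps = np.repeat(clinical, 8 * 8).reshape(2, 8, 8)
early = tiny_net(np.concatenate([mri, clin_maps], axis=0))

# Feature-level (intermediate) fusion: concatenate branch features, then a shared head.
feat = np.concatenate([tiny_net(mri), tiny_net(clinical)])
intermediate = tiny_net(feat)

# Decision-level (late) fusion: average the branch predictions.
late = 0.5 * (tiny_net(mri) + tiny_net(clinical))

print(early.shape, intermediate.shape, late.shape)  # prints: (4,) (4,) (4,)
```

The design choice being compared is only *where* the clinical data enters the network: at the input, at the feature bottleneck, or at the prediction stage.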
Affiliation(s)
- Deepa Darshini Gunashekar
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
- Lars Bielak
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
- Benedict Oerther
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Matthias Benndorf
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Andrea Nedelcu
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Samantha Hickey
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Constantinos Zamboglou
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Oncology Center, European University Cyprus, Limassol, Cyprus
- Anca-Ligia Grosu
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
- Michael Bock
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
3
Wang G, Hu J, Zhang Y, Xiao Z, Huang M, He Z, Chen J, Bai Z. A modified U-Net convolutional neural network for segmenting periprostatic adipose tissue based on contour feature learning. Heliyon 2024; 10:e25030. [PMID: 38318024; PMCID: PMC10839980; DOI: 10.1016/j.heliyon.2024.e25030]
Abstract
Objective This study trains a U-shaped fully convolutional neural network (U-Net) model based on peripheral contour measures to achieve rapid, accurate, automated identification and segmentation of periprostatic adipose tissue (PPAT). Methods Currently, no studies use deep learning methods to discriminate and segment periprostatic adipose tissue. This paper proposes a modified U-shaped convolutional neural network that learns contour control points from a small dataset of T2W MRI images of PPAT combined with their gradient images, reducing the feature ambiguity caused by differences in PPAT contours between patients. It adopts a supervised learning method on the labeled dataset, combining the probability and spatial distribution of control points, and proposes a weighted loss function to optimize the neural network's convergence speed and detection performance. Based on high-precision detection of control points, a convex curve fit yields the final PPAT contour. The segmentation results were compared with those of a fully convolutional network (FCN), U-Net, and a semantic segmentation convolutional network (SegNet) on three evaluation metrics: Dice similarity coefficient (DSC), Hausdorff distance (HD), and intersection over union (IoU). Results Cropped images with a 270 × 270-pixel matrix had DSC, HD, and IoU values of 70.1%, 27 mm, and 56.1%, respectively; downscaled images with a 256 × 256-pixel matrix had values of 68.7%, 26.7 mm, and 54.1%. The U-Net based on peripheral contour characteristics predicted complete periprostatic adipose tissue contours on T2W images at different levels, whereas FCN, U-Net, and SegNet could not. Conclusion This U-Net convolutional neural network based on peripheral contour features can identify and segment periprostatic adipose tissue well. Cropped images with a 270 × 270-pixel matrix are more appropriate for this network; reducing the resolution of the original image lowers its accuracy. FCN and SegNet are not appropriate for identifying PPAT on T2-sequence MR images. Our method can segment PPAT rapidly, accurately, and automatically, laying a foundation for PPAT image analysis.
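The three evaluation metrics used above (DSC, HD, IoU) have standard definitions on binary masks. A minimal sketch on toy masks follows; note that distances here are in pixels, whereas the paper reports HD in mm, and the masks are illustrative only:

```python
import numpy as np

def dsc(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A|+|B|)."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def iou(a, b):
    """Intersection over union: |A∩B| / |A∪B|."""
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

def hausdorff(a, b):
    """Symmetric Hausdorff distance between foreground point sets."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.zeros((8, 8), bool); a[2:6, 2:6] = True  # toy prediction mask
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True  # toy ground-truth mask
print(dsc(a, b), iou(a, b), hausdorff(a, b))
```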
Affiliation(s)
- Gang Wang
- Department of Urology, Affiliated Haikou Hospital of Xiangya Medical College, Central South University, Haikou, 570208, Hainan Province, China
- Jinyue Hu
- Department of Radiology, Affiliated Haikou Hospital of Xiangya Medical College, Central South University, Haikou, 570208, Hainan Province, China
- Yu Zhang
- College of Computer Science and Cyberspace Security, Hainan University, Haikou, 570228, China
- Zhaolin Xiao
- College of Computer Science, Xi'an University of Technology, Xi'an, 710048, China
- Mengxing Huang
- College of Information and Communication Engineering, Hainan University, Haikou, 70208, China
- Zhanping He
- Department of Radiology, Affiliated Haikou Hospital of Xiangya Medical College, Central South University, Haikou, 570208, Hainan Province, China
- Jing Chen
- Department of Radiology, Affiliated Haikou Hospital of Xiangya Medical College, Central South University, Haikou, 570208, Hainan Province, China
- Zhiming Bai
- Department of Urology, Affiliated Haikou Hospital of Xiangya Medical College, Central South University, Haikou, 570208, Hainan Province, China
4
Meglič J, Sunoqrot MRS, Bathen TF, Elschot M. Label-set impact on deep learning-based prostate segmentation on MRI. Insights Imaging 2023; 14:157. [PMID: 37749333; PMCID: PMC10519913; DOI: 10.1186/s13244-023-01502-w]
Abstract
BACKGROUND Prostate segmentation is an essential step in computer-aided detection and diagnosis systems for prostate cancer. Deep learning (DL)-based methods provide good performance for prostate gland and zone segmentation, but little is known about the impact of manual segmentation (that is, label) selection on their performance. In this work, we investigated these effects by obtaining two different expert label-sets for the PROSTATEx I challenge training dataset (n = 198) and using them, in addition to an in-house dataset (n = 233), to assess the effect on segmentation performance. The automatic segmentation method we used was nnU-Net. RESULTS The choice of training/testing label-set had a significant (p < 0.001) impact on model performance. Furthermore, performance was significantly (p < 0.001) higher when the model was trained and tested with the same label-set. Moreover, agreement between automatic segmentations was significantly (p < 0.0001) higher than agreement between manual segmentations, and the models were able to outperform the human label-sets used to train them. CONCLUSIONS We investigated the impact of label-set selection on the performance of a DL-based prostate segmentation model. We found that the use of different sets of manual prostate gland and zone segmentations has a measurable impact on model performance. Nevertheless, DL-based segmentation showed greater inter-reader agreement than manual segmentation. More thought should be given to the label-set, with a focus on multicenter manual segmentation and agreement on common procedures. CRITICAL RELEVANCE STATEMENT Label-set selection significantly impacts the performance of a deep learning-based prostate segmentation model. Models trained with different label-sets agreed with each other more than the manual segmentations did. KEY POINTS
• Label-set selection has a significant impact on the performance of automatic segmentation models.
• Deep learning-based models demonstrated true learning rather than simply mimicking the label-set.
• Automatic segmentation appears to have greater inter-reader agreement than manual segmentation.
Affiliation(s)
- Jakob Meglič
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology - NTNU, 7030, Trondheim, Norway
- Faculty of Medicine, University of Ljubljana, 1000, Ljubljana, Slovenia
- Mohammed R S Sunoqrot
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology - NTNU, 7030, Trondheim, Norway
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030, Trondheim, Norway
- Tone Frost Bathen
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology - NTNU, 7030, Trondheim, Norway
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030, Trondheim, Norway
- Mattijs Elschot
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology - NTNU, 7030, Trondheim, Norway
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030, Trondheim, Norway
5
Gibala S, Obuchowicz R, Lasek J, Schneider Z, Piorkowski A, Pociask E, Nurzynska K. Textural Features of MR Images Correlate with an Increased Risk of Clinically Significant Cancer in Patients with High PSA Levels. J Clin Med 2023; 12:2836. [PMID: 37109173; PMCID: PMC10146387; DOI: 10.3390/jcm12082836]
Abstract
BACKGROUND Prostate cancer, which is associated with gland biology and environmental risks, is a serious clinical problem in the male population worldwide. Important progress has been made in the diagnostic and clinical setups designed for the detection of prostate cancer, with a multiparametric magnetic resonance diagnostic process based on the PI-RADS protocol playing a key role. This method relies on image evaluation by an imaging specialist. The medical community has expressed its desire for image analysis techniques that can detect important image features that may indicate cancer risk. METHODS Anonymized scans of 41 patients with laboratory-measured PSA levels who were routinely scanned for prostate cancer were used. The peripheral and central zones of the prostate were delineated manually, with demarcation of suspected tumor foci under medical supervision. More than 7,000 textural features were calculated in the marked regions using MaZda software and used to parameterize the regions. Statistical analyses were performed to find correlations with the PSA-level-based diagnosis that might help distinguish suspicious lesions. Further multiparametric analysis using MIL-SVM machine learning was used to obtain greater accuracy. RESULTS Multiparametric classification using MIL-SVM reached 92% accuracy. CONCLUSIONS There is an important correlation between the textural parameters of prostate MRI images made using the PI-RADS MR protocol and PSA levels > 4 ng/mL. The correlations found express a dependence between image features and high cancer markers, and hence cancer risk.
Affiliation(s)
- Sebastian Gibala
- Urology Department, Ultragen Medical Center, 31-572 Krakow, Poland
- Rafal Obuchowicz
- Department of Diagnostic Imaging, Jagiellonian University Medical College, 31-501 Krakow, Poland
- Julia Lasek
- Faculty of Geology, Geophysics and Environmental Protection, AGH University of Science and Technology, 30-059 Krakow, Poland
- Zofia Schneider
- Faculty of Geology, Geophysics and Environmental Protection, AGH University of Science and Technology, 30-059 Krakow, Poland
- Adam Piorkowski
- Department of Biocybernetics and Biomedical Engineering, AGH University of Science and Technology, 30-059 Krakow, Poland
- Elżbieta Pociask
- Department of Biocybernetics and Biomedical Engineering, AGH University of Science and Technology, 30-059 Krakow, Poland
- Karolina Nurzynska
- Department of Algorithmics and Software, Silesian University of Technology, 44-100 Gliwice, Poland
6
Anush A, Rohini G, Nicola S, WalaaEldin EM, Eranga U. Deep-learning-based ensemble method for fully automated detection of renal masses on magnetic resonance images. J Med Imaging (Bellingham) 2023; 10:024501. [PMID: 36950139; PMCID: PMC10026851; DOI: 10.1117/1.jmi.10.2.024501]
Abstract
Purpose Accurate detection of small renal masses (SRM) is a fundamental step toward automated classification of benign and malignant, or indolent and aggressive, renal tumors. Magnetic resonance imaging (MRI) may outperform computed tomography (CT) for SRM subtype differentiation due to improved tissue characterization, but is less explored than CT. The objective of this study is to autonomously detect SRM on contrast-enhanced magnetic resonance images (CE-MRI). Approach In this paper, we describe a novel, fully automated methodology for accurate detection and localization of SRM on CE-MRI. We first determine the kidney boundaries using a U-Net convolutional neural network. We then search for SRM within the localized kidney regions using a mixture-of-experts ensemble model based on the U-Net architecture. Our dataset contained CE-MRI scans of 118 patients with different solid kidney tumor subtypes, including renal cell carcinomas, oncocytomas, and fat-poor renal angiomyolipomas. We evaluated the proposed model on the entire CE-MRI dataset using 5-fold cross-validation. Results The developed algorithm achieved a Dice similarity coefficient of 91.20 ± 5.41% (mean ± standard deviation) for kidney segmentation from 118 volumes comprising 25,025 slices. Our proposed ensemble model for SRM detection yielded a recall of 86.2% and a precision of 83.3% on the entire CE-MRI dataset. Conclusions We described a deep-learning-based method for fully automated SRM detection on CE-MR images, which had not been studied previously. The results are clinically important, as SRM localization is a pre-step for fully automated diagnosis of SRM subtypes.
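The detection figures reported above follow directly from per-lesion match counts. A minimal sketch (the true-positive/false-positive/false-negative counts below are toy numbers chosen to reproduce ratios close to the reported 86.2% recall and 83.3% precision, not counts from the paper):

```python
def precision_recall(tp, fp, fn):
    """Standard detection metrics from per-lesion match counts."""
    precision = tp / (tp + fp)  # fraction of detections that are real lesions
    recall = tp / (tp + fn)     # fraction of real lesions that were detected
    return precision, recall

p, r = precision_recall(tp=25, fp=5, fn=4)
print(round(p, 3), round(r, 3))  # prints: 0.833 0.862
```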
Affiliation(s)
- Agarwal Anush
- University of Guelph, School of Engineering, Guelph, Ontario, Canada
- Gaikar Rohini
- University of Guelph, School of Engineering, Guelph, Ontario, Canada
- Schieda Nicola
- University of Ottawa, Department of Radiology, Ottawa, Ontario, Canada
- Ukwatta Eranga
- University of Guelph, School of Engineering, Guelph, Ontario, Canada
7
Zhu L, Gao G, Zhu Y, Han C, Liu X, Li D, Liu W, Wang X, Zhang J, Zhang X, Wang X. Fully automated detection and localization of clinically significant prostate cancer on MR images using a cascaded convolutional neural network. Front Oncol 2022; 12:958065. [PMID: 36249048; PMCID: PMC9558117; DOI: 10.3389/fonc.2022.958065]
Abstract
Purpose To develop a cascaded deep learning model trained with apparent diffusion coefficient (ADC) and T2-weighted imaging (T2WI) for fully automated detection and localization of clinically significant prostate cancer (csPCa). Methods This retrospective study included 347 consecutive patients (235 csPCa, 112 non-csPCa) with high-quality prostate MRI data, who were randomly split for training, validation, and testing. The ground truth was obtained using manual csPCa lesion segmentation, according to pathological results. The proposed cascaded model, based on Res-UNet, takes prostate MR images (T2WI + ADC or ADC only) as input and automatically segments the whole prostate gland, the anatomic zones, and the csPCa region step by step. The performance of the models was evaluated and compared with PI-RADS (version 2.1) assessment using sensitivity, specificity, accuracy, and Dice similarity coefficient (DSC) in the held-out test set. Results In the test set, the per-lesion sensitivity of the biparametric (ADC + T2WI) model, the ADC model, and PI-RADS assessment was 95.5% (84/88), 94.3% (83/88), and 94.3% (83/88), respectively (all p > 0.05). Additionally, the mean DSCs on the csPCa lesions were 0.64 ± 0.24 and 0.66 ± 0.23 for the biparametric model and the ADC model, respectively. The sensitivity, specificity, and accuracy of the biparametric model were 95.6% (108/113), 91.5% (665/727), and 92.0% (773/840) per sextant, and 98.6% (68/69), 64.8% (46/71), and 81.4% (114/140) per patient. The biparametric model performed similarly to PI-RADS assessment (p > 0.05) and had higher specificity than the ADC model (86.8% [631/727], p < 0.001) per sextant. Conclusion The cascaded deep learning model trained with ADC and T2WI achieves good performance for automated csPCa detection and localization.
Affiliation(s)
- Lina Zhu
- Department of Radiology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Ge Gao
- Department of Radiology, Peking University First Hospital, Beijing, China
- Yi Zhu
- Department of Clinical & Technical Support, Philips Healthcare, Beijing, China
- Chao Han
- Department of Radiology, Peking University First Hospital, Beijing, China
- Xiang Liu
- Department of Radiology, Peking University First Hospital, Beijing, China
- Derun Li
- Department of Urology, Peking University First Hospital, Beijing, China
- Weipeng Liu
- Department of Development and Research, Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Xiangpeng Wang
- Department of Development and Research, Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Jingyuan Zhang
- Department of Development and Research, Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, Beijing, China
- Xiaoying Wang
- Department of Radiology, Peking University First Hospital, Beijing, China
- Correspondence: Xiaoying Wang
8
Impact of measurement method on interobserver variability of apparent diffusion coefficient of lesions in prostate MRI. PLoS One 2022; 17:e0268829. [PMID: 35604891; PMCID: PMC9126398; DOI: 10.1371/journal.pone.0268829]
Abstract
Purpose To compare the inter-observer variability of apparent diffusion coefficient (ADC) values of prostate lesions measured with a 2D region of interest (ROI) with and without specific measurement instruction. Methods Forty lesions in 40 patients who underwent prostate MRI followed by targeted prostate biopsy were evaluated. A multi-reader study (10 readers) assessed the agreement of ADC values between 2D-ROI placement without specific instruction and 2D-ROI placement with the specific instruction to place a 9-pixel 2D-ROI covering the lowest-ADC area. A computer script generated multiple overlapping 9-pixel 2D-ROIs within a 3D-ROI, encompassing the entire lesion, placed by a single reader; the lowest mean ADC value over these small 2D-ROIs was used as the reference value. Inter-observer agreement was assessed using Bland-Altman plots. The intraclass correlation coefficient (ICC) was assessed between the ADC values measured by the 10 readers and the computer-calculated reference values. Results Ten lesions were benign, 6 were Gleason score 6 prostate carcinoma (PCa), and 24 were clinically significant PCa. The mean ± SD reference ADC value by 9-pixel ROI was 733 ± 186 × 10⁻⁶ mm²/s. The 95% limits of agreement of ADC values among readers were narrower with specific instruction (±112) than without (±205). The ICC between reader-measured ADC values and computer-calculated reference values ranged from 0.736-0.949 with specific instruction and 0.349-0.919 without. Conclusion Inter-observer agreement of ADC values can be improved by specifying a measurement method (a specific ROI size covering the lowest-ADC area).
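The reference-value procedure described above lends itself to a short sketch: slide a 3 × 3 (9-pixel) window over an ADC map restricted to a lesion mask and keep the lowest window mean. This is a hedged 2D illustration with toy values (the study used a 3D-ROI per lesion; units would be 10⁻⁶ mm²/s):

```python
import numpy as np

def lowest_mean_adc(adc, mask, size=3):
    """Lowest mean over all size×size windows lying entirely inside the lesion mask."""
    best = None
    h, w = adc.shape
    for r in range(h - size + 1):
        for c in range(w - size + 1):
            if mask[r:r + size, c:c + size].all():  # window fully within lesion
                m = adc[r:r + size, c:c + size].mean()
                best = m if best is None else min(best, m)
    return best

adc = np.array([[900, 850, 800, 900],
                [880, 600, 620, 870],
                [860, 610, 590, 880],
                [900, 870, 890, 950]], float)  # toy ADC map
mask = np.ones_like(adc, bool)                 # toy lesion mask (whole patch)
print(lowest_mean_adc(adc, mask))
```

Standardizing on the minimum over fixed-size windows removes the reader's freedom in ROI placement, which is exactly the source of variability the instruction was designed to reduce.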