1
Chen P, Turco S, Wang Y, Jager A, Daures G, Wijkstra H, Zwart W, Huang P, Mischi M. Can 3D Multiparametric Ultrasound Imaging Predict Prostate Biopsy Outcome? Ultrasound Med Biol 2024;50:1194-1202. [PMID: 38734528] [DOI: 10.1016/j.ultrasmedbio.2024.04.007]
Abstract
OBJECTIVES To assess the value of 3D multiparametric ultrasound imaging, combining hemodynamic and tissue-stiffness quantifications by machine learning, for the prediction of prostate biopsy outcomes.
METHODS After signing informed consent, 54 biopsy-naïve patients underwent a 3D dynamic contrast-enhanced ultrasound (DCE-US) recording and a multi-plane 2D shear-wave elastography (SWE) scan with manual sweeping from base to apex of the prostate, and received 12-core systematic biopsies (SBx). 3D maps of 18 hemodynamic parameters were extracted from the 3D DCE-US quantification, and a 3D SWE elasticity map was reconstructed from the multi-plane 2D SWE acquisitions. Subsequently, all 3D maps were segmented and subdivided into 12 regions corresponding to the SBx locations. Per region, the set of 19 computed parameters was further extended by deriving eight radiomic features per parameter. Based on this feature set, a multiparametric ultrasound approach was implemented using five different classifiers together with a sequential floating forward selection method and hyperparameter tuning. Classification accuracy with respect to the biopsy reference was assessed by a group k-fold cross-validation procedure, and performance was evaluated by the area under the receiver operating characteristic curve (AUC).
RESULTS Of the 54 patients, 20 were found to have clinically significant prostate cancer (csPCa) based on SBx. The 18 hemodynamic parameters showed mean AUC values ranging from 0.63 to 0.75, and SWE elasticity showed an AUC of 0.66. The multiparametric approach using radiomic features derived from hemodynamic parameters alone produced an AUC of 0.81, while the combination of hemodynamic and tissue-stiffness quantifications yielded a significantly improved AUC of 0.85 for csPCa detection (p < 0.05) using the Gradient Boosting classifier.
CONCLUSIONS Our results suggest that 3D multiparametric ultrasound imaging combining hemodynamic and tissue-stiffness features represents a promising diagnostic tool for biopsy outcome prediction, aiding in csPCa localization.
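As an illustration of the validation scheme described in this abstract, the sketch below pairs a Gradient Boosting classifier with group k-fold cross-validation so that all 12 regions of a patient stay on one side of each split. The data, feature counts, and hyperparameters are synthetic stand-ins, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n_patients, regions_per_patient, n_features = 54, 12, 19
X = rng.normal(size=(n_patients * regions_per_patient, n_features))
y = rng.integers(0, 2, size=n_patients * regions_per_patient)
groups = np.repeat(np.arange(n_patients), regions_per_patient)  # patient ID per region

# GroupKFold keeps every patient's 12 regions together, preventing
# within-patient leakage between training and test folds
clf = GradientBoostingClassifier(random_state=0)
aucs = cross_val_score(clf, X, y, groups=groups,
                       cv=GroupKFold(n_splits=5), scoring="roc_auc")
print(aucs.mean())
```

With random labels, the mean AUC hovers near 0.5, which is exactly the sanity check a grouped split should pass.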
Affiliation(s)
- Peiran Chen
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Simona Turco
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Yao Wang
- Department of Ultrasound in Medicine, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Auke Jager
- Department of Urology, Amsterdam University Medical Centers, Amsterdam, Netherlands
- Gautier Daures
- Angiogenesis Analytics, JADS Venture Campus, Netherlands
- Hessel Wijkstra
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands; Department of Urology, Amsterdam University Medical Centers, Amsterdam, Netherlands
- Wim Zwart
- Angiogenesis Analytics, JADS Venture Campus, Netherlands
- Pintong Huang
- Department of Ultrasound in Medicine, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Massimo Mischi
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
2
Wang Y, Gao X, Wang J. Functional Proteomic Profiling Analysis in Four Major Types of Gastrointestinal Cancers. Biomolecules 2023;13:701. [PMID: 37189448] [DOI: 10.3390/biom13040701]
Abstract
Gastrointestinal (GI) cancer accounts for one in four cancer cases and one in three cancer-related deaths globally. A deeper understanding of cancer development mechanisms can be applied to cancer medicine. Comprehensive sequencing applications have revealed the genomic landscapes of the common types of human cancer, and proteomics technology has identified protein targets and signalling pathways related to cancer growth and progression. This study aimed to explore the functional proteomic profiles of four major types of GI tract cancer based on The Cancer Proteome Atlas (TCPA). We provided an overview of functional proteomic heterogeneity through several approaches, including principal component analysis (PCA), partial least squares discriminant analysis (PLS-DA), t-distributed stochastic neighbour embedding (t-SNE), and hierarchical clustering, applied to oesophageal carcinoma (ESCA), stomach adenocarcinoma (STAD), colon adenocarcinoma (COAD), and rectum adenocarcinoma (READ) tumours, to gain a system-wide understanding of the four types of GI cancer. A feature selection approach, the mutual information feature selection (MIFS) method, was used to screen candidate protein signature subsets that better distinguish the different cancer types. The potential clinical implications of the candidate proteins for tumour progression and prognosis were also evaluated based on the TCPA and The Cancer Genome Atlas (TCGA) databases. The results suggest that functional proteomic profiling can identify distinct patterns among the four types of GI cancer and provide candidate proteins for clinical diagnosis and prognosis evaluation. We also highlight the application of feature selection approaches in high-dimensional biological data analysis. Overall, this study could improve the understanding of the complexity of cancer phenotypes and genotypes and thus be applied to cancer medicine.
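The mutual-information screening step can be illustrated roughly as follows; scikit-learn's `mutual_info_classif` stands in for the paper's exact MIFS formulation, and the protein expression matrix is simulated.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))      # 200 samples x 50 "proteins"
y = rng.integers(0, 4, size=200)    # four GI cancer types
X[:, 0] += y                        # make protein 0 informative about the type

# score every feature by its mutual information with the class label,
# then keep the top-ranked candidates
mi = mutual_info_classif(X, y, random_state=1)
top = np.argsort(mi)[::-1][:10]     # indices of the 10 best candidates
print(top[:3])
```

The deliberately informative feature (index 0) ranks among the top candidates, while pure-noise features score near zero.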
Affiliation(s)
- Yangyang Wang
- School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710129, China
- Xiaoguang Gao
- School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710129, China
- Jihan Wang
- Institute of Medical Research, Northwestern Polytechnical University, Xi'an 710072, China
3
Yi Z, Hu S, Lin X, Zou Q, Zou M, Zhang Z, Xu L, Jiang N, Zhang Y. Machine learning-based prediction of invisible intraprostatic prostate cancer lesions on 68Ga-PSMA-11 PET/CT in patients with primary prostate cancer. Eur J Nucl Med Mol Imaging 2021;49:1523-1534. [PMID: 34845536] [DOI: 10.1007/s00259-021-05631-6]
Abstract
PURPOSE 68Ga-PSMA PET/CT has high specificity and sensitivity for the detection of both intraprostatic tumor lesions and metastases. However, approximately 10% of primary prostate cancers are invisible on PSMA-PET (they exhibit no or minimal uptake). In this work, we investigated whether machine learning-based radiomics models derived from PSMA-PET images could predict invisible intraprostatic lesions on 68Ga-PSMA-11 PET in patients with primary prostate cancer.
METHODS In this retrospective study, patients with or without prostate cancer who underwent 68Ga-PSMA PET/CT and presented negative PSMA-PET images at either of two institutions were included: institution 1 (between 2017 and 2020) for the training set and institution 2 (between 2019 and 2020) for the external test set. Three random forest (RF) models were built using selected features extracted from standard PET images, delayed PET images, and both standard and delayed PET images, followed by tenfold cross-validation. In the test phase, the three RF models and PSA density (PSAD, cut-off value: 0.15 ng/ml/ml) were evaluated on the external test set. The area under the receiver operating characteristic curve (AUC) was calculated for the models and PSAD, and the AUCs of the radiomics models and PSAD were compared.
RESULTS A total of 64 patients (39 with prostate cancer and 25 with benign prostate disease) formed the training set, and 36 (21 with prostate cancer and 15 with benign prostate disease) formed the test set. The average AUCs of the three RF models from tenfold cross-validation were 0.87 (95% CI: 0.72, 1.00), 0.86 (95% CI: 0.63, 1.00), and 0.91 (95% CI: 0.69, 1.00), respectively. In the test set, the AUCs of the three trained RF models and PSAD were 0.903 (95% CI: 0.830, 0.975), 0.856 (95% CI: 0.748, 0.964), 0.925 (95% CI: 0.838, 1.00), and 0.662 (95% CI: 0.510, 0.813), respectively. The AUCs of the three radiomics models were higher than that of PSAD (0.903, 0.856, and 0.925 vs. 0.662; P = .007, P = .045, and P = .005, respectively).
CONCLUSION Random forest models developed from 68Ga-PSMA-11 PET-based radiomics features proved useful for accurate prediction of invisible intraprostatic lesions on 68Ga-PSMA-11 PET in patients with primary prostate cancer and showed better diagnostic performance than PSAD.
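A loose sketch of the modelling setup above: a random forest scored by ROC AUC under tenfold cross-validation, compared against a single-variable rule analogous to the PSAD cut-off. All numbers are simulated and the radiomics feature extraction is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 30))
y = rng.integers(0, 2, size=100)
X[:, 0] += 1.5 * y                 # one informative "radiomic" feature

# multivariate model, validated by stratified tenfold CV
rf = RandomForestClassifier(n_estimators=200, random_state=2)
aucs = cross_val_score(rf, X, y, cv=StratifiedKFold(n_splits=10),
                       scoring="roc_auc")

# single-variable rule, in the spirit of the PSAD threshold comparison
psad_like = X[:, 0]
print(aucs.mean(), roc_auc_score(y, psad_like))
```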
Affiliation(s)
- Zhilong Yi
- Department of Nuclear Medicine, The Seventh Affiliated Hospital, Sun Yat-Sen University, Shenzhen, China; Department of Nuclear Medicine, The Third Affiliated Hospital of Sun Yat-Sen University, Guangzhou, Guangdong, China
- Siqi Hu
- Department of Nuclear Medicine, The Third Affiliated Hospital of Sun Yat-Sen University, Guangzhou, Guangdong, China
- Xiaofeng Lin
- Department of Nuclear Medicine, The Seventh Affiliated Hospital, Sun Yat-Sen University, Shenzhen, China
- Qiong Zou
- Department of Nuclear Medicine, The Third Affiliated Hospital of Sun Yat-Sen University, Guangzhou, Guangdong, China
- MinHong Zou
- Department of Ultrasound, The Third Affiliated Hospital of Sun Yat-Sen University, Guangzhou, Guangdong, China
- Zhanlei Zhang
- Department of Nuclear Medicine, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Lei Xu
- Department of Nuclear Medicine, The Third Affiliated Hospital of Sun Yat-Sen University, Guangzhou, Guangdong, China
- Ningyi Jiang
- Department of Nuclear Medicine, The Seventh Affiliated Hospital, Sun Yat-Sen University, Shenzhen, China; Department of Nuclear Medicine, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Yong Zhang
- Department of Nuclear Medicine, The Third Affiliated Hospital of Sun Yat-Sen University, Guangzhou, Guangdong, China
4
Wang H, Udupa JK, Odhner D, Tong Y, Zhao L, Torigian DA. Automatic anatomy recognition in whole-body PET/CT images. Med Phys 2016;43:613. [PMID: 26745953] [DOI: 10.1118/1.4939127]
Abstract
PURPOSE Whole-body positron emission tomography/computed tomography (PET/CT) has become a standard method of imaging patients with various disease conditions, especially cancer. Body-wide accurate quantification of disease burden in PET/CT images is important for characterizing lesions, staging disease, prognosticating patient outcome, planning treatment, and evaluating disease response to therapeutic interventions. Body-wide anatomy recognition in PET/CT is a critical first step for accurately and automatically quantifying disease body-wide, body-region-wise, and organ-wise. This process, however, has remained a challenge due to the lower quality of the anatomic information portrayed in the CT component of this imaging modality and the paucity of anatomic detail in the PET component. In this paper, the authors demonstrate the adaptation of a recently developed automatic anatomy recognition (AAR) methodology [Udupa et al., "Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images," Med. Image Anal. 18, 752-771 (2014)] to PET/CT images. Their goal was to test what level of object localization accuracy can be achieved on PET/CT compared with that achieved on diagnostic CT images.
METHODS The authors advance the AAR approach on three fronts: (i) from body-region-wise treatment in the work of Udupa et al. to the whole body; (ii) from the use of image intensity in optimal object recognition to intensity plus object-specific texture properties; and (iii) from an intramodality model-building-recognition strategy to an intermodality approach. The whole-body approach allows consideration of relationships among objects in different body regions, which was previously not possible. Consideration of object texture allows generalizing the previous optimal threshold-based fuzzy model recognition method from intensity images to any derived fuzzy membership image and, in the process, brings performance to the level achieved on diagnostic CT and MR images in body-region-wise approaches. The intermodality approach fosters the use of existing fuzzy models, previously created from diagnostic CT images, on PET/CT and other derived images, thus truly separating the modality-independent object assembly (anatomy) from the modality-specific tissue property portrayal in the image.
RESULTS Key ways of combining the above three basic ideas lead to 15 different strategies for recognizing objects in PET/CT images. Utilizing 50 diagnostic CT image data sets from the thoracic and abdominal body regions and 16 whole-body PET/CT image data sets, the authors compare the recognition performance of these 15 strategies on 18 objects from the thorax, abdomen, and pelvis in terms of object localization error and size estimation error. Particularly on texture membership images, object localization is within 3 voxels of the known true locations on whole-body low-dose CT images and within 2 voxels on body-region-wise low-dose images. Surprisingly, even on direct body-region-wise PET images, localization error within 3 voxels seems achievable.
CONCLUSIONS The previous body-region-wise approach can be extended to the whole-body torso with similar object localization performance. Combined use of image texture and intensity properties yields the best object localization accuracy. In both body-region-wise and whole-body approaches, recognition performance on low-dose CT images reaches levels previously achieved on diagnostic CT images. The best recognition strategy varies among objects; the proposed framework, however, allows employing a strategy that is optimal for each object.
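The localization-error metric reported above can be made concrete with a toy computation: the distance between a predicted and a true object centre, expressed in voxels. The coordinates and voxel size below are invented for illustration.

```python
import numpy as np

def localization_error(pred_center, true_center, voxel_size):
    # Euclidean distance between centres, normalised by voxel size
    # along each axis so the result is in voxel units
    diff = (np.asarray(pred_center) - np.asarray(true_center)) / np.asarray(voxel_size)
    return np.linalg.norm(diff)

# hypothetical predicted vs. true object centre in mm, 2 x 2 x 4 mm voxels
err = localization_error((101.0, 52.0, 204.0), (98.0, 50.0, 200.0), (2.0, 2.0, 4.0))
print(err)
```

Here the 3 mm, 2 mm, and 4 mm offsets become 1.5, 1, and 1 voxels, giving an error of about 2.06 voxels, within the ~3-voxel accuracy quoted above.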
Affiliation(s)
- Huiqian Wang
- College of Optoelectronic Engineering, Chongqing University, Chongqing 400044, China; Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Dewey Odhner
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Liming Zhao
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104; Research Center of Intelligent System and Robotics, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
5
Zhang D, Xu M, Quan L, Yang Y, Qin Q, Zhu W. Segmentation of tumor ultrasound image in HIFU therapy based on texture and boundary encoding. Phys Med Biol 2015;60:1807-1830. [PMID: 25658334] [DOI: 10.1088/0031-9155/60/5/1807]
Abstract
It is crucial in high-intensity focused ultrasound (HIFU) therapy to detect the tumor precisely with minimal manual intervention, in order to enhance therapy efficiency. Ultrasound image segmentation is a difficult task due to signal attenuation, speckle effect, and shadows. This paper presents an unsupervised approach based on texture and boundary encoding, customized for ultrasound image segmentation in HIFU therapy. The approach oversegments the ultrasound image into small regions, which are then merged using the principle of minimum description length (MDL). Small regions belonging to the same tumor are clustered, as they share similar texture features. The merging is completed by obtaining the shortest coding length from encoding the textures and boundaries of these regions in the clustering process. The tumor region is finally selected from the merged regions by a proposed algorithm without manual interaction. The performance of the method is tested on 50 uterine fibroid ultrasound images from HIFU guiding transducers. The segmentations are compared with manual delineations to verify feasibility. The quantitative evaluation with HIFU images shows that the mean true positive rate of the approach is 93.53%, the mean false positive rate is 4.06%, the mean similarity is 89.92%, the mean normalized Hausdorff distance is 3.62%, and the mean normalized maximum average distance is 0.57%. The experiments validate that the proposed method achieves favorable segmentation without manual initialization and effectively handles the poor quality of the ultrasound guidance image in HIFU therapy, indicating that the approach is applicable in HIFU therapy.
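The MDL merging principle can be illustrated with a toy numpy sketch: two regions are merged when coding their intensities under a single Gaussian model is cheaper (a shorter description length in bits) than coding them separately. This mirrors the principle only, not the paper's full texture-plus-boundary coder.

```python
import numpy as np

def code_length(x, eps=1e-9):
    # bits to code samples under a Gaussian model (differential
    # entropy per sample), plus a small model cost term
    var = np.var(x) + eps
    return 0.5 * len(x) * np.log2(2 * np.pi * np.e * var) + np.log2(len(x))

def should_merge(a, b):
    # merge only if the joint code is shorter than two separate codes
    return code_length(np.concatenate([a, b])) < code_length(a) + code_length(b)

rng = np.random.default_rng(3)
same = rng.normal(0.5, 0.05, 400)    # two patches with similar intensity statistics
other = rng.normal(0.9, 0.05, 400)   # a patch from different "tissue"
print(should_merge(same[:200], same[200:]), should_merge(same, other))
```

Two halves of the same texture compress better jointly, so they merge; mixing the two distinct patches inflates the variance and the joint code, so they stay separate.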
Affiliation(s)
- Dong Zhang
- School of Physics and Technology, Wuhan University, Wuhan, 430072, People's Republic of China
6
Xu W, Liu Y, Lu Z, Jin ZD, Hu YH, Yu JG, Li ZS. A new endoscopic ultrasonography image processing method to evaluate the prognosis for pancreatic cancer treated with interstitial brachytherapy. World J Gastroenterol 2013;19:6479-6484. [PMID: 24151368] [PMCID: PMC3798413] [DOI: 10.3748/wjg.v19.i38.6479]
Abstract
AIM: To develop a fuzzy classification method to score the texture features of pancreatic cancer in endoscopic ultrasonography (EUS) images and evaluate its utility in making prognosis judgments for patients with unresectable pancreatic cancer treated by EUS-guided interstitial brachytherapy.
METHODS: EUS images from our retrospective database were analyzed. The regions of interest were drawn, and texture features were extracted, selected, and scored with a fuzzy classification method using a C++ program. Then, patients with unresectable pancreatic cancer were enrolled to receive EUS-guided iodine-125 radioactive seed implantation. Their fuzzy classification scores, tumor volumes, and carbohydrate antigen 199 (CA199) levels before and after brachytherapy were recorded. The association between the changes in these parameters and overall survival was analyzed statistically.
RESULTS: EUS images of 153 patients with pancreatic cancer and 63 non-cancer patients were analyzed. A total of 25 consecutive patients were enrolled, and they tolerated the brachytherapy well without any complications. There was a correlation between the change in the fuzzy classification score and overall survival (Spearman test, r = 0.616, P = 0.001), whereas no significant correlation was found between overall survival and the change in tumor volume (P = 0.663) or CA199 level (P = 0.659). Fifteen patients showed a decrease in their fuzzy classification score after brachytherapy, whereas the score increased in the other 10 patients. There was a significant difference in overall survival between the two groups (67 d vs 151 d, P = 0.001), but not in the change in tumor volume or CA199 level.
CONCLUSION: Using the fuzzy classification method to analyze EUS images of pancreatic cancer is feasible, and the method can be used to make prognosis judgments for patients with unresectable pancreatic cancer treated by interstitial brachytherapy.
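The survival-association test used in this study, a Spearman rank correlation between the change in a texture-based score and overall survival, looks like this on made-up numbers (the direction and strength of the simulated association are arbitrary):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
score_change = rng.normal(size=25)  # post- minus pre-brachytherapy score, synthetic
# simulate survival that decreases as the score increases
survival_days = 120 - 30 * score_change + rng.normal(scale=5, size=25)

# rank correlation is robust to the monotone-but-nonlinear relations
# typical of such scores
rho, p = spearmanr(score_change, survival_days)
print(round(rho, 2), p < 0.05)
```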
7
Mohamed SS, Li JM, Salama MMA, Freeman GH, Tizhoosh HR, Fenster A, Rizkalla K. An automated neural-fuzzy approach to malignant tumor localization in 2D ultrasonic images of the prostate. J Digit Imaging 2011;24:411-423. [PMID: 20532587] [PMCID: PMC3092054] [DOI: 10.1007/s10278-010-9301-x]
Abstract
In this paper, a new neural-fuzzy approach is proposed for automated region segmentation in transrectal ultrasound images of the prostate. The goal of region segmentation is to identify suspicious regions in the prostate in order to provide decision support for the diagnosis of prostate cancer. The new automated region segmentation system uses expert knowledge as well as both textural and spatial features of the image to accomplish the segmentation. The textural information is extracted by two recurrent random pulsed neural networks trained on two data sets (a suspicious-tissue data set and a normal-tissue data set). Spatial information is captured by an atlas-based reference approach and is represented as fuzzy membership functions. The textural and spatial features are synthesized by a fuzzy inference system, which provides a binary classification of the region being evaluated.
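A toy stand-in for the fusion step: a texture score and an atlas-style spatial fuzzy membership combined by a simple product rule and thresholded into a binary "suspicious" label. The membership shape, distances, and threshold are invented for illustration and are far simpler than the paper's inference system.

```python
import numpy as np

def spatial_membership(dist_mm):
    # triangular fuzzy membership: 1 at the reference zone,
    # fading linearly to 0 by 10 mm away (assumed shape)
    return np.clip(1.0 - dist_mm / 10.0, 0.0, 1.0)

def classify(texture_score, dist_mm, threshold=0.4):
    # product rule fuses textural and spatial evidence
    return texture_score * spatial_membership(dist_mm) > threshold

# same texture score, near vs. far from the reference zone
print(classify(0.9, 2.0), classify(0.9, 9.5))
```

The same textural evidence is accepted near the reference zone (0.9 × 0.8 = 0.72) and rejected far from it (0.9 × 0.05 ≈ 0.05), which is the point of fusing the two feature types.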
Affiliation(s)
- Samar Samir Mohamed
- Department of Electrical and Computer Engineering, University of Waterloo, 619 Honeywood Place, Waterloo, ON, Canada
8
Differential diagnosis of pancreatic cancer from normal tissue with digital imaging processing and pattern recognition based on a support vector machine of EUS images. Gastrointest Endosc 2010;72:978-985. [PMID: 20855062] [DOI: 10.1016/j.gie.2010.06.042]
Abstract
BACKGROUND EUS can detect morphologic abnormalities of pancreatic cancer with high sensitivity but limited specificity.
OBJECTIVE To develop a classification model for the differential diagnosis of pancreatic cancer by using a digital imaging processing (DIP) technique to analyze EUS images of the pancreas.
DESIGN A retrospective, controlled, single-center design was used.
SETTING The study took place at the Second Military Medical University, Shanghai, China.
PATIENTS There were 153 pancreatic cancer and 63 noncancer patients in this study.
INTERVENTION All patients underwent EUS-guided FNA and pathologic analysis.
MAIN OUTCOME MEASUREMENTS EUS images were obtained and correlated with cytologic findings after FNA. Texture features were extracted from the region of interest, and multifractal dimension vectors were introduced into the feature selection within the frame of the M-band wavelet transform. A sequential forward selection process was used to find a better combination of features. Using the area under the receiver operating characteristic curve and other texture features based on separability criteria, a predictive model was built, trained, and validated according to support vector machine theory.
RESULTS From 67 frequently used texture features, 20 better-performing features were selected, resulting in a classification accuracy of 99.07% after being added to 9 other features. A predictive model was then built and trained. After 50 random tests, the average accuracy, sensitivity, specificity, positive predictive value, and negative predictive value for the diagnosis of pancreatic cancer were 97.98 ± 1.23%, 94.32 ± 0.03%, 99.45 ± 0.01%, 98.65 ± 0.02%, and 97.77 ± 0.01%, respectively.
LIMITATIONS The limitations of this study include the small sample size and that the support vector machine was not run in real time.
CONCLUSION The classification of EUS images for differentiating pancreatic cancer from normal tissue by DIP is quite useful. Further refinements of such a model could increase the accuracy of EUS diagnosis of tumors.
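The classification stage can be sketched as follows, with synthetic vectors standing in for the wavelet/multifractal texture features and a plain RBF support vector machine in place of the study's tuned model:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(5)
X = rng.normal(size=(216, 29))        # 153 cancer + 63 non-cancer "images"
y = np.r_[np.ones(153), np.zeros(63)]
X[y == 1, :3] += 1.0                  # separate the classes in a few features

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                      stratify=y, random_state=5)
svm = SVC(kernel="rbf").fit(Xtr, ytr)
tn, fp, fn, tp = confusion_matrix(yte, svm.predict(Xte)).ravel()
print(tp / (tp + fn), tn / (tn + fp))  # sensitivity, specificity
```

Repeating the split with different random seeds and averaging, as the authors did over 50 random tests, gives the mean-plus-deviation figures reported above.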
9
Candefjord S, Ramser K, Lindahl OA. Technologies for localization and diagnosis of prostate cancer. J Med Eng Technol 2010;33:585-603. [PMID: 19848851] [DOI: 10.3109/03091900903111966]
Abstract
The gold standard for detecting prostate cancer (PCa), systematic biopsy, lacks sensitivity as well as grading accuracy. Prostate-specific antigen (PSA) screening leads to over-treatment of many men, and it is unclear whether screening reduces PCa mortality. This review provides an understanding of the difficulties of localizing and diagnosing PCa. It summarizes recent developments in ultrasound (including elastography) and MRI, and discusses some alternative experimental techniques, such as resonance sensor technology and vibrational spectroscopy. A comparison between the different methods is presented. It is concluded that new ultrasound techniques are promising for targeted biopsy procedures, which aim to detect more clinically significant cancers while reducing the number of cores. MRI advances are very promising, but MRI remains expensive and MR-guided biopsy is complex. Resonance sensor technology and vibrational spectroscopy have shown promising results in vitro. There is a need for large prospective multicentre trials that unambiguously prove the clinical benefits of these new techniques.
Affiliation(s)
- S Candefjord
- Department of Computer Science and Electrical Engineering, Luleå University of Technology, Luleå, Sweden
10
Yu H, Caldwell C, Mah K, Mozeg D. Coregistered FDG PET/CT-based textural characterization of head and neck cancer for radiation treatment planning. IEEE Trans Med Imaging 2009;28:374-383. [PMID: 19244009] [DOI: 10.1109/tmi.2008.2004425]
Abstract
Coregistered fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) has shown potential to improve the accuracy of radiation targeting of head and neck cancer (HNC) compared with the use of CT simulation alone. The objective of this study was to identify textural features useful in distinguishing tumor from normal tissue in the head and neck via quantitative texture analysis of coregistered 18F-FDG PET and CT images. Abnormal and typical normal tissues were manually segmented from PET/CT images of 20 patients with HNC and 20 patients with lung cancer. Texture features, including some derived from spatial grey-level dependence matrices (SGLDM) and neighborhood gray-tone-difference matrices (NGTDM), were selected for characterization of these segmented regions of interest (ROIs). Both k-nearest-neighbor (KNN) and decision tree (DT)-based KNN classifiers were employed to discriminate images of abnormal and normal tissues. The area under the receiver operating characteristic (ROC) curve (Az) was used to evaluate discrimination performance in comparison with an expert observer. The leave-one-out and bootstrap techniques were used to validate the results. The Az of the DT-based KNN classifier was 0.95. Sensitivity and specificity for normal and abnormal tissue classification were 89% and 99%, respectively. In summary, NGTDM features such as PET coarseness, PET contrast, and CT coarseness extracted from FDG PET/CT images provided good discrimination performance. The clinical use of such features may lead to improvement in the accuracy of radiation targeting of HNC.
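The NGTDM coarseness feature highlighted above (in Amadasun and King's formulation) can be computed compactly with numpy; on synthetic patches, a smooth patch should score higher coarseness than fine-grained noise:

```python
import numpy as np

def ngtdm_coarseness(img, levels=8):
    # quantize intensities in [0, 1) to discrete gray levels
    q = np.floor(img * levels).clip(0, levels - 1).astype(int)
    # mean gray level of the 8-neighbourhood around each pixel
    nbr = sum(np.roll(np.roll(q, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)
              if (dy, dx) != (0, 0)) / 8.0
    core, nbr = q[1:-1, 1:-1], nbr[1:-1, 1:-1]   # drop border pixels
    # s_i: summed |level - neighbourhood mean| per gray level i
    s = np.array([np.abs(core[core == i] - nbr[core == i]).sum()
                  for i in range(levels)])
    # p_i: occurrence probability of gray level i
    p = np.array([(core == i).mean() for i in range(levels)])
    return 1.0 / (1e-6 + (p * s).sum())          # coarseness

rng = np.random.default_rng(6)
noisy = rng.random((32, 32))
smooth = np.full((32, 32), 0.5)
print(ngtdm_coarseness(smooth) > ngtdm_coarseness(noisy))
```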
Affiliation(s)
- Huan Yu
- Department of Medical Biophysics, University of Toronto, Toronto, ON M4N 3M5, Canada
11
Mohamed SS, Li J, Salama MMA, Freeman G. Prostate tissue texture feature extraction for suspicious regions identification on TRUS images. J Digit Imaging 2008;22:503-518. [PMID: 18473140] [DOI: 10.1007/s10278-008-9124-1]
Abstract
In this work, two different approaches are proposed for region of interest (ROI) segmentation in transrectal ultrasound (TRUS) images. The two methods aim to extract informative features that can characterize suspicious regions in TRUS images. Both proposed methods are based on multi-resolution analysis, which is characterized by high localization in both the frequency and the spatial domains. Being highly localized in both domains, the proposed methods are expected to accurately identify the suspicious ROIs. The first method depends on a Gabor filter that captures the high-frequency changes in image regions, while the second depends on classifying the wavelet coefficients of the image. It is shown in this paper that both methods reveal details in the ROIs that correlate with their pathological representations. There was a good match between the regions identified by the two methods, a result that supports the ability of each method to mimic the radiologist's decision in identifying suspicious regions. Studying two ROI segmentation methods is important because the only available reference is the set of radiologist-marked suspicious regions, so the results obtained by either method need corroboration. This work is mainly a preliminary proof-of-concept study that will ultimately be expanded into a larger-scale study aimed at introducing an assisting tool to help the radiologist identify suspicious regions.
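The Gabor route can be sketched with a hand-built kernel applied by FFT convolution; the frequency, orientation, and patch contents below are illustrative, not the paper's settings:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    # real (cosine) Gabor: a Gaussian envelope modulating a plane wave
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rot = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * rot)

rng = np.random.default_rng(7)
textured = rng.random((64, 64))      # speckle-like "suspicious" patch
flat = np.full((64, 64), 0.5)        # homogeneous patch

k = gabor_kernel(freq=0.25, theta=0.0)
resp = lambda im: np.abs(fftconvolve(im, k, mode="valid")).mean()
print(resp(textured) > resp(flat))
```

The band-pass kernel responds strongly to the high-frequency texture and only weakly to the homogeneous region, which is the property exploited for highlighting suspicious tissue.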
Affiliation(s)
- S S Mohamed
- ECE Department, University of Waterloo, Waterloo, ON N2T 1X5, Canada
12
Mohamed SS, Salama MA. Prostate cancer spectral multifeature analysis using TRUS images. IEEE Trans Med Imaging 2008;27:548-556. [PMID: 18390351] [DOI: 10.1109/tmi.2007.911547]
Abstract
This paper focuses on extracting and analyzing different spectral features from transrectal ultrasound (TRUS) images for prostate cancer recognition. First, information about the images' frequency-domain and spatial-domain features is combined using a Gabor filter and then integrated with the expert radiologist's information to identify the highly suspicious regions of interest (ROIs). The next stage of the proposed algorithm scans each identified region to generate the corresponding 1-D signal that represents it. For each ROI, possible spectral feature sets are constructed using new geometrical features extracted from the power spectrum density (PSD) of each region's signal. Next, a classifier-based algorithm for feature selection using particle swarm optimization (PSO) is adopted to select the optimal feature subset from the constructed feature sets. A new spectral feature set for the TRUS images using estimation of signal parameters via rotational invariance technique (ESPRIT) is also constructed, and its ability to represent tissue texture is compared with that of the PSD-based spectral feature sets using a support vector machine (SVM) classifier. The accuracy obtained ranges from 72.2% to 94.4%, with the best accuracy achieved by the ESPRIT feature set.
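The PSD-based feature idea reduces each ROI to a 1-D signal and extracts geometric descriptors of its spectrum. A minimal sketch using a Welch PSD estimate follows (peak frequency and spectral centroid as example descriptors; the ESPRIT variant is not reproduced):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(8)
t = np.arange(512)
# synthetic ROI signal: a dominant periodic component plus noise
roi_signal = np.sin(2 * np.pi * 0.1 * t) + 0.3 * rng.normal(size=512)

# Welch estimate of the power spectral density
f, psd = welch(roi_signal, nperseg=128)

# two simple geometric descriptors of the PSD shape
peak_freq = f[np.argmax(psd)]
centroid = (f * psd).sum() / psd.sum()
print(round(peak_freq, 3), round(centroid, 3))
```

The peak frequency lands near the injected 0.1 cycles/sample, and descriptors of this kind form the feature vector fed to the classifier.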
Affiliation(s)
- S S Mohamed
- University of Waterloo, Waterloo, ON, N2T 1X5 Canada.
13
Michail G, Karahaliou A, Skiadopoulos S, Kalogeropoulou C, Terzis G, Boniatis I, Costaridou L, Kourounis G, Panayiotakis G. Texture analysis of perimenopausal and post-menopausal endometrial tissue in grayscale transvaginal ultrasonography. Br J Radiol 2007; 80:609-16. [PMID: 17681990 DOI: 10.1259/bjr/13992649] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022] Open
Abstract
The aim of this study was to investigate the feasibility of texture analysis for characterizing endometrial tissue as depicted in two-dimensional (2D) grayscale transvaginal ultrasonography. Digital transvaginal ultrasound endometrial images were acquired from 65 perimenopausal and post-menopausal women prior to gynaecological operations; histology revealed 15 malignant and 50 benign cases. Images were processed with a wavelet-based contrast-enhancement technique. Three regions of interest (ROIs) were identified on each original and processed image: the endometrium, the endometrium plus adjacent myometrium, and the layer containing the endometrial-myometrial interface. Thirty-two textural features were extracted from each ROI using first- and second-order statistical texture analysis algorithms. Models based on the textural features were generated for differentiating benign from malignant endometrial tissue using stepwise logistic regression analysis, and their performance was evaluated by means of receiver operating characteristic (ROC) analysis. The best logistic regression model comprised seven textural features extracted from the ROIs determined on the processed images: three from the endometrium and four from the layer containing the endometrial-myometrial interface. The area under the ROC curve (Az) was 0.956 ± 0.038, providing 86.0% specificity at 93.3% sensitivity with a cut-off of 0.5 for the probability of malignancy. Texture analysis of 2D grayscale transvaginal ultrasound images can effectively differentiate malignant from benign endometrial tissue and may contribute to computer-aided diagnosis of endometrial cancer.
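First- and second-order texture descriptors of the kind used in such studies can be sketched as follows. This is a simplified stand-in, not the study's 32-feature set: two first-order statistics (mean, standard deviation) plus two Haralick-style gray-level co-occurrence matrix (GLCM) descriptors (contrast, homogeneity) computed for a single horizontal pixel offset.

```python
import numpy as np

def glcm_features(img, levels=8):
    """First-order stats plus two GLCM descriptors for the (0, 1) offset.
    The image is quantized to a few gray levels, co-occurrences of
    horizontally adjacent pixels are tallied and normalized, and
    contrast / homogeneity are read off the co-occurrence matrix."""
    q = np.clip((img / (img.max() + 1e-12) * levels).astype(int),
                0, levels - 1)                       # quantize to gray levels
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1                              # tally horizontal pairs
    glcm /= glcm.sum()
    ii, jj = np.indices(glcm.shape)
    contrast = ((ii - jj)**2 * glcm).sum()           # large for rough texture
    homogeneity = (glcm / (1 + np.abs(ii - jj))).sum()  # large for smooth texture
    return np.array([img.mean(), img.std(), contrast, homogeneity])

# Smooth gradient vs. random noise: noise should score higher contrast
smooth = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
noisy = np.random.default_rng(1).random((32, 32))
f_smooth, f_noisy = glcm_features(smooth), glcm_features(noisy)
```

Feature vectors like these, computed per ROI, are what a stepwise logistic regression model would then combine into a probability of malignancy.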
Affiliation(s)
- G Michail
- Department of Obstetrics and Gynecology, School of Medicine, University of Patras, 265 00 Patras, Greece
14
Prostate boundary detection and volume estimation using TRUS images for brachytherapy applications. Int J Comput Assist Radiol Surg 2007. [DOI: 10.1007/s11548-007-0120-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
15
Abstract
Background: Identifying the location and volume of the prostate is important for ultrasound-guided prostate brachytherapy, and prostate volume is also important for prostate cancer diagnosis. Manual outlining of the prostate border can determine the prostate volume accurately, but it is time-consuming and tedious. A number of investigations have therefore been devoted to designing algorithms suitable for segmenting the prostate boundary in ultrasound images. The most popular approach is the deformable model (snakes), which involves designing an energy function and then optimizing it. The snakes algorithm usually requires an initial contour, or some points on the prostate boundary, to be placed close enough to the true boundary, which is a drawback of this otherwise powerful method. Methods: The proposed spectral clustering segmentation algorithm is built on an entirely different foundation that involves no function design or optimization, and it needs no initial contour or boundary points; it depends mainly on graph-theoretic techniques. Results: Spectral clustering is used here both for segmenting the prostate gland from the background and for internal gland segmentation. The segmented images were compared with those segmented by the expert radiologist. The proposed algorithm obtained excellent gland segmentation results, with an average overlap area of 93%, and its internal segmentation was consistent with the cancerous regions identified by the expert radiologist. Conclusion: The proposed spectral clustering segmentation algorithm quickly produces good estimates of prostate volume and location, as well as internal gland segmentation, without any user interaction.
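The graph-theoretic idea can be sketched as follows. This is a toy version of spectral-clustering segmentation, not the paper's exact pipeline: pixels become graph nodes, affinities combine intensity similarity with spatial proximity, the two smallest eigenvectors of the symmetric normalized Laplacian embed the pixels, and a minimal 2-means split of the embedding yields the segmentation. The affinity parameters are illustrative assumptions.

```python
import numpy as np

def two_means(X, iters=10):
    """Minimal 2-means, seeded with the row farthest from the first row."""
    centers = np.stack([X[0], X[np.argmax(((X - X[0])**2).sum(1))]])
    for _ in range(iters):
        lab = ((X[:, None] - centers[None])**2).sum(-1).argmin(1)
        centers = np.stack([X[lab == k].mean(0) for k in (0, 1)])
    return lab

def spectral_segment(img, sigma_i=0.1, sigma_x=4.0):
    """Toy spectral-clustering segmentation of a small grayscale image:
    pixel-affinity graph -> normalized Laplacian -> 2-D spectral
    embedding -> 2-means partition."""
    h, w = img.shape
    ys, xs = np.indices((h, w))
    inten = img.ravel()
    pos = np.stack([ys.ravel(), xs.ravel()], 1).astype(float)
    d_int = (inten[:, None] - inten[None, :])**2
    d_pos = ((pos[:, None, :] - pos[None, :, :])**2).sum(-1)
    # affinity: intensity similarity, restricted to spatially nearby pixels
    W = np.exp(-d_int / sigma_i**2) * (d_pos < sigma_x**2)
    D = W.sum(1)
    L = np.eye(len(W)) - W / np.sqrt(np.outer(D, D))  # I - D^-1/2 W D^-1/2
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, :2]                                   # two smallest eigenvectors
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    return two_means(U).reshape(h, w)

# Toy image: bright square on a dark background
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
seg = spectral_segment(img)
```

No initial contour is supplied anywhere, which is the property the abstract emphasizes over snakes-based methods.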
16
Abstract
Ultrasound imaging is now in very widespread clinical use. The most important underpinning technologies include transducers, beam forming, pulse compression, tissue harmonic imaging, contrast agents, techniques for measuring blood flow and tissue motion, and three-dimensional imaging. Specialized and emerging technologies include tissue characterization and image segmentation, microscanning and intravascular scanning, elasticity imaging, reflex transmission imaging, computed tomography, Doppler tomography, photoacoustics and thermoacoustics. Phantoms and quality assurance are necessary to maintain imaging performance. Contemporary ultrasonic imaging procedures seem to be safe but studies of bioeffects are continuing. It is concluded that advances in ultrasonic imaging have primarily been pushed by the application of physics and innovations in engineering, rather than being pulled by the identification of specific clinical objectives in need of scientific solutions. Moreover, the opportunities for innovation to continue into the future are both challenging and exciting.
Affiliation(s)
- P N T Wells
- Institute of Medical Engineering and Medical Physics, School of Engineering, Cardiff University, Queen's Buildings, The Parade, Cardiff CF24 3AA, UK.