1
Arab M, Fallah A, Rashidi S, Dastjerdi MM, Ahmadinejad N. Computer-Aided Classification of Breast Lesions Based on US RF Time Series Using a Novel Machine Learning Approach. J Ultrasound Med 2024; 43:2129-2145. [PMID: 39140240] [DOI: 10.1002/jum.16542]
Abstract
OBJECTIVES Ultrasound (US) radio-frequency (RF) time series is one of the most promising adjuncts for breast cancer screening, with the advantage over other methods of requiring no supplementary equipment. This research aimed to propose a machine learning (ML) approach for automatically classifying benign, probably benign, suspicious, and malignant breast lesions based on features extracted from accumulated US RF time series. METHODS In this article, 220 samples of the aforementioned categories, recorded from 118 patients, were analyzed. The dataset, named RFTSBU, was acquired with a SuperSonic Imagine Aixplorer medical/research system equipped with a linear transducer. Regions of interest (ROIs) in the B-mode images were manually selected by an expert radiologist before computing the proposed features. In total, 291 features spanning the time, frequency, and time-frequency domains were extracted from each ROI. Finally, the features were classified by a novel technique named the reference classification method (RCM). Furthermore, the Lee filter was applied to evaluate the effect of speckle-noise reduction on the outcomes. RESULTS The accuracies of two-class, three-class, and four-class classification were 98.59 ± 0.71%, 98.13 ± 0.69%, and 96.10 ± 0.66%, respectively (over 10 repetitions), using support vector machine (SVM) and K-nearest neighbor (KNN) classifiers with 5-fold cross-validation. CONCLUSIONS This article presented the proposed approach, named CCRFML, to distinguish between breast lesions based on in vivo RF time series within an ML framework. The method's high classification accuracy attests to its potential to assist medical professionals in the noninvasive differentiation of breast lesions.
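A minimal sketch of the kind of evaluation pipeline this abstract describes: per-ROI feature vectors classified with SVM and KNN under 5-fold cross-validation repeated 10 times. The data, feature count, and settings below are placeholders, not the RFTSBU dataset or the authors' RCM technique.

```python
# Hypothetical sketch: classify per-ROI feature vectors with SVM and KNN,
# 5-fold cross-validation repeated 10 times (placeholder data, not RFTSBU).
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(220, 291))          # 220 ROIs x 291 features (placeholder values)
y = rng.integers(0, 4, size=220)         # 4 lesion classes: benign ... malignant

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    accs = []
    for rep in range(10):                # 10 repetitions with reshuffled folds
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=rep)
        model = make_pipeline(StandardScaler(), clf)
        accs.append(cross_val_score(model, X, y, cv=cv, scoring="accuracy").mean())
    print(f"{name}: {np.mean(accs):.3f} +/- {np.std(accs):.3f}")
```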
Affiliation(s)
- Mahsa Arab
- Faculty of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran
- Ali Fallah
- Faculty of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran
- Saeid Rashidi
- Faculty of Medical Sciences & Technologies, Science & Research Branch, Islamic Azad University, Tehran, Iran
- Nasrin Ahmadinejad
- Radiology-Medical Imaging Center, Cancer Research Institute, Imam Khomeini Hospital Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Sciences (TUMS), Tehran, Iran
2
Norouzi Ghehi E, Fallah A, Rashidi S, Mehdizadeh Dastjerdi M. Evaluating the effect of tissue stimulation at different frequencies on breast lesion classification based on nonlinear features using a novel radio frequency time series approach. Heliyon 2024; 10:e33133. [PMID: 39027586] [PMCID: PMC11255572] [DOI: 10.1016/j.heliyon.2024.e33133]
Abstract
Objective Radio frequency time series (RF TS) is a cutting-edge ultrasound approach to tissue typing. When the tissue and probe are fixed, however, the RF TS provides no dynamic insight into the propagation medium. We previously proposed the RFTSDP method, in which RF data are recorded while the tissue is stimulated; applying stimulation can reveal the mechanical characteristics of the tissue in the RF echo. Materials and methods In this study, an apparatus was developed to induce vibrations at different frequencies in the medium. Data were collected from four PVA phantoms simulating the nonlinear behaviors of healthy, fibroadenoma, cyst, and cancerous breast tissues. Raw focused, raw, and beamformed ultrafast data were collected under no stimulation, constant force, and various vibrational stimulations using the Supersonic Imagine Aixplorer clinical/research ultrasound imaging system. Time-domain (TD), spectral, and nonlinear features were extracted from each RF TS. Support vector machine (SVM), random forest, and decision tree algorithms were employed for classification. Results The best outcome was achieved with the SVM classifier using 19 features extracted from beamformed ultrafast data recorded under vibration at 65 Hz. The classification accuracy, specificity, and precision were 98.44 ± 0.20%, 99.49 ± 0.01%, and 98.53 ± 0.04%, respectively. With RFTSDP, a notable 24.45% improvement in accuracy was observed compared with the fixed-probe case assessed on the recorded raw focused data. Conclusions External vibration at an appropriate frequency, as applied in RFTSDP, incorporates useful information about the medium and its dynamic characteristics into the RF TS, which can improve tissue characterization.
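A small sketch of the classifier comparison mentioned above (SVM, random forest, decision tree) on a placeholder feature matrix; the 19-feature set, phantom data, and stimulation conditions of the study are not reproduced here.

```python
# Hypothetical sketch: compare the three classifiers named in the abstract on
# placeholder data (19 features per signal, 4 phantom classes).
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 19))           # placeholder: 19 features per RF time series
y = rng.integers(0, 4, size=400)         # healthy, fibroadenoma, cyst, cancerous

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}
for name, clf in classifiers.items():
    scores = cross_validate(make_pipeline(StandardScaler(), clf), X, y, cv=5,
                            scoring=("accuracy", "precision_macro"))
    print(name, scores["test_accuracy"].mean(), scores["test_precision_macro"].mean())
```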
Affiliation(s)
- Elaheh Norouzi Ghehi
- Faculty of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran
- Ali Fallah
- Faculty of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran
- Saeid Rashidi
- Faculty of Medical Sciences and Technologies, Science and Research Branch, Islamic Azad University, Tehran, Iran
3
Huang TL, Lu NH, Huang YH, Twan WH, Yeh LR, Liu KY, Chen TB. Transfer learning with CNNs for efficient prostate cancer and BPH detection in transrectal ultrasound images. Sci Rep 2023; 13:21849. [PMID: 38071254] [PMCID: PMC10710441] [DOI: 10.1038/s41598-023-49159-1]
Abstract
Early detection of prostate cancer (PCa) and benign prostatic hyperplasia (BPH) is crucial for maintaining the health and well-being of aging male populations. This study aims to evaluate the performance of transfer learning with convolutional neural networks (CNNs) for efficient classification of PCa and BPH in transrectal ultrasound (TRUS) images. A retrospective experimental design was employed, with 1380 TRUS images for PCa and 1530 for BPH. Seven state-of-the-art deep learning (DL) methods were employed as classifiers, with transfer learning applied to popular CNN architectures. Performance indices, including sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), Kappa value, and Hindex (Youden's index), were used to assess the feasibility and efficacy of the CNN methods. The CNN methods with transfer learning demonstrated high classification performance for TRUS images, with all accuracy, specificity, sensitivity, PPV, NPV, Kappa, and Hindex values surpassing 0.9400. The optimal accuracy, sensitivity, and specificity reached 0.9987, 0.9980, and 0.9980, respectively, as evaluated using twofold cross-validation. The investigated CNN methods with transfer learning demonstrated their efficiency and capability for classifying PCa and BPH in TRUS images. Notably, EfficientNetV2 with transfer learning was highly effective at distinguishing between PCa and BPH, making it a promising tool for future diagnostic applications.
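A hedged sketch of transfer learning for two-class TRUS classification: an ImageNet-pretrained EfficientNetV2-S from torchvision (assumed version >= 0.13) with a new two-class head. Data loading, augmentation, and the full training loop used in the paper are omitted; all tensors below are placeholders.

```python
# Hypothetical transfer-learning sketch: freeze a pretrained backbone and
# train only a new 2-class head (PCa vs BPH). Not the paper's exact setup.
import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_v2_s(weights=models.EfficientNet_V2_S_Weights.IMAGENET1K_V1)
for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False
in_features = model.classifier[1].in_features
model.classifier[1] = nn.Linear(in_features, 2)   # new trainable 2-class head

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One supervised step on a batch of TRUS images (placeholder tensors)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# e.g. train_step(torch.randn(8, 3, 384, 384), torch.randint(0, 2, (8,)))
```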
Affiliation(s)
- Te-Li Huang
- Department of Radiology, Kaohsiung Veterans General Hospital, No. 386, Dazhong 1st Rd., Zuoying Dist., Kaohsiung, 81362, Taiwan
- Nan-Han Lu
- Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung, 82445, Taiwan
- Department of Pharmacy, Tajen University, No.20, Weixin Rd., Yanpu Township, Pingtung, 90741, Taiwan
- Department of Radiology, E-DA Hospital, I-Shou University, No.1, Yida Rd., Jiao-Su Village, Yan-Chao District, Kaohsiung, 82445, Taiwan
- Yung-Hui Huang
- Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung, 82445, Taiwan
- Wen-Hung Twan
- Department of Life Sciences, National Taitung University, No.369, Sec. 2, University Rd., Taitung, 95092, Taiwan
- Li-Ren Yeh
- Department of Anesthesiology, E-DA Cancer Hospital, I-Shou University, No.1, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung, 82445, Taiwan
- Kuo-Ying Liu
- Department of Radiology, E-DA Hospital, I-Shou University, No.1, Yida Rd., Jiao-Su Village, Yan-Chao District, Kaohsiung, 82445, Taiwan
- Tai-Been Chen
- Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung, 82445, Taiwan
- Institute of Statistics, National Yang Ming Chiao Tung University, No. 1001, University Road, Hsinchu, 30010, Taiwan
4
Fooladgar F, Nguyen Nhat To M, Javadi G, Sojoudi S, Eshumani W, Chang S, Black P, Mousavi P, Abolmaesumi P. Semi-supervised learning from coarse histopathology labels. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022. [DOI: 10.1080/21681163.2022.2154275]
Affiliation(s)
- Fahimeh Fooladgar
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Minh Nguyen Nhat To
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Golara Javadi
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Samira Sojoudi
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Silvia Chang
- Vancouver General Hospital, Vancouver, BC, Canada
- Peter Black
- Vancouver General Hospital, Vancouver, BC, Canada
- Parvin Mousavi
- School of Computing, Queen's University, Kingston, ON, Canada
- Purang Abolmaesumi
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
5
Hajiasgari M, Setarehdan SK, Rangraz P. Subcutaneous adipose tissue thickness determination using ultrasound signals processing: A phantom study. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103744]
6
Bhattacharya I, Khandwala YS, Vesal S, Shao W, Yang Q, Soerensen SJ, Fan RE, Ghanouni P, Kunder CA, Brooks JD, Hu Y, Rusu M, Sonn GA. A review of artificial intelligence in prostate cancer detection on imaging. Ther Adv Urol 2022; 14:17562872221128791. [PMID: 36249889] [PMCID: PMC9554123] [DOI: 10.1177/17562872221128791]
Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis and their current limitations, including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Affiliation(s)
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, 1201 Welch Road, Stanford, CA 94305, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yash S. Khandwala
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Sulaiman Vesal
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Qianye Yang
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Simon J.C. Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Epidemiology & Population Health, Stanford University School of Medicine, Stanford, CA, USA
- Richard E. Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Christian A. Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- James D. Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yipeng Hu
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Geoffrey A. Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
7
Towards targeted ultrasound-guided prostate biopsy by incorporating model and label uncertainty in cancer detection. Int J Comput Assist Radiol Surg 2021; 17:121-128. [PMID: 34783976] [DOI: 10.1007/s11548-021-02485-z]
Abstract
PURPOSE Systematic prostate biopsy is widely used for cancer diagnosis. The procedure is blind to the underlying prostate tissue microstructure; hence, it can lead to a high rate of false negatives. Development of a machine learning model that can reliably identify suspicious cancer regions is highly desirable. However, the models proposed to date do not consider the uncertainty present in their output or in the data, which could benefit clinical decision making for targeting biopsy. METHODS We propose a deep network for improved detection of prostate cancer in systematic biopsy that considers both label and model uncertainty. The architecture of our model is based on U-Net, trained with temporal enhanced ultrasound (TeUS) data. We estimate cancer detection uncertainty using test-time augmentation and test-time dropout. We then use uncertainty metrics to report the cancer probability for regions with high confidence to support clinical decision making during the biopsy procedure. RESULTS Experiments for prostate cancer classification include data from 183 prostate biopsy cores of 41 patients. We achieve an area under the curve, sensitivity, specificity, and balanced accuracy of 0.79, 0.78, 0.71 and 0.75, respectively. CONCLUSION Our key contribution is to automatically estimate model and label uncertainty toward enabling targeted ultrasound-guided prostate biopsy. We anticipate that such uncertainty information can decrease the number of unnecessary biopsies while yielding a higher rate of cancer detection.
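A minimal sketch of test-time (Monte Carlo) dropout, one of the two uncertainty mechanisms named above: dropout layers are kept active at inference and predictions are averaged over several stochastic passes, with their spread serving as an uncertainty score. The toy model below stands in for the paper's TeUS U-Net.

```python
# Hypothetical sketch: MC dropout at test time; `model` is any network with dropout.
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_passes: int = 20):
    model.eval()
    for m in model.modules():              # re-enable only the dropout layers
        if isinstance(m, (nn.Dropout, nn.Dropout2d)):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_passes)])
    mean = probs.mean(dim=0)               # predictive probability
    std = probs.std(dim=0)                 # per-class uncertainty estimate
    return mean, std

# Toy classifier standing in for the TeUS network:
toy = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Dropout(0.5), nn.Linear(32, 2))
mean, std = mc_dropout_predict(toy, torch.randn(4, 16))
```

Regions whose std stays low across passes would be the "high confidence" regions for which a cancer probability is reported.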
8
Shao Y, Wang J, Wodlinger B, Salcudean SE. Improving Prostate Cancer (PCa) Classification Performance by Using Three-Player Minimax Game to Reduce Data Source Heterogeneity. IEEE Trans Med Imaging 2020; 39:3148-3158. [PMID: 32305907] [DOI: 10.1109/tmi.2020.2988198]
Abstract
PCa is a disease with a wide range of tissue patterns, which adds to its classification difficulty. Moreover, data source heterogeneity, i.e., inconsistent data collected using different machines, under different conditions, by different operators, from patients of different ethnic groups, etc., further hinders the training of a generalized PCa classifier. In this paper, for the first time, a generative adversarial network (GAN)-based three-player minimax game framework is used to tackle data source heterogeneity and to improve PCa classification performance, with a proposed modified U-Net used as the encoder. Our dataset consists of novel high-frequency ExactVu ultrasound (US) data collected from 693 patients at five data centers. Gleason scores (GSs) are assigned to the 12 prostatic regions of each patient. Two classification tasks, benign vs. malignant and low- vs. high-grade, are conducted, and the classification results of different prostatic regions are compared. For benign vs. malignant classification, the three-player minimax game framework achieves an area under the receiver operating characteristic curve (AUC) of 93.4%, a sensitivity of 95.1%, and a specificity of 87.7%, representing significant improvements of 5.0%, 3.9%, and 6.0% over training directly on the heterogeneous data, which confirms its effectiveness for PCa classification.
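A hedged sketch in the spirit of the adversarial setup described above: an encoder and task classifier are trained jointly while a discriminator tries to identify the data source from the encoding, and the encoder is penalized when it succeeds. This illustrates the minimax idea only; it is not the paper's exact three-player architecture or losses, and all modules are toy stand-ins.

```python
# Hypothetical encoder / task-classifier / source-discriminator training step.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(128, 64), nn.ReLU())   # stand-in for the modified U-Net encoder
task = nn.Linear(64, 2)                               # benign vs malignant head
disc = nn.Linear(64, 5)                               # predicts which of 5 centers produced the data

ce = nn.CrossEntropyLoss()
opt_main = torch.optim.Adam(list(enc.parameters()) + list(task.parameters()), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)

def training_step(x, y_task, y_center, lam=0.1):
    # 1) update the discriminator to recognize the data source
    opt_disc.zero_grad()
    d_loss = ce(disc(enc(x).detach()), y_center)
    d_loss.backward()
    opt_disc.step()
    # 2) update encoder + classifier: solve the task AND fool the discriminator
    opt_main.zero_grad()
    z = enc(x)
    loss = ce(task(z), y_task) - lam * ce(disc(z), y_center)
    loss.backward()
    opt_main.step()
    return loss.item(), d_loss.item()

# training_step(torch.randn(16, 128), torch.randint(0, 2, (16,)), torch.randint(0, 5, (16,)))
```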
9
Stochastic Sequential Modeling: Toward Improved Prostate Cancer Diagnosis Through Temporal-Ultrasound. Ann Biomed Eng 2020; 49:573-584. [PMID: 32779056] [PMCID: PMC7851024] [DOI: 10.1007/s10439-020-02585-y]
Abstract
Prostate cancer (PCa) is a common, serious form of cancer in men that remains prevalent despite ongoing developments in diagnostic oncology. Current detection methods lead to high rates of inaccurate diagnosis. We present a method to directly model and exploit temporal aspects of temporal enhanced ultrasound (TeUS) for tissue characterization, which improves malignancy prediction. We employ a probabilistic temporal framework, namely hidden Markov models (HMMs), for modeling TeUS data obtained from PCa patients. We distinguish malignant from benign tissue by comparing the respective log-likelihood estimates generated by the HMMs. We analyze 1100 TeUS signals acquired from 12 patients. Our results show improved malignancy identification compared with previous results, demonstrating over 85% accuracy and an AUC of 0.95. Incorporating temporal information directly into the models leads to improved tissue differentiation in PCa. We expect our method to generalize and to be applicable to other types of cancer in which temporal ultrasound can be recorded.
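A minimal sketch of the HMM decision rule described above, using the hmmlearn package: one GaussianHMM is fit per tissue class, and a test sequence is labeled by whichever model assigns it the higher log-likelihood. Sequence shapes, state counts, and data are placeholders.

```python
# Hypothetical sketch: per-class HMMs compared by log-likelihood (hmmlearn).
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
benign_train = rng.normal(size=(50, 100, 1))            # 50 sequences x 100 frames x 1 feature
malign_train = rng.normal(loc=0.5, size=(50, 100, 1))   # placeholder "malignant" sequences

def fit_hmm(seqs, n_states=3):
    X = np.concatenate(list(seqs))                       # hmmlearn expects stacked sequences
    lengths = [len(s) for s in seqs]
    return GaussianHMM(n_components=n_states, n_iter=50).fit(X, lengths)

hmm_benign = fit_hmm(benign_train)
hmm_malign = fit_hmm(malign_train)

test_seq = rng.normal(size=(100, 1))
label = "malignant" if hmm_malign.score(test_seq) > hmm_benign.score(test_seq) else "benign"
print(label)
```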
10
Sedghi A, Mehrtash A, Jamzad A, Amalou A, Wells WM, Kapur T, Kwak JT, Turkbey B, Choyke P, Pinto P, Wood B, Xu S, Abolmaesumi P, Mousavi P. Improving detection of prostate cancer foci via information fusion of MRI and temporal enhanced ultrasound. Int J Comput Assist Radiol Surg 2020; 15:1215-1223. [PMID: 32372384] [PMCID: PMC8975142] [DOI: 10.1007/s11548-020-02172-5]
Abstract
PURPOSE The detection of clinically significant prostate cancer (PCa) is shown to greatly benefit from MRI-ultrasound fusion biopsy, which involves overlaying pre-biopsy MRI volumes (or targets) with real-time ultrasound images. In previous literature, machine learning models trained on either MRI or ultrasound data have been proposed to improve biopsy guidance and PCa detection. However, quantitative fusion of information from MRI and ultrasound has not been explored in depth in a large study. This paper investigates information fusion approaches between MRI and ultrasound to improve targeting of PCa foci in biopsies. METHODS We build fully convolutional network (FCN) models using data from a newly proposed ultrasound modality, temporal enhanced ultrasound (TeUS), and apparent diffusion coefficient (ADC) maps from 107 patients with 145 biopsy cores. The architecture of our models is based on U-Net and U-Net with attention gates. Models are built using joint training through intermediate and late fusion of the data. We also build models with data from each modality separately, to use as baselines. The performance is evaluated based on the area under the curve (AUC) for predicting clinically significant PCa. RESULTS Using our proposed deep learning framework and intermediate fusion, integration of TeUS and ADC outperforms the individual modalities for cancer detection. We achieve an AUC of 0.76 for detection of all PCa foci, and 0.89 for PCa with larger foci. Results indicate that a shared representation across modalities outperforms the average of unimodal predictions. CONCLUSION We demonstrate the significant potential of multimodal integration of information from MRI and TeUS to improve PCa detection, which is essential for accurate targeting of cancer foci during biopsy. By using FCNs as the architecture of choice, we are able to predict the presence of clinically significant PCa in entire imaging planes immediately, without the need for region-based analysis. This reduces the overall computational time and enables future intra-operative deployment of this technology.
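A toy sketch of intermediate fusion as described above: separate encoders for TeUS and ADC inputs produce feature maps that are concatenated mid-network and passed to a shared head. The paper's attention U-Net and training details are not reproduced; shapes and channel counts are placeholders.

```python
# Hypothetical sketch: two-branch encoder with mid-network (intermediate) fusion.
import torch
import torch.nn as nn

class IntermediateFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_teus = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.enc_adc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(16, 1, 1))   # per-pixel cancer logit

    def forward(self, teus, adc):
        z = torch.cat([self.enc_teus(teus), self.enc_adc(adc)], dim=1)  # fuse mid-network
        return self.head(z)

model = IntermediateFusion()
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))   # -> (2, 1, 64, 64)
```

Late fusion, by contrast, would average or otherwise combine the outputs of two fully separate unimodal networks.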
Affiliation(s)
- Alireza Mehrtash
- The University of British Columbia, Vancouver, BC, Canada
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Amel Amalou
- The National Institutes of Health Research Center, Baltimore, MD, USA
- William M. Wells
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Tina Kapur
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Baris Turkbey
- The National Institutes of Health Research Center, Baltimore, MD, USA
- Peter Choyke
- The National Institutes of Health Research Center, Baltimore, MD, USA
- Peter Pinto
- The National Institutes of Health Research Center, Baltimore, MD, USA
- Bradford Wood
- The National Institutes of Health Research Center, Baltimore, MD, USA
- Sheng Xu
- The National Institutes of Health Research Center, Baltimore, MD, USA
11
Javadi G, Samadi S, Bayat S, Pesteie M, Jafari MH, Sojoudi S, Kesch C, Hurtado A, Chang S, Mousavi P, Black P, Abolmaesumi P. Multiple instance learning combined with label invariant synthetic data for guiding systematic prostate biopsy: a feasibility study. Int J Comput Assist Radiol Surg 2020; 15:1023-1031. [PMID: 32356095] [DOI: 10.1007/s11548-020-02168-1]
Abstract
PURPOSE Ultrasound imaging is routinely used in prostate biopsy, which involves obtaining prostate tissue samples using a systematic, yet non-targeted, approach. This approach is blind to individual patient intraprostatic pathology and, unfortunately, has a high rate of false negatives. METHODS In this paper, we propose a deep network for improved detection of prostate cancer in systematic biopsy. We address several challenges associated with training such a network: (1) Statistical labels: Since a biopsy core's pathology report represents only a statistical distribution of cancer within the core, we use multiple instance learning (MIL) networks to enable learning from ultrasound image regions associated with those data; (2) Limited labels: The number of biopsy cores is limited to at most 12 per patient. As a result, the number of samples available for training a deep network is limited. We alleviate this issue by effectively combining Independent Conditional Variational Auto Encoders (ICVAE) with MIL. We train the ICVAE to learn label-invariant features of RF data, which are subsequently used to generate synthetic data for improved training of the MIL network. RESULTS Our in vivo study includes data from 339 prostate biopsy cores of 70 patients. We achieve an area under the curve, sensitivity, specificity, and balanced accuracy of 0.68, 0.77, 0.55 and 0.66, respectively. CONCLUSION The proposed approach is generic and can be applied to several other scenarios where unlabeled data and noisy labels in training samples are present.
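A minimal sketch of the multiple instance learning idea above: a biopsy core is a bag of region-level instances carrying only a bag-level pathology label, and instance embeddings are pooled (here with a simple attention weight) before the bag-level classifier. The ICVAE-based synthetic-data component is not shown; dimensions are placeholders.

```python
# Hypothetical attention-pooling MIL classifier for one bag (biopsy core).
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, in_dim=64, emb_dim=32):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU())
        self.attn = nn.Linear(emb_dim, 1)
        self.classify = nn.Linear(emb_dim, 1)

    def forward(self, bag):                      # bag: (n_instances, in_dim)
        h = self.embed(bag)
        w = torch.softmax(self.attn(h), dim=0)   # attention weights over instances
        z = (w * h).sum(dim=0)                   # pooled bag embedding
        return self.classify(z)                  # bag-level cancer logit

mil = AttentionMIL()
bag_logit = mil(torch.randn(12, 64))             # e.g. 12 RF regions in one core
```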
Affiliation(s)
- Golara Javadi
- The University of British Columbia, Vancouver, BC, Canada.
- Samareh Samadi
- The University of British Columbia, Vancouver, BC, Canada
- Sharareh Bayat
- The University of British Columbia, Vancouver, BC, Canada
- Mehran Pesteie
- The University of British Columbia, Vancouver, BC, Canada
- Samira Sojoudi
- The University of British Columbia, Vancouver, BC, Canada
- Silvia Chang
- Vancouver General Hospital, Vancouver, BC, Canada
- Peter Black
- Vancouver General Hospital, Vancouver, BC, Canada
12
Liu C, Xie L, Kong W, Lu X, Zhang D, Wu M, Zhang L, Yang B. Prediction of suspicious thyroid nodule using artificial neural network based on radiofrequency ultrasound and conventional ultrasound: A preliminary study. Ultrasonics 2019; 99:105951. [PMID: 31323562] [DOI: 10.1016/j.ultras.2019.105951]
Abstract
This study explored the use of backscattered radiofrequency ultrasound signals combined with artificial neural network (ANN) technology to differentiate benign and malignant thyroid nodules, in comparison with conventional ultrasound techniques. The proposed method uses the gray-level co-occurrence matrix algorithm and principal component analysis to identify principal characteristics for use as inputs to the ANN. The dataset consisted of 131 ultrasound images, of which 59 were benign and 72 were malignant, as determined by subsequent surgeries. The nodules were divided randomly into training, validation, and testing groups. Receiver operating characteristic (ROC) curves were drawn to compare the diagnostic efficiency of the ANN when applied to radiofrequency and conventional ultrasound images. The sensitivity, specificity, and accuracy of the ANN in predicting malignancy from the radiofrequency ultrasound images were 100, 91.5, and 96.2%, respectively; from conventional ultrasound, the corresponding values were 94.4, 93.2, and 93.9%. The area under the ROC curve (AUC) was also higher for radiofrequency than conventional ultrasound (AUC = 0.945 vs 0.917; 95% confidence intervals 0.901-0.998 vs 0.854-0.979; P = 0.26). We then classified each nodule into new risk categories according to the output generated by the proposed method for each sample. The malignancy risks in the proposed Categories 3, 4, and 5 were 0, 18.8, and 94.5%, respectively, compared with 0, 55.1, and 88.2% using the American College of Radiology's Thyroid Imaging Reporting and Data System. This preliminary study thus indicated that the proposed method of combining radiofrequency ultrasound with an ANN was more accurate at predicting malignancy and stratifying thyroid nodules than conventional ultrasound methods, offering significant potential to reduce the number of unnecessary thyroid biopsies.
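A hedged sketch of the pipeline described above: gray-level co-occurrence matrix (GLCM) texture features per nodule ROI, reduced with PCA and fed to a small neural network, evaluated by ROC AUC. It assumes scikit-image >= 0.19 for graycomatrix; images and labels are random placeholders, not the study data.

```python
# Hypothetical GLCM + PCA + ANN sketch on placeholder ROIs.
import numpy as np
from skimage.feature import graycomatrix, graycoprops     # scikit-image >= 0.19
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def glcm_features(roi):
    glcm = graycomatrix(roi, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
rois = rng.integers(0, 256, size=(131, 32, 32), dtype=np.uint8)   # placeholder nodule ROIs
y = rng.integers(0, 2, size=131)                                  # benign = 0, malignant = 1

X = np.array([glcm_features(r) for r in rois])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), PCA(n_components=8),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0))
clf.fit(Xtr, ytr)
print("AUC:", roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]))
```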
Affiliation(s)
- Chunrui Liu
- Department of Ultrasound, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing 210008, China
- Linzhou Xie
- Key Laboratory of Modern Acoustics (MOE), Department of Physics, Collaborative Innovation Center of Advanced Microstructure, Nanjing University, Nanjing 210093, China
- Wentao Kong
- Department of Ultrasound, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing 210008, China
- Xiaoling Lu
- Department of Ultrasound, Jinling Hospital, Medical School of Nanjing University, Nanjing 210002, China
- Dong Zhang
- Key Laboratory of Modern Acoustics (MOE), Department of Physics, Collaborative Innovation Center of Advanced Microstructure, Nanjing University, Nanjing 210093, China
- Min Wu
- Department of Ultrasound, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing 210008, China
- Lijuan Zhang
- Department of Ultrasound, Nanjing Pukou Hospital, Nanjing 210031, China
- Bin Yang
- Department of Ultrasound, Jinling Hospital, Medical School of Nanjing University, Nanjing 210002, China
13
Deep neural maps for unsupervised visualization of high-grade cancer in prostate biopsies. Int J Comput Assist Radiol Surg 2019; 14:1009-1016. [PMID: 30905010] [DOI: 10.1007/s11548-019-01950-0]
Abstract
Prostate cancer (PCa) is the most frequent noncutaneous cancer in men. Early detection of PCa is essential for clinical decision making, and reducing metastasis and mortality rates. The current approach for PCa diagnosis is histopathologic analysis of core biopsies taken under transrectal ultrasound guidance (TRUS-guided). Both TRUS-guided systematic biopsy and MR-TRUS-guided fusion biopsy have limitations in accurately identifying PCa, intraoperatively. There is a need to augment this process by visualizing highly probable areas of PCa. Temporal enhanced ultrasound (TeUS) has emerged as a promising modality for PCa detection. Prior work focused on supervised classification of PCa verified by gold standard pathology. Pathology labels are noisy, and data from an entire core have a single label even when significantly heterogeneous. Additionally, supervised methods are limited by data from cores with known pathology, and a significant portion of prostate data is discarded without being used. We provide an end-to-end unsupervised solution to map PCa distribution from TeUS data using an innovative representation learning method, deep neural maps. TeUS data are transformed to a topologically arranged hyper-lattice, where similar samples are closer together in the lattice. Therefore, similar regions of malignant and benign tissue in the prostate are clustered together. Our proposed method increases the number of training samples by several orders of magnitude. Data from biopsy cores with known labels are used to associate the clusters with PCa. Cancer probability maps generated using the unsupervised clustering of TeUS data help intuitively visualize the distribution of abnormal tissue for augmenting TRUS-guided biopsies.
14
Bi WL, Hosny A, Schabath MB, Giger ML, Birkbak NJ, Mehrtash A, Allison T, Arnaout O, Abbosh C, Dunn IF, Mak RH, Tamimi RM, Tempany CM, Swanton C, Hoffmann U, Schwartz LH, Gillies RJ, Huang RY, Aerts HJWL. Artificial intelligence in cancer imaging: Clinical challenges and applications. CA Cancer J Clin 2019; 69:127-157. [PMID: 30720861] [PMCID: PMC6403009] [DOI: 10.3322/caac.21552]
Abstract
Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms with evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet to be envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been vigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts in pushing AI technology to clinical use and to impact future directions in cancer care.
Affiliation(s)
- Wenya Linda Bi
- Assistant Professor of Neurosurgery, Department of Neurosurgery, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Ahmed Hosny
- Research Scientist, Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Matthew B. Schabath
- Associate Member, Department of Cancer Epidemiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL
- Maryellen L. Giger
- Professor of Radiology, Department of Radiology, University of Chicago, Chicago, IL
- Nicolai J. Birkbak
- Research Associate, The Francis Crick Institute, London, United Kingdom
- Research Associate, University College London Cancer Institute, London, United Kingdom
- Alireza Mehrtash
- Research Assistant, Department of Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Research Assistant, Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Tavis Allison
- Research Assistant, Department of Radiology, Columbia University College of Physicians and Surgeons, New York, NY
- Research Assistant, Department of Radiology, New York Presbyterian Hospital, New York, NY
- Omar Arnaout
- Assistant Professor of Neurosurgery, Department of Neurosurgery, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Christopher Abbosh
- Research Fellow, The Francis Crick Institute, London, United Kingdom
- Research Fellow, University College London Cancer Institute, London, United Kingdom
- Ian F. Dunn
- Associate Professor of Neurosurgery, Department of Neurosurgery, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Raymond H. Mak
- Associate Professor, Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Rulla M. Tamimi
- Associate Professor, Department of Medicine, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Clare M. Tempany
- Professor of Radiology, Department of Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Charles Swanton
- Professor, The Francis Crick Institute, London, United Kingdom
- Professor, University College London Cancer Institute, London, United Kingdom
- Udo Hoffmann
- Professor of Radiology, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA
- Lawrence H. Schwartz
- Professor of Radiology, Department of Radiology, Columbia University College of Physicians and Surgeons, New York, NY
- Chair, Department of Radiology, New York Presbyterian Hospital, New York, NY
- Robert J. Gillies
- Professor of Radiology, Department of Cancer Physiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL
- Raymond Y. Huang
- Assistant Professor, Department of Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Hugo J. W. L. Aerts
- Associate Professor, Departments of Radiation Oncology and Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Professor in AI in Medicine, Radiology and Nuclear Medicine, GROW, Maastricht University Medical Centre (MUMC+), Maastricht, The Netherlands
15
Azizi S, Bayat S, Yan P, Tahmasebi A, Kwak JT, Xu S, Turkbey B, Choyke P, Pinto P, Wood B, Mousavi P, Abolmaesumi P. Deep Recurrent Neural Networks for Prostate Cancer Detection: Analysis of Temporal Enhanced Ultrasound. IEEE Trans Med Imaging 2018; 37:2695-2703. [PMID: 29994471] [PMCID: PMC7983161] [DOI: 10.1109/tmi.2018.2849959]
Abstract
Temporal enhanced ultrasound (TeUS), comprising the analysis of variations in backscattered signals from a tissue over a sequence of ultrasound frames, has been previously proposed as a new paradigm for tissue characterization. In this paper, we propose to use deep recurrent neural networks (RNN) to explicitly model the temporal information in TeUS. By investigating several RNN models, we demonstrate that long short-term memory (LSTM) networks achieve the highest accuracy in separating cancer from benign tissue in the prostate. We also present algorithms for in-depth analysis of LSTM networks. Our in vivo study includes data from 255 prostate biopsy cores of 157 patients. We achieve area under the curve, sensitivity, specificity, and accuracy of 0.96, 0.76, 0.98, and 0.93, respectively. Our result suggests that temporal modeling of TeUS using RNN can significantly improve cancer detection accuracy over previously presented works.
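A minimal sketch of an LSTM classifier for TeUS sequences in the spirit of the work above (not the authors' exact architecture): each sample is a sequence of per-frame backscatter values, and the final hidden state feeds a benign-vs-cancer head. Shapes are placeholders.

```python
# Hypothetical LSTM classifier for TeUS sequences.
import torch
import torch.nn as nn

class TeUSLSTM(nn.Module):
    def __init__(self, in_dim=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):                 # x: (batch, n_frames, in_dim)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])           # logits from the last layer's final hidden state

model = TeUSLSTM()
logits = model(torch.randn(8, 100, 1))    # 8 sequences of 100 ultrasound frames
```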
16
Azizi S, Van Woudenberg N, Sojoudi S, Li M, Xu S, Abu Anas EM, Yan P, Tahmasebi A, Kwak JT, Turkbey B, Choyke P, Pinto P, Wood B, Mousavi P, Abolmaesumi P. Toward a real-time system for temporal enhanced ultrasound-guided prostate biopsy. Int J Comput Assist Radiol Surg 2018; 13:1201-1209. [PMID: 29589258] [DOI: 10.1007/s11548-018-1749-z]
Abstract
PURPOSE We have previously proposed temporal enhanced ultrasound (TeUS) as a new paradigm for tissue characterization. TeUS is based on analyzing a sequence of ultrasound data with deep learning and has been demonstrated to be successful for detection of cancer in ultrasound-guided prostate biopsy. Our aim is to enable the dissemination of this technology to the community for large-scale clinical validation. METHODS In this paper, we present a unified software framework demonstrating near-real-time analysis of ultrasound data stream using a deep learning solution. The system integrates ultrasound imaging hardware, visualization and a deep learning back-end to build an accessible, flexible and robust platform. A client-server approach is used in order to run computationally expensive algorithms in parallel. We demonstrate the efficacy of the framework using two applications as case studies. First, we show that prostate cancer detection using near-real-time analysis of RF and B-mode TeUS data and deep learning is feasible. Second, we present real-time segmentation of ultrasound prostate data using an integrated deep learning solution. RESULTS The system is evaluated for cancer detection accuracy on ultrasound data obtained from a large clinical study with 255 biopsy cores from 157 subjects. It is further assessed with an independent dataset with 21 biopsy targets from six subjects. In the first study, we achieve area under the curve, sensitivity, specificity and accuracy of 0.94, 0.77, 0.94 and 0.92, respectively, for the detection of prostate cancer. In the second study, we achieve an AUC of 0.85. CONCLUSION Our results suggest that TeUS-guided biopsy can be potentially effective for the detection of prostate cancer.
Affiliation(s)
- Samira Sojoudi
- The University of British Columbia, Vancouver, BC, Canada
- Ming Li
- National Institutes of Health, Bethesda, MD, USA
- Sheng Xu
- National Institutes of Health, Bethesda, MD, USA
- Pingkun Yan
- Rensselaer Polytechnic Institute, Troy, NY, USA
- Peter Choyke
- National Institutes of Health, Bethesda, MD, USA
- Peter Pinto
- National Institutes of Health, Bethesda, MD, USA
17
Bayat S, Azizi S, Daoud MI, Nir G, Imani F, Gerardo CD, Yan P, Tahmasebi A, Vignon F, Sojoudi S, Wilson S, Iczkowski KA, Lucia MS, Goldenberg L, Salcudean SE, Abolmaesumi P, Mousavi P. Investigation of Physical Phenomena Underlying Temporal-Enhanced Ultrasound as a New Diagnostic Imaging Technique: Theory and Simulations. IEEE Trans Ultrason Ferroelectr Freq Control 2018; 65:400-410. [PMID: 29505407] [DOI: 10.1109/tuffc.2017.2785230]
Abstract
Temporal-enhanced ultrasound (TeUS) is a novel noninvasive imaging paradigm that captures information from a temporal sequence of backscattered US radio frequency data obtained from a fixed tissue location. This technology has been shown to be effective for classification of various in vivo and ex vivo tissue types, including distinguishing prostate cancer from benign tissue. Our previous studies have indicated two primary phenomena that influence TeUS: 1) changes in tissue temperature due to acoustic absorption and 2) micro-vibrations of tissue due to physiological vibration. In this paper, first, a theoretical formulation for TeUS is presented. Next, a series of simulations is carried out to investigate micro-vibration as a source of tissue-characterizing information in TeUS. The simulations include finite element modeling of micro-vibration in synthetic phantoms, followed by US image generation during TeUS imaging. The simulations are performed on two media: a sparse array of scatterers and a medium with pathology-mimicking scatterers that match the nuclei distribution extracted from a prostate digital pathology data set. Statistical analysis of the simulated TeUS data shows its ability to accurately classify tissue types. Our experiments suggest that TeUS can capture microstructural differences, including scatterer density, in tissues as they react to micro-vibrations.
18
Nahlawi L, Goncalves C, Imani F, Gaed M, Gomez JA, Moussa M, Gibson E, Fenster A, Ward A, Abolmaesumi P, Shatkay H, Mousavi P. Stochastic Modeling of Temporal Enhanced Ultrasound: Impact of Temporal Properties on Prostate Cancer Characterization. IEEE Trans Biomed Eng 2017; 65:1798-1809. [PMID: 29989922] [DOI: 10.1109/tbme.2017.2778007]
Abstract
OBJECTIVES Temporal enhanced ultrasound (TeUS) is a new ultrasound-based imaging technique that provides tissue-specific information. Recent studies have shown the potential of TeUS for improving tissue characterization in prostate cancer diagnosis. We study two temporal properties of TeUS, temporal order and length, and present a new framework to assess their impact on tissue information. METHODS We utilize a probabilistic modeling approach using hidden Markov models (HMMs) to capture the temporal signatures of malignant and benign tissues from TeUS signals of nine patients. We model signals of benign and malignant tissues (284 and 286 signals, respectively) in their original temporal order as well as under order permutations. We then compare the resulting models using the Kullback-Leibler divergence and assess their performance differences in characterization. Moreover, we train HMMs using TeUS signals of different durations and compare their performance when differentiating tissue types. RESULTS Our findings demonstrate that models of order-preserved signals perform statistically significantly better (85% accuracy) in tissue characterization than models of order-altered signals (62% accuracy). The performance degrades as more changes in signal order are introduced. Additionally, models trained on shorter sequences perform as accurately as models of longer sequences. CONCLUSION The work presented here strongly indicates that temporal order has a substantial impact on TeUS performance and thus plays a significant role in conveying tissue-specific information. Furthermore, shorter TeUS signals can relay sufficient information to accurately distinguish between tissue types. SIGNIFICANCE Understanding the impact of TeUS properties facilitates its adoption in diagnostic procedures and provides insights for improving its acquisition.
19
Azizi S, Bayat S, Yan P, Tahmasebi A, Nir G, Kwak JT, Xu S, Wilson S, Iczkowski KA, Lucia MS, Goldenberg L, Salcudean SE, Pinto PA, Wood B, Abolmaesumi P, Mousavi P. Detection and grading of prostate cancer using temporal enhanced ultrasound: combining deep neural networks and tissue mimicking simulations. Int J Comput Assist Radiol Surg 2017; 12:1293-1305. [PMID: 28634789] [PMCID: PMC7900902] [DOI: 10.1007/s11548-017-1627-0]
Abstract
PURPOSE Temporal enhanced ultrasound (TeUS) has been proposed as a new paradigm for tissue characterization based on a sequence of ultrasound radio frequency (RF) data. We previously used TeUS to successfully address the problem of prostate cancer detection in fusion biopsies. METHODS In this paper, we use TeUS to address the problem of grading prostate cancer in a clinical study of 197 biopsy cores from 132 patients. Our method involves capturing high-level latent features of TeUS with a deep learning approach, followed by distribution learning to cluster aggressive cancer in a biopsy core. In this hypothesis-generating study, we utilize deep learning based feature visualization as a means to obtain insight into the physical phenomenon governing the interaction of temporal ultrasound with tissue. RESULTS Based on the evidence derived from our feature visualization and the structure of tissue from digital pathology, we build a simulation framework for studying the physical phenomenon underlying TeUS-based tissue characterization. CONCLUSION Results from simulation and feature visualization corroborated the hypothesis that micro-vibrations of tissue microstructure, captured by low-frequency spectral features of TeUS, can be used for detection of prostate cancer.
Affiliation(s)
- Sharareh Bayat
- The University of British Columbia, Vancouver, BC, Canada
- Pingkun Yan
- Philips Research North America, Cambridge, MA, USA
- Guy Nir
- The University of British Columbia, Vancouver, BC, Canada
- Jin Tae Kwak
- Sejong University, Gwangjin-Gu, Seoul, South Korea
- Sheng Xu
- National Institutes of Health, Bethesda, MD, USA
20
Azizi S, Mousavi P, Yan P, Tahmasebi A, Kwak JT, Xu S, Turkbey B, Choyke P, Pinto P, Wood B, Abolmaesumi P. Transfer learning from RF to B-mode temporal enhanced ultrasound features for prostate cancer detection. Int J Comput Assist Radiol Surg 2017; 12:1111-1121. [PMID: 28349507] [DOI: 10.1007/s11548-017-1573-x]
Abstract
PURPOSE We present a method for prostate cancer (PCa) detection using temporal enhanced ultrasound (TeUS) data obtained either from radiofrequency (RF) ultrasound signals or B-mode images. METHODS For the first time, we demonstrate that by applying domain adaptation and transfer learning methods, a tissue classification model trained on TeUS RF data (source domain) can be deployed for classification using TeUS B-mode data alone (target domain), where both data are obtained on the same ultrasound scanner. This is a critical step for clinical translation of tissue classification techniques that primarily rely on accessing RF data, since this imaging modality is not readily available on all commercial scanners in clinics. Proof of concept is provided for in vivo characterization of PCa using TeUS B-mode data, where different nonlinear processing filters in the pipeline of the RF to B-mode conversion result in a distribution shift between the two domains. RESULTS Our in vivo study includes data obtained in MRI-guided targeted procedure for prostate biopsy. We achieve comparable area under the curve using TeUS RF and B-mode data for medium to large cancer tumor sizes in biopsy cores (>4 mm). CONCLUSION Our result suggests that the proposed adaptation technique is successful in reducing the divergence between TeUS RF and B-mode data.
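A hedged sketch of one simple feature-space adaptation baseline (CORAL-style second-order alignment), shown only to illustrate the idea of reducing the distribution shift between RF-derived (source) and B-mode-derived (target) TeUS features; the paper's actual domain adaptation and transfer learning method may differ. All feature matrices are placeholders.

```python
# Hypothetical sketch: align source-feature covariance to the target domain.
import numpy as np

def _sym_power(m, p):
    vals, vecs = np.linalg.eigh(m)                 # m is symmetric positive definite
    return vecs @ np.diag(vals ** p) @ vecs.T

def coral_align(source_X, target_X, eps=1e-3):
    """Re-color source features so their covariance matches the target's."""
    cs = np.cov(source_X, rowvar=False) + eps * np.eye(source_X.shape[1])
    ct = np.cov(target_X, rowvar=False) + eps * np.eye(target_X.shape[1])
    aligned = (source_X - source_X.mean(0)) @ _sym_power(cs, -0.5) @ _sym_power(ct, 0.5)
    return aligned + target_X.mean(0)

rng = np.random.default_rng(0)
rf_feats = rng.normal(size=(200, 32))               # placeholder TeUS RF features (source)
bmode_feats = 2.0 * rng.normal(size=(150, 32)) + 1  # shifted/scaled B-mode features (target)
rf_aligned = coral_align(rf_feats, bmode_feats)     # train the classifier on these
```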
Affiliation(s)
- Pingkun Yan
- Philips Research North America, Cambridge, MA, USA
- Sheng Xu
- National Institutes of Health, Bethesda, MD, USA
- Peter Choyke
- National Institutes of Health, Bethesda, MD, USA
- Peter Pinto
- National Institutes of Health, Bethesda, MD, USA
21
Azizi S, Imani F, Ghavidel S, Tahmasebi A, Kwak JT, Xu S, Turkbey B, Choyke P, Pinto P, Wood B, Mousavi P, Abolmaesumi P. Detection of prostate cancer using temporal sequences of ultrasound data: a large clinical feasibility study. Int J Comput Assist Radiol Surg 2016; 11:947-56. [PMID: 27059021] [DOI: 10.1007/s11548-016-1395-2]
Abstract
PURPOSE This paper presents the results of a large study involving fusion prostate biopsies to demonstrate that temporal ultrasound can be used to accurately classify tissue labels identified in multi-parametric magnetic resonance imaging (mp-MRI) as suspicious for cancer. METHODS We use deep learning to analyze temporal ultrasound data obtained from 255 cancer foci identified in mp-MRI. Each target is sampled in axial and sagittal planes. A deep belief network is trained to automatically learn the high-level latent features of temporal ultrasound data. A support vector machine classifier is then applied to differentiate cancerous versus benign tissue, verified by histopathology. Data from 32 targets are used for training, while the remaining 223 targets are used for testing. RESULTS Our results indicate that the distance between the biopsy target and the prostate boundary, and the agreement between axial and sagittal histopathology of each target, impact the classification accuracy. In 84 test cores that are 5 mm or farther from the prostate boundary and have consistent pathology outcomes in axial and sagittal biopsy planes, we achieve an area under the curve of 0.80. In contrast, all of these targets were labeled as moderately suspicious in mp-MRI. CONCLUSION Using temporal ultrasound data in a fusion prostate biopsy study, we achieved high classification accuracy specifically for moderately scored mp-MRI targets. These targets are clinically common and contribute to the high false-positive rates associated with mp-MRI for prostate cancer detection. Temporal ultrasound data combined with mp-MRI have the potential to reduce the number of unnecessary biopsies in fusion biopsy settings.
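A hedged sketch of the two-stage idea above, unsupervised feature learning followed by an SVM: a single BernoulliRBM from scikit-learn stands in for the paper's deep belief network, so this illustrates the pipeline shape rather than the authors' model. All data are placeholders.

```python
# Hypothetical sketch: RBM feature learning + SVM on placeholder temporal US vectors.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))            # placeholder: 100-frame temporal US per region
y = rng.integers(0, 2, size=500)           # benign / cancerous (from histopathology)

pipe = Pipeline([
    ("scale", MinMaxScaler()),             # RBM expects values in [0, 1]
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("svm", SVC(kernel="rbf", probability=True)),
])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
pipe.fit(Xtr, ytr)
print("AUC:", roc_auc_score(yte, pipe.predict_proba(Xte)[:, 1]))
```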
Affiliation(s)
- Shekoofeh Azizi
- University of British Columbia, Vancouver, British Columbia, Canada.
- Farhad Imani
- University of British Columbia, Vancouver, British Columbia, Canada
- Amir Tahmasebi
- Philips Research North America, Cambridge, Massachusetts, USA
- Jin Tae Kwak
- National Institutes of Health, Bethesda, Maryland, USA
- Sheng Xu
- National Institutes of Health, Bethesda, Maryland, USA
- Baris Turkbey
- National Institutes of Health, Bethesda, Maryland, USA
- Peter Choyke
- National Institutes of Health, Bethesda, Maryland, USA
- Peter Pinto
- National Institutes of Health, Bethesda, Maryland, USA
- Bradford Wood
- National Institutes of Health, Bethesda, Maryland, USA
22
Imani F, Ramezani M, Nouranian S, Gibson E, Khojaste A, Gaed M, Moussa M, Gomez JA, Romagnoli C, Leveridge M, Chang S, Fenster A, Siemens DR, Ward AD, Mousavi P, Abolmaesumi P. Ultrasound-Based Characterization of Prostate Cancer Using Joint Independent Component Analysis. IEEE Trans Biomed Eng 2015; 62:1796-1804. [PMID: 25720016] [DOI: 10.1109/TBME.2015.2404300]
Abstract
OBJECTIVE This paper presents a new approach to the selection of RF time series features, based on joint independent component analysis, for in vivo characterization of prostate cancer. METHODS We project three sets of RF time series features extracted from the spectrum, fractal dimension, and wavelet transform of the ultrasound RF data onto a space spanned by five joint independent components. We then demonstrate that the mixing coefficients obtained from a group of patients can be used to train a classifier, which can be applied to characterize cancerous regions of a test patient. RESULTS In leave-one-patient-out cross-validation, an area under the receiver operating characteristic curve of 0.93 and a classification accuracy of 84% are achieved. CONCLUSION Ultrasound RF time series can be used to accurately characterize prostate cancer in vivo, without the need for an exhaustive search of the feature space. SIGNIFICANCE We use joint independent component analysis for systematic fusion of multiple sets of RF time series features, within a machine learning framework, to characterize PCa in an in vivo study.
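A simplified sketch of the fusion-and-evaluation idea above: several RF time series feature sets are stacked, projected onto a small number of independent components with FastICA (a stand-in for joint ICA), and a classifier is evaluated with leave-one-patient-out cross-validation on placeholder data.

```python
# Hypothetical sketch: fused feature sets -> ICA -> SVM, leave-one-patient-out CV.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
spectral = rng.normal(size=(300, 10))     # placeholder spectral features
fractal = rng.normal(size=(300, 5))       # placeholder fractal-dimension features
wavelet = rng.normal(size=(300, 12))      # placeholder wavelet features
X = np.hstack([spectral, fractal, wavelet])
y = rng.integers(0, 2, size=300)          # benign / cancerous region labels
patients = rng.integers(0, 20, size=300)  # patient id for each region

model = make_pipeline(StandardScaler(), FastICA(n_components=5, random_state=0),
                      SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, groups=patients, cv=LeaveOneGroupOut(),
                         scoring="accuracy")
print("leave-one-patient-out accuracy:", scores.mean())
```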
Affiliation(s)
- Farhad Imani
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Eli Gibson
- Robarts Research Institute, Western University
- Mena Gaed
- Robarts Research Institute, Western University