1. Picot F, Shams R, Dallaire F, Sheehy G, Trang T, Grajales D, Birlea M, Trudel D, Ménard C, Kadoury S, Leblond F. Image-guided Raman spectroscopy navigation system to improve transperineal prostate cancer detection. Part 1: Raman spectroscopy fiber-optics system and in situ tissue characterization. J Biomed Opt 2022; 27:095003. [PMID: 36045491] [PMCID: PMC9433338] [DOI: 10.1117/1.jbo.27.9.095003]
Abstract
SIGNIFICANCE The diagnosis of prostate cancer (PCa) and focal treatment by brachytherapy are limited by the lack of precise intraoperative information to target tumors during biopsy collection and radiation seed placement. Image-guidance techniques could improve the safety and diagnostic yield of biopsy collection as well as increase the efficacy of radiotherapy. AIM To estimate the accuracy of PCa detection using in situ Raman spectroscopy (RS) in a pilot in-human clinical study and to assess biochemical differences between in vivo and ex vivo measurements. APPROACH A new miniature RS fiber-optics system equipped with an electromagnetic (EM) tracker was guided by transrectal ultrasound imaging, fused with preoperative magnetic resonance imaging, to acquire 49 spectra in situ (in vivo) from 18 PCa patients. In addition, 179 spectra were acquired ex vivo in fresh prostate samples from 14 patients who underwent radical prostatectomy. Two machine-learning models were trained to discriminate cancer from normal prostate tissue using the in situ and ex vivo datasets. RESULTS A support vector machine (SVM) model was trained on the in situ dataset and evaluated with leave-one-patient-out cross-validation on 28 normal prostate measurements and 21 in-tumor measurements, reaching 86% sensitivity and 72% specificity. Similarly, an SVM model trained on the ex vivo dataset of 152 normal prostate measurements and 27 tumor measurements showed reduced cancer detection performance, mostly attributable to spatial registration inaccuracies between probe measurements and histology assessment. A qualitative comparison between in situ and ex vivo measurements demonstrated a one-to-one correspondence and similar ratios between the main Raman bands (e.g., amide I-II bands, phenylalanine).
CONCLUSIONS PCa detection can be achieved with RS and machine-learning models in image-guidance applications, using in situ measurements acquired during prostate biopsy procedures.
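The patient-wise validation scheme above can be sketched as follows. This is a minimal illustration with synthetic stand-ins for the Raman spectra; the band count, kernel choice, and injected class signal are assumptions for the sketch, not the paper's actual data or hyperparameters. The key point is that all spectra from one patient are held out together, so no patient contributes to both training and testing.

```python
# Sketch of leave-one-patient-out cross-validation for an SVM tissue
# classifier. Data are synthetic; only the validation structure mirrors
# the study (49 spectra, 18 patients, 28 normal / 21 tumor labels).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n_spectra, n_bands = 49, 200                 # 200 bands is an arbitrary choice
X = rng.normal(size=(n_spectra, n_bands))    # stand-in spectra
y = np.array([0] * 28 + [1] * 21)            # 0 = normal, 1 = tumor
patients = np.arange(n_spectra) % 18         # patient ID for each spectrum
X[y == 1] += 0.3                             # weak synthetic tumor signature

# Each fold holds out every spectrum from one patient at once.
logo = LeaveOneGroupOut()
y_pred = cross_val_predict(SVC(kernel="linear"), X, y,
                           cv=logo, groups=patients)
sensitivity = recall_score(y, y_pred, pos_label=1)
specificity = recall_score(y, y_pred, pos_label=0)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

With real spectra, sensitivity and specificity computed this way estimate per-measurement performance unbiased by patient-level leakage.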
Affiliation(s)
- Fabien Picot
  - Polytechnique Montréal, Department of Engineering Physics, Montreal, Quebec, Canada
  - Centre de recherche du Centre hospitalier de l’Université de Montréal, Montreal, Quebec, Canada
- Roozbeh Shams
  - Centre de recherche du Centre hospitalier de l’Université de Montréal, Montreal, Quebec, Canada
  - Polytechnique Montréal, Medical Laboratory, Montreal, Quebec, Canada
- Frédérick Dallaire
  - Polytechnique Montréal, Department of Engineering Physics, Montreal, Quebec, Canada
  - Centre de recherche du Centre hospitalier de l’Université de Montréal, Montreal, Quebec, Canada
- Guillaume Sheehy
  - Polytechnique Montréal, Department of Engineering Physics, Montreal, Quebec, Canada
  - Centre de recherche du Centre hospitalier de l’Université de Montréal, Montreal, Quebec, Canada
- Tran Trang
  - Polytechnique Montréal, Department of Engineering Physics, Montreal, Quebec, Canada
  - Centre de recherche du Centre hospitalier de l’Université de Montréal, Montreal, Quebec, Canada
- David Grajales
  - Centre de recherche du Centre hospitalier de l’Université de Montréal, Montreal, Quebec, Canada
  - Polytechnique Montréal, Medical Laboratory, Montreal, Quebec, Canada
- Mirela Birlea
  - Centre de recherche du Centre hospitalier de l’Université de Montréal, Montreal, Quebec, Canada
- Dominique Trudel
  - Centre de recherche du Centre hospitalier de l’Université de Montréal, Montreal, Quebec, Canada
- Cynthia Ménard
  - Centre de recherche du Centre hospitalier de l’Université de Montréal, Montreal, Quebec, Canada
- Samuel Kadoury
  - Centre de recherche du Centre hospitalier de l’Université de Montréal, Montreal, Quebec, Canada
  - Polytechnique Montréal, Medical Laboratory, Montreal, Quebec, Canada
- Frédéric Leblond
  - Polytechnique Montréal, Department of Engineering Physics, Montreal, Quebec, Canada
  - Centre de recherche du Centre hospitalier de l’Université de Montréal, Montreal, Quebec, Canada
  - Institut du cancer de Montréal, Montreal, Quebec, Canada
2. Grajales D, Picot F, Shams R, Dallaire F, Sheehy G, Alley S, Barkati M, Delouya G, Carrier JF, Birlea M, Trudel D, Leblond F, Ménard C, Kadoury S. Image-guided Raman spectroscopy navigation system to improve transperineal prostate cancer detection. Part 2: in-vivo tumor-targeting using a classification model combining spectral and MRI-radiomics features. J Biomed Opt 2022; 27:095004. [PMID: 36085571] [PMCID: PMC9459023] [DOI: 10.1117/1.jbo.27.9.095004]
Abstract
SIGNIFICANCE The diagnosis and treatment of prostate cancer (PCa) are limited by a lack of intraoperative information to accurately target tumors with needles for biopsy and brachytherapy. An innovative image-guidance technique using optical devices could improve the diagnostic yield of biopsy and the efficacy of radiotherapy. AIM To evaluate the performance of multimodal PCa detection using biomolecular features from in-situ Raman spectroscopy (RS) combined with image-based (radiomics) features from multiparametric magnetic resonance images (mpMRI). APPROACH In a prospective pilot clinical study, 18 patients were recruited and underwent high-dose-rate brachytherapy. Multimodality image fusion (preoperative mpMRI with intraoperative transrectal ultrasound) combined with electromagnetic tracking was used to navigate an RS needle in the prostate prior to brachytherapy. The resulting dataset consisted of Raman spectra and co-located radiomics features from mpMRI. Feature selection was performed with the constraint that no more than 10 features were retained overall from the combination of inelastic scattering spectra and radiomics. These features were used to train support vector machine classifiers for PCa detection based on leave-one-patient-out cross-validation. RESULTS RS along with biopsy samples were acquired from 47 sites along the insertion trajectory of the fiber-optics needle: 26 were confirmed as benign or grade group = 1, and 21 as grade group >1, according to histopathological reports. The combination of the fingerprint region of the RS and radiomics showed an accuracy of 83% (sensitivity = 81% and specificity = 85%), outperforming by more than 9% models trained with either spectroscopic or mpMRI data alone. An optimal number of features was identified between six and eight, with good potential for discriminating grade group ≥1 vs. grade group <1 (accuracy = 87%) or grade group >1 vs. grade group ≤1 (accuracy = 91%).
CONCLUSIONS In-situ Raman spectroscopy combined with mpMRI radiomics features can lead to highly accurate PCa detection for improved in-vivo targeting of biopsy sample collection and radiotherapy seed placement.
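The capped multimodal feature selection described above can be sketched as follows. The paper does not name its selection method, so `SelectKBest` with an ANOVA F-score is used here purely as an illustrative stand-in; the data, feature counts, and labels are all synthetic assumptions. The point of the sketch is that the two modalities are concatenated first and the 10-feature budget is applied jointly, so the selector is free to mix spectral and radiomics columns.

```python
# Sketch of capped feature selection over concatenated Raman-spectral and
# mpMRI-radiomics features (at most 10 retained overall). Data and the
# scoring function are illustrative, not the paper's pipeline.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(1)
n_sites = 47
spectral = rng.normal(size=(n_sites, 200))   # stand-in fingerprint bands
radiomics = rng.normal(size=(n_sites, 30))   # stand-in radiomics features
y = np.arange(n_sites) % 2                   # 0 = benign/GG1, 1 = GG>1

X = np.hstack([spectral, radiomics])         # joint feature space
selector = SelectKBest(score_func=f_classif, k=10).fit(X, y)
X_sel = selector.transform(X)
kept = selector.get_support(indices=True)
n_spectral_kept = int((kept < spectral.shape[1]).sum())
print(f"kept {X_sel.shape[1]} features ({n_spectral_kept} spectral)")
```

The reduced matrix `X_sel` would then feed the SVM under the same leave-one-patient-out scheme as in Part 1.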
Affiliation(s)
- David Grajales
  - Polytechnique Montréal, Montreal, Québec, Canada
  - Centre de recherche du Centre Hospitalier de l’Université de Montréal, Montreal, Québec, Canada
- Fabien Picot
  - Polytechnique Montréal, Montreal, Québec, Canada
  - Centre de recherche du Centre Hospitalier de l’Université de Montréal, Montreal, Québec, Canada
- Roozbeh Shams
  - Polytechnique Montréal, Montreal, Québec, Canada
  - Centre de recherche du Centre Hospitalier de l’Université de Montréal, Montreal, Québec, Canada
- Frédérick Dallaire
  - Polytechnique Montréal, Montreal, Québec, Canada
  - Centre de recherche du Centre Hospitalier de l’Université de Montréal, Montreal, Québec, Canada
- Guillaume Sheehy
  - Polytechnique Montréal, Montreal, Québec, Canada
  - Centre de recherche du Centre Hospitalier de l’Université de Montréal, Montreal, Québec, Canada
- Stephanie Alley
  - Polytechnique Montréal, Montreal, Québec, Canada
  - Centre de recherche du Centre Hospitalier de l’Université de Montréal, Montreal, Québec, Canada
- Maroie Barkati
  - Centre de recherche du Centre Hospitalier de l’Université de Montréal, Montreal, Québec, Canada
- Guila Delouya
  - Centre de recherche du Centre Hospitalier de l’Université de Montréal, Montreal, Québec, Canada
- Jean-Francois Carrier
  - Centre de recherche du Centre Hospitalier de l’Université de Montréal, Montreal, Québec, Canada
- Mirela Birlea
  - Centre de recherche du Centre Hospitalier de l’Université de Montréal, Montreal, Québec, Canada
- Dominique Trudel
  - Centre de recherche du Centre Hospitalier de l’Université de Montréal, Montreal, Québec, Canada
- Frédéric Leblond
  - Polytechnique Montréal, Montreal, Québec, Canada
  - Centre de recherche du Centre Hospitalier de l’Université de Montréal, Montreal, Québec, Canada
  - Institut du Cancer de Montréal, Montreal, Québec, Canada
- Cynthia Ménard
  - Centre de recherche du Centre Hospitalier de l’Université de Montréal, Montreal, Québec, Canada
- Samuel Kadoury
  - Polytechnique Montréal, Montreal, Québec, Canada
  - Centre de recherche du Centre Hospitalier de l’Université de Montréal, Montreal, Québec, Canada
3. Zuo Q, Zhang J, Yang Y. DMC-Fusion: Deep Multi-Cascade Fusion With Classifier-Based Feature Synthesis for Medical Multi-Modal Images. IEEE J Biomed Health Inform 2021; 25:3438-3449. [PMID: 34038372] [DOI: 10.1109/jbhi.2021.3083752]
Abstract
Multi-modal medical image fusion is a challenging yet important task for precision diagnosis and surgical planning in clinical practice. Although single-feature fusion strategies such as DenseFuse have achieved inspiring performance, they tend not to fully preserve the source image features. In this paper, a deep multi-cascade fusion framework with classifier-based feature synthesis is proposed to automatically fuse multi-modal medical images. It consists of a pre-trained autoencoder based on dense connections, a feature classifier, and a multi-cascade fusion decoder that fuses high-frequency and low-frequency components separately. The encoder and decoder are transferred from the MS-COCO dataset and pre-trained simultaneously on public multi-modal medical image datasets to extract features. Feature classification is conducted through Gaussian high-pass filtering and peak signal-to-noise ratio (PSNR) thresholding; the feature maps in each layer of the pre-trained Dense-Block and decoder are then divided into high-frequency and low-frequency sequences. Specifically, in the proposed feature fusion block, a parameter-adaptive pulse-coupled neural network and an l1-weighted strategy are employed to fuse the high-frequency and low-frequency components, respectively. Finally, we design a novel multi-cascade fusion decoder over the full decoding feature stages to selectively fuse useful information from different modalities. We also validate our approach for brain disease classification using the fused images, and a statistical significance test is performed to show that the improvement in classification performance is due to the fusion. Experimental results demonstrate that the proposed method achieves state-of-the-art performance in both qualitative and quantitative evaluations.
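The feature-classification step above (Gaussian high-pass filtering plus PSNR thresholding) can be sketched as follows. The threshold value, filter sigma, and toy feature maps are illustrative assumptions; the paper's exact parameters are not given here. The idea is that a map whose low-pass version reconstructs it with high PSNR carries little high-frequency content and is routed to the low-frequency branch.

```python
# Sketch of classifier-based feature routing: each 2D feature map is
# Gaussian high-pass filtered and assigned to the high- or low-frequency
# group by a PSNR threshold. Threshold and sigma are assumed values.
import numpy as np
from scipy.ndimage import gaussian_filter

def psnr(ref, img, data_range=1.0):
    mse = np.mean((ref - img) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

def split_feature_maps(maps, sigma=2.0, thresh=30.0):
    """Route each 2D feature map to a low- or high-frequency group."""
    low_group, high_group = [], []
    for fmap in maps:
        lowpass = gaussian_filter(fmap, sigma=sigma)
        # High PSNR between the map and its low-pass version means the
        # high-pass residue (fmap - lowpass) is negligible.
        (low_group if psnr(fmap, lowpass) > thresh else high_group).append(fmap)
    return low_group, high_group

smooth = np.full((16, 16), 0.5)               # nearly flat map -> low group
edgy = np.zeros((16, 16)); edgy[:, 8:] = 1.0  # sharp step edge -> high group
low, high = split_feature_maps([smooth, edgy])
print(len(low), len(high))
```

Downstream, the low-frequency group would be fused with an l1-weighted rule and the high-frequency group with a PA-PCNN, per the abstract.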
4. Shams R, Picot F, Grajales D, Sheehy G, Dallaire F, Birlea M, Saad F, Trudel D, Menard C, Leblond F, Kadoury S. Pre-clinical evaluation of an image-guided in-situ Raman spectroscopy navigation system for targeted prostate cancer interventions. Int J Comput Assist Radiol Surg 2020; 15:867-876. [PMID: 32227280] [DOI: 10.1007/s11548-020-02136-9]
Abstract
PURPOSE Transrectal ultrasound (TRUS) image guidance is the standard of care for diagnostic and therapeutic interventions in prostate cancer (PCa) patients but can lead to high false-negative rates, compromising the downstream effectiveness of therapeutic choices. A promising approach to improve in-situ detection of PCa lies in using the optical properties of the tissue to discern cancer from healthy tissue. In this work, we present the first in-situ image-guided navigation system for a spatially tracked Raman spectroscopy probe integrated in a PCa workflow, capturing the optical tissue fingerprint. The probe is guided with fused TRUS/MR imaging and tested with both tissue-simulating phantoms and ex-vivo prostates. The workflow was designed to be integrated into the clinical workflow for trans-perineal prostate biopsies, as well as for high-dose-rate (HDR) brachytherapy. METHODS The proposed system, developed in 3D Slicer, includes an electromagnetically tracked Raman spectroscopy probe, along with tracked TRUS imaging automatically registered to diagnostic MRI. The system was tested on both custom gelatin tissue-simulating optical phantoms and biological tissue phantoms. A random-forest classifier was then trained on optical spectra acquired with our probe from ex-vivo prostates following prostatectomy. Preliminary in-human results are presented with the Raman spectroscopy instrument detecting malignant tissue in-situ with histopathology confirmation. RESULTS In 5 synthetic gelatin and biological tissue phantoms, we demonstrate the ability of the image-guided Raman system by detecting over 95% of lesions, based on biopsy samples. The included lesion volumes ranged from 0.1 to 0.61 cc. We showed the compatibility of our workflow with the current HDR brachytherapy setup. In ex-vivo prostates of PCa patients, the system showed an 81% detection accuracy for high-grade lesions.
CONCLUSION Pre-clinical experiments demonstrated promising results for in-situ confirmation of lesion locations in prostates using Raman spectroscopy, both in phantoms and human ex-vivo prostate tissue, which is required for integration in HDR brachytherapy procedures.
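The ex-vivo-trained random-forest classifier described above can be sketched as follows. Spectra, band counts, and the injected tumor signature are synthetic stand-ins; only the train-ex-vivo / score-new-measurement structure mirrors the abstract.

```python
# Sketch of a random-forest tissue classifier trained on ex-vivo probe
# spectra and applied to a new measurement. All data are synthetic; in
# the study, labels come from histopathology of prostatectomy samples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_train, n_bands = 120, 300
X_train = rng.normal(size=(n_train, n_bands))      # stand-in ex-vivo spectra
y_train = np.arange(n_train) % 2                   # 0 = benign, 1 = malignant
X_train[y_train == 1, :50] += 0.5                  # synthetic tumor signature

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# A new in-situ measurement gets a malignancy score from predict_proba,
# which could drive intraoperative confirmation of a lesion location.
new_spectrum = rng.normal(size=(1, n_bands))
new_spectrum[:, :50] += 0.5                        # mimic the signature
score = clf.predict_proba(new_spectrum)[0, 1]
print(f"malignancy score: {score:.2f}")
```

A probability output (rather than a hard label) lets the navigation system display a confidence value at each probed site.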
Affiliation(s)
- Mirela Birlea
  - Centre Hospitalier de l'Universite de Montreal Research Center, Montreal, Canada
- Fred Saad
  - Centre Hospitalier de l'Universite de Montreal Research Center, Montreal, Canada
- Dominique Trudel
  - Centre Hospitalier de l'Universite de Montreal Research Center, Montreal, Canada
- Cynthia Menard
  - Centre Hospitalier de l'Universite de Montreal Research Center, Montreal, Canada
- Samuel Kadoury
  - Polytechnique Montreal, Montreal, Canada
  - Centre Hospitalier de l'Universite de Montreal Research Center, Montreal, Canada
5. Patel NA, Li G, Shang W, Wartenberg M, Heffter T, Burdette EC, Iordachita I, Tokuda J, Hata N, Tempany CM, Fischer GS. System Integration and Preliminary Clinical Evaluation of a Robotic System for MRI-Guided Transperineal Prostate Biopsy. J Med Robot Res 2019; 4:1950001. [PMID: 31485544] [PMCID: PMC6726403] [DOI: 10.1142/s2424905x19500016]
Abstract
This paper presents the development, preclinical evaluation, and preliminary clinical study of a robotic system for targeted transperineal prostate biopsy under direct interventional magnetic resonance imaging (MRI) guidance. The clinically integrated robotic system follows a modular design comprising a surgical navigation application, robot control software, MRI robot controller hardware, and a robotic needle-placement manipulator. The system provides enabling technologies for MRI-guided procedures. It can be easily transported and set up to support the clinical workflow of interventional procedures, and it is readily extensible and reconfigurable to other clinical applications. Preclinical evaluation of the system was performed with phantom studies in a 3 Tesla MRI scanner, rehearsing the proposed clinical workflow and demonstrating an in-plane targeting error of 1.5 mm. The robotic system has been approved by the institutional review board (IRB) for clinical trials. A preliminary clinical study conducted with patient consent demonstrated targeting errors at two biopsy target sites of 4.0 mm and 3.7 mm, which is sufficient to target a clinically significant tumor focus. First-in-human trials to evaluate the system's effectiveness and accuracy for MR image-guided prostate biopsy are underway.
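The targeting-error metrics quoted above can be sketched as follows: the 3D target registration error is the Euclidean distance between planned and actual needle-tip positions, and the in-plane error is the residual after projecting out the component along the needle axis. The coordinates and axis here are invented for illustration, not values from the study.

```python
# Sketch of 3D and in-plane targeting-error computation for a needle
# placement system. Inputs are hypothetical positions in millimeters.
import numpy as np

def targeting_errors(planned, actual, needle_axis):
    """Return (3D TRE, in-plane error) in the units of the inputs."""
    delta = np.asarray(actual, float) - np.asarray(planned, float)
    axis = np.asarray(needle_axis, float)
    axis = axis / np.linalg.norm(axis)
    tre_3d = np.linalg.norm(delta)
    # Remove the along-needle component; what remains is in-plane error.
    in_plane = np.linalg.norm(delta - np.dot(delta, axis) * axis)
    return tre_3d, in_plane

planned = [10.0, 20.0, 55.0]   # hypothetical target position (mm)
actual = [11.2, 20.9, 58.0]    # hypothetical measured needle tip (mm)
tre, in_plane = targeting_errors(planned, actual, needle_axis=[0, 0, 1])
print(f"3D TRE = {tre:.1f} mm, in-plane = {in_plane:.1f} mm")
```

Separating the two errors matters because depth along the needle axis is easier to correct intraoperatively than in-plane deviation.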
Affiliation(s)
- Niravkumar A Patel (shared first authorship)
  - Automation and Interventional Medicine Laboratory, Worcester Polytechnic Institute, Worcester, MA 01609, USA [napatel, gfischerj]@wpi.edu
- Gang Li (shared first authorship)
  - Automation and Interventional Medicine Laboratory, Worcester Polytechnic Institute, Worcester, MA 01609, USA [napatel, gfischerj]@wpi.edu
- Weijian Shang
  - Automation and Interventional Medicine Laboratory, Worcester Polytechnic Institute, Worcester, MA 01609, USA [napatel, gfischerj]@wpi.edu
- Marek Wartenberg
  - Automation and Interventional Medicine Laboratory, Worcester Polytechnic Institute, Worcester, MA 01609, USA [napatel, gfischerj]@wpi.edu
- Tamas Heffter
  - Automation and Interventional Medicine Laboratory, Worcester Polytechnic Institute, Worcester, MA 01609, USA [napatel, gfischerj]@wpi.edu
- Everette C Burdette
  - Automation and Interventional Medicine Laboratory, Worcester Polytechnic Institute, Worcester, MA 01609, USA [napatel, gfischerj]@wpi.edu
- Iulian Iordachita
  - Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, Baltimore, MD, USA
- Junichi Tokuda
  - Department of Radiology, Surgical Navigation and Robotics Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Nobuhiko Hata
  - Department of Radiology, Surgical Navigation and Robotics Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Clare M Tempany
  - Department of Radiology, Surgical Navigation and Robotics Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Gregory S Fischer
  - Automation and Interventional Medicine Laboratory, Worcester Polytechnic Institute, Worcester, MA 01609, USA [napatel, gfischerj]@wpi.edu
6. De Silva T, Uneri A, Zhang X, Ketcha M, Han R, Sheth N, Martin A, Vogt S, Kleinszig G, Belzberg A, Sciubba DM, Siewerdsen JH. Real-time, image-based slice-to-volume registration for ultrasound-guided spinal intervention. Phys Med Biol 2018; 63:215016. [PMID: 30372418] [DOI: 10.1088/1361-6560/aae761]
Abstract
Real-time fusion of magnetic resonance (MR) and ultrasound (US) images could facilitate safe and accurate needle placement in spinal interventions. We develop an entirely image-based registration method (independent of or complementary to surgical trackers) that includes an efficient US probe pose initialization algorithm. The registration enables the simultaneous display of 2D ultrasound image slices relative to 3D pre-procedure MR images for navigation. A dictionary-based 3D-2D pose initialization algorithm was developed in which likely probe positions are predefined in a dictionary with feature encoding by Haar wavelet filters. Feature vectors representing the 2D US image are computed by scaling and translating multiple Haar basis filters to capture scale, location, and relative intensity patterns of distinct anatomical features. Following pose initialization, fast 3D-2D registration was performed by optimizing normalized cross-correlation between intra- and pre-procedure images using Powell's method. Experiments were performed using a lumbar puncture phantom and a fresh cadaver specimen presenting realistic image quality in spinal US imaging. Accuracy was quantified by comparing registration transforms to ground truth motion imparted by a computer-controlled motion system and calculating target registration error (TRE) in anatomical landmarks. Initialization using a 315-length feature vector yielded median translation accuracy of 2.7 mm (3.4 mm interquartile range, IQR) in the phantom and 2.1 mm (2.5 mm IQR) in the cadaver. By comparison, storing the entire image set in the dictionary and optimizing correlation yielded a comparable median accuracy of 2.1 mm (2.8 mm IQR) in the phantom and 2.9 mm (3.5 mm IQR) in the cadaver. However, the dictionary-based method reduced memory requirements by 47× compared to storing the entire image set. The overall 3D error after registration measured using 3D landmarks was 3.2 mm (1.8 mm IQR) in the phantom and 3.0 mm (2.3 mm IQR) in the cadaver. The system was implemented in a 3D Slicer interface to facilitate translation to clinical studies. Haar-feature-based initialization provided accuracy and robustness at a level sufficient for real-time registration using an entirely image-based method for ultrasound navigation. Such an approach could improve the accuracy and safety of spinal interventions in broad utilization, since it is entirely software-based and can operate free from the cost and workflow requirements of surgical trackers.
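The refinement stage described above (normalized cross-correlation optimized with Powell's method) can be sketched in 2D as follows. This toy version estimates only a translation between synthetic smooth images; the paper's actual slice-to-volume problem adds rotation and out-of-plane parameters, and the dictionary-based Haar initialization that precedes this step is not shown.

```python
# Sketch of derivative-free NCC registration with Powell's method.
# Images are synthetic; only a 2D translation is recovered here.
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift
from scipy.optimize import minimize

def ncc(a, b):
    """Normalized cross-correlation between two equally sized arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

rng = np.random.default_rng(3)
fixed = gaussian_filter(rng.normal(size=(64, 64)), 3)   # smooth "anatomy"
true_t = np.array([2.0, -3.0])                          # ground-truth shift
moving = nd_shift(fixed, true_t)

# Powell's method needs no gradients, matching the derivative-free NCC
# objective; we maximize NCC by minimizing its negative.
res = minimize(lambda t: -ncc(nd_shift(moving, -t), fixed),
               x0=np.zeros(2), method="Powell")
print("recovered shift:", np.round(res.x, 1))
```

In the paper's pipeline, the dictionary initialization supplies the starting pose so that this local optimization starts inside the correct basin.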
Affiliation(s)
- T De Silva
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
7. A spline-based non-linear diffeomorphism for multimodal prostate registration. Med Image Anal 2012; 16:1259-1279. [PMID: 22705289] [DOI: 10.1016/j.media.2012.04.006]