1
Cannon PC, Setia SA, Klein-Gardner S, Kavoussi NL, Webster RJ, Herrell SD. Are 3D Image Guidance Systems Ready for Use? A Comparative Analysis of 3D Image Guidance Implementations in Minimally Invasive Partial Nephrectomy. J Endourol 2024; 38:395-407. PMID: 38251637; PMCID: PMC10979686; DOI: 10.1089/end.2023.0059.
Abstract
Introduction: Three-dimensional image-guided surgical (3D-IGS) systems for minimally invasive partial nephrectomy (MIPN) can potentially improve the efficiency and accuracy of intraoperative anatomical localization and tumor resection. This review seeks to analyze the current state of research regarding 3D-IGS, including the evaluation of clinical outcomes, system functionality, and qualitative insights regarding 3D-IGS's impact on surgical procedures. Methods: We have systematically reviewed the clinical literature pertaining to 3D-IGS deployed for MIPN. For inclusion, studies must produce a patient-specific 3D anatomical model from two-dimensional imaging. Data extracted from the studies include clinical results, registration (alignment of the 3D model to the surgical scene) method used, limitations, and data types reported. A subset of studies was qualitatively analyzed through an inductive coding approach to identify major themes and subthemes across the studies. Results: Twenty-five studies were included in the review. Eight (32%) studies reported clinical results that point to 3D-IGS improving multiple surgical outcomes. Manual registration was the most utilized (48%). Soft tissue deformation was the most cited limitation among the included studies. Many studies reported qualitative statements regarding surgeon accuracy improvement, but quantitative surgeon accuracy data were not reported. During the qualitative analysis, six major themes emerged across the nine applicable studies. They are as follows: 3D-IGS is necessary, 3D-IGS improved surgical outcomes, researcher/surgeon confidence in 3D-IGS system, enhanced surgeon ability/accuracy, anatomical explanation for qualitative assessment, and claims without data or reference to support. Conclusions: Currently, clinical outcomes are the main source of quantitative data available to point to 3D-IGS's efficacy. 
However, the literature qualitatively suggests the benefit of accurate 3D-IGS for robotic partial nephrectomy.
Affiliation(s)
- Piper C. Cannon, Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Shaan A. Setia, Department of Urologic Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Stacy Klein-Gardner, Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Nicholas L. Kavoussi, Department of Urologic Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Robert J. Webster, Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- S. Duke Herrell, Department of Urologic Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
2
Makiyama K, Komeya M, Tatenuma T, Noguchi G, Ohtake S. Patient-specific simulations and navigation systems for partial nephrectomy. Int J Urol 2023; 30:1087-1095. PMID: 37622340; DOI: 10.1111/iju.15287.
Abstract
Partial nephrectomy (PN) is the standard treatment for T1 renal cell carcinoma. PN is affected more by surgical variations and requires greater surgical experience than radical nephrectomy. Patient-specific simulations and navigation systems may help to reduce the surgical experience required for PN. Recent advances in three-dimensional (3D) virtual reality (VR) imaging and 3D printing technology have allowed accurate patient-specific simulations and navigation systems. We reviewed previous studies about patient-specific simulations and navigation systems for PN. Recently, image reconstruction technology has developed, and commercial software that converts two-dimensional images into 3D images has become available. Many urologists are now able to view 3DVR images when preparing for PN. Surgical simulations based on 3DVR images can change surgical plans and improve surgical outcomes, and are useful during patient consultations. Patient-specific simulators that are capable of simulating surgical procedures, the gold-standard form of patient-specific simulations, have also been reported. Besides VR, 3D printing is also useful for understanding patient-specific information. Some studies have reported simulation and navigation systems for PN based on solid 3D models. Patient-specific simulations are a form of preoperative preparation, whereas patient-specific navigation is used intraoperatively. Navigation-assisted PN procedures using 3DVR images have become increasingly common, especially in robotic surgery. Some studies found that these systems produced improvements in surgical outcomes. Once its accuracy has been confirmed, it is hoped that this technology will spread further and become more generalized.
Affiliation(s)
- Kazuhide Makiyama, Department of Urology, Yokohama City University Graduate School of Medicine, Yokohama, Kanagawa, Japan
- Mitsuru Komeya, Department of Urology, Yokohama City University Graduate School of Medicine, Yokohama, Kanagawa, Japan
- Tomoyuki Tatenuma, Department of Urology, Yokohama City University Graduate School of Medicine, Yokohama, Kanagawa, Japan
- Go Noguchi, Department of Urology, Yokohama City University Graduate School of Medicine, Yokohama, Kanagawa, Japan
- Shinji Ohtake, Department of Urology, Yokohama City University Graduate School of Medicine, Yokohama, Kanagawa, Japan
3
Bierbrier J, Eskandari M, Giovanni DAD, Collins DL. Toward Estimating MRI-Ultrasound Registration Error in Image-Guided Neurosurgery. IEEE Trans Ultrason Ferroelectr Freq Control 2023; 70:999-1015. PMID: 37022005; DOI: 10.1109/tuffc.2023.3239320.
Abstract
Image-guided neurosurgery allows surgeons to view their tools in relation to preoperatively acquired patient images and models. To continue using neuronavigation systems throughout operations, image registration between preoperative images [typically magnetic resonance imaging (MRI)] and intraoperative images (e.g., ultrasound) is common to account for brain shift (deformations of the brain during surgery). We implemented a method to estimate MRI-ultrasound registration errors, with the goal of enabling surgeons to quantitatively assess the performance of linear or nonlinear registrations. To the best of our knowledge, this is the first dense error estimating algorithm applied to multimodal image registrations. The algorithm is based on a previously proposed sliding-window convolutional neural network that operates on a voxelwise basis. To create training data where the true registration error is known, simulated ultrasound images were created from preoperative MRI images and artificially deformed. The model was evaluated on artificially deformed simulated ultrasound data and real ultrasound data with manually annotated landmark points. The model achieved a mean absolute error (MAE) of 0.977 ± 0.988 mm and a correlation of 0.8 ± 0.062 on the simulated ultrasound data, and an MAE of 2.24 ± 1.89 mm and a correlation of 0.246 on the real ultrasound data. We discuss concrete areas to improve the results on real ultrasound data. Our progress lays the foundation for future developments and ultimately implementation of clinical neuronavigation systems.
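The two summary statistics reported above, mean absolute error and correlation between predicted and ground-truth voxelwise registration error, can be sketched as follows. This is an illustrative computation only, not the authors' implementation; the function and variable names are invented.

```python
import numpy as np

def dense_error_metrics(predicted_mm, true_mm):
    """Summarize a predicted dense registration-error map against ground truth.

    predicted_mm, true_mm: arrays of per-voxel error magnitudes in millimeters.
    Returns (MAE, Pearson correlation), the two statistics used to report
    performance of a dense error-estimation model.
    """
    p = np.asarray(predicted_mm, dtype=float).ravel()
    t = np.asarray(true_mm, dtype=float).ravel()
    mae = float(np.mean(np.abs(p - t)))            # mean absolute error in mm
    corr = float(np.corrcoef(p, t)[0, 1])          # Pearson correlation
    return mae, corr
```

Note that MAE and correlation capture different failure modes: a model with a constant offset has perfect correlation but nonzero MAE, which is why both are reported.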
4
Cannon PC, Ferguson JM, Pitt EB, Shrand JA, Setia SA, Nimmagadda N, Barth EJ, Kavoussi NL, Galloway RL, Herrell SD, Webster RJ. A Safe Framework for Quantitative In Vivo Human Evaluation of Image Guidance. IEEE Open J Eng Med Biol 2023; 5:133-139. PMID: 38487093; PMCID: PMC10939321; DOI: 10.1109/ojemb.2023.3271853.
Abstract
Goal: We present a new framework for in vivo image guidance evaluation and provide a case study on robotic partial nephrectomy. Methods: This framework (called the "bystander protocol") involves two surgeons, one who solely performs the therapeutic process without image guidance, and another who solely periodically collects data to evaluate image guidance. This isolates the evaluation from the therapy, so that in-development image guidance systems can be tested without risk of negatively impacting the standard of care. We provide a case study applying this protocol in clinical cases during robotic partial nephrectomy surgery. Results: The bystander protocol was performed successfully in 6 patient cases. We find average lesion centroid localization error with our IGS system to be 6.5 mm in vivo compared to our prior result of 3.0 mm in phantoms. Conclusions: The bystander protocol is a safe, effective method for testing in-development image guidance systems in human subjects.
Affiliation(s)
- Naren Nimmagadda, Vanderbilt University Medical Center, Nashville, TN 37232, USA; The Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
5
Yuan JH, Li QS, Shen Y. Visual analysis of image-guided radiation therapy based on bibliometrics: A review. Medicine (Baltimore) 2023; 102:e32989. PMID: 36827068; DOI: 10.1097/md.0000000000032989.
Abstract
Radiation therapy plays an important role in tumor treatment, and the development of image-guided radiation therapy (IGRT) technology provides a strong guarantee for precise radiation therapy of tumors. However, bibliometric studies on IGRT research have rarely been reported. This study uses literature collected from the Web of Science during 1987 to 2021 as a sample and applies bibliometric methods to reveal the current research status, hotspots, and development trends in IGRT. Based on 6407 papers published in the Web of Science during 1987 to 2021, we utilized Microsoft Excel 2007 and CiteSpace software to perform statistical analysis and visualization of IGRT. A total of 6407 articles were included; IGRT research has gone through four stages: a budding period, a growth period, an outbreak period, and a stationary period. The research is mainly distributed in the category of Radiology, Nuclear Medicine and Medical Imaging, which intersects with the categories of Materials, Physics, and Mathematics. Yin FF, Tanderup K, and Sonke JJ are highly productive scholars who are active in IGRT research, while Jaffray DA, van Herk M, and Guckenberger M are authors with high impact in this field. Scholars cooperate closely within their own teams but only weakly across teams. The League of European Research Universities, the University of Texas System, the University of Toronto, and Princess Margaret Cancer are the main research institutions in this field. The United States has the most research literature, followed by China and Germany. The 6407 articles are distributed across 712 journals; the top three journals are Med Phys, Int J Radiat Oncol, and Radiother Oncol. Precise registration, intelligence, magnetic resonance guidance, and deep learning are current research hotspots. These results demonstrate that research in this field has become relatively mature and fruitful over the past 35 years, providing a solid theoretical basis and practical experience for precision radiotherapy.
Affiliation(s)
- Jin-Hui Yuan, Department of Radiation Oncology, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
6
Evaluation of a Developed MRI-Guided Focused Ultrasound System in 7 T Small Animal MRI and Proof-of-Concept in a Prostate Cancer Xenograft Model to Improve Radiation Therapy. Cells 2023; 12:481. PMID: 36766824; PMCID: PMC9914251; DOI: 10.3390/cells12030481.
Abstract
Focused ultrasound (FUS) can be used to physiologically change or destroy tissue in a non-invasive way. A few commercial systems have clinical approval for the thermal ablation of solid tumors, for the treatment of neurological diseases, and for palliative pain management of bone metastases. However, the thermal effects of FUS are also known to produce various biological effects, such as inhibition of DNA damage repair, reduction in tumor hypoxia, and induction of apoptosis. Here, we studied radiosensitization by combining FUS with radiation therapy (RT) in a xenograft mouse model using newly developed MRI-compatible FUS equipment. Xenograft tumor-bearing mice were produced by subcutaneous injection of the human prostate cancer cell line PC-3. Animals were treated with FUS in a 7 T MRI scanner at 4.8 W/cm2 to reach ~45 °C, held for 30 min. The temperature was controlled in parallel via fiber optics and proton resonance frequency shift (PRF) MR thermometry. In the combination group, animals were treated with FUS followed by X-ray irradiation at a single dose of 10 Gy. The effects of FUS and RT were assessed via hematoxylin-eosin (H&E) staining. Tumor proliferation was detected by Ki67 immunohistochemistry, and apoptosis was measured by a TUNEL assay. At 40 days of follow-up, the impact of RT on cancer cells was significantly improved by FUS, as demonstrated by a reduction in cell nucleoli from 189 to 237 compared to RT alone. A 4.6-fold inhibition of tumor growth was observed in vivo in the FUS + RT group (85.3%), in contrast to a tumor volume of 393% in the untreated control. Our results demonstrate the feasibility of combined MRI-guided FUS and RT for the treatment of prostate cancer in a xenograft mouse model and may offer a route to less invasive cancer therapy through radiosensitization.
7
Huang H, Ali A, Liu Y, Xie H, Ullah S, Roy S, Song Z, Guo B, Xu J. Advances in image-guided drug delivery for antibacterial therapy. Adv Drug Deliv Rev 2023; 192:114634. PMID: 36503884; DOI: 10.1016/j.addr.2022.114634.
Abstract
The emergence of antibiotic-resistant bacterial strains is seriously endangering the global healthcare system. There is an urgent need to combine imaging with therapies to realize real-time monitoring of the pathological condition and treatment progress. Such combinations also provide guidance for exploring new medicines and enhancing treatment strategies to overcome the resistance bacteria have developed to existing conventional antibiotics. In this review, we provide a thorough overview of the most advanced image-guided approaches for bacterial diagnosis (e.g., computed tomography, magnetic resonance imaging, photoacoustic imaging, ultrasound imaging, fluorescence imaging, positron emission tomography, single-photon emission computed tomography, and multimodal imaging) and for therapy (e.g., photothermal therapy, photodynamic therapy, chemodynamic therapy, sonodynamic therapy, immunotherapy, and combination therapies). This review focuses on how to design and fabricate photo-responsive materials for improved image-guided bacterial theranostics. We present potential applications of the different image-guided modalities for both bacterial diagnosis and therapy with representative examples. Finally, we highlight the current challenges and future perspectives of image-guided approaches for the clinical translation of nano-theranostics in bacterial infection therapy. We envision that this review will guide future developments in image-guided systems for bacterial theranostics applications.
Affiliation(s)
- Haiyan Huang, Institute of Low-Dimensional Materials Genome Initiative, College of Chemistry and Environmental Engineering, Shenzhen University, Shenzhen 518060, China; School of Science and Shenzhen Key Laboratory of Flexible Printed Electronics Technology, Harbin Institute of Technology, Shenzhen 518055, China
- Arbab Ali, Beijing Key Laboratory of Farmland Soil Pollution Prevention and Remediation, College of Resources and Environmental Sciences, China Agricultural University, Beijing 100193, China; CAS Key Laboratory for Biomedical Effects of Nanomaterials and Nano Safety, CAS Center for Excellence in Nanoscience, National Center for Nanoscience and Technology, Beijing 100190, China
- Yi Liu, State Key Laboratory of Agricultural Microbiology, College of Science, Huazhong Agricultural University, Wuhan 430070, China
- Hui Xie, Institute of Low-Dimensional Materials Genome Initiative, College of Chemistry and Environmental Engineering, Shenzhen University, Shenzhen 518060, China; Chengdu Institute of Organic Chemistry, Chinese Academy of Sciences, Chengdu 610041, China
- Sana Ullah, Department of Biotechnology, Quaid-i-Azam University, Islamabad 45320, Pakistan; Natural and Medical Sciences Research Center, University of Nizwa, P.O. Box: 33, PC: 616, Oman
- Shubham Roy, School of Science and Shenzhen Key Laboratory of Flexible Printed Electronics Technology, Harbin Institute of Technology, Shenzhen 518055, China
- Zhiyong Song, State Key Laboratory of Agricultural Microbiology, College of Science, Huazhong Agricultural University, Wuhan 430070, China
- Bing Guo, School of Science and Shenzhen Key Laboratory of Flexible Printed Electronics Technology, Harbin Institute of Technology, Shenzhen 518055, China
- Jian Xu, Institute of Low-Dimensional Materials Genome Initiative, College of Chemistry and Environmental Engineering, Shenzhen University, Shenzhen 518060, China
8
Nimmagadda N, Ferguson JM, Kavoussi NL, Pitt B, Barth EJ, Granna J, Webster RJ, Herrell SD. Patient-specific, touch-based registration during robotic, image-guided partial nephrectomy. World J Urol 2021; 40:671-677. PMID: 34132897; DOI: 10.1007/s00345-021-03745-y.
Abstract
Image guidance during partial nephrectomy enables navigation within the operative field alongside a three-dimensional roadmap of renal anatomy generated from patient-specific imaging. Once a process performed entirely in the surgeon's mind, this anatomical mapping can now be standardized by technology for the benefit of all patients undergoing robot-assisted partial nephrectomy. Any surgeon will be able to visualize the kidney and key subsurface landmarks in real time within a three-dimensional simulation, with the goals of improving operative efficiency, decreasing surgical complications, and improving oncologic outcomes. For similar purposes, image guidance has already been adopted as a standard of care in other surgical fields; urology is now at the brink of this. This review summarizes touch-based approaches to image guidance during partial nephrectomy as the technology begins to enter in vivo human evaluation. The processes of segmentation, localization, registration, and re-registration are all described, along with their seamless integration into the da Vinci surgical system, which should facilitate earlier clinical adoption.
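Touch-based registration, as described above, aligns intraoperatively collected surface points with the preoperative 3D model. With known point correspondences this reduces to the classic least-squares rigid alignment (the Arun/Kabsch SVD solution), sketched here purely as an illustration of the registration step, not as the authors' pipeline; the function name is invented.

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid transform (R, t) mapping source points onto target.

    source, target: (N, 3) arrays of paired points (N >= 3, not collinear).
    Classic SVD solution; assumes correspondences are known, which
    surface-based systems typically obtain iteratively (e.g., ICP).
    """
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

In practice the correspondences are unknown, so an iterative surface-matching loop repeatedly estimates correspondences and re-solves this closed-form step.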
Affiliation(s)
- Naren Nimmagadda, Department of Urology, Vanderbilt Institute for Surgery and Engineering (VISE), Vanderbilt University Medical Center, Nashville, TN, USA
- James M. Ferguson, Department of Mechanical Engineering, Vanderbilt Institute for Surgery and Engineering (VISE), Vanderbilt University, Nashville, TN, USA
- Nicholas L. Kavoussi, Department of Urology, Vanderbilt Institute for Surgery and Engineering (VISE), Vanderbilt University Medical Center, Nashville, TN, USA
- Bryn Pitt, Department of Mechanical Engineering, Vanderbilt Institute for Surgery and Engineering (VISE), Vanderbilt University, Nashville, TN, USA
- Eric J. Barth, Department of Mechanical Engineering, Vanderbilt Institute for Surgery and Engineering (VISE), Vanderbilt University, Nashville, TN, USA
- Josephine Granna, Department of Mechanical Engineering, Vanderbilt Institute for Surgery and Engineering (VISE), Vanderbilt University, Nashville, TN, USA
- Robert J. Webster, Department of Mechanical Engineering, Vanderbilt Institute for Surgery and Engineering (VISE), Vanderbilt University, Nashville, TN, USA
- S. Duke Herrell, Department of Urology, Vanderbilt Institute for Surgery and Engineering (VISE), Vanderbilt University Medical Center, Nashville, TN, USA
9
Radu C, Fisher P, Mitrea D, Birlescu I, Marita T, Vancea F, Florian V, Tefas C, Badea R, Ștefănescu H, Nedevschi S, Pisla D, Hajjar NA. Integration of Real-Time Image Fusion in the Robotic-Assisted Treatment of Hepatocellular Carcinoma. Biology (Basel) 2020; 9:397. PMID: 33198415; PMCID: PMC7697343; DOI: 10.3390/biology9110397.
Abstract
Simple Summary: Hepatocellular carcinoma is one of the leading causes of cancer-related deaths worldwide. An image fusion system is developed for the robotic-assisted treatment of hepatocellular carcinoma that is capable not only of imaging data interpretation and reconstruction but also of automatic tumor detection. The optimization and integration of the image fusion system within a novel robotic system has the potential to demonstrate the feasibility of the robotic-assisted targeted treatment of hepatocellular carcinoma by showing benefits such as precision, patient safety, and procedure ergonomics.
Abstract: Hepatocellular carcinoma (HCC) is one of the leading causes of cancer-related deaths worldwide, with a mortality rate correlated with tumor staging; i.e., early detection and treatment are important factors for patient survival. This paper presents the development of a novel visualization and detection system for HCC, a module of a robotic system for the targeted treatment of HCC. The system has two modules: one for tumor visualization, which uses image fusion (IF) between preoperatively acquired computed tomography (CT) and real-time ultrasound (US), and a second for automatic HCC detection from CT images. Convolutional neural networks (CNNs), trained on 152 contrast-enhanced CT images, are used for tumor segmentation. Probabilistic maps are shown, as well as a 3D representation of the HCC within the liver tissue. The development of the visualization and detection system represents a milestone in testing the feasibility of a novel robotic system for the targeted treatment of HCC. Further optimizations of the tumor visualization and detection system are planned, with the aim of introducing more relevant functions and increasing its accuracy.
Affiliation(s)
- Corina Radu, Regional Institute of Gastroenterology and Hepatology Prof. Dr. O. Fodor, 400162 Cluj-Napoca, Romania; Iuliu Hatieganu University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania
- Petra Fisher, Regional Institute of Gastroenterology and Hepatology Prof. Dr. O. Fodor, 400162 Cluj-Napoca, Romania
- Delia Mitrea, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Iosif Birlescu, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania (corresponding author)
- Tiberiu Marita, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Flaviu Vancea, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Vlad Florian, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Cristian Tefas, Regional Institute of Gastroenterology and Hepatology Prof. Dr. O. Fodor, 400162 Cluj-Napoca, Romania; Iuliu Hatieganu University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania
- Radu Badea, Iuliu Hatieganu University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania
- Horia Ștefănescu, Regional Institute of Gastroenterology and Hepatology Prof. Dr. O. Fodor, 400162 Cluj-Napoca, Romania
- Sergiu Nedevschi, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Doina Pisla, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania (corresponding author)
- Nadim Al Hajjar, Regional Institute of Gastroenterology and Hepatology Prof. Dr. O. Fodor, 400162 Cluj-Napoca, Romania; Iuliu Hatieganu University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania
10
Léger É, Reyes J, Drouin S, Popa T, Hall JA, Collins DL, Kersten-Oertel M. MARIN: an open-source mobile augmented reality interactive neuronavigation system. Int J Comput Assist Radiol Surg 2020; 15:1013-1021. DOI: 10.1007/s11548-020-02155-6.
11
Daly MJ, Wilson BC, Irish JC, Jaffray DA. Navigated non-contact fluorescence tomography. Phys Med Biol 2019; 64:135021. DOI: 10.1088/1361-6560/ab1f33.
12
13
Yang X, Clements LW, Luo M, Narasimhan S, Thompson RC, Dawant BM, Miga MI. Stereovision-based integrated system for point cloud reconstruction and simulated brain shift validation. J Med Imaging (Bellingham) 2017; 4:035002. PMID: 28924572; DOI: 10.1117/1.jmi.4.3.035002.
Abstract
Intraoperative soft tissue deformation, referred to as brain shift, compromises the application of current image-guided surgery navigation systems in neurosurgery. A computational model driven by sparse data has been proposed as a cost-effective method to compensate for cortical surface and volumetric displacements. We present a mock environment developed to acquire stereo images from a tracked operating microscope and to reconstruct three-dimensional point clouds from these images. A reconstruction error of 1 mm is estimated using a phantom with a known geometry and independently measured deformation extent. The microscope is tracked via an attached rigid body that allows a commercial optical tracking system to record the microscope's position as it moves during the procedure. Point clouds reconstructed at different microscope positions are registered into the same space to compute feature displacements. Using our mock craniotomy device, realistic cortical deformations are generated. When comparing mock vessel displacements measured with the tracked stereo microscope against those determined by an independent optically tracked stylus, the displacement error was [Formula: see text] on average. These results demonstrate the practicality of using a tracked stereoscopic microscope as an alternative to laser range scanners for collecting sufficient intraoperative information for brain shift correction.
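The point cloud reconstruction step above, recovering 3D positions from a calibrated stereo pair whose pose is known from optical tracking, can be illustrated with standard linear (DLT) triangulation. The projection matrices and coordinates below are generic placeholders, not the paper's microscope calibration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: (3, 4) camera projection matrices.
    x1, x2: (2,) pixel coordinates of the same point in each view.
    Returns the 3D point, expressed in whatever common frame the
    projection matrices are defined in (e.g., a tracked world frame).
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

Because each projection matrix can incorporate the tracked microscope pose, point clouds triangulated at different microscope positions land in a common space, which is what makes the feature-displacement comparison described above possible.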
Affiliation(s)
- Xiaochen Yang, Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, United States
- Logan W. Clements, Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee, United States
- Ma Luo, Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee, United States
- Saramati Narasimhan, Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee, United States
- Reid C. Thompson, Department of Neurological Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Benoit M. Dawant, Department of Electrical Engineering and Computer Science and Department of Biomedical Engineering, Vanderbilt University; Department of Radiology, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Michael I. Miga, Department of Biomedical Engineering, Vanderbilt University; Department of Neurological Surgery and Department of Radiology, Vanderbilt University Medical Center, Nashville, Tennessee, United States
14
Buteikienė D, Kybartaitė-Žilienė A, Kriaučiūnienė L, Barzdžiukas V, Janulevičienė I, Paunksnis A. Morphometric parameters of the optic disc in normal and glaucomatous eyes based on time-domain optical coherence tomography image analysis. Medicina (Kaunas) 2017; 53:242-252. PMID: 28867515; DOI: 10.1016/j.medici.2017.05.007.
Abstract
BACKGROUND AND OBJECTIVE: Assessment of optic disc morphology is essential in the diagnosis and management of visual impairment. The aim of this study was to evaluate associations between optic disc morphometric parameters, i.e., size and shape, and age, gender, and ocular axial length in normal and glaucomatous eyes, based on time-domain optical coherence tomography image analysis. MATERIALS AND METHODS: This was a case-control study of 998 normal eyes and 394 eyes with primary open-angle glaucoma that underwent ophthalmological examination and time-domain optical coherence tomography scanning. Areas and shapes of the disc, cup, and neuroretinal rim were analyzed. RESULTS: The shape of the optic disc did not differ between the study groups (normal and glaucomatous eyes), but the disc area of the primary open-angle glaucoma group was significantly larger. The shape of small discs differed significantly between the study groups, whereas the shapes of medium and large discs did not. The central area of the disc (the cup area) was significantly larger in the case group, and its shape differed significantly between the study groups. For small discs, no significant differences were found between the study groups in cup area, cup shape, or neuroretinal rim area (the nerve fibers at the edge of the disc). There were significant associations between age, gender, and ocular axial length and morphometric parameters of the optic disc. CONCLUSIONS: Informative results regarding optic disc size and shape across various ocular characteristics were obtained for the healthy control group and patients with primary open-angle glaucoma. Both study groups were large, which makes the findings an interesting and important contribution to the field.
Affiliation(s)
- Dovilė Buteikienė, Department of Ophthalmology, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
- Asta Kybartaitė-Žilienė, Laboratory of Biophysics and Bioinformatics, Neuroscience Institute, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
- Loresa Kriaučiūnienė, Department of Ophthalmology, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
- Valerijus Barzdžiukas, Department of Ophthalmology, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
- Ingrida Janulevičienė, Department of Ophthalmology, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
- Alvydas Paunksnis, Department of Ophthalmology, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
|
15
|
Park S, Jang J, Kim J, Kim YS, Kim C. Real-time Triple-modal Photoacoustic, Ultrasound, and Magnetic Resonance Fusion Imaging of Humans. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:1912-1921. [PMID: 28436857 DOI: 10.1109/tmi.2017.2696038] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Imaging that fuses multiple modalities has become a useful tool for diagnosis and therapeutic monitoring. As a next step, real-time fusion imaging has attracted interest as a tool to guide surgery. One widespread fusion imaging technique in surgery combines real-time ultrasound (US) imaging with pre-acquired magnetic resonance (MR) imaging. However, US imaging visualizes only structural information, with relatively low contrast. Here, we present a photoacoustic (PA), US, and MR fusion imaging system which integrates a clinical PA/US imaging system with an optical-tracking-based navigation subsystem. Through co-registration of pre-acquired MR and real-time PA/US images, overlaid PA, US, and MR images can be displayed concurrently in real time. We successfully acquired fusion images from a phantom and from a blood vessel in a human forearm. This fusion imaging can complementarily delineate the morphological and vascular structure of tissues with good contrast and sensitivity, has a well-established user interface, and can be flexibly integrated into clinical environments. As a novel fusion technique, the proposed triple-modal imaging can provide comprehensive image guidance in real time and can potentially assist various surgeries.
|
16
|
Multifunctional nanoparticles as a tissue adhesive and an injectable marker for image-guided procedures. Nat Commun 2017; 8:15807. [PMID: 28722024 PMCID: PMC5524935 DOI: 10.1038/ncomms15807] [Citation(s) in RCA: 45] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2016] [Accepted: 05/02/2017] [Indexed: 12/16/2022] Open
Abstract
Tissue adhesives have emerged as an alternative to sutures and staples for wound closure and reconnection of injured tissues after surgery or trauma. Owing to their convenience and effectiveness, these adhesives have received growing attention particularly in minimally invasive procedures. For safe and accurate applications, tissue adhesives should be detectable via clinical imaging modalities and be highly biocompatible for intracorporeal procedures. However, few adhesives meet all these requirements. Herein, we show that biocompatible tantalum oxide/silica core/shell nanoparticles (TSNs) exhibit not only high contrast effects for real-time imaging but also strong adhesive properties. Furthermore, the biocompatible TSNs cause much less cellular toxicity and less inflammation than a clinically used, imageable tissue adhesive (that is, a mixture of cyanoacrylate and Lipiodol). Because of their multifunctional imaging and adhesive property, the TSNs are successfully applied as a hemostatic adhesive for minimally invasive procedures and as an immobilized marker for image-guided procedures.
|
17
|
Vijayan RC, Thompson RC, Chambless LB, Morone PJ, He L, Clements LW, Griesenauer RH, Kang H, Miga MI. Android application for determining surgical variables in brain-tumor resection procedures. J Med Imaging (Bellingham) 2017; 4:015003. [PMID: 28331887 DOI: 10.1117/1.jmi.4.1.015003] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2016] [Accepted: 02/13/2017] [Indexed: 11/14/2022] Open
Abstract
The fidelity of image-guided neurosurgical procedures is often compromised due to the mechanical deformations that occur during surgery. In recent work, a framework was developed to predict the extent of this brain shift in brain-tumor resection procedures. The approach uses preoperatively determined surgical variables to predict brain shift and then subsequently corrects the patient's preoperative image volume to more closely match the intraoperative state of the patient's brain. However, a clinical workflow difficulty with the execution of this framework is the preoperative acquisition of surgical variables. To simplify and expedite this process, an Android, Java-based application was developed for tablets to provide neurosurgeons with the ability to manipulate three-dimensional models of the patient's neuroanatomy and determine an expected head orientation, craniotomy size and location, and trajectory to be taken into the tumor. These variables can then be exported for use as inputs to the biomechanical model associated with the correction framework. A multisurgeon, multicase mock trial was conducted to compare the accuracy of the virtual plan to that of a mock physical surgery. It was concluded that the Android application was an accurate, efficient, and timely method for planning surgical variables.
Affiliation(s)
- Rohan C Vijayan, Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Reid C Thompson, Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, Tennessee, United States
- Lola B Chambless, Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, Tennessee, United States
- Peter J Morone, Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, Tennessee, United States
- Le He, Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, Tennessee, United States
- Logan W Clements, Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Rebekah H Griesenauer, Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Hakmook Kang, Vanderbilt University Medical Center, Department of Biostatistics, Nashville, Tennessee, United States
- Michael I Miga, Vanderbilt University, Department of Biomedical Engineering; Vanderbilt University Medical Center, Department of Neurological Surgery; Vanderbilt University Medical Center, Department of Radiology and Radiological Sciences, Nashville, Tennessee, United States
|
18
|
Malthouse T, Kasivisvanathan V, Raison N, Lam W, Challacombe B. The future of partial nephrectomy. Int J Surg 2016; 36:560-567. [PMID: 26975430 DOI: 10.1016/j.ijsu.2016.03.024] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2016] [Accepted: 03/10/2016] [Indexed: 12/29/2022]
Abstract
Innovation has accelerated in recent times owing to factors such as the globalization of communication, yet there are also more barriers and safeguards in place than ever before as we strive to streamline the process. From the first planned partial nephrectomy, completed in 1887, it took over a century for the procedure to become recommended practice for small renal tumours. At present, the identified areas for improvement and innovation are (1) preserving renal parenchyma, (2) optimising pre-operative eGFR, and (3) reducing warm ischaemia time; all three are statistically significant predictors of post-operative renal function. Urologists have a proud history of embracing innovation and have experimented with different clamping techniques for the renal vasculature, image guidance in robotics, renal hypothermia, lasers, and new robots under development. The da Vinci platform may soon lose its monopoly on this market as novel technology emerges with added features, such as haptic feedback, at reduced cost. As ever, our predictions of the future may well fall wide of the mark, but to progress one must open the mind to the possibilities that already exist, as evolution of existing technology often appears to be a revolution in hindsight.
Affiliation(s)
- Theo Malthouse, Guy's and St Thomas' NHS Foundation Trust, Great Maze Pond, London SE1 9RT, United Kingdom
- Veeru Kasivisvanathan, University College London Hospital, 235 Euston Rd, Fitzrovia, London NW1 2BU, United Kingdom
- Nicholas Raison, King's College Hospital NHS Foundation Trust, Denmark Hill, London SE5 9RS, United Kingdom
- Wayne Lam, Guy's and St Thomas' NHS Foundation Trust, Great Maze Pond, London SE1 9RT, United Kingdom
- Ben Challacombe, Guy's and St Thomas' NHS Foundation Trust, Great Maze Pond, London SE1 9RT, United Kingdom
|
19
|
Azagury DE, Dua MM, Barrese JC, Henderson JM, Buchs NC, Ris F, Cloyd JM, Martinie JB, Razzaque S, Nicolau S, Soler L, Marescaux J, Visser BC. Image-guided surgery. Curr Probl Surg 2015; 52:476-520. [PMID: 26683419 DOI: 10.1067/j.cpsurg.2015.10.001] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2015] [Accepted: 10/01/2015] [Indexed: 12/11/2022]
Affiliation(s)
- Dan E Azagury, Department of Surgery, Stanford University School of Medicine, Stanford, CA
- Monica M Dua, Department of Surgery, Stanford University School of Medicine, Stanford, CA
- James C Barrese, Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA
- Jaimie M Henderson, Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA
- Nicolas C Buchs, Department of Surgery, University Hospital of Geneva, Clinic for Visceral and Transplantation Surgery, Geneva, Switzerland
- Frederic Ris, Department of Surgery, University Hospital of Geneva, Clinic for Visceral and Transplantation Surgery, Geneva, Switzerland
- Jordan M Cloyd, Department of Surgery, Stanford University School of Medicine, Stanford, CA
- John B Martinie, Department of Surgery, Carolinas Healthcare System, Charlotte, NC
- Sharif Razzaque, Department of Surgery, Carolinas Healthcare System, Charlotte, NC
- Stéphane Nicolau, IRCAD (Research Institute Against Digestive Cancer), Strasbourg, France
- Luc Soler, IRCAD (Research Institute Against Digestive Cancer), Strasbourg, France
- Jacques Marescaux, IRCAD (Research Institute Against Digestive Cancer), Strasbourg, France
- Brendan C Visser, Department of Surgery, Stanford University School of Medicine, Stanford, CA
|
20
|
Computational Modeling for Enhancing Soft Tissue Image Guided Surgery: An Application in Neurosurgery. Ann Biomed Eng 2015; 44:128-38. [PMID: 26354118 DOI: 10.1007/s10439-015-1433-1] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2015] [Accepted: 08/18/2015] [Indexed: 01/14/2023]
Abstract
With the recent advances in computing, the opportunities to translate computational models to more integrated roles in patient treatment are expanding at an exciting rate. One area of considerable development has been directed towards correcting soft tissue deformation within image guided neurosurgery applications. This review captures the efforts that have been undertaken towards enhancing neuronavigation by the integration of soft tissue biomechanical models, imaging and sensing technologies, and algorithmic developments. In addition, the review speaks to the evolving role of modeling frameworks within surgery and concludes with some future directions beyond neurosurgical applications.
|
21
|
|
22
|
Herrell SD. Editorial comment. Urology 2014; 83:506-7. [PMID: 24468518 DOI: 10.1016/j.urology.2013.09.054] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Affiliation(s)
- S Duke Herrell, Department of Urologic Surgery, Vanderbilt University Medical Center, Nashville, TN; Department of Biomedical Engineering, Vanderbilt University, Nashville, TN
|
23
|
Gerber N, Gavaghan KA, Bell BJ, Williamson TM, Weisstanner C, Caversaccio MD, Weber S. High-accuracy patient-to-image registration for the facilitation of image-guided robotic microsurgery on the head. IEEE Trans Biomed Eng 2013; 60:960-8. [PMID: 23340586 DOI: 10.1109/tbme.2013.2241063] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Image-guided microsurgery requires accuracies an order of magnitude higher than today's navigation systems provide. A critical step toward the achievement of such low-error requirements is a highly accurate and verified patient-to-image registration. With the aim of reducing target registration error to a level that would facilitate the use of image-guided robotic microsurgery on the rigid anatomy of the head, we have developed a semiautomatic fiducial detection technique. Automatic force-controlled localization of fiducials on the patient is achieved through the implementation of a robotic-controlled tactile search within the head of a standard surgical screw. Precise detection of the corresponding fiducials in the image data is realized using an automated model-based matching algorithm on high-resolution, isometric cone beam CT images. Verification of the registration technique on phantoms demonstrated that through the elimination of user variability, clinically relevant target registration errors of approximately 0.1 mm could be achieved.
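The patient-to-image registration described here reduces, once corresponding fiducials are localized in both spaces, to a paired-point rigid alignment. The following is a minimal illustrative sketch of the closed-form Arun/Kabsch least-squares solution, not the authors' implementation; the function names are ours.

```python
import numpy as np

def rigid_register(fixed, moving):
    """Closed-form least-squares rigid transform (R, t) so that
    moving @ R.T + t best matches fixed.

    `fixed` and `moving` are (N, 3) arrays of corresponding fiducial
    positions (e.g., image space and patient space), solved via SVD.
    """
    cf, cm = fixed.mean(axis=0), moving.mean(axis=0)
    H = (moving - cm).T @ (fixed - cf)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cf - R @ cm
    return R, t

def rms_fiducial_error(fixed, moving, R, t):
    """RMS fiducial registration error after applying (R, t)."""
    res = fixed - (moving @ R.T + t)
    return float(np.sqrt((res ** 2).sum(axis=1).mean()))
```

The sub-0.1 mm targeting errors reported above depend on accurate fiducial localization feeding this step; the solver itself is exact for noise-free correspondences.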
Affiliation(s)
- Nicolas Gerber, ARTORG Center for Biomedical Engineering Research, University of Bern, 3012 Bern, Switzerland
|
24
|
Dang H, Otake Y, Schafer S, Stayman JW, Kleinszig G, Siewerdsen JH. Robust methods for automatic image-to-world registration in cone-beam CT interventional guidance. Med Phys 2012; 39:6484-98. [PMID: 23039683 DOI: 10.1118/1.4754589] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Real-time surgical navigation relies on accurate image-to-world registration to align the coordinate systems of the image and patient. Conventional manual registration can present a workflow bottleneck and is prone to manual error and intraoperator variability. This work reports alternative means of automatic image-to-world registration, each method involving an automatic registration marker (ARM) used in conjunction with C-arm cone-beam CT (CBCT). The first involves a Known-Model registration method in which the ARM is a predefined tool, and the second is a Free-Form method in which the ARM is freely configurable. METHODS Studies were performed using a prototype C-arm for CBCT and a surgical tracking system. A simple ARM was designed with markers comprising a tungsten sphere within infrared reflectors to permit detection of markers in both x-ray projections and by an infrared tracker. The Known-Model method exercised a predefined specification of the ARM in combination with 3D-2D registration to estimate the transformation that yields the optimal match between forward projection of the ARM and the measured projection images. The Free-Form method localizes markers individually in projection data by a robust Hough transform approach extended from previous work, backprojected to 3D image coordinates based on C-arm geometric calibration. Image-domain point sets were transformed to world coordinates by rigid-body point-based registration. The robustness and registration accuracy of each method were tested in comparison to manual registration across a range of body sites (head, thorax, and abdomen) of interest in CBCT-guided surgery, including cases with interventional tools in the radiographic scene. RESULTS The automatic methods exhibited similar target registration error (TRE) and were comparable or superior to manual registration for placement of the ARM within ∼200 mm of C-arm isocenter.
Marker localization in projection data was robust across all anatomical sites, including challenging scenarios involving the presence of interventional tools. The reprojection error of marker localization was independent of the distance of the ARM from isocenter, and the overall TRE was dominated by the configuration of individual fiducials and distance from the target as predicted by theory. The median TRE increased with greater ARM-to-isocenter distance (e.g., for the Free-Form method, TRE increasing from 0.78 mm to 2.04 mm at distances of ∼75 mm and 370 mm, respectively). The median TRE within ∼200 mm distance was consistently lower than that of the manual method (TRE = 0.82 mm). Registration performance was independent of anatomical site (head, thorax, and abdomen). The Free-Form method demonstrated a statistically significant improvement (p = 0.0044) in reproducibility compared to manual registration (0.22 mm versus 0.30 mm, respectively). CONCLUSIONS Automatic image-to-world registration methods demonstrate the potential for improved accuracy, reproducibility, and workflow in CBCT-guided procedures. A Free-Form method was shown to exhibit robustness against anatomical site, with comparable or improved TRE compared to manual registration. It was also comparable or superior in performance to a Known-Model method in which the ARM configuration is specified as a predefined tool, thereby allowing configuration of fiducials on the fly or attachment to the patient.
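TRE figures like the medians quoted above are obtained by mapping held-out target points (points not used to drive the registration) through the estimated transform and measuring the residual distance to their gold-standard positions. A minimal sketch of that evaluation step, with a hypothetical helper name of our own choosing, not the study's code:

```python
import numpy as np

def target_registration_error(targets_world, targets_image, R, t):
    """Per-target TRE (same units as the inputs, e.g. mm).

    Distance between gold-standard world-space target positions and
    image-space targets mapped through the estimated rigid registration
    (R, t). Targets are deliberately excluded from the fiducial set used
    to compute (R, t), which is why TRE differs from fiducial error.
    """
    mapped = targets_image @ R.T + t
    return np.linalg.norm(targets_world - mapped, axis=1)
```

The median of the returned array over many trials corresponds to the median TRE values reported in the abstract.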
Affiliation(s)
- H Dang, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21202, USA
|
25
|
Valdés PA, Leblond F, Jacobs VL, Wilson BC, Paulsen KD, Roberts DW. Quantitative, spectrally-resolved intraoperative fluorescence imaging. Sci Rep 2012; 2:798. [PMID: 23152935 PMCID: PMC3497712 DOI: 10.1038/srep00798] [Citation(s) in RCA: 74] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2012] [Accepted: 10/01/2012] [Indexed: 01/19/2023] Open
Abstract
Intraoperative visual fluorescence imaging (vFI) has emerged as a promising aid to surgical guidance, but does not fully exploit the potential of the fluorescent agents that are currently available. Here, we introduce a quantitative fluorescence imaging (qFI) approach that converts spectrally-resolved data into images of absolute fluorophore concentration pixel-by-pixel across the surgical field of view (FOV). The resulting estimates are linear, accurate, and precise relative to true values, and spectral decomposition of multiple fluorophores is also achieved. Experiments with protoporphyrin IX in a glioma rodent model demonstrate in vivo quantitative and spectrally-resolved fluorescence imaging of infiltrating tumor margins for the first time. Moreover, we present images from human surgery which detect residual tumor not evident with state-of-the-art vFI. The wide-field qFI technique has broad implications for intraoperative surgical guidance because it provides near real-time quantitative assessment of multiple fluorescent biomarkers across the operative field.
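The pixel-wise spectral decomposition this abstract describes is, at its core, a linear unmixing problem: each measured emission spectrum is modeled as a combination of known fluorophore basis spectra. A hedged sketch under the assumption of fixed, known basis spectra (the function and variable names are ours, not the qFI system's API):

```python
import numpy as np

def unmix(spectrum, basis):
    """Linear spectral decomposition of one pixel's emission spectrum.

    spectrum : (W,) measured intensities at W wavelengths
    basis    : (W, F) known basis spectra, one column per fluorophore
    Returns one coefficient per fluorophore; after intensity calibration
    these are proportional to absolute fluorophore concentrations.
    """
    coeffs, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
    return np.clip(coeffs, 0.0, None)   # physical concentrations are >= 0
```

Applying this independently at every pixel of the field of view yields the per-fluorophore concentration images described above.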
Affiliation(s)
- Pablo A Valdés, Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
|
26
|
Galloway RL, Herrell SD, Miga MI. Image-Guided Abdominal Surgery and Therapy Delivery. JOURNAL OF HEALTHCARE ENGINEERING 2012; 3:203-228. [PMID: 25077012 PMCID: PMC4112601 DOI: 10.1260/2040-2295.3.2.203] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/01/2011] [Accepted: 07/01/2011] [Indexed: 01/31/2023]
Abstract
Image-Guided Surgery has become the standard of care in intracranial neurosurgery providing more exact resections while minimizing damage to healthy tissue. Moving that process to abdominal organs presents additional challenges in the form of image segmentation, image to physical space registration, organ motion and deformation. In this paper, we present methodologies and results for addressing these challenges in two specific organs: the liver and the kidney.
Affiliation(s)
- Robert L. Galloway, Department of Biomedical Engineering; Department of Neurosurgery; Department of Surgery, Vanderbilt University
- Michael I. Miga, Department of Biomedical Engineering; Department of Neurosurgery; Department of Radiology and Radiological Sciences, Vanderbilt University
|
27
|
Kaladji A, Lucas A, Cardon A, Haigron P. Computer-aided surgery: concepts and applications in vascular surgery. Perspect Vasc Surg Endovasc Ther 2012; 24:23-7. [PMID: 22513982 DOI: 10.1177/1531003512442092] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Computer-aided surgery makes use of a variety of technologies and information sources. The challenge over the past 10 years has been to apply these methods to tissues that deform, as vessels do when relatively rigid objects (Lunderquist rigid guide wire, aortic prosthesis, etc.) are introduced into them. Three stages of computer-aided endovascular surgery are examined: sizing, planning, and intraoperative assistance. The authors' work shows that an approach based on optimized use of the imaging data acquired during the various observation phases (pre- and intraoperative), involving only lightweight computer equipment that is relatively transparent for the user, makes it possible to provide useful (i.e., necessary and sufficient) information at the appropriate moment, in order to aid decision making and enhance the safety of endovascular procedures.
Affiliation(s)
- Adrien Kaladji, CHU Hôpital Pontchaillou, Vascular Surgery Unit, Rennes, France
|
28
|
Image-guided robotic surgery: update on research and potential applications in urologic surgery. Curr Opin Urol 2012; 22:47-54. [PMID: 22080871 DOI: 10.1097/mou.0b013e32834d4ce5] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
PURPOSE OF REVIEW New imaging methods and image-guidance technology have the potential to provide surgeons with spatially accurate, three-dimensional information about the location and anatomical relationships of critical subsurface structures, together with instrument position, updated and displayed during the performance of surgery. Robotic platforms and technology in various forms continue to revolutionize surgery and will soon incorporate image guidance. RECENT FINDINGS Image-guided surgery (IGS) for abdominal and urologic interventions presents complex engineering and surgical challenges, along with potential benefits to surgeons and patients. Key concepts such as registration, localization, accuracy, and targeting error are necessary for surgeons to understand in order to utilize the potential of IGS. Standard robotic surgeries, such as partial nephrectomy and radical prostatectomy, may soon incorporate IGS. SUMMARY Research continues to explore the potential for combining image guidance and robotics to augment and improve a variety of surgical interventions.
|
29
|
Markelj P, Tomaževič D, Likar B, Pernuš F. A review of 3D/2D registration methods for image-guided interventions. Med Image Anal 2012; 16:642-61. [PMID: 20452269 DOI: 10.1016/j.media.2010.03.005] [Citation(s) in RCA: 328] [Impact Index Per Article: 27.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2009] [Revised: 02/22/2010] [Accepted: 03/30/2010] [Indexed: 02/07/2023]
|
30
|
Gurzhiev SN, Novikov VP, Sokolov SN. [Tomosynthesis of the human head phantom on the ProGraf-7000 apparatus]. MEDITSINSKAIA TEKHNIKA 2012; 46:12-17. [PMID: 22442946 DOI: 10.1007/s10527-012-9255-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
|
31
|
Valdés PA, Leblond F, Kim A, Harris BT, Wilson BC, Fan X, Tosteson TD, Hartov A, Ji S, Erkmen K, Simmons NE, Paulsen KD, Roberts DW. Quantitative fluorescence in intracranial tumor: implications for ALA-induced PpIX as an intraoperative biomarker. J Neurosurg 2011; 115:11-7. [PMID: 21438658 DOI: 10.3171/2011.2.jns101451] [Citation(s) in RCA: 211] [Impact Index Per Article: 16.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
OBJECT Accurate discrimination between tumor and normal tissue is crucial for optimal tumor resection. Qualitative fluorescence of protoporphyrin IX (PpIX), synthesized endogenously following δ-aminolevulinic acid (ALA) administration, has been used for this purpose in high-grade glioma (HGG). The authors show that diagnostically significant but visually imperceptible concentrations of PpIX can be quantitatively measured in vivo and used to discriminate normal from neoplastic brain tissue across a range of tumor histologies. METHODS The authors studied 14 patients with diagnoses of low-grade glioma (LGG), HGG, meningioma, and metastasis under an institutional review board-approved protocol for fluorescence-guided resection. The primary aim of the study was to compare the diagnostic capabilities of a highly sensitive, spectrally resolved quantitative fluorescence approach to conventional fluorescence imaging for detection of neoplastic tissue in vivo. RESULTS A significant difference in the quantitative measurements of PpIX concentration occurred in all tumor groups compared with normal brain tissue. Receiver operating characteristic (ROC) curve analysis of PpIX concentration as a diagnostic variable for detection of neoplastic tissue yielded a classification efficiency of 87% (AUC = 0.95, specificity = 92%, sensitivity = 84%) compared with 66% (AUC = 0.73, specificity = 100%, sensitivity = 47%) for conventional fluorescence imaging (p < 0.0001). More than 81% (57 of 70) of the quantitative fluorescence measurements that were below the threshold of the surgeon's visual perception were classified correctly in an analysis of all tumors. CONCLUSIONS These findings are clinically profound because they demonstrate that ALA-induced PpIX is a targeting biomarker for a variety of intracranial tumors beyond HGGs. 
This study is the first to measure quantitative ALA-induced PpIX concentrations in vivo, and the results have broad implications for guidance during resection of intracranial tumors.
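The ROC analysis reported here treats measured PpIX concentration as a scalar diagnostic variable; the AUC of any such variable can be computed directly through the Mann-Whitney formulation. A minimal sketch, not the study's statistical code:

```python
import numpy as np

def roc_auc(tumor_values, normal_values):
    """Area under the ROC curve for a scalar diagnostic variable.

    Mann-Whitney formulation: the probability that a randomly chosen
    tumor measurement exceeds a randomly chosen normal measurement,
    counting ties as one half.
    """
    vt = np.asarray(tumor_values, dtype=float)[:, None]
    vn = np.asarray(normal_values, dtype=float)[None, :]
    wins = (vt > vn).sum() + 0.5 * (vt == vn).sum()
    return float(wins / (vt.size * vn.size))
```

An AUC of 0.95, as reported above for quantitative PpIX, means a random tumor sample out-scores a random normal sample 95% of the time.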
Affiliation(s)
- Pablo A Valdés, Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire, USA
|
32
|
Gendrin C, Markelj P, Pawiro SA, Spoerk J, Bloch C, Weber C, Figl M, Bergmann H, Birkfellner W, Likar B, Pernus F. Validation for 2D/3D registration. II: The comparison of intensity- and gradient-based merit functions using a new gold standard data set. Med Phys 2011; 38:1491-502. [PMID: 21520861 PMCID: PMC3089767 DOI: 10.1118/1.3553403] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE A new gold standard data set for validation of 2D/3D registration, based on a porcine cadaver head with attached fiducial markers, was presented in the first part of this article. The advantage of this new phantom is the large amount of soft tissue, which simulates realistic conditions for registration. This article tests the performance of intensity- and gradient-based algorithms for 2D/3D registration using the new phantom data set. METHODS Intensity-based methods with four merit functions, namely, cross correlation, rank correlation, correlation ratio, and mutual information (MI), and two gradient-based algorithms, the backprojection gradient-based (BGB) registration method and the reconstruction gradient-based (RGB) registration method, were compared. Four volumes, consisting of CBCT with two fields of view, 64-slice multidetector CT, and magnetic resonance T1-weighted images, were registered to a pair of kV x-ray images and a pair of MV images. A standardized evaluation methodology was employed. Targets were evenly spread over the volumes, and 250 starting positions of the 3D volumes with initial displacements of up to 25 mm from the gold standard position were calculated. After each registration, the displacement from the gold standard was retrieved, and the root mean square (RMS), mean, and standard deviation of the mean target registration error (mTRE) over the 250 registrations were derived. Additionally, the following merit properties were computed for better comparison of the robustness of each merit: accuracy, capture range, number of minima, risk of nonconvergence, and distinctiveness of optimum. RESULTS Among the merit functions used for the intensity-based method, MI reached the best accuracy, with an RMS mTRE down to 1.30 mm. Furthermore, it was the only merit function that could accurately register the CT to the kV x-rays in the presence of tissue deformation. As for the gradient-based methods, the BGB and RGB methods achieved subvoxel accuracy (RMS mTRE down to 0.56 and 0.70 mm, respectively). Overall, gradient-based similarity measures were found to be substantially more accurate than intensity-based methods, could cope with soft tissue deformation, and also enabled accurate registration of the MR-T1 volume to the kV x-ray images. CONCLUSIONS In this article, the authors demonstrate the usefulness of a new phantom image data set, featuring soft tissue deformation, for the evaluation of 2D/3D registration methods. The authors' evaluation shows that gradient-based methods are more accurate than intensity-based methods, especially when soft tissue deformation is present. However, the current nonoptimized implementations make them prohibitively slow for practical applications. On the other hand, the speed of the intensity-based methods renders them more suitable for clinical use, while their accuracy is still competitive.
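Of the intensity-based merit functions compared in this study, mutual information is representative: it scores how well one image's intensity distribution predicts the other's, and is maximized over candidate spatial transforms. A minimal histogram-based sketch, an illustration rather than the validation framework's implementation:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information merit function between two equally sized images.

    Built from the joint intensity histogram; higher values indicate
    stronger statistical dependence between the two images' intensities.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

In a 2D/3D registration loop, `a` would be the measured x-ray and `b` a digitally reconstructed radiograph rendered at the current pose estimate.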
Affiliation(s)
- Christelle Gendrin, Center of Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna A-1090, Austria
|
33
|
Kidney Deformation and Intraprocedural Registration: A Study of Elements of Image-Guided Kidney Surgery. J Endourol 2011; 25:511-7. [DOI: 10.1089/end.2010.0249] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
34
Lathrop RA, Hackworth DM, Webster RJ. Minimally invasive holographic surface scanning for soft-tissue image registration. IEEE Trans Biomed Eng 2010; 57:1497-506. [PMID: 20659823 PMCID: PMC4104132 DOI: 10.1109/tbme.2010.2040736] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Recent advances in registration have extended intrasurgical image guidance from its origins in bone-based procedures to new applications in soft tissues, thus enabling visualization of spatial relationships between surgical instruments and subsurface structures before incisions begin. Preoperative images are generally registered to soft tissues through aligning segmented volumetric image data with an intraoperatively sensed cloud of organ surface points. However, there is currently no viable noncontact minimally invasive scanning technology that can collect these points through a single laparoscopic port, which limits wider adoption of soft-tissue image guidance. In this paper, we describe a system based on conoscopic holography that is capable of minimally invasive surface scanning. We present the results of several validation experiments scanning ex vivo biological and phantom tissues with a system consisting of a tracked, off-the-shelf, relatively inexpensive conoscopic holography unit. These experiments indicate that conoscopic holography is suitable for use with biological tissues, and can provide surface scans of comparable quality to existing clinically used laser range scanning systems that require open surgery. We demonstrate experimentally that conoscopic holography can be used to guide a surgical needle to desired subsurface targets with an average tip error of less than 3 mm.
35
Tomazevic D, Likar B, Pernus F. “Gold standard” data for evaluation and comparison of 3D/2D registration methods. ACTA ACUST UNITED AC 2010; 9:137-44. [PMID: 16192053 DOI: 10.3109/10929080500097687] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
Evaluation and comparison of registration techniques for image-guided surgery is an important problem that has received little attention in the literature. In this paper we address the challenging problem of generating reliable "gold standard" data for use in evaluating the accuracy of 3D/2D registrations. We have devised a cadaveric lumbar spine phantom with fiducial markers and established highly accurate correspondences between 3D CT and MR images and 18 2D X-ray images. The expected target registration errors for target points on the pedicles are less than 0.26 mm for CT-to-X-ray registration and less than 0.42 mm for MR-to-X-ray registration. As such, the "gold standard" data, which has been made publicly available on the Internet (http://lit.fe.uni-lj.si/Downloads/downloads.asp), is useful for evaluation and comparison of 3D/2D image registration methods.
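Given such gold-standard correspondences, target registration error is simply the distance between a target mapped by the gold-standard transform and by the transform under test. A minimal sketch, assuming 4x4 homogeneous matrices as a (hypothetical) transform representation:

```python
import numpy as np

def target_registration_error(T_gold, T_est, targets):
    """TRE at each target point: distance between where the gold-standard
    transform and the evaluated transform map that point (mm if inputs are mm)."""
    pts = np.hstack([targets, np.ones((len(targets), 1))])  # homogeneous coordinates
    diff = (pts @ T_gold.T)[:, :3] - (pts @ T_est.T)[:, :3]
    return np.linalg.norm(diff, axis=1)

# A pure 1 mm translation error along x produces a 1 mm TRE at every target.
T_gold = np.eye(4)
T_est = np.eye(4)
T_est[0, 3] = 1.0
print(target_registration_error(T_gold, T_est, np.zeros((3, 3))))  # [1. 1. 1.]
```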
Affiliation(s)
- Dejan Tomazevic
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia

36
Rettmann ME, Holmes DR, Cameron BM, Robb RA. An event-driven distributed processing architecture for image-guided cardiac ablation therapy. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2009; 95:95-104. [PMID: 19285747 PMCID: PMC2755259 DOI: 10.1016/j.cmpb.2009.01.009] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2008] [Revised: 01/20/2009] [Accepted: 01/22/2009] [Indexed: 05/27/2023]
Abstract
Medical imaging data is becoming increasingly valuable in interventional medicine, not only for preoperative planning, but also for real-time guidance during clinical procedures. Three key components necessary for image-guided intervention are real-time tracking of the surgical instrument, aligning the real-world patient space with image-space, and creating a meaningful display that integrates the tracked instrument and patient data. Issues to consider when developing image-guided intervention systems include the communication scheme, the ability to distribute CPU-intensive tasks, and the flexibility to allow for new technologies. In this work, we have designed a communication architecture for use in image-guided catheter ablation therapy. Communication between the system components is through a database which contains an event queue and auxiliary data tables. The communication scheme is unique in that each system component is responsible for querying and responding to relevant events from the centralized database queue. An advantage of the architecture is the flexibility to add new system components without affecting existing software code. In addition, the architecture is intrinsically distributed, in that components can run on different CPU boxes, and even different operating systems. We refer to this Framework for Image-Guided Navigation using a Distributed Event-Driven Database in Real-Time as the FINDER architecture. This architecture has been implemented for the specific application of image-guided cardiac ablation therapy. We describe our prototype image-guidance system and demonstrate its functionality by emulating a cardiac ablation procedure with a patient-specific phantom. The proposed architecture, designed to be modular, flexible, and intuitive, is a key step towards our goal of developing a complete system for visualization and targeting in image-guided cardiac ablation procedures.
Affiliation(s)
- M E Rettmann
- Biomedical Imaging Resource, Mayo Clinic College of Medicine, Rochester, MN, USA

37
Hamming NM, Daly MJ, Irish JC, Siewerdsen JH. Automatic image-to-world registration based on x-ray projections in cone-beam CT-guided interventions. Med Phys 2009; 36:1800-12. [PMID: 19544799 DOI: 10.1118/1.3117609] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
Intraoperative imaging offers a means to account for morphological changes occurring during the procedure and resolve geometric uncertainties via integration with a surgical navigation system. Such integration requires registration of the image and world reference frames, conventionally a time-consuming, error-prone manual process. This work presents a method of automatic image-to-world registration of intraoperative cone-beam computed tomography (CBCT) and an optical tracking system. Multimodality (MM) markers consisting of an infrared (IR) reflective sphere with a 2 mm tungsten sphere (BB) placed precisely at the center were designed to permit automatic detection in both the image and tracking (world) reference frames. Image localization is performed by intensity thresholding and pattern matching directly in 2D projections acquired in each CBCT scan, with 3D image coordinates computed using backprojection and accounting for C-arm geometric calibration. The IR tracking system localized MM markers in the world reference frame, and the image-to-world registration was computed by rigid point matching of image and tracker point sets. The accuracy and reproducibility of the automatic registration technique were compared to conventional (manual) registration using a variety of marker configurations suitable to neurosurgery (markers fixed to cranium) and head and neck surgery (markers suspended on a subcranial frame). The automatic technique exhibited subvoxel marker localization accuracy (< 0.8 mm) for all marker configurations. The fiducial registration error of the automatic technique was (0.35 +/- 0.01) mm, compared to (0.64 +/- 0.07) mm for the manual technique, indicating improved accuracy and reproducibility. The target registration error (TRE) averaged over all configurations was 1.14 mm for the automatic technique, compared to 1.29 mm for the manual technique, although the difference in accuracy was not statistically significant (p = 0.3).
A statistically significant improvement in precision was observed-specifically, the standard deviation in TRE was 0.2 mm for the automatic technique versus 0.34 mm for the manual technique (p = 0.001). The projection-based automatic registration technique demonstrates accuracy and reproducibility equivalent or superior to the conventional manual technique for both neurosurgical and head and neck marker configurations. Use of this method with C-arm CBCT eliminates the burden of manual registration on surgical workflow by providing automatic registration of surgical tracking in 3D images within approximately 20 s of acquisition, with registration automatically updated with each CBCT scan. The automatic registration method is undergoing integration in ongoing clinical trials of intraoperative CBCT-guided head and neck surgery.
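The "rigid point matching of image and tracker point sets" step admits a closed-form least-squares solution via the SVD (the Kabsch/Procrustes method). The sketch below is a generic version of that textbook solver, not the authors' implementation:

```python
import numpy as np

def rigid_point_match(image_pts, world_pts):
    """Least-squares rigid transform (R, t) mapping paired image_pts onto world_pts."""
    ci, cw = image_pts.mean(axis=0), world_pts.mean(axis=0)
    H = (image_pts - ci).T @ (world_pts - cw)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cw - R @ ci

# Recover a known rotation and translation from four markers.
rng = np.random.default_rng(1)
pts = rng.random((4, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
R, t = rigid_point_match(pts, pts @ R_true.T + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

In a navigation setting, image_pts would be the marker centers localized in the CBCT volume and world_pts the same markers reported by the tracker.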
Affiliation(s)
- N M Hamming
- Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5G 2M9, Canada

38
Ahmad A, Adie SG, Chaney EJ, Sharma U, Boppart SA. Cross-correlation-based image acquisition technique for manually-scanned optical coherence tomography. OPTICS EXPRESS 2009; 17:8125-36. [PMID: 19434144 PMCID: PMC2883319 DOI: 10.1364/oe.17.008125] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
We present a novel image acquisition technique for Optical Coherence Tomography (OCT) that enables manual lateral scanning. The technique compensates for the variability in lateral scan velocity based on feedback obtained from correlation between consecutive A-scans. Results obtained from phantom samples and biological tissues demonstrate successful assembly of OCT images from manually-scanned datasets despite non-uniform scan velocity and abrupt stops encountered during data acquisition. This technique could enable the acquisition of images during manual OCT needle-guided biopsy or catheter-based imaging, and for assembly of large field-of-view images with hand-held probes during intraoperative in vivo OCT imaging.
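The feedback signal described, correlation between consecutive A-scans, can be illustrated with a plain Pearson correlation over adjacent columns of an A-scan matrix; the data layout and thresholds below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def adjacent_scan_correlation(ascans):
    """Pearson correlation between each consecutive pair of A-scans (columns).
    High values flag a stationary or slowly moving probe (redundant A-scans
    can be dropped); a drop in correlation indicates lateral motion."""
    a = ascans[:, :-1] - ascans[:, :-1].mean(axis=0)
    b = ascans[:, 1:] - ascans[:, 1:].mean(axis=0)
    return (a * b).sum(axis=0) / np.sqrt((a**2).sum(axis=0) * (b**2).sum(axis=0))

# A duplicated A-scan correlates perfectly with its neighbor; an unrelated one does not.
rng = np.random.default_rng(2)
col = rng.random((256, 1))
corr = adjacent_scan_correlation(np.hstack([col, col, rng.random((256, 1))]))
print(corr[0] > 0.99, corr[1] < 0.9)  # True True
```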
Affiliation(s)
- Adeel Ahmad, Steven G. Adie, Eric J. Chaney, Utkarsh Sharma, Stephen A. Boppart
- Biophotonics Imaging Laboratory, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, 405 N. Mathews Avenue, Urbana, IL 61801

39
Bootsma GJ, Siewerdsen JH, Daly MJ, Jaffray DA. Initial investigation of an automatic registration algorithm for surgical navigation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2009; 2008:3638-42. [PMID: 19163499 DOI: 10.1109/iembs.2008.4649996] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The procedure required for registering a surgical navigation system prior to use in a surgical procedure is conventionally a time-consuming manual process that is prone to human error and must be repeated as necessary through the course of a procedure. The conventional procedure becomes even more time consuming when intra-operative 3D imaging such as C-arm cone-beam CT (CBCT) is introduced, as each updated volume set requires a new registration. To improve the speed and accuracy of registering image and world reference frames in image-guided surgery, a novel automatic registration algorithm was developed and investigated. The surgical navigation system consists of either Polaris (Northern Digital Inc., Waterloo, ON) or MicronTracker (Claron Technology Inc., Toronto, ON) tracking camera(s), custom software (Cogito running on a PC), and a prototype CBCT imaging system based on a mobile isocentric C-arm (Siemens, Erlangen, Germany). Experiments were conducted to test the accuracy of automatic registration methods for both the MicronTracker and Polaris tracking cameras. Results indicate the automated registration performs as well as the manual registration procedure using either the Claron or Polaris camera. The average root-mean-squared (rms) observed target registration error (TRE) for the manual procedure was 2.58 +/- 0.42 mm and 1.76 +/- 0.49 mm for the Polaris and MicronTracker, respectively. The mean observed TRE for the automatic algorithm was 2.11 +/- 0.13 and 2.03 +/- 0.3 mm for the Polaris and MicronTracker, respectively. Implementation and optimization of the automatic registration technique in C-arm CBCT guidance of surgical procedures is underway.
Affiliation(s)
- Gregory J Bootsma
- Department of Medical Biophysics, University of Toronto, Ontario, Canada.

40

41
Bachar G, Siewerdsen JH, Daly MJ, Jaffray DA, Irish JC. Image quality and localization accuracy in C-arm tomosynthesis-guided head and neck surgery. Med Phys 2008; 34:4664-77. [PMID: 18196794 DOI: 10.1118/1.2799492] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
The image quality and localization accuracy for C-arm tomosynthesis and cone-beam computed tomography (CBCT) guidance of head and neck surgery were investigated. A continuum in image acquisition was explored, ranging from a single exposure (radiograph) to multiple projections acquired over a limited arc (tomosynthesis) to a full semicircular trajectory (CBCT). Experiments were performed using a prototype mobile C-arm modified to perform 3D image acquisition (a modified Siemens PowerMobil). The tradeoffs in image quality associated with the extent of the source-detector arc (θtot), the number of projection views, and the total imaging dose were evaluated in phantom and cadaver studies. Surgical localization performance was evaluated using three cadaver heads imaged as a function of θtot. Six localization tasks were considered, ranging from high-contrast feature identification (e.g., tip of a K-wire pointer) to more challenging soft-tissue delineation (e.g., junction of the hard and soft palate). Five head and neck surgeons and one radiologist participated as observers. For each localization task, the 3D coordinates of landmarks pinpointed by each observer were analyzed as a function of θtot. For all tomosynthesis angles, image quality was highest in the coronal plane, whereas sagittal and axial planes exhibited a substantial decrease in spatial resolution associated with out-of-plane blur and distortion. Tasks involving complex, lower-contrast features demonstrated steeper degradation with smaller tomosynthetic arc. Localization accuracy in the coronal plane was correspondingly high, maintained to < 3 mm down to θtot ≈ 30°, whereas sagittal and axial localization degraded rapidly below θtot ≈ 60°. Similarly, localization precision was better than approximately 1 mm within the coronal plane, compared to approximately 2-3 mm out-of-plane for tomosynthesis angles below θtot ≈ 45°. An overall 3D localization accuracy of approximately 2.5 mm was achieved with θtot ≈ 90° for most tasks. The high in-plane spatial resolution, short scanning time, and low radiation dose characteristic of tomosynthesis may enable the surgeon to collect near real-time images throughout the procedure with minimal interference to surgical workflow. Therefore, tomosynthesis could provide a useful addition to the image-guided surgery arsenal, providing on-demand, high quality image updates, complemented by CBCT at critical milestones in the surgical procedure.
Affiliation(s)
- G Bachar
- Department of Otolaryngology-Head and Neck Surgery, Princess Margaret Hospital, Toronto, Ontario M5G 2M9, Canada

42
Abstract
Endoscopic orbital procedures are hindered by the difficulty of differentiating between orbital structures during those procedures. Image guidance may improve the outcome of endoscopic orbital procedures because real-time image and physical-space tracking information can be provided to the surgeons to help in the delivery of therapy to the orbit. The research plan proposes to study the feasibility of image-guided endoscopic orbital procedures. Specifically, this research will characterize both the random and spatial fiducial localization error of the magnetic tracker. We will also determine an optimal fiducial placement that minimizes the TRE at the optic nerve junction, and demonstrate and validate the use of the magnetic tracker for transorbital endoscopic image guidance.
Affiliation(s)
- Nkiruka C Atuegwu
- Biomedical Engineering Department, Vanderbilt University TN 37235, USA.

43
Skerl D, Likar B, Fitzpatrick JM, Pernus F. Comparative evaluation of similarity measures for the rigid registration of multi-modal head images. Phys Med Biol 2007; 52:5587-601. [PMID: 17804883 DOI: 10.1088/0031-9155/52/18/008] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Image registrations that are based on similarity measures simply adjust the parameters of an appropriate spatial transformation model until the similarity measure reaches an optimum. The numerous similarity measures that have been proposed in the past are differently sensitive to imaging modality, image content and differences in the image content, selection of the floating and target image, partial image overlap, etc. In this paper, we evaluate and compare 12 similarity measures for the rigid registration. To study the impact of different imaging modalities on the behavior of similarity measures, we have used 16 CT/MR and 6 PET/MR image pairs with known 'gold standard' registrations. The results for the PET/MR registration and for the registration of CT to both rectified and unrectified MR images indicate that mutual information, normalized mutual information and the entropy correlation coefficient are the most accurate similarity measures and have the smallest risk of being trapped in a local optimum. The results of an experiment on the impact of exchanging the floating and target image indicate that, especially in MR/PET registrations, the behavior of some similarity measures, such as mutual information, significantly depends on which image is the floating and which is the target.
Affiliation(s)
- Darko Skerl
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia

44
Rauth TP, Bao PQ, Galloway RL, Bieszczad J, Friets EM, Knaus DA, Kynor DB, Herline AJ. Laparoscopic surface scanning and subsurface targeting: Implications for image-guided laparoscopic liver surgery. Surgery 2007; 142:207-14. [PMID: 17689687 DOI: 10.1016/j.surg.2007.04.016] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2000] [Revised: 04/19/2007] [Accepted: 04/23/2007] [Indexed: 01/14/2023]
Abstract
Segmental liver resection and locoregional ablative therapies are dependent upon accurate tumor localization to ensure safety as well as acceptable oncologic results. Because of the liver's limited external landmarks and complex internal anatomy, such tumor localization poses a technical challenge. Image guided therapies (IGT) address this problem by mapping the real-time, intraoperative position of surgical instruments onto preoperative tomographic imaging through a process called registration. Accuracy is critical to IGT and is a function of: 1) the registration technique, 2) the tissue characteristics, and 3) imaging techniques. The purpose of this study is to validate a novel method of registration using an endoscopic Laser Range Scanner (eLRS) and demonstrate its applicability to laparoscopic liver surgery. Six radiopaque targets were inserted into an ex-vivo bovine liver and a computed tomography (CT) scan was obtained. Using the eLRS, the liver surface was scanned and a surface-based registration was constructed to predict the position of the intraparenchymal targets. The target registration error (TRE) achieved using our surface-based registration was 2.4 +/- 1.0 mm. A comparable TRE using traditional fiducial-based registration was 2.6 +/- 1.7 mm. Compared to traditional fiducial-based registration, laparoscopic surface scanning is able to predict the location of intraparenchymal liver targets with similar accuracy and rate of data acquisition.
Affiliation(s)
- Thomas P Rauth
- Department of Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA.

45
Lovo EE, Quintana JC, Puebla MC, Torrealba G, Santos JL, Lira IH, Tagle P. A NOVEL, INEXPENSIVE METHOD OF IMAGE COREGISTRATION FOR APPLICATIONS IN IMAGE-GUIDED SURGERY USING AUGMENTED REALITY. Oper Neurosurg (Hagerstown) 2007; 60:366-71; discussion 371-2. [PMID: 17415176 DOI: 10.1227/01.neu.0000255360.32689.fa] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
Abstract
OBJECTIVE Augmented reality (AR) is a technique in which an overlay of a virtual image to a live picture is performed to create a new image in which both original images coexist as a single image. This results in the visualization of internal structures through overlying tissues. The objective was to describe an easy, inexpensive, and successful method to coregister with AR in an image-guided surgery setting using the resources at hand. METHODS Cortical information was obtained with a volumetric acquisition of 200 0.8-mm thick, cerebral magnetic resonance imaging scans in an axial T1-weighted sequence. For the venous anatomy, a contrast phase at 7 mm/s velocity was used. This data was reconstructed in a three-dimensional fashion using MRIcro software (v. 1.37, freeware, courtesy of Chris Rorden) and was overlaid to a digital image of the cerebral cortex either pre- or intraoperatively. RESULTS Eight patients were studied. There was an adequate coregistration in seven of the patients as confirmed by intraoperative ultrasound, frame-based stereotaxy, or obvious anatomic homology between the three-dimensional magnetic resonance imaging scan virtual reconstruction and the live image obtained during surgery. AR was not possible in one case of a cerebellar lesion. CONCLUSION AR coregistration capabilities are adequate when revised by other intraoperative guidance devices. When performed with "freeware" software and conventional digital cameras, it is relatively inexpensive, which makes it a potential tool for surgical planning and noncontinuous intraoperative guidance in neurosurgery. Its largest drawbacks are the inability to function in deep-seated lesions and its lack of tracking devices, which gives it a noncontinuous coregistration nature.
Affiliation(s)
- Eduardo E Lovo
- Department of Neurosurgery, Pontificia Universidad Católica de Chile, Santiago de Chile, Chile.

46
Abstract
Contemporary imaging modalities can now provide the surgeon with high quality three- and four-dimensional images depicting not only normal anatomy and pathology, but also vascularity and function. A key component of image-guided surgery (IGS) is the ability to register multi-modal pre-operative images to each other and to the patient. The other important component of IGS is the ability to track instruments in real time during the procedure and to display them as part of a realistic model of the operative volume. Stereoscopic, virtual- and augmented-reality techniques have been implemented to enhance the visualization and guidance process. For the most part, IGS relies on the assumption that the pre-operatively acquired images used to guide the surgery accurately represent the morphology of the tissue during the procedure. This assumption may not necessarily be valid, and so intra-operative real-time imaging using interventional MRI, ultrasound, video and electrophysiological recordings are often employed to ameliorate this situation. Although IGS is now in extensive routine clinical use in neurosurgery and is gaining ground in other surgical disciplines, there remain many drawbacks that must be overcome before it can be employed in more general minimally-invasive procedures. This review overviews the roots of IGS in neurosurgery, provides examples of its use outside the brain, discusses the infrastructure required for successful implementation of IGS approaches and outlines the challenges that must be overcome for IGS to advance further.
Affiliation(s)
- Terry M Peters
- Robarts Research Institute, University of Western Ontario, PO Box 5015, 100 Perth Drive, London, ON N6A 5K8, Canada.

47
Skerl D, Likar B, Pernus F. A protocol for evaluation of similarity measures for rigid registration. IEEE TRANSACTIONS ON MEDICAL IMAGING 2006; 25:779-91. [PMID: 16768242 DOI: 10.1109/tmi.2006.874963] [Citation(s) in RCA: 31] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
The accuracy and robustness of a registration method depend on a number of factors, such as imaging modality, image content and image degrading effects, the class of spatial transformation used for registration, similarity measure, optimization, and numerous implementation details. The complex interdependence of these factors makes the assessment of the influence of a particular factor on registration difficult, although it is often desirable to have some estimate of such influences prior to registration. The similarity measure used to create the cost function is one of the factors that most influences the quality of registration. Traditionally, limited information on the behavior of a similarity measure is obtained either by studying the quality of the final registration or by drawing plots of similarity measure values obtained by translating or rotating one image relative to the "gold standard." In this paper, we present a protocol for a more thorough, optimization-independent, and systematic statistical evaluation of similarity measures. This protocol estimates a similarity measure's capture range, the number, location and extent of local optima, and the accuracy and distinctiveness of the global optimum. To show that the proposed evaluation protocol is viable, we have conducted several experiments with nine similarity measures and real computed tomography and magnetic resonance (MR) images of a spine phantom, MR brain images, and MR and positron emission tomography brain images, for which "gold standard" registrations were available. We have also studied the impact of histogram bin size on the behavior of nine similarity measures. 
The proposed evaluation protocol is useful for selecting the best similarity measure and corresponding optimization method for a particular application, as well as for studying the influence of sampling, interpolation, histogram bin size, partial image overlap, and image degradation, such as noise, intensity inhomogeneity, and geometrical distortions on the behavior of a similarity measure.
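The protocol's optimization-independent probing can be sketched by sampling a similarity measure along a single translation axis around the gold-standard pose; the resulting profile exposes the global optimum, any local optima, and the capture range. Normalized cross correlation and the column-roll shift below are simplifying assumptions, not the nine measures evaluated in the paper:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation, used here as a toy similarity measure."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

def probe_similarity(measure, fixed, moving, max_shift):
    """Sample `measure` at integer column shifts of `moving` around the
    gold-standard pose, yielding a 1D similarity profile for inspection."""
    offsets = np.arange(-max_shift, max_shift + 1)
    profile = np.array([measure(fixed, np.roll(moving, int(o), axis=1)) for o in offsets])
    return offsets, profile

# For a self-registration the profile peaks exactly at zero displacement.
rng = np.random.default_rng(3)
img = rng.random((32, 32))
offsets, profile = probe_similarity(ncc, img, img, max_shift=10)
print(offsets[np.argmax(profile)])  # 0
```

Statistics such as capture range or distinctiveness of the optimum would then be computed from many such profiles along randomly chosen parameter-space lines.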
Affiliation(s)
- Darko Skerl
- University of Ljubljana, Faculty of Electrical Engineering, Slovenia.

48
Verhey JF, Wisser J, Keller T, Westin CF, Kikinis R. Rigid overlay of volume sonography and MR image data of the female pelvic floor using a fiducial based alignment—feasibility due to a case series. Comput Med Imaging Graph 2005; 29:243-9. [PMID: 15890251 DOI: 10.1016/j.compmedimag.2004.10.006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2004] [Accepted: 10/26/2004] [Indexed: 10/25/2022]
Abstract
The visual combination of different medical image acquisition techniques (modalities) can lead to new modalities with enhanced informative content. In this paper, we present an overlay technique for magnetic resonance (MR) and 3D US image data sets of the female anal canal (internal and external sphincter) as the base for a new diagnostic modality. This is a new field of application for the overlay technique. Three corresponding MR and US volume data sets from the female pelvic floor region were filtered using adaptive filtering techniques and overlaid (i.e., rigidly registered) with a landmark-based alignment method.
Affiliation(s)
- Janko F Verhey
- Department of Medical Informatics, University of Goettingen, Robert-Koch-Str. 40, D-37075 Goettingen, Germany.

49
Warmath JR, Herline AJ. New Technologies in Rectal Cancer Management. SEMINARS IN COLON AND RECTAL SURGERY 2005. [DOI: 10.1053/j.scrs.2005.08.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
50
Tomazevic D, Likar B, Slivnik T, Pernus F. 3-D/2-D registration of CT and MR to X-ray images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2003; 22:1407-1416. [PMID: 14606674 DOI: 10.1109/tmi.2003.819277] [Citation(s) in RCA: 75] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
A crucial part of image-guided therapy is registration of preoperative and intraoperative images, by which the precise position and orientation of the patient's anatomy is determined in three dimensions. This paper presents a novel approach to register three-dimensional (3-D) computed tomography (CT) or magnetic resonance (MR) images to one or more two-dimensional (2-D) X-ray images. The registration is based solely on the information present in the 2-D and 3-D images. It does not require fiducial markers, intraoperative X-ray image segmentation, or time-consuming construction of digitally reconstructed radiographs. The originality of the approach lies in using normals to bone surfaces, preoperatively defined in 3-D MR or CT data, and gradients of intraoperative X-ray images at locations defined by the X-ray source and 3-D surface points. The registration consists of finding the rigid transformation of a CT or MR volume that provides the best match between surface normals and back-projected gradients, considering their amplitudes and orientations. We have thoroughly validated our registration method by using MR, CT, and X-ray images of a cadaveric lumbar spine phantom for which "gold standard" registration was established by means of fiducial markers, and its accuracy assessed by target registration error. Volumes of interest, containing single vertebrae L1-L5, were registered to different pairs of X-ray images from different starting positions, chosen randomly and uniformly around the "gold standard" position. CT/X-ray (MR/X-ray) registration, which is fast, was successful in more than 91% (82%, except for L1) of trials if started from the "gold standard" translated or rotated by less than 6 mm or 17 degrees (3 mm or 8.6 degrees), respectively. Root-mean-square target registration errors were below 0.5 mm for CT to X-ray registration and below 1.4 mm for MR to X-ray registration.
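The validation above hinges on target registration error (TRE) against a fiducial-established gold standard: target points are mapped by both the estimated and the gold-standard rigid transforms, and the RMS distance between the two mappings is reported. A minimal sketch of that metric (function name and argument layout are illustrative, not from the paper):

```python
import numpy as np

def rms_target_registration_error(targets, R_est, t_est, R_gold, t_gold):
    """RMS distance between target points mapped by an estimated rigid
    transform (R_est, t_est) and by a gold-standard one (R_gold, t_gold)."""
    targets = np.asarray(targets, dtype=float)
    mapped_est = targets @ R_est.T + t_est    # x -> R_est x + t_est
    mapped_gold = targets @ R_gold.T + t_gold
    diffs = mapped_est - mapped_gold
    return float(np.sqrt(np.mean(np.sum(diffs ** 2, axis=1))))
```

Evaluating TRE at clinically relevant target points (rather than at the fiducials used to establish the gold standard) is what makes the reported sub-millimeter figures meaningful.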
Affiliation(s)
- Dejan Tomazevic
- University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, 1000 Ljubljana, Slovenia.