1
Teufel T, Shu H, Soberanis-Mukul RD, Mangulabnan JE, Sahu M, Vedula SS, Ishii M, Hager G, Taylor RH, Unberath M. OneSLAM to map them all: a generalized approach to SLAM for monocular endoscopic imaging based on tracking any point. Int J Comput Assist Radiol Surg 2024; 19:1259-1266. [PMID: 38775904] [DOI: 10.1007/s11548-024-03171-6]
Abstract
PURPOSE Monocular SLAM algorithms are the key enabling technology for image-based surgical navigation systems in endoscopic procedures. Due to the visual feature scarcity and unique lighting conditions encountered in endoscopy, classical SLAM approaches perform inconsistently. Many recent approaches to endoscopic SLAM rely on deep learning models. They show promising results when optimized on singular domains such as arthroscopy, sinus endoscopy, colonoscopy or laparoscopy, but are limited by an inability to generalize to different domains without retraining. METHODS To address this generality issue, we propose OneSLAM, a monocular SLAM algorithm for surgical endoscopy that works out of the box across several endoscopic domains, including sinus endoscopy, colonoscopy, arthroscopy and laparoscopy. Our pipeline builds upon robust tracking-any-point (TAP) foundation models to reliably track sparse correspondences across multiple frames and runs local bundle adjustment to jointly optimize camera poses and a sparse 3D reconstruction of the anatomy. RESULTS We compare the performance of our method against three strong baselines previously proposed for monocular SLAM in endoscopy and general scenes. OneSLAM achieves performance better than or comparable to approaches targeted at the specific data in all four tested domains, generalizing across domains without the need for retraining. CONCLUSION OneSLAM benefits from the convincing performance of TAP foundation models and generalizes to endoscopic sequences of different anatomies, all while demonstrating better or comparable performance relative to domain-specific SLAM approaches. Future research on global loop closure will investigate how to reliably detect loops in endoscopic scenes to reduce accumulated drift and enhance long-term navigation capabilities.
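To make the local bundle adjustment step concrete, the following minimal Python sketch jointly refines camera poses and sparse 3D points from multi-frame 2D tracks (of the kind a TAP model might supply) by minimizing pinhole reprojection error. The toy data, the fixed intrinsics F and C, and the window layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

F = 550.0                      # assumed focal length (pixels)
C = np.array([320.0, 240.0])   # assumed principal point (pixels)

def residuals(params, n_frames, n_points, frame_idx, point_idx, uv):
    # Unpack per-frame poses (rotation vector | translation) and 3D points,
    # project every observation through a pinhole model, return pixel residuals.
    poses = params[:n_frames * 6].reshape(n_frames, 6)
    pts = params[n_frames * 6:].reshape(n_points, 3)
    Xc = R.from_rotvec(poses[frame_idx, :3]).apply(pts[point_idx]) + poses[frame_idx, 3:]
    proj = F * Xc[:, :2] / Xc[:, 2:3] + C
    return (proj - uv).ravel()

# Toy local window: 3 frames observing 10 points tracked across all frames.
rng = np.random.default_rng(0)
n_frames, n_points = 3, 10
gt_pts = rng.uniform([-1, -1, 4], [1, 1, 6], (n_points, 3))
gt_poses = np.zeros((n_frames, 6))
gt_poses[:, 3] = np.arange(n_frames) * 0.05          # small lateral camera motion
frame_idx = np.repeat(np.arange(n_frames), n_points)
point_idx = np.tile(np.arange(n_points), n_frames)
gt_params = np.concatenate([gt_poses.ravel(), gt_pts.ravel()])
uv = residuals(gt_params, n_frames, n_points, frame_idx, point_idx,
               np.zeros((n_frames * n_points, 2))).reshape(-1, 2)  # noiseless tracks
x0 = gt_params + rng.normal(0, 0.01, gt_params.size)  # perturbed initial estimate
sol = least_squares(residuals, x0,
                    args=(n_frames, n_points, frame_idx, point_idx, uv))
print("RMS reprojection error (px):", np.sqrt(np.mean(sol.fun ** 2)))
```

In a full SLAM pipeline this optimization would run repeatedly over a sliding keyframe window, typically with robust losses to down-weight unreliable tracks.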
Affiliation(s)
- Timo Teufel
- Johns Hopkins University, Baltimore, MD, 21211, USA
- Hongchao Shu
- Johns Hopkins University, Baltimore, MD, 21211, USA
- Manish Sahu
- Johns Hopkins University, Baltimore, MD, 21211, USA
- Masaru Ishii
- Johns Hopkins Medical Institutions, Baltimore, MD, 21287, USA
- Russell H Taylor
- Johns Hopkins University, Baltimore, MD, 21211, USA
- Johns Hopkins Medical Institutions, Baltimore, MD, 21287, USA
- Mathias Unberath
- Johns Hopkins University, Baltimore, MD, 21211, USA
- Johns Hopkins Medical Institutions, Baltimore, MD, 21287, USA
2
Acar A, Atoum J, Reed A, Li Y, Kavoussi N, Wu JY. Intraoperative gaze guidance with mixed reality. Healthc Technol Lett 2024; 11:85-92. [PMID: 38638505] [PMCID: PMC11022221] [DOI: 10.1049/htl2.12061]
Abstract
Efficient communication and collaboration are essential in the operating room for successful and safe surgery. While many technologies are improving various aspects of surgery, communication between attending surgeons, residents, and surgical teams is still limited to verbal interactions that are prone to misunderstandings. Novel modes of communication can increase speed and accuracy, and transform operating rooms. A mixed reality (MR) based gaze-sharing application on the Microsoft HoloLens 2 headset is presented that can help expert surgeons indicate specific regions, communicate with decreased verbal effort, and guide novices throughout an operation. The utility of the application is tested in a user study of endoscopic kidney stone localization completed by urology experts and novice surgeons. Improvement is observed in the NASA task load index surveys (up to 25.23%), in the success rate of the task (6.98% increase in localized stone percentage), and in gaze analyses (up to 31.99%). The proposed application shows promise for both operating room applications and surgical training tasks.
Affiliation(s)
- Ayberk Acar
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Jumanh Atoum
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Amy Reed
- Department of Urology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Yizhou Li
- Department of Electrical, Computer and Systems Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Nicholas Kavoussi
- Department of Urology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Jie Ying Wu
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
3
Cannon PC, Setia SA, Klein-Gardner S, Kavoussi NL, Webster RJ, Herrell SD. Are 3D Image Guidance Systems Ready for Use? A Comparative Analysis of 3D Image Guidance Implementations in Minimally Invasive Partial Nephrectomy. J Endourol 2024; 38:395-407. [PMID: 38251637] [PMCID: PMC10979686] [DOI: 10.1089/end.2023.0059]
Abstract
Introduction: Three-dimensional image-guided surgical (3D-IGS) systems for minimally invasive partial nephrectomy (MIPN) can potentially improve the efficiency and accuracy of intraoperative anatomical localization and tumor resection. This review analyzes the current state of research on 3D-IGS, including the evaluation of clinical outcomes, system functionality, and qualitative insights regarding the impact of 3D-IGS on surgical procedures. Methods: We systematically reviewed the clinical literature pertaining to 3D-IGS deployed for MIPN. For inclusion, studies had to produce a patient-specific 3D anatomical model from two-dimensional imaging. Data extracted from the studies include clinical results, the registration (alignment of the 3D model to the surgical scene) method used, limitations, and the data types reported. A subset of studies was qualitatively analyzed through an inductive coding approach to identify major themes and subthemes across the studies. Results: Twenty-five studies were included in the review. Eight (32%) studies reported clinical results indicating that 3D-IGS improves multiple surgical outcomes. Manual registration was the most utilized method (48%). Soft tissue deformation was the most cited limitation among the included studies. Many studies reported qualitative statements regarding surgeon accuracy improvement, but quantitative surgeon accuracy data were not reported. During the qualitative analysis, six major themes emerged across the nine applicable studies: 3D-IGS is necessary; 3D-IGS improved surgical outcomes; researcher/surgeon confidence in the 3D-IGS system; enhanced surgeon ability/accuracy; anatomical explanation for qualitative assessment; and claims without data or reference to support them. Conclusions: Currently, clinical outcomes are the main source of quantitative data available to support the efficacy of 3D-IGS. However, the literature qualitatively suggests the benefit of accurate 3D-IGS for robotic partial nephrectomy.
Affiliation(s)
- Piper C. Cannon
- Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Shaan A. Setia
- Department of Urologic Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Stacy Klein-Gardner
- Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Nicholas L. Kavoussi
- Department of Urologic Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Robert J. Webster
- Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- S. Duke Herrell
- Department of Urologic Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
4
Ping L, Wang Z, Yao J, Gao J, Yang S, Li J, Shi J, Wu W, Hua S, Wang H. Application and evaluation of surgical tool and tool tip recognition based on Convolutional Neural Network in multiple endoscopic surgical scenarios. Surg Endosc 2023; 37:7376-7384. [PMID: 37580576] [DOI: 10.1007/s00464-023-10323-3]
Abstract
BACKGROUND In recent years, computer-assisted intervention and robot-assisted surgery have been receiving increasing attention. The demand for real-time identification and tracking of surgical tools and tool tips is constantly growing. A series of studies focusing on surgical tool tracking and identification has been performed. However, the dataset sizes, the sensitivity/precision, and the response times of these studies were limited. In this work, we developed and utilized an automated method based on a Convolutional Neural Network (CNN) and the You Only Look Once (YOLO) v3 algorithm to locate and identify surgical tools and tool tips covering five different surgical scenarios. MATERIALS AND METHODS An object detection algorithm was applied to identify and locate the surgical tools and tool tips. DarkNet-19 was used as the backbone network, and YOLOv3 was modified and applied for the detection. We included a series of 181 endoscopy videos covering 5 different surgical scenarios: pancreatic surgery, thyroid surgery, colon surgery, gastric surgery, and external scenes. A total of 25,333 images containing 94,463 targets were collected. Training and test sets were split in a ratio of 2.5:1. The datasets were openly stored in the Kaggle database. RESULTS Under an Intersection over Union threshold of 0.5, the overall sensitivity and precision of the model were 93.02% and 89.61% for tool recognition and 87.05% and 83.57% for tool tip recognition, respectively. The model demonstrated the highest tool and tool tip recognition sensitivity and precision under external scenes. Among the four internal surgical scenes, the network performed better in pancreatic and colon surgeries and worse in gastric and thyroid surgeries. CONCLUSION We developed a surgical tool and tool tip recognition model based on a CNN and YOLOv3. Validation of our model demonstrated satisfactory precision, accuracy, and robustness across different surgical scenes.
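For reference, the matching criterion quoted in the results (a detection counts as a true positive when its Intersection over Union with a ground-truth box reaches 0.5) can be sketched as below; the boxes are invented examples, and a real evaluation would additionally handle class labels and confidence-ranked matching.

```python
import numpy as np

def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2) corner coordinates.
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def precision_sensitivity(detections, ground_truth, thr=0.5):
    # Greedy one-to-one matching of detections to ground-truth boxes.
    matched, tp = set(), 0
    for det in detections:
        for i, gt in enumerate(ground_truth):
            if i not in matched and iou(det, gt) >= thr:
                matched.add(i)
                tp += 1
                break
    fp = len(detections) - tp   # unmatched detections
    fn = len(ground_truth) - tp # missed ground-truth boxes
    return tp / (tp + fp), tp / (tp + fn)

dets = [(48, 52, 150, 160), (200, 210, 260, 300)]   # hypothetical detections
gts  = [(50, 50, 148, 158), (400, 400, 450, 470)]   # hypothetical annotations
p, s = precision_sensitivity(dets, gts)
print(f"precision={p:.2f} sensitivity={s:.2f}")
```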
Affiliation(s)
- Lu Ping
- 8-Year MD Program, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Zhihong Wang
- 8-Year MD Program, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Jingjing Yao
- Department of Nursing, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Junyi Gao
- Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Sen Yang
- Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Jiayi Li
- 8-Year MD Program, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Jile Shi
- 8-Year MD Program, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Wenming Wu
- Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Surong Hua
- Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Huizhen Wang
- Department of Nursing, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
5
Kokko MA, Van Citters DW, Seigne JD, Halter RJ. A particle filter approach to dynamic kidney pose estimation in robotic surgical exposure. Int J Comput Assist Radiol Surg 2022; 17:1079-1089. [PMID: 35511394] [DOI: 10.1007/s11548-022-02638-8]
Abstract
PURPOSE Traditional soft tissue registration methods require direct intraoperative visualization of a significant portion of the target anatomy in order to produce acceptable surface alignment. Image guidance is therefore generally not available during the robotic exposure of structures like the kidneys which are not immediately visualized upon entry into the abdomen. This paper proposes guiding surgical exposure with an iterative state estimator that assimilates small visual cues into an a priori anatomical model as exposure progresses, thereby evolving pose estimates for the occluded structures of interest. METHODS Intraoperative surface observations of a right kidney are simulated using endoscope tracking and preoperative tomography from a representative robotic partial nephrectomy case. Clinically relevant random perturbations of the true kidney pose are corrected using this sequence of observations in a particle filter framework to estimate an optimal similarity transform for fitting a patient-specific kidney model at each step. The temporal response of registration error is compared against that of serial rigid coherent point drift (CPD) in both static and simulated dynamic surgical fields, and for varying levels of observation persistence. RESULTS In the static case, both particle filtering and persistent CPD achieved sub-5 mm accuracy, with CPD processing observations 75% faster. Particle filtering outperformed CPD in the dynamic case under equivalent computation times due to the former requiring only minimal persistence. CONCLUSION This proof-of-concept simulation study suggests that Bayesian state estimation may provide a viable pathway to image guidance for surgical exposure in the abdomen, especially in the presence of dynamic intraoperative tissue displacement and deformation.
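The essence of the particle filter framework described above, maintaining a population of pose hypotheses and reweighting them as sparse surface observations accumulate, can be sketched for a translation-only state as below; the "kidney" point cloud, noise levels, and particle count are arbitrary stand-ins for the paper's full similarity-transform estimation.

```python
import numpy as np

rng = np.random.default_rng(1)
model = rng.normal(0, 10.0, (200, 3))        # hypothetical preoperative surface model
true_t = np.array([4.0, -3.0, 2.0])          # unknown intraoperative offset (mm)

def weights(particles, obs, sigma=1.0):
    # Score each translation hypothesis by how well the shifted model explains
    # the observed surface points (nearest-neighbour Gaussian likelihood).
    e = np.empty(len(particles))
    for i, t in enumerate(particles):
        d = np.linalg.norm(obs[:, None, :] - (model + t)[None], axis=2).min(axis=1)
        e[i] = 0.5 * np.sum(d ** 2) / sigma ** 2
    w = np.exp(-(e - e.min()))               # subtract min for numerical stability
    return w / w.sum()

particles = rng.normal(0, 5.0, (300, 3))     # initial pose hypotheses
for step in range(10):
    # A few more surface points become visible at each step of the exposure.
    obs = model[rng.choice(len(model), 5)] + true_t + rng.normal(0, 0.5, (5, 3))
    w = weights(particles, obs)
    particles = particles[rng.choice(len(particles), len(particles), p=w)]  # resample
    particles += rng.normal(0, 0.2, particles.shape)   # jitter to keep diversity
print("estimate:", np.round(particles.mean(axis=0), 2), "truth:", true_t)
```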
Affiliation(s)
- Michael A Kokko
- Thayer School of Engineering, Dartmouth College, 15 Thayer Drive, Hanover, NH, 03755, USA
- Douglas W Van Citters
- Thayer School of Engineering, Dartmouth College, 15 Thayer Drive, Hanover, NH, 03755, USA
- John D Seigne
- Dartmouth-Hitchcock Medical Center, Section of Urology, 1 Medical Center Drive, Lebanon, NH, 03756, USA; Geisel School of Medicine, Dartmouth College, 1 Rope Ferry Road, Hanover, NH, 03755, USA
- Ryan J Halter
- Thayer School of Engineering, Dartmouth College, 15 Thayer Drive, Hanover, NH, 03755, USA; Geisel School of Medicine, Dartmouth College, 1 Rope Ferry Road, Hanover, NH, 03755, USA
6
Su YH, Jiang W, Chitrakar D, Huang K, Peng H, Hannaford B. Local Style Preservation in Improved GAN-Driven Synthetic Image Generation for Endoscopic Tool Segmentation. Sensors (Basel) 2021; 21:5163. [PMID: 34372398] [PMCID: PMC8346972] [DOI: 10.3390/s21155163]
Abstract
Accurate semantic image segmentation from medical imaging can enable intelligent vision-based assistance in robot-assisted minimally invasive surgery. The human body and surgical procedures are highly dynamic. While machine vision presents a promising approach, sufficiently large training image sets for robust performance are either costly or unavailable. This work examines three novel generative adversarial network (GAN) methods of providing usable synthetic tool images using only surgical background images and a few real tool images. The best of these three approaches generates realistic tool textures while preserving local background content by incorporating both a style preservation and a content loss component into the proposed multi-level loss function. The approach is quantitatively evaluated, and the results suggest that the synthetically generated training tool images enhance UNet tool segmentation performance. More specifically, with a random set of 100 cadaver and live endoscopic images from the University of Washington Sinus Dataset, the UNet trained with synthetically generated images using the presented method achieved 35.7% and 30.6% improvements over using purely real images in mean Dice coefficient and Intersection over Union scores, respectively. These results are promising for the use of more widely available and routine screening endoscopy to preoperatively generate synthetic training tool images for intraoperative UNet tool segmentation.
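For reference, the two segmentation scores reported above can be computed as in this short sketch; the masks are synthetic placeholders rather than outputs of the authors' UNet.

```python
import numpy as np

def dice(mask_a, mask_b):
    # Dice coefficient between two binary masks.
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def iou(mask_a, mask_b):
    # Intersection over Union between two binary masks.
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True   # hypothetical prediction
gt   = np.zeros((64, 64), bool); gt[15:45, 15:45] = True     # hypothetical annotation
print(f"Dice={dice(pred, gt):.3f}  IoU={iou(pred, gt):.3f}")
```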
Affiliation(s)
- Yun-Hsuan Su
- Department of Computer Science, Mount Holyoke College, 50 College Street, South Hadley, MA 01075, USA
- Wenfan Jiang
- Department of Computer Science, Mount Holyoke College, 50 College Street, South Hadley, MA 01075, USA
- Digesh Chitrakar
- Department of Engineering, Trinity College, 300 Summit St., Hartford, CT 06106, USA
- Kevin Huang
- Department of Engineering, Trinity College, 300 Summit St., Hartford, CT 06106, USA
- Haonan Peng
- Department of Electrical and Computer Engineering, University of Washington, 185 Stevens Way, Paul Allen Center, Seattle, WA 98105, USA
- Blake Hannaford
- Department of Electrical and Computer Engineering, University of Washington, 185 Stevens Way, Paul Allen Center, Seattle, WA 98105, USA
7
Camboni D, Massari L, Chiurazzi M, Calio R, Alcaide JO, D'Abbraccio J, Mazomenos E, Stoyanov D, Menciassi A, Carrozza MC, Dario P, Oddo CM, Ciuti G. Endoscopic Tactile Capsule for Non-Polypoid Colorectal Tumour Detection. IEEE Trans Med Robot Bionics 2021; 3:64-73. [DOI: 10.1109/tmrb.2020.3037255]
8
Ma L, Wang J, Kiyomatsu H, Tsukihara H, Sakuma I, Kobayashi E. Surgical navigation system for laparoscopic lateral pelvic lymph node dissection in rectal cancer surgery using laparoscopic-vision-tracked ultrasonic imaging. Surg Endosc 2020; 35:6556-6567. [PMID: 33185764] [DOI: 10.1007/s00464-020-08153-8]
Abstract
BACKGROUND Laparoscopic lateral pelvic lymph node dissection (LPLND) in rectal cancer surgery requires considerable skill because the pelvic arteries, which need to be located to guide the dissection, are covered by other tissues and cannot be observed in laparoscopic views. Therefore, surgeons need to localize the pelvic arteries accurately before dissection to prevent injury to these arteries. METHODS This report proposes a surgical navigation system to facilitate artery localization in laparoscopic LPLND by combining ultrasonic imaging and laparoscopy. Specifically, free-hand laparoscopic ultrasound (LUS) is employed to capture the arteries intraoperatively, and a laparoscopic vision-based tracking system is utilized to track the LUS probe. To extract the artery contours from the two-dimensional ultrasound image sequences efficiently, an artery extraction framework based on local phase-based snakes was developed. After reconstructing the three-dimensional intraoperative artery model from ultrasound images, a high-resolution artery model segmented from preoperative computed tomography (CT) images was rigidly registered to the intraoperative artery model and overlaid onto the laparoscopic view to guide laparoscopic LPLND. RESULTS Experiments were conducted to evaluate the performance of the vision-based tracking system, and its average reconstruction error was found to be 2.4 mm. The proposed navigation system was then quantitatively evaluated on an artery phantom; the reconstruction time and average navigation error were 8 min and 2.3 mm, respectively. A navigation system was also successfully constructed to localize the pelvic arteries in laparoscopic and open surgeries of a swine, demonstrating the feasibility of the proposed system in vivo. The construction times in the laparoscopic and open surgeries were 14 and 12 min, respectively. CONCLUSIONS The experimental results showed that the proposed navigation system can guide laparoscopic LPLND and requires a significantly shorter setup time than state-of-the-art navigation systems.
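The rigid registration step that aligns the preoperative CT artery model to the intraoperative ultrasound reconstruction can be sketched with the standard SVD-based least-squares solution shown below; point correspondences are assumed known here (in practice an ICP-style loop would establish them), and the point sets are fabricated.

```python
import numpy as np

def rigid_register(src, dst):
    # Least-squares rigid transform (Kabsch/Umeyama without scale) mapping
    # src points onto dst points, given one-to-one correspondences.
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    Rm = Vt.T @ D @ U.T
    t = mu_d - Rm @ mu_s
    return Rm, t

rng = np.random.default_rng(2)
ct_model = rng.normal(0, 20.0, (50, 3))       # hypothetical CT artery points (mm)
theta = np.deg2rad(12)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
us_model = ct_model @ R_true.T + np.array([5.0, -2.0, 8.0]) \
           + rng.normal(0, 0.3, (50, 3))      # simulated ultrasound reconstruction
Rm, t = rigid_register(ct_model, us_model)
residual = np.linalg.norm(ct_model @ Rm.T + t - us_model, axis=1)
print(f"mean registration residual: {residual.mean():.2f} mm")
```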
Affiliation(s)
- Lei Ma
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Junchen Wang
- School of Mechanical Engineering, Beihang University, Beijing, China
- Ichiro Sakuma
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Etsuko Kobayashi
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
9
Elsayed M, Kadom N, Ghobadi C, Strauss B, Al Dandan O, Aggarwal A, Anzai Y, Griffith B, Lazarow F, Straus CM, Safdar NM. Virtual and augmented reality: potential applications in radiology. Acta Radiol 2020; 61:1258-1265. [PMID: 31928346] [DOI: 10.1177/0284185119897362]
Abstract
The modern-day radiologist must be adept at image interpretation, and the one who most successfully leverages new technologies may provide the highest value to patients, clinicians, and trainees. Applications of virtual reality (VR) and augmented reality (AR) have the potential to revolutionize how imaging information is applied in clinical practice and how radiologists practice. This review provides an overview of VR and AR, highlights current applications and future developments, and discusses the limitations hindering adoption.
Affiliation(s)
- Mohammad Elsayed
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- Nadja Kadom
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- Comeron Ghobadi
- Department of Radiology, The University of Chicago Pritzker School of Medicine, IL, USA
- Benjamin Strauss
- Department of Radiology, The University of Chicago Pritzker School of Medicine, IL, USA
- Omran Al Dandan
- Department of Radiology, Imam Abdulrahman Bin Faisal University College of Medicine, Dammam, Eastern Province, Saudi Arabia
- Abhimanyu Aggarwal
- Department of Radiology, Eastern Virginia Medical School, Norfolk, VA, USA
- Yoshimi Anzai
- Department of Radiology and Imaging Sciences, University of Utah School of Medicine, Salt Lake City, Utah, USA
- Brent Griffith
- Department of Radiology, Henry Ford Health System, Detroit, MI, USA
- Frances Lazarow
- Department of Radiology, Eastern Virginia Medical School, Norfolk, VA, USA
- Christopher M Straus
- Department of Radiology, The University of Chicago Pritzker School of Medicine, IL, USA
- Nabile M Safdar
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
10
Review of surgical robotic systems for keyhole and endoscopic procedures: state of the art and perspectives. Front Med 2020; 14:382-403. [DOI: 10.1007/s11684-020-0781-x]
11
Yu L, Wang P, Yu X, Yan Y, Xia Y. A Holistically-Nested U-Net: Surgical Instrument Segmentation Based on Convolutional Neural Network. J Digit Imaging 2020; 33:341-347. [PMID: 31595347] [PMCID: PMC7165208] [DOI: 10.1007/s10278-019-00277-1]
Abstract
Surgical instrument segmentation is an essential task in the domain of computer-assisted surgical systems and is critical for increasing the context-awareness of surgeons during an operation. We propose a new model based on the U-Net architecture for surgical instrument segmentation, which aggregates multi-scale feature maps and has cascaded dilated convolution layers. The model adopts dense upsampling convolution instead of deconvolution for upsampling. We set a side loss function on each side-output layer, so that the overall loss function combines the output loss with all side losses to supervise the training of each layer. To validate our model, we compare it with the established U-Net architecture on a dataset of laparoscopy images from multiple surgical operations. Experimental results demonstrate that our model achieves good performance.
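A minimal PyTorch sketch of the supervision scheme described above, one loss per side-output layer plus a loss on the final output, is given below; the tensor shapes, equal side weights, and choice of binary cross-entropy are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(side_outputs, final_output, target, side_weights=None):
    # Sum of a main segmentation loss and one loss per side-output layer;
    # side outputs are upsampled to the target resolution before comparison.
    if side_weights is None:
        side_weights = [1.0] * len(side_outputs)
    loss = F.binary_cross_entropy_with_logits(final_output, target)
    for w, side in zip(side_weights, side_outputs):
        side = F.interpolate(side, size=target.shape[-2:], mode="bilinear",
                             align_corners=False)
        loss = loss + w * F.binary_cross_entropy_with_logits(side, target)
    return loss

# Toy tensors standing in for decoder side outputs at three resolutions.
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
sides = [torch.randn(2, 1, s, s) for s in (16, 32, 64)]
final = torch.randn(2, 1, 64, 64)
print(deep_supervision_loss(sides, final, target))
```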
Affiliation(s)
- Lingtao Yu
- College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China
- Pengcheng Wang
- College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China
- Xiaoyan Yu
- College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China
- Yusheng Yan
- College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China
- Yongqiang Xia
- College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China
12
Liu X, Kane TD, Shekhar R. GPS Laparoscopic Ultrasound: Embedding an Electromagnetic Sensor in a Laparoscopic Ultrasound Transducer. Ultrasound Med Biol 2019; 45:989-997. [PMID: 30709691] [DOI: 10.1016/j.ultrasmedbio.2018.11.014]
Abstract
Tracking the location and orientation of a laparoscopic ultrasound (LUS) transducer is a prerequisite in many surgical visualization and navigation applications. Electromagnetic (EM) tracking is a preferred method to track an LUS transducer with an articulating imaging tip. The conventional approach to integrating EM tracking with LUS is to attach an EM sensor on the outer surface of the imaging tip (external setup), which is not ideal for routine clinical use. In this work, we embedded an EM sensor inside a standard LUS transducer. We found that ultrasound image quality and the four-way articulation function of the transducer were not affected by this sensor integration. Furthermore, we found that the tracking accuracy of our integrated transducer was comparable to that of the external setup. An animal study conducted using the developed transducer suggests that an internally embedded EM sensor is a clinically more viable approach, and may be the future of tracking an articulating LUS transducer.
Affiliation(s)
- Xinyang Liu
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Medical Center, Washington, DC, USA
- Timothy D Kane
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Medical Center, Washington, DC, USA
- Raj Shekhar
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Medical Center, Washington, DC, USA
13
Kocev B, Hahn HK, Linsen L, Wells WM, Kikinis R. Uncertainty-aware asynchronous scattered motion interpolation using Gaussian process regression. Comput Med Imaging Graph 2019; 72:1-12. [PMID: 30654093] [PMCID: PMC6433137] [DOI: 10.1016/j.compmedimag.2018.12.001]
Abstract
We address the problem of interpolating randomly non-uniformly spatiotemporally scattered uncertain motion measurements, which arises in the context of soft tissue motion estimation. Soft tissue motion estimation is of great interest in the field of image-guided soft-tissue intervention and surgery navigation, because it enables the registration of pre-interventional/pre-operative navigation information on deformable soft-tissue organs. To formally define the measurements as spatiotemporally scattered motion signal samples, we propose a novel motion field representation. To perform the interpolation of the motion measurements in an uncertainty-aware, optimal, unbiased fashion, we devise a novel Gaussian process (GP) regression model with a non-constant-mean prior and an anisotropic covariance function, and show through an extensive evaluation that it outperforms the state-of-the-art GP models that have been deployed previously for similar tasks. The employment of GP regression enables the quantification of uncertainty in the interpolation result, allowing the amount of uncertainty present in the registered navigation information to be conveyed to the surgeon or intervention specialist whose decisions it governs.
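The flavour of GP model proposed here, a non-constant prior mean combined with an anisotropic covariance, can be sketched compactly as below; the linear mean, squared-exponential kernel, per-axis length scales, and toy motion samples are assumptions for illustration, not the paper's exact prior.

```python
import numpy as np

def kernel(A, B, lengthscales, var=1.0):
    # Anisotropic squared-exponential covariance: one length scale per input axis.
    d = (A[:, None, :] - B[None, :, :]) / lengthscales
    return var * np.exp(-0.5 * np.sum(d ** 2, axis=2))

def gp_posterior(X, y, Xq, lengthscales, noise=1e-2):
    # Linear (non-constant) prior mean fitted by least squares; GP on the residual.
    Phi = np.c_[X, np.ones(len(X))]
    beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    mean = lambda Z: np.c_[Z, np.ones(len(Z))] @ beta
    K = kernel(X, X, lengthscales) + noise * np.eye(len(X))
    Ks = kernel(Xq, X, lengthscales)
    mu = mean(Xq) + Ks @ np.linalg.solve(K, y - mean(X))
    var = kernel(Xq, Xq, lengthscales).diagonal() - np.einsum(
        "ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    var = np.maximum(var, 0.0)   # guard against tiny negatives from round-off
    return mu, var               # var quantifies interpolation uncertainty

rng = np.random.default_rng(3)
X = rng.uniform(0, 10, (30, 2))                                 # scattered samples
y = 0.3 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.05, 30)   # one motion component
Xq = rng.uniform(0, 10, (5, 2))                                 # query locations
mu, var = gp_posterior(X, y, Xq, lengthscales=np.array([3.0, 1.0]))
print(np.round(mu, 2), np.round(np.sqrt(var), 3))
```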
Affiliation(s)
- Bojan Kocev
- Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany; Fraunhofer Institute for Medical Image Computing MEVIS, Bremen, Germany; Department of Computer Science and Electrical Engineering, Jacobs University Bremen, Bremen, Germany
- Horst Karl Hahn
- Fraunhofer Institute for Medical Image Computing MEVIS, Bremen, Germany; Department of Computer Science and Electrical Engineering, Jacobs University Bremen, Bremen, Germany
- Lars Linsen
- Institute of Computer Science, Westfälische Wilhelms-Universität Münster, Germany
- William M Wells
- Department of Radiology, Harvard Medical School and Brigham and Women's Hospital, Boston, MA 02115, USA
- Ron Kikinis
- Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany; Fraunhofer Institute for Medical Image Computing MEVIS, Bremen, Germany; Department of Radiology, Harvard Medical School and Brigham and Women's Hospital, Boston, MA 02115, USA
14
Kenngott HG, Preukschas AA, Wagner M, Nickel F, Müller M, Bellemann N, Stock C, Fangerau M, Radeleff B, Kauczor HU, Meinzer HP, Maier-Hein L, Müller-Stich BP. Mobile, real-time, and point-of-care augmented reality is robust, accurate, and feasible: a prospective pilot study. Surg Endosc 2018; 32:2958-2967. [PMID: 29602988] [DOI: 10.1007/s00464-018-6151-y]
Abstract
BACKGROUND Augmented reality (AR) systems are currently being explored by a broad spectrum of industries, mainly for improving point-of-care access to data and images. Especially in surgery, and especially for timely decisions in emergency cases, fast and comprehensive access to images at the patient bedside is mandatory. Currently, imaging data are accessed at a distance from the patient both in time and space, i.e., at a specific workstation. Mobile technology and 3-dimensional (3D) visualization of radiological imaging data promise to overcome these restrictions by making bedside AR feasible. METHODS In this project, AR was realized in a surgical setting by fusing a 3D representation of structures of interest with live camera images on a tablet computer using marker-based registration. The intent of this study was to focus on a thorough evaluation of AR. Feasibility, robustness, and accuracy were thus evaluated consecutively in a phantom model and a porcine model. Additionally, feasibility was evaluated in one male volunteer. RESULTS In the phantom model (n = 10), AR visualization was feasible in 84% of the visualization space with high accuracy (mean reprojection error ± standard deviation (SD): 2.8 ± 2.7 mm; 95th percentile = 6.7 mm). In a porcine model (n = 5), AR visualization was feasible in 79% with high accuracy (mean reprojection error ± SD: 3.5 ± 3.0 mm; 95th percentile = 9.5 mm). Furthermore, AR was successfully used and proved feasible in a male volunteer. CONCLUSIONS Mobile, real-time, and point-of-care AR for clinical purposes proved feasible, robust, and accurate in the phantom, animal, and single-trial human settings examined in this study. Consequently, AR implemented along similar lines proved robust and accurate enough to be evaluated in clinical trials assessing accuracy and robustness in clinical reality, as well as integration into the clinical workflow. If these further studies prove successful, AR might revolutionize data access at the patient bedside.
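The accuracy metric used in this study, the reprojection error of registered model points in the live camera image, can be computed as in the following sketch; the intrinsics K, pose, and point sets are fabricated, and note that the study reports errors in millimetres rather than pixels.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],      # assumed tablet camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def reproject(points_3d, R, t):
    # Project registered model points into the live camera image.
    cam = points_3d @ R.T + t
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

def mean_reprojection_error(points_3d, observed_uv, R, t):
    return np.linalg.norm(reproject(points_3d, R, t) - observed_uv, axis=1).mean()

rng = np.random.default_rng(4)
pts = rng.uniform([-50, -50, 300], [50, 50, 400], (20, 3))   # mm, hypothetical markers
R, t = np.eye(3), np.zeros(3)                                # registered pose
observed = reproject(pts, R, t) + rng.normal(0, 1.5, (20, 2))  # detection noise (px)
print(f"mean reprojection error: {mean_reprojection_error(pts, observed, R, t):.2f} px")
```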
Affiliation(s)
- Hannes Götz Kenngott
- Department of General, Visceral and Transplantation Surgery, Heidelberg University, Im Neuenheimer Feld 110, 69120, Heidelberg, Germany
- Anas Amin Preukschas
- Department of General, Visceral and Transplantation Surgery, Heidelberg University, Im Neuenheimer Feld 110, 69120, Heidelberg, Germany
- Martin Wagner
- Department of General, Visceral and Transplantation Surgery, Heidelberg University, Im Neuenheimer Feld 110, 69120, Heidelberg, Germany
- Felix Nickel
- Department of General, Visceral and Transplantation Surgery, Heidelberg University, Im Neuenheimer Feld 110, 69120, Heidelberg, Germany
- Michael Müller
- Division of Medical and Biological Informatics, German Cancer Research Center, Heidelberg, Germany
- Nadine Bellemann
- Department of Diagnostic and Interventional Radiology, Heidelberg University, Heidelberg, Germany
- Christian Stock
- Institute for Medical Biometry and Informatics, Heidelberg University, Heidelberg, Germany
- Markus Fangerau
- Department of Diagnostic and Interventional Radiology, Heidelberg University, Heidelberg, Germany
- Boris Radeleff
- Department of Diagnostic and Interventional Radiology, Heidelberg University, Heidelberg, Germany
- Hans-Ulrich Kauczor
- Department of Diagnostic and Interventional Radiology, Heidelberg University, Heidelberg, Germany
- Hans-Peter Meinzer
- Division of Medical and Biological Informatics, German Cancer Research Center, Heidelberg, Germany
- Lena Maier-Hein
- Division of Medical and Biological Informatics, German Cancer Research Center, Heidelberg, Germany
- Beat Peter Müller-Stich
- Department of General, Visceral and Transplantation Surgery, Heidelberg University, Im Neuenheimer Feld 110, 69120, Heidelberg, Germany
15
Bong JH, Song HJ, Oh Y, Park N, Kim H, Park S. Endoscopic navigation system with extended field of view using augmented reality technology. Int J Med Robot 2017; 14. [DOI: 10.1002/rcs.1886]
Affiliation(s)
- Jae Hwan Bong
- Department of Mechanical Engineering, Korea University, Seoul, Korea
- Yoojin Oh
- Department of Mechanical Engineering, Korea University, Seoul, Korea
- Namji Park
- Department of Biomedical Engineering, Columbia University, New York, United States
- Hyungmin Kim
- Center for Bionics, Korea Institute of Science and Technology, Seoul, Korea
- Shinsuk Park
- Department of Mechanical Engineering, Korea University, Seoul, Korea
16
van Oosterom MN, Meershoek P, KleinJan GH, Hendricksen K, Navab N, van de Velde CJH, van der Poel HG, van Leeuwen FWB. Navigation of Fluorescence Cameras during Soft Tissue Surgery-Is it Possible to Use a Single Navigation Setup for Various Open and Laparoscopic Urological Surgery Applications? J Urol 2017; 199:1061-1068. [PMID: 29174485] [DOI: 10.1016/j.juro.2017.09.160]
Abstract
PURPOSE Real-time fluorescence imaging can guide surgeons during tissue resection. Unfortunately, tissue-induced signal attenuation limits the value of this technique to superficial applications. By positioning the fluorescence camera via a dedicated navigation setup, we reasoned that the technology could be made compatible with deeper lesions, increasing its impact on clinical care. Such an impact would benefit from the ability to implement the navigation technology in different surgical settings. For that reason we evaluated whether a single fluorescence camera could be navigated toward targeted lesions during both open and laparoscopic surgery. MATERIALS AND METHODS A fluorescence camera with scopes available for open and laparoscopic procedures was integrated with a navigation platform. Lymph nodes identified on SPECT/CT (single photon emission computerized tomography/computerized tomography) or free-hand single photon emission computerized tomography acted as navigation targets and were displayed as augmented overlays in the fluorescence camera video feed. The accuracy of this setup was evaluated in a phantom study of 4 scans per single photon emission computerized tomography imaging modality. This was followed by 4 first-in-human translations into sentinel lymph node biopsy procedures for penile (open surgery) and prostate (laparoscopic surgery) cancer. RESULTS Overall the phantom studies revealed a tool-target distance accuracy of 2.1 mm for SPECT/CT and 3.2 mm for freehand single photon emission computerized tomography, and an augmented reality registration accuracy of 1.1 and 2.2 mm, respectively. Subsequently, open and laparoscopic navigation efforts were accurate enough to localize the fluorescence signals of the targeted tissues in vivo. CONCLUSIONS The phantom and human studies performed suggest that the single navigation setup is applicable in various open and laparoscopic urological surgery applications. Further evaluation in larger patient groups with a greater variety of malignancies is recommended to strengthen these results.
Affiliation(s)
- Matthias N van Oosterom
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands; Department of Surgery, Leiden University Medical Center, Leiden, The Netherlands
- Philippa Meershoek
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands; Department of Urology, The Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Gijs H KleinJan
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands; Department of Urology, The Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Kees Hendricksen
- Department of Urology, The Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Nassir Navab
- Computer Aided Medical Procedures, Technische Universität München, Institut für Informatik, Garching bei München, Germany; Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, Maryland
- Henk G van der Poel
- Department of Urology, The Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Fijs W B van Leeuwen
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands; Department of Urology, The Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
17
Chu Y, Yang J, Ma S, Ai D, Li W, Song H, Li L, Chen D, Chen L, Wang Y. Registration and fusion quantification of augmented reality based nasal endoscopic surgery. Med Image Anal 2017; 42:241-256. [PMID: 28881251] [DOI: 10.1016/j.media.2017.08.003]
Abstract
This paper quantifies the registration and fusion display errors of augmented reality-based nasal endoscopic surgery (ARNES). We comparatively investigated the spatial calibration process for front-end endoscopy and redefined the accuracy level of a calibrated endoscope by using a calibration tool with improved structural reliability. We also studied how registration accuracy depended on the number and distribution of the deployed fiducial points (FPs) for positioning, together with the measured registration time. A physically integrated ARNES prototype was custom-configured for performance evaluation in skull base tumor resection surgery with an innovative approach of dynamic endoscopic vision expansion. As advised by surgical experts in otolaryngology, we proposed a hierarchical rendering scheme to properly adapt the fused images to the required visual sensation. By constraining the rendered sight to a known depth and radius, the visual focus of the surgeon can be induced only on the anticipated critical anatomies and vessel structures to avoid misguidance. Furthermore, error analysis was conducted to examine the feasibility of hybrid optical tracking based on point clouds, which was proposed in our previous work as an in-surgery registration solution. Measured results indicated that the target registration error of ARNES can be reduced to 0.77 ± 0.07 mm. For initial registration, our results suggest that a trade-off for a new minimal registration time can be reached when a distribution of five FPs is considered. For in-surgery registration, our findings reveal that the intrinsic registration error is a major cause of performance loss. Rigid model and cadaver experiments confirmed that the scenic integration and display fluency of ARNES are smooth, as demonstrated in three clinical trials that confirmed its practicality.
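The paper's finding that registration accuracy depends on the number and distribution of fiducial points can be reproduced in miniature with SVD-based point registration (Arun's method) and a Monte Carlo estimate of target registration error (TRE); the geometry and noise levels below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def register(src, dst):
    # Rigid point-fiducial registration via SVD (Arun's method), src -> dst.
    mu_s, mu_d = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    Rm = Vt.T @ D @ U.T
    return Rm, mu_d - Rm @ mu_s

target = np.array([0.0, 0.0, 120.0])     # hypothetical deep surgical target (mm)
for n_fp in (3, 5, 8):
    tre = []
    for _ in range(200):
        fps = rng.uniform(-60, 60, (n_fp, 3))     # fiducials on the head surface
        noise = rng.normal(0, 0.3, fps.shape)     # fiducial localization noise
        Rm, t = register(fps, fps + noise)
        tre.append(np.linalg.norm(Rm @ target + t - target))
    print(f"{n_fp} fiducials: mean TRE = {np.mean(tre):.2f} mm")
```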
Affiliation(s)
- Yakui Chu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Shaodong Ma
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Wenjie Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Hong Song
- School of Software, Beijing Institute of Technology, Beijing 100081, China
- Liang Li
- Department of Otolaryngology-Head and Neck Surgery, Chinese PLA General Hospital, Beijing 100853, China
- Duanduan Chen
- School of Life Science, Beijing Institute of Technology, Beijing 100081, China
- Lei Chen
- Department of Otolaryngology-Head and Neck Surgery, Chinese PLA General Hospital, Beijing 100853, China
- Yongtian Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
18
The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017; 37:66-90. [DOI: 10.1016/j.media.2017.01.007]
19
Decker RS, Shademan A, Opfermann JD, Leonard S, Kim PCW, Krieger A. Biocompatible Near-Infrared Three-Dimensional Tracking System. IEEE Trans Biomed Eng 2017; 64:549-556. [PMID: 28129145] [PMCID: PMC5419048] [DOI: 10.1109/tbme.2017.2656803]
Abstract
A fundamental challenge in soft-tissue surgery is that target tissue moves and deforms, becomes occluded by blood or other tissue, and is difficult to differentiate from surrounding tissue. We developed small biocompatible near-infrared fluorescent (NIRF) markers with a novel fused plenoptic and NIR camera tracking system, enabling three-dimensional tracking of tools and target tissue while overcoming blood and tissue occlusion in the uncontrolled, rapidly changing surgical environment. In this work, we present the tracking system and marker design and compare tracking accuracies to standard optical tracking methods using robotic experiments. At speeds of 1 mm/s, we observe tracking accuracies of 1.61 mm, degrading only to 1.71 mm when the markers are covered in blood and tissue.
20
Ren J, Green M, Huang X, Abdalbari A. Automatic error correction using adaptive weighting for vessel-based deformable image registration. Biomed Eng Lett 2017; 7:173-181. [PMID: 30603163] [DOI: 10.1007/s13534-017-0020-9]
Abstract
In this paper, we extend our previous work on deformable image registration to inhomogeneous tissues. Inhomogeneous tissues include tissues with embedded tumors, which are common in clinical applications. This is a very challenging task since a registration method that works for homogeneous tissues may not work well for inhomogeneous tissues. The maximum error normally occurs in the regions with tumors and often exceeds the acceptable error threshold. In this paper, we propose a new error correction method with adaptive weighting to reduce the maximum registration error. Our previous fast deformable registration method is used in the inner loop. We have also proposed a new evaluation metric, the average error of the deformation field (AEDF), to evaluate registration accuracy in regions between vessels and bifurcation points. We have validated the proposed method using liver MR images from human subjects. AEDF results show that the proposed method can greatly reduce the maximum registration errors when compared with the previous method without adaptive weighting. The proposed method has the potential to be used in clinical applications to reduce registration errors in regions with tumors.
Affiliation(s)
- Jing Ren
- University of Ontario Institute of Technology, Oshawa, ON, Canada
- Mark Green
- University of Ontario Institute of Technology, Oshawa, ON, Canada
- Xishi Huang
- Istuary Innovation Group, 75 Tiverton Court, Markham, ON, Canada
- Anwar Abdalbari
- University of Ontario Institute of Technology, Oshawa, ON, Canada
21
A pilot study of SPECT/CT-based mixed-reality navigation towards the sentinel node in patients with melanoma or Merkel cell carcinoma of a lower extremity. Nucl Med Commun 2017; 37:812-7. [PMID: 27076206] [DOI: 10.1097/mnm.0000000000000524]
Abstract
OBJECTIVE To explore the feasibility of an intraoperative navigation technology based on preoperatively acquired single photon emission computed tomography combined with computed tomography (SPECT/CT) images during sentinel node (SN) biopsy in patients with melanoma or Merkel cell carcinoma. MATERIALS AND METHODS Patients with a melanoma (n=4) or Merkel cell carcinoma (n=1) of a lower extremity scheduled for wide re-excision of the primary lesion site and SN biopsy were studied. Following a 99mTc-nanocolloid injection and lymphoscintigraphy, SPECT/CT images were acquired with a reference target (ReTp) fixed on the leg or the iliac spine. Intraoperatively, a sterile ReTp was placed at the same site to enable SPECT/CT-based mixed-reality navigation of a gamma ray detection probe also containing a reference target (ReTgp). The accuracy of the navigation procedure was determined in the coronal plane (x, y-axis) by measuring the discrepancy between standard gamma probe-based SN localization and mixed-reality-based navigation to the SN. To determine the depth accuracy (z-axis), the depth estimation provided by the navigation system was compared to the skin surface-to-node distance measured in the computed tomography component of the SPECT/CT images. RESULTS In four of five patients, it was possible to navigate towards the preoperatively defined SN. The average navigational error was 8.0 mm in the sagittal direction and 8.5 mm in the coronal direction. Intraoperative sterile ReTp positioning and tissue movement during surgery exerted a distinct influence on the accuracy of navigation. CONCLUSION Intraoperative navigation during melanoma or Merkel cell carcinoma surgery is feasible and can provide the surgeon with an interactive 3D roadmap towards the SN or SNs in the groin. However, further technical optimization of the modality is required before this technology can become routine practice.
22
Burgmans MC, den Harder JM, Meershoek P, van den Berg NS, Chan SXJM, van Leeuwen FWB, van Erkel AR. Phantom Study Investigating the Accuracy of Manual and Automatic Image Fusion with the GE Logiq E9: Implications for use in Percutaneous Liver Interventions. Cardiovasc Intervent Radiol 2017; 40:914-923. [PMID: 28204959] [PMCID: PMC5409927] [DOI: 10.1007/s00270-017-1607-3]
Abstract
Purpose To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. Materials and Methods CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Results Mean displacements for a superficial and deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. Conclusion The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.
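The RMSD criterion mentioned in the conclusion is simply the root-mean-square distance between matched reference points after co-registration, as in the sketch below (with fabricated coordinates); the study's point is that a low RMSD over the reference points does not guarantee a low residual displacement at the lesion itself.

```python
import numpy as np

def rmsd(A, B):
    # Root-mean-square deviation between matched reference-point sets (n x 3, mm).
    return np.sqrt(np.mean(np.sum((A - B) ** 2, axis=1)))

ct_points = np.array([[10.0, 20.0, 5.0],
                      [40.0, 25.0, 8.0],
                      [22.0, 60.0, 12.0]])          # hypothetical CT reference points
us_points = ct_points + np.random.default_rng(6).normal(0, 1.0, ct_points.shape)
print(f"RMSD = {rmsd(ct_points, us_points):.2f} mm")
```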
Affiliation(s)
- Mark Christiaan Burgmans
- Department of Radiology, Leiden University Medical Centre, Albinusdreef 2, 2300 RC, Leiden, The Netherlands
- J Michiel den Harder
- Department of Radiology, Leiden University Medical Centre, Albinusdreef 2, 2300 RC, Leiden, The Netherlands
- Philippa Meershoek
- Department of Radiology, Leiden University Medical Centre, Albinusdreef 2, 2300 RC, Leiden, The Netherlands; Interventional and Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Nynke S van den Berg
- Interventional and Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Shaun Xavier Ju Min Chan
- Department of Interventional Radiology, Singapore General Hospital, Outram Road, Singapore, 169608, Singapore
- Fijs W B van Leeuwen
- Interventional and Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Arian R van Erkel
- Department of Radiology, Leiden University Medical Centre, Albinusdreef 2, 2300 RC, Leiden, The Netherlands
23
Atalay HA, Ülker V, Alkan İ, Canat HL, Özkuvancı Ü, Altunrende F. Impact of Three-Dimensional Printed Pelvicaliceal System Models on Residents' Understanding of Pelvicaliceal System Anatomy Before Percutaneous Nephrolithotripsy Surgery: A Pilot Study. J Endourol 2016; 30:1132-1137. [DOI: 10.1089/end.2016.0307]
Affiliation(s)
- Hasan Anıl Atalay
- Department of Urology, Okmeydanı Training and Research Hospital, Sisli-Istanbul, Turkey
- Volkan Ülker
- Department of Urology, Okmeydanı Training and Research Hospital, Sisli-Istanbul, Turkey
- İlter Alkan
- Department of Urology, Okmeydanı Training and Research Hospital, Sisli-Istanbul, Turkey
- Halil Lütfi Canat
- Department of Urology, Okmeydanı Training and Research Hospital, Sisli-Istanbul, Turkey
- Ünsal Özkuvancı
- Department of Urology, Istanbul Medical School, Çapa-Istanbul, Turkey
- Fatih Altunrende
- Department of Urology, Okmeydanı Training and Research Hospital, Sisli-Istanbul, Turkey
Collapse
|
24
|
Bouget D, Allan M, Stoyanov D, Jannin P. Vision-based and marker-less surgical tool detection and tracking: a review of the literature. Med Image Anal 2016; 35:633-654. [PMID: 27744253 DOI: 10.1016/j.media.2016.09.003] [Citation(s) in RCA: 110] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2016] [Revised: 06/26/2016] [Accepted: 09/05/2016] [Indexed: 11/16/2022]
Abstract
In recent years, tremendous progress has been made in surgical practice, for example with Minimally Invasive Surgery (MIS). To overcome the challenges arising from remote eye-to-hand manipulation, robotic and computer-assisted systems have been developed. Real-time knowledge of the pose of surgical tools with respect to the surgical camera and the underlying anatomy is a key ingredient of such systems. In this paper, we present a review of the literature on vision-based, marker-less surgical tool detection. The paper makes three primary contributions: (1) identification and analysis of the data-sets used for developing and testing detection algorithms, (2) an in-depth comparison of surgical tool detection methods, from the feature extraction process to the model learning strategy, highlighting existing shortcomings, and (3) an analysis of the validation techniques employed to obtain detection performance results and to compare surgical tool detectors. The papers included in the review were selected through PubMed and Google Scholar searches using the keywords "surgical tool detection", "surgical tool tracking", "surgical instrument detection" and "surgical instrument tracking", limiting results to the years 2000-2015. Our study shows that, despite significant progress over the years, the lack of established surgical tool data-sets and of a reference format for performance assessment and method ranking is preventing faster improvement.
Collapse
Affiliation(s)
- David Bouget
- Medicis team, INSERM U1099, Université de Rennes 1 LTSI, 35000 Rennes, France.
| | - Max Allan
- Center for Medical Image Computing. University College London, WC1E 6BT London, United Kingdom.
| | - Danail Stoyanov
- Center for Medical Image Computing. University College London, WC1E 6BT London, United Kingdom.
| | - Pierre Jannin
- Medicis team, INSERM U1099, Université de Rennes 1 LTSI, 35000 Rennes, France.
| |
Collapse
|
25
|
Affiliation(s)
- Jens Rassweiler
- Department of Urology; SLK Kliniken Heilbronn; University of Heidelberg; Heidelberg Germany
| |
Collapse
|
26
|
van Oosterom MN, Engelen MA, van den Berg NS, KleinJan GH, van der Poel HG, Wendler T, van de Velde CJH, Navab N, van Leeuwen FWB. Navigation of a robot-integrated fluorescence laparoscope in preoperative SPECT/CT and intraoperative freehand SPECT imaging data: a phantom study. J Biomed Opt 2016; 21:86008. [PMID: 27548770 DOI: 10.1117/1.jbo.21.8.086008] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/23/2016] [Accepted: 07/25/2016] [Indexed: 06/06/2023]
Abstract
Robot-assisted laparoscopic surgery is becoming an established technique for prostatectomy and is increasingly being explored for other types of cancer. Linking intraoperative imaging techniques, such as fluorescence guidance, with the three-dimensional insights provided by preoperative imaging remains a challenge. Navigation technologies may provide a solution, especially when directly linked to both the robotic setup and the fluorescence laparoscope. We evaluated the feasibility of such a setup. Preoperative single-photon emission computed tomography/X-ray computed tomography (SPECT/CT) or intraoperative freehand SPECT (fhSPECT) scans were used to navigate an optically tracked robot-integrated fluorescence laparoscope via an augmented reality overlay in the laparoscopic video feed. The navigation accuracy was evaluated in soft tissue phantoms, followed by studies in a human-like torso phantom. Navigation accuracies found for SPECT/CT-based navigation were 2.25 mm (coronal) and 2.08 mm (sagittal). For fhSPECT-based navigation, these were 1.92 mm (coronal) and 2.83 mm (sagittal). All errors remained below the 1-cm detection limit for fluorescence imaging, allowing refinement of the navigation process using fluorescence findings. The phantom experiments performed suggest that SPECT-based navigation of the robot-integrated fluorescence laparoscope is feasible and may aid fluorescence-guided surgery procedures.
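The augmented reality overlay described here ultimately reduces to projecting a tracked 3D target into the laparoscopic video. A minimal sketch of that projection step, assuming a calibrated pinhole camera and a tracker-reported camera pose; the intrinsics and coordinates below are invented for illustration:

import numpy as np

# Illustrative pinhole intrinsics for the laparoscope (fx, fy, cx, cy in pixels).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(point_world, T_cam_world):
    # Map a 3D point (tracker/world frame, mm) to pixel coordinates.
    p_cam = (T_cam_world @ np.append(point_world, 1.0))[:3]   # world -> camera
    uv = K @ (p_cam / p_cam[2])                               # perspective divide
    return uv[:2]

# Laparoscope pose from the optical tracker (world -> camera, 4x4 rigid transform).
T_cam_world = np.eye(4)                      # identity here for brevity

# SPECT/fhSPECT hotspot (e.g., a sentinel node) in the tracker frame, mm.
hotspot = np.array([5.0, -3.0, 120.0])
print("overlay pixel:", np.round(project(hotspot, T_cam_world), 1))

The overlay is then drawn at that pixel in each video frame, with the tracker updating T_cam_world as the laparoscope moves.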
Collapse
Affiliation(s)
- Matthias Nathanaël van Oosterom
- Leiden University Medical Center, Department of Surgery, Albinusdreef 2, Leiden 2333 ZA, The NetherlandsbLeiden University Medical Center, Department of Radiology, Interventional Molecular Imaging Laboratory, Albinusdreef 2, Leiden 2333 ZA, The Netherlands
| | - Myrthe Adriana Engelen
- Leiden University Medical Center, Department of Radiology, Interventional Molecular Imaging Laboratory, Albinusdreef 2, Leiden 2333 ZA, The Netherlands
| | - Nynke Sjoerdtje van den Berg
- Leiden University Medical Center, Department of Radiology, Interventional Molecular Imaging Laboratory, Albinusdreef 2, Leiden 2333 ZA, The NetherlandscThe Netherlands Cancer Institute, Antoni van Leeuwenhoek Hospital, Department of Urology, Plesmanlaan 121, Amsterdam 1066 CX, The Netherlands
| | - Gijs Hendrik KleinJan
- Leiden University Medical Center, Department of Radiology, Interventional Molecular Imaging Laboratory, Albinusdreef 2, Leiden 2333 ZA, The NetherlandscThe Netherlands Cancer Institute, Antoni van Leeuwenhoek Hospital, Department of Urology, Plesmanlaan 121, Amsterdam 1066 CX, The Netherlands
| | - Henk Gerrit van der Poel
- The Netherlands Cancer Institute, Antoni van Leeuwenhoek Hospital, Department of Urology, Plesmanlaan 121, Amsterdam 1066 CX, The Netherlands
| | - Thomas Wendler
- Technische Universität München, Computer Aided Medical Procedures, Institut für Informatik, I16, Boltzmannstr. 3, Garching bei München 85748, GermanyeSurgicEye GmbH, Friedenstraße 18A, München 81671, Germany
| | | | - Nassir Navab
- Technische Universität München, Computer Aided Medical Procedures, Institut für Informatik, I16, Boltzmannstr. 3, Garching bei München 85748, GermanyfJohns Hopkins University, Computer Aided Medical Procedures, 3400 North Charles Street, Hackerman 200, Baltimore, Maryland 21218, United States
| | - Fijs Willem Bernhard van Leeuwen
- Leiden University Medical Center, Department of Radiology, Interventional Molecular Imaging Laboratory, Albinusdreef 2, Leiden 2333 ZA, The NetherlandscThe Netherlands Cancer Institute, Antoni van Leeuwenhoek Hospital, Department of Urology, Plesmanlaan 121, Amsterdam 1066 CX, The Netherlands
| |
Collapse
|
27
|
Robust augmented reality guidance with fluorescent markers in laparoscopic surgery. Int J Comput Assist Radiol Surg 2016; 11:899-907. [DOI: 10.1007/s11548-016-1385-4] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2016] [Accepted: 03/14/2016] [Indexed: 11/25/2022]
|
28
|
Automatic localization of endoscope in intraoperative CT image: A simple approach to augmented reality guidance in laparoscopic surgery. Med Image Anal 2016; 30:130-143. [DOI: 10.1016/j.media.2016.01.008] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2014] [Revised: 04/17/2015] [Accepted: 01/04/2016] [Indexed: 11/23/2022]
|
29
|
Simpfendörfer T, Hatiboglu G, Hadaschik BA, Wild E, Maier-Hein L, Rassweiler MC, Rassweiler J, Hohenfellner M, Teber D. [Navigation in urological surgery: Possibilities and limits of current techniques]. Urologe A 2016; 54:709-15. [PMID: 25572970 DOI: 10.1007/s00120-014-3709-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Surgical navigation describes the concept of processing and presenting preoperative and intraoperative data from different sources in real time, in order to provide surgeons with additional cognitive support during the operation. Imaging methods such as 3D ultrasound, magnetic resonance imaging (MRI) and computed tomography (CT), as well as data from optical, electromagnetic or mechanical tracking systems, are used. The resulting information from the navigation system is presented by visual means, mostly using virtual reality or augmented reality visualization. Different guidance systems have been introduced for various disciplines, most of them operating on rigid structures (bone, brain). Soft-tissue navigation additionally requires motion compensation and deformation detection; marker-based tracking methods are therefore used in several urological applications. However, these systems are often still under development and have not yet entered clinical routine.
Collapse
Affiliation(s)
- T Simpfendörfer
- Urologische Universitätsklinik Heidelberg, Im Neuenheimer Feld 110, 69120, Heidelberg, Deutschland,
| | | | | | | | | | | | | | | | | |
Collapse
|
30
|
Huang X, Ren J, Abdalbari A, Green M. Vessel-based fast deformable registration with minimal strain energy. Biomed Eng Lett 2016. [DOI: 10.1007/s13534-016-0213-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022] Open
|
31
|
|
32
|
Nosrati MS, Abugharbieh R, Peyrat JM, Abinahed J, Al-Alao O, Al-Ansari A, Hamarneh G. Simultaneous Multi-Structure Segmentation and 3D Nonrigid Pose Estimation in Image-Guided Robotic Surgery. IEEE Trans Med Imaging 2016; 35:1-12. [PMID: 26151933 DOI: 10.1109/tmi.2015.2452907] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
In image-guided robotic surgery, segmenting the endoscopic video stream into meaningful parts provides important contextual information that surgeons can exploit to enhance their perception of the surgical scene. This information provides surgeons with real-time decision-making guidance before initiating critical tasks such as tissue cutting. Segmenting endoscopic video is a challenging problem due to a variety of complications including significant noise attributed to bleeding and smoke from cutting, poor appearance contrast between different tissue types, occluding surgical tools, and limited visibility of the objects' geometries on the projected camera views. In this paper, we propose a multi-modal approach to segmentation where preoperative 3D computed tomography scans and intraoperative stereo-endoscopic video data are jointly analyzed. The idea is to segment multiple poorly visible structures in the stereo/multichannel endoscopic videos by fusing reliable prior knowledge captured from the preoperative 3D scans. More specifically, we estimate and track the pose of the preoperative models in 3D and consider the models' non-rigid deformations to match with corresponding visual cues in multi-channel endoscopic video and segment the objects of interest. Further, contrary to most augmented reality frameworks in endoscopic surgery that assume known camera parameters, an assumption that is often violated during surgery due to non-optimal camera calibration and changes in camera focus/zoom, our method embeds these parameters into the optimization hence correcting the calibration parameters within the segmentation process. We evaluate our technique on synthetic data, ex vivo lamb kidney datasets, and in vivo clinical partial nephrectomy surgery with results demonstrating high accuracy and robustness.
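The point that camera parameters can be embedded into the optimization rather than assumed known can be illustrated with a toy reprojection-error fit in which the focal length is estimated jointly with the pose. This is a simplified stand-in under invented values, not the authors' energy formulation:

import numpy as np
from scipy.optimize import least_squares

def rodrigues(r):
    # Rotation matrix from an axis-angle vector r (Rodrigues' formula).
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def residuals(params, model_pts, observed_uv, cx, cy):
    # Unknowns: pose (axis-angle r, translation t) AND the focal length f.
    r, t, f = params[:3], params[3:6], params[6]
    cam = model_pts @ rodrigues(r).T + t
    uv = np.column_stack((f * cam[:, 0] / cam[:, 2] + cx,
                          f * cam[:, 1] / cam[:, 2] + cy))
    return (uv - observed_uv).ravel()

rng = np.random.default_rng(1)
model = rng.uniform(-20.0, 20.0, (12, 3)) + np.array([0.0, 0.0, 100.0])
r_true = np.array([0.10, -0.05, 0.02])
t_true, f_true = np.array([2.0, -3.0, 10.0]), 900.0
cam = model @ rodrigues(r_true).T + t_true
obs = np.column_stack((f_true * cam[:, 0] / cam[:, 2] + 320.0,
                       f_true * cam[:, 1] / cam[:, 2] + 240.0))

x0 = np.array([0, 0, 0, 0, 0, 0, 800.0])           # rough initial guess
fit = least_squares(residuals, x0, args=(model, obs, 320.0, 240.0))
print("estimated focal length:", round(fit.x[6], 1))  # ~900, recovered with the pose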
Collapse
|
33
|
Application of a computer-aided navigation technique in surgery for recurrent malignant infratemporal fossa tumors. J Craniofac Surg 2015; 26:e126-32. [PMID: 25710743 DOI: 10.1097/scs.0000000000001350] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
BACKGROUND Because of the complexity of the local anatomy, tumors in the infratemporal fossa present a great challenge to oral and maxillofacial surgeons. Recurrent malignant tumors in this area are particularly difficult and precarious to resect because scars from previous operations may dislocate some important structures. METHODS From August 2010 to December 2013, all recurrent cases of malignant infratemporal fossa tumors at Peking University Stomatological Hospital were enrolled in this study. The patients were divided into 2 groups, the navigation group and the nonnavigation group, which received different management. The following factors were evaluated: operation time, bleeding volume, tumor size, surgical approach and complications, follow-up survey, and outcomes. In addition, survival analyses were performed for all patients. RESULTS In total, 42 patients were investigated. The mean operation time for the navigation group was not significantly longer than that of the nonnavigation group (283.64 versus 252.10 min, respectively; P = 0.393); the groups' mean intraoperative bleeding volumes were similar (536.36 versus 503.87 mL, respectively; P = 0.814). The surgical approach was determined and categorized as an inferior approach (transmandibular approach, with or without splitting of the mandible), anterior approach (transmaxillary approach), lateral approach (subtemporal-preauricular approach), or combined approach. The inferior approach was most frequently used in both groups (ie, 63.6% for the navigation group and 80.6% for the nonnavigation group). The tumors were completely resected in 4 patients from the navigation group and 24 patients from the nonnavigation group. Regarding complications in the navigation and nonnavigation groups, the incidence was not significantly different (27.2% versus 41.9%, respectively; P = 0.485). The 3-year survival for patients in the navigation group was 71.6% compared with 52.9% in the nonnavigation group, with no significant difference. In the survival analysis, no significant prognostic factor was identified. CONCLUSIONS A computer-aided navigation technique has been introduced for the resection of infratemporal fossa tumors and was successfully applied to the resection of recurrent malignant tumors. This new technique alone does not determine the outcome of patients with recurrent malignant infratemporal fossa tumors. Although some improvements are necessary, visual navigation during surgery could increase the accuracy and safety of the operations and enhance surgeon confidence.
Collapse
|
34
|
Sentinel node approach in prostate cancer. Rev Esp Med Nucl Imagen Mol 2015. [DOI: 10.1016/j.remnie.2015.10.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|
35
|
Ronaghi Z, Duffy EB, Kwartowitz DM. Toward real-time remote processing of laparoscopic video. J Med Imaging (Bellingham) 2015; 2:045002. [PMID: 26668817 PMCID: PMC4676794 DOI: 10.1117/1.jmi.2.4.045002] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2015] [Accepted: 11/03/2015] [Indexed: 11/14/2022] Open
Abstract
Laparoscopic surgery is a minimally invasive surgical technique where surgeons insert a small video camera into the patient's body to visualize internal organs and use small tools to perform surgical procedures. However, the benefit of small incisions comes with the drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivery of therapy. Image-guided surgery uses these images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, California). The video streams generate approximately 360 MB of data per second, demonstrating a trend toward increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps) rate, each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second. The ability to acquire, process, and visualize data in real time is essential for the performance of complex tasks as well as for minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will lead to real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We have run image processing algorithms on a high-definition head phantom video (1920 × 1080 pixels) and transferred the video using a message-passing interface. The total transfer time is around 53 ms per frame, or 19 fps. We will optimize and parallelize these algorithms to reduce the total time to 30 ms.
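The data-rate arithmetic in the abstract can be reproduced directly; the sketch below assumes a stereo pair of 24-bit 1920 × 1080 frames and binary megabytes (an assumption about the encoding), which matches the quoted 11.9 MB per frame and roughly 360 MB/s:

# Back-of-envelope budget for streaming the stereo HD feed to a remote cluster.
width, height = 1920, 1080
bytes_per_pixel, views = 3, 2                  # 24-bit RGB, stereo pair (assumed)
frame_mb = width * height * bytes_per_pixel * views / 2**20   # binary megabytes
fps = 30
print(f"frame: {frame_mb:.1f} MB, stream: {frame_mb * fps:.0f} MB/s")  # ~11.9, ~356

deadline_ms = 1000.0 / fps                     # each frame must return in ~33.3 ms
measured_ms = 53.0                             # round trip reported in the abstract
print(f"deadline: {deadline_ms:.1f} ms, achieved: {measured_ms:.0f} ms "
      f"(~{1000.0 / measured_ms:.0f} fps)")

On these numbers the reported 53 ms round trip sustains about 19 fps, hence the authors' stated target of cutting the total time to 30 ms to reach the 30 fps deadline.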
Collapse
Affiliation(s)
- Zahra Ronaghi
- Clemson University, Department of Bioengineering, 301 Rhodes Research Center, Clemson, South Carolina, 29634-0905, United States
| | - Edward B. Duffy
- Clemson University, Clemson Computing and Information Technology, Barre Hall, 120 McGinty Court, Clemson, South Carolina 29634, United States
| | - David M. Kwartowitz
- Clemson University, Department of Bioengineering, 301 Rhodes Research Center, Clemson, South Carolina, 29634-0905, United States
| |
Collapse
|
36
|
Vidal-Sicart S, Valdés Olmos RA. Sentinel node approach in prostate cancer. Rev Esp Med Nucl Imagen Mol 2015; 34:358-71. [PMID: 26391573 DOI: 10.1016/j.remn.2015.07.007] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2015] [Accepted: 07/19/2015] [Indexed: 11/17/2022]
Abstract
In general terms, one of the main objectives of sentinel lymph node (SLN) biopsy is to identify the 20-25% of patients with occult regional metastatic involvement. The technique reduces the morbidity associated with lymphadenectomy and increases the identification rate of occult lymphatic metastases by offering the pathologist the lymph nodes with the highest probability of containing metastatic cells. Pre-surgical lymphoscintigraphy is considered a "road map" to guide the surgeon towards the sentinel nodes and to reveal unpredictable lymphatic drainage. In prostate cancer this aspect is essential because lymphatic drainage in the pelvis is multidirectional. In this context the inclusion of SPECT/CT should be mandatory in order to improve the SLN detection rate, to clarify the location of SLNs that are difficult to interpret on planar images, to achieve a better definition of SLNs close to the injection site, and to provide anatomical landmarks that can be recognized during the operation. Conventional and laparoscopic hand-held gamma probes allow the SLN technique to be applied in any kind of surgery. The introduction and combination of new tracers and devices, together with the use of intraoperative imaging, continue to refine the technique. These aspects have become vitally important with the recent incorporation of robot-assisted procedures for SLN biopsy. In spite of these advances, various aspects of SLN biopsy in prostate cancer patients still need to be discussed, and its clinical application is therefore not yet widespread.
Collapse
Affiliation(s)
- S Vidal-Sicart
- Nuclear Medicine Department, Hospital Clínic Barcelona, Barcelona, Spain.
| | - R A Valdés Olmos
- Interventional Molecular Imaging and Nuclear Medicine Section, Leiden University Medical Centre, Leiden, The Netherlands; Nuclear Medicine Department, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
| |
Collapse
|
37
|
Okamoto T, Onda S, Yasuda J, Yanaga K, Suzuki N, Hattori A. Navigation surgery using an augmented reality for pancreatectomy. Dig Surg 2015; 32:117-23. [PMID: 25766302 DOI: 10.1159/000371860] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/30/2014] [Accepted: 12/31/2014] [Indexed: 12/18/2022]
Abstract
AIM The aim of this study was to evaluate the utility of navigation surgery using augmented reality technology (AR-based NS) for pancreatectomy. METHODS Three-dimensional images reconstructed from CT were created by segmentation. The initial registration was performed using an optical location sensor. The reconstructed images were superimposed onto the real organs in the monitor display. Of the 19 patients who had undergone hepatobiliary or pancreatic surgery using AR-based NS, five pancreatectomy cases were used to assess the accuracy, visualization ability, and utility of our system. RESULTS The position of each organ in the surface-rendered image corresponded closely to that of the actual organ. Reference to the display image allowed for safe dissection while preserving the adjacent vessels and organs. The locations of the lesions and the resection line on the targeted organ were overlaid on the operating field. The initial mean registration error was improved to approximately 5 mm by our refinements. However, several problems such as registration accuracy, portability, and cost remain. CONCLUSION AR-based NS contributed to accurate and effective surgical resection in pancreatectomy. The pancreas appears to be a suitable organ for further investigation. This technology promises to improve surgical quality, training, and education.
Collapse
Affiliation(s)
- Tomoyoshi Okamoto
- Department of Surgery, The Jikei University Daisan Hospital, Tokyo, Japan
| | | | | | | | | | | |
Collapse
|
38
|
Vetter C, Lasser T, Okur A, Navab N. 1D-3D registration for intra-operative nuclear imaging in radio-guided surgery. IEEE Trans Med Imaging 2015; 34:608-617. [PMID: 25343756 DOI: 10.1109/tmi.2014.2363551] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
3D functional nuclear imaging modalities such as SPECT or PET provide valuable information, as small structures can be marked with radioactive tracers to be localized before surgery. This positional information is valuable during surgery as well, for example when locating potentially cancerous lymph nodes in the case of breast cancer. However, the volumetric information provided by pre-operative SPECT scans loses validity quickly due to posture changes and manipulation of the soft tissue during surgery. During the intervention, the surgeon has to rely on the acoustic feedback provided by handheld gamma-detectors in order to localize the marked structures. In this paper, we present a method that allows updating the pre-operative image with a very limited number of tracked readings. A previously acquired 3D functional volume serves as prior knowledge, and a limited number of new 1D detector readings is used to update it. This update is performed by a 1D-3D registration algorithm that registers the volume to the detector readings. This enables the rapid update of the visual guidance provided to the surgeon during radio-guided surgery without slowing down the surgical workflow. We evaluate the performance of this approach using Monte-Carlo simulations, phantom experiments and patient data, resulting in a positional error of less than 8 mm, which is acceptable for surgery. The 1D-3D registration is also compared to a volumetric reconstruction using the tracked detector measurements without taking prior information into account, and achieves comparable accuracy with significantly fewer measurements.
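A toy version of the core idea, reducing the update to a rigid translation of the prior activity distribution and using an invented Gaussian-profile probe model in place of the authors' forward model; all geometry and values are illustrative:

import numpy as np
from scipy.optimize import minimize

def predicted_counts(sources, activity, probe_pos, probe_dir, sigma=10.0):
    # Toy forward model: each reading integrates activity near the probe axis,
    # weighted by a Gaussian on the perpendicular distance (assumed profile).
    counts = []
    for p, d in zip(probe_pos, probe_dir):
        v = sources - p
        along = v @ d                                  # distance along probe axis
        perp = np.linalg.norm(v - np.outer(along, d), axis=1)
        w = np.exp(-perp**2 / (2.0 * sigma**2)) * (along > 0)
        counts.append(float(np.sum(activity * w)))
    return np.array(counts)

rng = np.random.default_rng(2)
sources = rng.uniform(-30.0, 30.0, (50, 3))            # point-sampled prior SPECT volume
activity = rng.uniform(0.5, 2.0, 50)

true_shift = np.array([6.0, -4.0, 3.0])                # posture change since the scan
probe_pos = rng.uniform(-40.0, 40.0, (15, 3)) + np.array([0.0, 0.0, -80.0])
probe_dir = np.tile(np.array([0.0, 0.0, 1.0]), (15, 1))
measured = predicted_counts(sources + true_shift, activity, probe_pos, probe_dir)

def cost(shift):
    # 1D-3D registration reduced here to a translation update of the prior.
    pred = predicted_counts(sources + shift, activity, probe_pos, probe_dir)
    return float(np.sum((pred - measured) ** 2))

fit = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
print("recovered shift (mm):", np.round(fit.x, 1))     # should approach true_shift

Only a handful of tracked readings constrain the update, which is the paper's point: the prior volume supplies the missing information that a from-scratch reconstruction would need many more measurements to recover.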
Collapse
|
39
|
Detection and modelling of contacts in explicit finite-element simulation of soft tissue biomechanics. Int J Comput Assist Radiol Surg 2015; 10:1873-91. [DOI: 10.1007/s11548-014-1142-5] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2014] [Accepted: 12/16/2014] [Indexed: 10/24/2022]
|
40
|
Brouwer OR, van den Berg NS, Mathéron HM, Wendler T, van der Poel HG, Horenblas S, Valdés Olmos RA, van Leeuwen FW. Feasibility of Intraoperative Navigation to the Sentinel Node in the Groin Using Preoperatively Acquired Single Photon Emission Computerized Tomography Data: Transferring Functional Imaging to the Operating Room. J Urol 2014; 192:1810-6. [DOI: 10.1016/j.juro.2014.03.127] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/31/2014] [Indexed: 11/15/2022]
Affiliation(s)
- Oscar R. Brouwer
- Department of Nuclear Medicine, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
| | - Nynke S. van den Berg
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
| | - Hanna M. Mathéron
- Department of Nuclear Medicine, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
| | - Thomas Wendler
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- SurgicEye GmBH, Munich, Germany
| | - Henk G. van der Poel
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
| | - Simon Horenblas
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
| | - Renato A. Valdés Olmos
- Department of Nuclear Medicine, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
| | - Fijs W.B. van Leeuwen
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
| |
Collapse
|
41
|
Matsuzaki T, Oda M, Kitasaka T, Hayashi Y, Misawa K, Mori K. Automated anatomical labeling of abdominal arteries and hepatic portal system extracted from abdominal CT volumes. Med Image Anal 2014; 20:152-61. [PMID: 25484019 DOI: 10.1016/j.media.2014.11.002] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2014] [Revised: 11/07/2014] [Accepted: 11/07/2014] [Indexed: 11/28/2022]
Abstract
This paper proposes a method for automated anatomical labeling of the abdominal arteries and the hepatic portal system. In abdominal surgeries, understanding the blood vessel structure is critical, since it is very complicated. The input of the proposed method is the blood vessel region extracted from a CT volume. The blood vessel region is expressed as a tree structure by applying a thinning process, and the mapping from the branches of the tree structure to anatomical names is then computed. First, several characteristic anatomical names are assigned by rule-based pre-processing; the branches assigned these names are used as references. The remaining blood vessel names are assigned using a likelihood function trained by a machine-learning technique. Simple rule-based post-processing can then correct several blood vessel names. The output of the proposed method is a tree structure with anatomical names. In an experiment using 50 blood vessel regions manually extracted from abdominal CT volumes, the recall and precision rates were 86.2% and 85.3% for the abdominal arteries, and 86.5% and 79.5% for the hepatic portal system.
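To make the pipeline concrete, the sketch below caricatures it in a few lines: a rule anchors one characteristic branch, and a nearest-centroid rule stands in for the machine-learned likelihood over branch features. All names, features, and values are invented for illustration:

import numpy as np

def train_centroids(training_branches):
    # Per-name feature centroids standing in for the learned likelihood function.
    groups = {}
    for feats, name in training_branches:
        groups.setdefault(name, []).append(feats)
    return {name: np.mean(v, axis=0) for name, v in groups.items()}

def label_tree(branches, centroids):
    labels = {}
    # Rule-based pre-processing: the thickest branch anchors the labeling.
    root = max(branches, key=lambda b: branches[b][0])
    labels[root] = "aorta"
    # Remaining branches: most likely name = nearest feature centroid.
    for b, feats in branches.items():
        if b not in labels:
            labels[b] = min(centroids,
                            key=lambda n: np.linalg.norm(feats - centroids[n]))
    return labels

# Branch features from the thinned tree: radius (mm), length (mm), angle to
# the parent branch (deg). Values are illustrative.
training = [(np.array([4.0, 60.0, 30.0]), "celiac artery"),
            (np.array([3.5, 80.0, 45.0]), "superior mesenteric artery"),
            (np.array([2.0, 40.0, 70.0]), "splenic artery")]
centroids = train_centroids(training)
tree = {"b0": np.array([9.0, 150.0, 0.0]),
        "b1": np.array([3.8, 75.0, 44.0]),
        "b2": np.array([2.1, 38.0, 68.0])}
print(label_tree(tree, centroids))

A rule-based post-processing pass (e.g., enforcing that a name appears only once along a path) would then correct implausible assignments, as the abstract describes.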
Collapse
Affiliation(s)
| | - Masahiro Oda
- Graduate School of Information Science, Nagoya University, Japan
| | - Takayuki Kitasaka
- Faculty of Information Science, Aichi Institute of Technology, Japan
| | | | | | - Kensaku Mori
- Information and Communications, Nagoya University, Japan; Graduate School of Information Science, Nagoya University, Japan
| |
Collapse
|
42
|
Li ZC, Li K, Zhan HL, Chen K, Chen MM, Xie YQ, Wang L. Augmenting interventional ultrasound using statistical shape model for guiding percutaneous nephrolithotomy: Initial evaluation in pigs. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2014.01.059] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
43
|
dos Santos TR, Seitel A, Kilgus T, Suwelack S, Wekerle AL, Kenngott H, Speidel S, Schlemmer HP, Meinzer HP, Heimann T, Maier-Hein L. Pose-independent surface matching for intra-operative soft-tissue marker-less registration. Med Image Anal 2014; 18:1101-14. [DOI: 10.1016/j.media.2014.06.002] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2013] [Revised: 04/10/2014] [Accepted: 06/11/2014] [Indexed: 10/25/2022]
|
44
|
Okamoto T, Onda S, Yanaga K, Suzuki N, Hattori A. Clinical application of navigation surgery using augmented reality in the abdominal field. Surg Today 2014; 45:397-406. [PMID: 24898629 DOI: 10.1007/s00595-014-0946-9] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2013] [Accepted: 01/23/2014] [Indexed: 12/20/2022]
Abstract
This article presents general principles and recent advancements in the clinical application of augmented reality-based navigation surgery (AR-based NS) for abdominal procedures, including a description of our clinical trial and its outcomes; current problems and future prospects are also discussed. The development of AR-based NS in the abdomen has been delayed compared with other fields because of intraoperative organ deformation and the existence of established modalities. Although there are only a few reports on the clinical use of AR-based NS in digestive surgery, sophisticated applications have often been reported in urology. However, the rapidly spreading use of video- or robot-assisted surgery calls for this technology. We have worked to develop a system of AR-based NS for hepatobiliary and pancreatic surgery, and developed a short rigid scope that enables surgeons to obtain a 3D view. We recently focused on pancreatic surgery, because intraoperative organ shifting there is minimal. The position of each organ in the overlaid image corresponded almost exactly to that of the actual organ, with a mean registration error of about 5 mm. The intraoperative information generated by this system provided useful navigation. However, AR-based NS still has several problems to overcome, such as organ deformation, evaluation of utility, portability, and cost.
Collapse
Affiliation(s)
- Tomoyoshi Okamoto
- Department of Surgery, The Jikei University Daisan Hospital, 4-11-1 Izumihoncho, Komae-shi, Tokyo, Japan,
| | | | | | | | | |
Collapse
|
45
|
Vijayan S, Reinertsen I, Hofstad EF, Rethy A, Hernes TAN, Langø T. Liver deformation in an animal model due to pneumoperitoneum assessed by a vessel-based deformable registration. Minim Invasive Ther Allied Technol 2014; 23:279-86. [PMID: 24848136 DOI: 10.3109/13645706.2014.914955] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
PURPOSE Surgical navigation based on preoperative images partly overcomes some of the drawbacks of minimally invasive interventions - reduction of free sight, lack of dexterity and lack of tactile feedback. The usefulness of preoperative images is limited in laparoscopic liver surgery, as the liver shifts due to respiration, induction of pneumoperitoneum and surgical manipulation. In this study, we evaluated the shift and deformation of an animal liver caused by respiration and pneumoperitoneum using intraoperative cone-beam CT. MATERIAL AND METHODS 3D cone-beam CT scans were acquired with arterial contrast. The centerlines of the segmented vessels were extracted from images taken at different respiration and pressure settings. A non-rigid registration method was used to measure the shift and deformation, and the mean Euclidean distance between annotated landmarks was used for evaluation. RESULTS A shift and deformation of 44.6 mm on average was introduced by the combined effect of respiration and pneumoperitoneum. On average, 91% of the deformation caused by respiration and pneumoperitoneum was recovered. CONCLUSION The results can contribute to the use of intraoperative imaging to correct for anatomic shift, so that preoperative data can be used with greater confidence and accuracy during the guidance of laparoscopic liver procedures.
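One plausible reading of the evaluation metrics, assuming the percentage recovered is the relative reduction in mean landmark distance (an assumption; the paper may define the metric differently), with invented landmark sets:

import numpy as np

def mean_landmark_distance(a, b):
    # Mean Euclidean distance between corresponding landmarks (mm).
    return np.linalg.norm(a - b, axis=1).mean()

# Illustrative landmarks (mm): preoperative, intraoperative (after
# pneumoperitoneum and respiration), and after non-rigid registration.
rng = np.random.default_rng(3)
pre = rng.uniform(0.0, 100.0, (20, 3))
intra = pre + np.array([30.0, 25.0, 15.0]) + rng.normal(0.0, 5.0, (20, 3))
registered = intra + 0.91 * (pre - intra)      # toy registration recovering ~91%

before = mean_landmark_distance(pre, intra)
after = mean_landmark_distance(pre, registered)
recovered = 100.0 * (1.0 - after / before)
print(f"shift before: {before:.1f} mm, after: {after:.1f} mm, "
      f"recovered: {recovered:.0f}%")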
Collapse
Affiliation(s)
- Sinara Vijayan
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU) , Trondheim , Norway
| | | | | | | | | | | |
Collapse
|
46
|
Current Perspectives in the Use of Molecular Imaging To Target Surgical Treatments for Genitourinary Cancers. Eur Urol 2014; 65:947-64. [DOI: 10.1016/j.eururo.2013.07.033] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2013] [Accepted: 07/17/2013] [Indexed: 01/17/2023]
|
47
|
A novel 3-dimensional image analysis system for case-specific kidney anatomy and surgical simulation to facilitate clampless partial nephrectomy. Urology 2014; 83:500-6. [PMID: 24468517 DOI: 10.1016/j.urology.2013.09.053] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2013] [Revised: 08/25/2013] [Accepted: 09/04/2013] [Indexed: 12/21/2022]
Abstract
OBJECTIVE To report our initial experience with the novel 3-dimensional (3D) image analysis system Synapse Vincent in clampless partial nephrectomy (PN), describing its advantages with regard to short-term surgical outcomes and its usefulness as an informed consent tool. METHODS Twenty-six patients with renal cell carcinoma underwent clampless PN navigated with the aid of case-specific 3D anatomic video images of the kidney, after surgical simulation using the same video system. Baseline characteristics were reviewed, and short-term surgical outcomes were recorded. Of the 26, 6 had imperative indications, and 22 were treated with a minimally invasive approach. Before tumor excision, the renal hilar vessels were meticulously dissected, and definite tumor feeders were selectively ligated. Before patients consented to PN, the surgical procedure and perioperative risks were explained to each patient using case-specific 3D video images; subsequently, surgeons asked patients whether the 3D images had helped them understand PN more clearly than 2D images would have. RESULTS All operations were successfully completed without clamping, with negative surgical margins. No patients required blood transfusions. During PN, the surgeons confirmed the accuracy of the reconstructed 3D images and surgical simulations in all cases. All patients answered that the 3D images had helped them understand their disease status and surgical risks. CONCLUSION This is the first report on the Synapse Vincent 3D image analysis system for kidney surgery. Its 3D images and surgical simulation helped not only surgeons in their performance of clampless PN but also patients in their understanding of the operation.
Collapse
|
48
|
Simultaneous evaluation of wall motion and blood perfusion of a beating heart using stereoscopic fluorescence camera system. Comput Med Imaging Graph 2014; 38:276-84. [PMID: 24507764 DOI: 10.1016/j.compmedimag.2013.12.012] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2013] [Revised: 12/24/2013] [Accepted: 12/31/2013] [Indexed: 11/22/2022]
Abstract
In this study, we aimed to develop a stereoscopic fluorescence camera system for simultaneous evaluation of wall motion and tissue perfusion using indocyanine green (ICG) fluorescence imaging. The system consists of two high-speed stereo cameras, an excitation lamp, and a computer for image processing. Evaluation experiments demonstrated that the stereoscopic fluorescence camera system successfully performed the simultaneous measurement of wall motion and tissue perfusion based on ICG fluorescence imaging. Our system can be applied to intraoperative evaluation of cardiac status, leading to an improvement in surgical outcomes.
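Wall motion from a stereo camera pair reduces to triangulating tracked fluorescent features in both views. A minimal sketch for a rectified rig, with invented calibration values and pixel coordinates (not the authors' setup):

import numpy as np

f, B, cx, cy = 900.0, 60.0, 640.0, 512.0   # focal (px), baseline (mm), principal point

def triangulate(uL, vL, uR):
    # Depth from disparity for a rectified stereo pair: Z = f * B / d.
    d = uL - uR                             # disparity in pixels (uL > uR assumed)
    Z = f * B / d
    return np.array([(uL - cx) * Z / f, (vL - cy) * Z / f, Z])

# The same fluorescent feature localized in both views at two cardiac phases.
p_diastole = triangulate(700.0, 520.0, 640.0)
p_systole = triangulate(705.0, 522.0, 644.3)
motion = np.linalg.norm(p_systole - p_diastole)
print(f"wall excursion at this point: {motion:.1f} mm")   # ~11 mm for these values

Perfusion would be read out separately as the ICG fluorescence intensity over time at each triangulated point, which is what makes the simultaneous measurement possible with one stereo system.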
Collapse
|
49
|
|
50
|
Surgical planning and manual image fusion based on 3D model facilitate laparoscopic partial nephrectomy for intrarenal tumors. World J Urol 2013; 32:1493-9. [PMID: 24337151 DOI: 10.1007/s00345-013-1222-0] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2013] [Accepted: 11/29/2013] [Indexed: 01/14/2023] Open
Abstract
PURPOSE Construction of a three-dimensional (3D) model of the renal tumor facilitated surgical planning and the imaging guidance of manual image fusion in laparoscopic partial nephrectomy (LPN) for intrarenal tumors. MATERIALS AND METHODS Fifteen patients with intrarenal tumors underwent LPN between January and December 2012. Computed tomography-based reconstruction of the 3D models of the renal tumors was performed using Mimics 12.1 software. Surgical planning was performed through morphometry and multi-angle visual views of the tumor model. A two-step manual image fusion superimposed the 3D model images onto the 2D laparoscopic images, and the fusion was verified by intraoperative ultrasound. Image-guided laparoscopic hilar clamping and tumor excision were then performed. Manual fusion time, patient demographics, surgical details, and postoperative treatment parameters were analyzed. RESULTS The reconstructed 3D tumor models accurately represented the patients' physiological anatomical landmarks, and the surgical planning markers were marked successfully. Manual image fusion was flexible and feasible, with a fusion time of 6 min (5-7 min). All surgeries were completed laparoscopically. The median tumor excision time was 5.4 min (3.5-10 min), and the median warm ischemia time was 25.5 min (16-32 min). Twelve patients (80%) demonstrated renal cell carcinoma on final pathology, and all surgical margins were negative. No tumor recurrence was detected after a median follow-up of 1 year (3-15 months). CONCLUSIONS Surgical planning and two-step manual image fusion based on a 3D model of the renal tumor facilitated image-guided tumor resection with negative margins in LPN for intrarenal tumors. The approach is promising and moves us one step closer to image-guided surgery.
Collapse
|