1. Wang T, Li H, Pu T, Yang L. Microsurgery Robots: Applications, Design, and Development. Sensors (Basel) 2023;23:8503. PMID: 37896597; PMCID: PMC10611418; DOI: 10.3390/s23208503.
Abstract
Microsurgical techniques have been widely utilized in various surgical specialties, such as ophthalmology, neurosurgery, and otolaryngology, which require intricate and precise surgical tool manipulation on a small scale. In microsurgery, operations on delicate vessels or tissues require high standards in surgeons' skills. This exceptionally high skill requirement leads to a steep learning curve and lengthy training before surgeons can perform microsurgical procedures with quality outcomes. The microsurgery robot (MSR), which can improve surgeons' operation skills through various functions, has received extensive research attention in the past three decades. There have been many review papers summarizing the research on MSR for specific surgical specialties. However, an in-depth review of the relevant technologies used in MSR systems is limited in the literature. This review details the technical challenges in microsurgery and systematically summarizes the key technologies in MSR with a developmental perspective, from the basic structural mechanism design, to the perception and human-machine interaction methods, and further to the ability to achieve a certain level of autonomy. By presenting and comparing the methods and technologies in this cutting-edge research, this paper aims to provide readers with a comprehensive understanding of the current state of MSR research and identify potential directions for future development in MSR.
Affiliation(s)
- Tiexin Wang: ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China; School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China
- Haoyu Li: ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Tanhong Pu: ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang: ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China; School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China; Department of Mechanical Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
2. Dehghani S, Sommersperger M, Zhang P, Martin-Gomez A, Busam B, Gehlbach P, Navab N, Nasseri MA, Iordachita I. Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing. IEEE Int Conf Robot Autom (ICRA) 2023;2023:4724-4731. PMID: 38125032; PMCID: PMC10732544; DOI: 10.1109/icra48891.2023.10160372.
Abstract
In the last decade, various robotic platforms have been introduced that could support delicate retinal surgeries. Concurrently, to provide semantic understanding of the surgical area, recent advances have enabled microscope-integrated intraoperative Optical Coherence Tomography (iOCT) with high-resolution 3D imaging at near video rate. The combination of robotics and semantic understanding enables task autonomy in robotic retinal surgery, such as for subretinal injection. This procedure requires precise needle insertion for best treatment outcomes. However, merging robotic systems with iOCT introduces new challenges. These include, but are not limited to, high demands on data processing rates and dynamic registration of these systems during the procedure. In this work, we propose a framework for autonomous robotic navigation for subretinal injection, based on intelligent real-time processing of iOCT volumes. Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target. We also introduce intelligent virtual B-scans, a volume slicing approach for rapid instrument pose estimation, which is enabled by Convolutional Neural Networks (CNNs). Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method. Finally, we discuss identified challenges in this work and suggest potential solutions to further the development of such systems.
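Editor's note: a minimal sketch of the generic "virtual B-scan" idea, i.e., resampling an oblique 2D slice from a dense iOCT volume stored as a NumPy array. The plane parameters, volume size, and sampling spacing are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def virtual_b_scan(volume, center, u_dir, v_dir, size=(256, 256), spacing=1.0):
    """Resample an oblique 2D slice ("virtual B-scan") from a 3D OCT volume.

    volume : (Z, Y, X) ndarray of intensities
    center : 3-vector, slice center in voxel coordinates (z, y, x)
    u_dir, v_dir : orthonormal 3-vectors spanning the slice plane
    """
    h, w = size
    u = np.linspace(-(w - 1) / 2, (w - 1) / 2, w) * spacing
    v = np.linspace(-(h - 1) / 2, (h - 1) / 2, h) * spacing
    vv, uu = np.meshgrid(v, u, indexing="ij")
    # Voxel coordinates of every slice pixel: center + u*u_dir + v*v_dir
    coords = (np.asarray(center, float)[:, None, None]
              + uu[None] * np.asarray(u_dir, float)[:, None, None]
              + vv[None] * np.asarray(v_dir, float)[:, None, None])
    # Trilinear interpolation of the volume at those coordinates
    return map_coordinates(volume, coords.reshape(3, -1), order=1).reshape(h, w)

# Example: a plane through the volume center, oriented to contain a needle axis
vol = np.random.rand(128, 256, 256).astype(np.float32)   # placeholder iOCT volume
b_scan = virtual_b_scan(vol, center=(64, 128, 128),
                        u_dir=(0.0, 0.0, 1.0), v_dir=(1.0, 0.0, 0.0))
```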
Affiliation(s)
- Shervin Dehghani: Department of Computer Science, Technische Universität München, 85748 München, Germany; Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Michael Sommersperger: Department of Computer Science, Technische Universität München, 85748 München, Germany
- Peiyao Zhang: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Alejandro Martin-Gomez: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Benjamin Busam: Department of Computer Science, Technische Universität München, 85748 München, Germany
- Peter Gehlbach: Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD, USA
- Nassir Navab: Computer Aided Medical Procedures & Augmented Reality, Technical University of Munich, 85748 Munich, Germany; Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
- M. Ali Nasseri: Department of Computer Science, Technische Universität München, 85748 München, Germany; Augenklinik und Poliklinik, Klinikum rechts der Isar der Technischen Universität München, 81675 München, Germany
- Iulian Iordachita: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
3. Zhou M, Hennerkes F, Liu J, Jiang Z, Wendler T, Nasseri MA, Iordachita I, Navab N. Theoretical error analysis of spotlight-based instrument localization for retinal surgery. Robotica 2023;41:1536-1549. PMID: 37982126; PMCID: PMC10655674; DOI: 10.1017/s0263574722001862.
Abstract
Retinal surgery is widely considered to be a complicated and challenging task even for specialists. Image-guided robot-assisted intervention is among the novel and promising solutions that may enhance human capabilities therein. In this paper, we demonstrate the possibility of using spotlights for 5D guidance of a microsurgical instrument. The theoretical basis of instrument localization from the projection of a single spotlight is analyzed to deduce the position and orientation of the spotlight source. The use of multiple spotlights is also proposed to explore possible further improvement of the performance bounds. The proposed method is verified within a high-fidelity simulation environment using the 3D creation suite Blender. Experimental results show that the average positioning error is 0.029 mm using a single spotlight and 0.025 mm with three spotlights, with corresponding rotational errors of 0.124 and 0.101, which shows the approach to be promising for instrument localization in retinal surgery.
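Editor's note: a simplified illustration of the kind of projection geometry involved (not the paper's derivation): if a spotlight emits a right circular cone of known half-angle onto a surface roughly perpendicular to its axis, the spot radius encodes the tip-to-surface distance. All numbers below are illustrative assumptions.

```python
import math

def tip_distance_from_spot(spot_radius_mm, cone_half_angle_deg):
    """Distance from the spotlight tip to the surface, assuming an idealized cone
    of light hitting a plane perpendicular to the beam axis."""
    return spot_radius_mm / math.tan(math.radians(cone_half_angle_deg))

# Illustrative numbers only: a 0.5 mm spot radius with a 10-degree half-angle cone
print(tip_distance_from_spot(0.5, 10.0))   # ~2.84 mm
```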
Affiliation(s)
- Mingchuan Zhou: College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Felix Hennerkes: Chair for Computer Aided Medical Procedures and Augmented Reality, Computer Science Department, Technische Universität München, München, Germany
- Jingsong Liu: Chair for Computer Aided Medical Procedures and Augmented Reality, Computer Science Department, Technische Universität München, München, Germany
- Zhongliang Jiang: Chair for Computer Aided Medical Procedures and Augmented Reality, Computer Science Department, Technische Universität München, München, Germany
- Thomas Wendler: Chair for Computer Aided Medical Procedures and Augmented Reality, Computer Science Department, Technische Universität München, München, Germany
- M Ali Nasseri: Augenklinik und Poliklinik, Klinikum rechts der Isar der Technischen Universität München, München, Germany
- Iulian Iordachita: Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Nassir Navab: Chair for Computer Aided Medical Procedures and Augmented Reality, Computer Science Department, Technische Universität München, München, Germany
4. Mach K, Wei S, Kim JW, Martin-Gomez A, Zhang P, Kang JU, Nasseri MA, Gehlbach P, Navab N, Iordachita I. OCT-guided Robotic Subretinal Needle Injections: A Deep Learning-Based Registration Approach. Proc IEEE Int Conf Bioinform Biomed (BIBM) 2022;2022:781-786. PMID: 37396671; PMCID: PMC10312384; DOI: 10.1109/bibm55620.2022.9995143.
Abstract
Subretinal injection (SI) is an ophthalmic surgical procedure that allows for the direct injection of therapeutic substances into the subretinal space to treat vitreoretinal disorders. Although this treatment has grown in popularity, various factors contribute to its difficulty. These include the retina's fragile, nonregenerative tissue, as well as hand tremor and poor visual depth perception. In this context, the usage of robotic devices may reduce hand tremors and facilitate gradual and controlled SI. For the robot to successfully move to the target area, it needs to understand the spatial relationship between the attached needle and the tissue. The development of optical coherence tomography (OCT) imaging has resulted in a substantial advancement in visualizing retinal structures at micron resolution. This paper introduces a novel foundation for an OCT-guided robotic steering framework that enables a surgeon to plan and select targets within the OCT volume, while the robot automatically executes the trajectories necessary to reach the selected targets. Our contribution consists of a novel combination of existing methods, creating an intraoperative OCT-Robot registration pipeline. We combined straightforward affine transformation computations with robot kinematics and a deep neural network-determined tool-tip location in OCT. We evaluate our framework's capability in an open-sky procedure on a cadaveric pig eye and on an aluminum target board. Targeting the subretinal space of the pig eye produced encouraging results with a mean Euclidean error of 23.8 μm.
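Editor's note: a minimal sketch of estimating an affine OCT-to-robot transform from paired points (for example, network-detected tool-tip positions in OCT matched with robot kinematics). The synthetic data and plain least-squares formulation are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares affine transform mapping src points (N,3) onto dst points (N,3).

    Returns A (3x3) and t (3,) such that dst ~= src @ A.T + t.
    Requires at least 4 non-coplanar correspondences.
    """
    src_h = np.hstack([src, np.ones((src.shape[0], 1))])   # homogeneous (N,4)
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)        # (4,3) parameter matrix
    return M[:3].T, M[3]

# Synthetic check: recover a known transform from noisy correspondences
rng = np.random.default_rng(0)
A_true = np.diag([1.01, 0.99, 1.02])
t_true = np.array([5.0, -2.0, 0.5])
pts_oct = rng.uniform(0, 10, (20, 3))                       # tool-tip positions in OCT (mm)
pts_robot = pts_oct @ A_true.T + t_true + rng.normal(0, 0.01, (20, 3))
A, t = fit_affine_3d(pts_oct, pts_robot)
print(np.allclose(A, A_true, atol=0.05), np.allclose(t, t_true, atol=0.1))
```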
Affiliation(s)
- Kristina Mach: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Shuwen Wei: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Ji Woong Kim: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Alejandro Martin-Gomez: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Peiyao Zhang: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Jin U Kang: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- M Ali Nasseri: Augenklinik und Poliklinik, Klinikum rechts der Isar der Technischen Universität München, Munich, Germany
- Peter Gehlbach: Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD, USA
- Nassir Navab: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA; Chair for Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
- Iulian Iordachita: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
5. Iordachita II, de Smet MD, Naus G, Mitsuishi M, Riviere CN. Robotic Assistance for Intraocular Microsurgery: Challenges and Perspectives. Proc IEEE 2022;110:893-908. PMID: 36588782; PMCID: PMC9799958; DOI: 10.1109/jproc.2022.3169466.
Abstract
Intraocular surgery, one of the most challenging disciplines of microsurgery, requires sensory and motor skills at the limits of human physiological capabilities combined with tremendously difficult requirements for accuracy and steadiness. Nowadays, robotics combined with advanced imaging has opened significant new directions in advancing the field of intraocular microsurgery. With safer and more efficient patient treatment as the final goal, and similar to other medical applications, robotics has real potential to fundamentally change microsurgery by combining human strengths with computer- and sensor-based technology in an information-driven environment. Still in its early stages, robotic assistance for intraocular microsurgery has been accepted with caution in the operating room and successfully tested in a limited number of clinical trials. However, owing to its demonstrated capabilities, including hand tremor reduction, haptic feedback, steadiness, enhanced dexterity, and micrometer-scale accuracy, microsurgical robotics has evolved into a very promising trend in advancing retinal surgery. This paper analyzes the advances in robotic retinal microsurgery, its current drawbacks and limitations, as well as possible new directions to expand retinal microsurgery to techniques currently beyond human boundaries or infeasible without robotics.
Affiliation(s)
- Iulian I Iordachita: Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Marc D de Smet: Microinvasive Ocular Surgery Center (MIOS), Lausanne, Switzerland
- Mamoru Mitsuishi: Department of Mechanical Engineering, The University of Tokyo, Japan
6. Dehghani S, Sommersperger M, Yang J, Salehi M, Busam B, Huang K, Gehlbach P, Iordachita I, Navab N, Nasseri MA. ColibriDoc: An Eye-in-Hand Autonomous Trocar Docking System. IEEE Int Conf Robot Autom (ICRA) 2022;2022:7717-7723. PMID: 36128019; PMCID: PMC9484558; DOI: 10.1109/icra46639.2022.9811364.
Abstract
Retinal surgery is a complex medical procedure that requires exceptional expertise and dexterity. For this purpose, several robotic platforms are currently under development to enable or improve the outcome of microsurgical tasks. Since the control of such robots is often designed for navigation inside the eye in proximity to the retina, successful trocar docking and insertion of the instrument into the eye represents an additional cognitive effort and is therefore one of the open challenges in robotic retinal surgery. For this purpose, we present a platform for autonomous trocar docking that combines computer vision and a robotic setup. Inspired by the Cuban Colibri (hummingbird), which aligns its beak to a flower using only vision, we mount a camera onto the end-effector of a robotic system. By estimating the position and pose of the trocar, the robot is able to autonomously align and navigate the instrument towards the Trocar Entry Point (TEP) and finally perform the insertion. Our experiments show that the proposed method is able to accurately estimate the position and pose of the trocar and achieve repeatable autonomous docking. The aim of this work is to reduce the complexity of the robotic setup prior to the surgical task and thereby increase the intuitiveness of integrating the system into the clinical workflow.
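Editor's note: a minimal sketch of recovering the pose of a known trocar geometry from detected image points with OpenCV's PnP solver, as a generic stand-in for vision-based pose estimation. The 3D model points, camera intrinsics, and pixel detections are placeholder assumptions, not ColibriDoc's actual pipeline.

```python
import numpy as np
import cv2

# Hypothetical 3D model of four trocar rim points in the trocar frame (mm)
model_pts = np.array([[ 1.0,  0.0, 0.0],
                      [-1.0,  0.0, 0.0],
                      [ 0.0,  1.0, 0.0],
                      [ 0.0, -1.0, 0.0]], dtype=np.float64)

# Their detections in the eye-in-hand camera image (pixels) -- placeholder values
image_pts = np.array([[652.0, 480.0],
                      [612.0, 482.0],
                      [632.0, 461.0],
                      [633.0, 501.0]], dtype=np.float64)

K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 480.0],
              [  0.0,   0.0,   1.0]])     # assumed pinhole intrinsics
dist = np.zeros(5)                        # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)            # trocar orientation in the camera frame
    print("Trocar position in camera frame (mm):", tvec.ravel())
```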
Affiliation(s)
- Shervin Dehghani: Department of Computer Science, Technische Universität München, 85748 München, Germany
- Michael Sommersperger: Department of Computer Science, Technische Universität München, 85748 München, Germany
- Junjie Yang: Augenklinik und Poliklinik, Klinikum rechts der Isar der Technischen Universität München, 81675 München, Germany
- Mehrdad Salehi: Department of Computer Science, Technische Universität München, 85748 München, Germany
- Benjamin Busam: Department of Computer Science, Technische Universität München, 85748 München, Germany
- Kai Huang: Key Laboratory of Machine Intelligence and Advanced Computing, Sun Yat-sen University, Guangzhou, China
- Peter Gehlbach: Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD, USA
- Iulian Iordachita: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Nassir Navab: Chair for Computer Aided Medical Procedures and Augmented Reality, Technical University of Munich, 85748 Munich, Germany; Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
- M Ali Nasseri: Department of Computer Science, Technische Universität München, 85748 München, Germany; Augenklinik und Poliklinik, Klinikum rechts der Isar der Technischen Universität München, 81675 München, Germany
7. Sarabandi S, Porta JM, Thomas F. Hand-Eye Calibration Made Easy Through a Closed-Form Two-Stage Method. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2022.3146943.
8. Sommersperger M, Martin-Gomez A, Mach K, Gehlbach PL, Ali Nasseri M, Iordachita I, Navab N. Surgical scene generation and adversarial networks for physics-based iOCT synthesis. Biomed Opt Express 2022;13:2414-2430. PMID: 35519277; PMCID: PMC9045909; DOI: 10.1364/boe.454286.
Abstract
The development and integration of intraoperative optical coherence tomography (iOCT) into modern operating rooms has motivated novel procedures directed at improving the outcome of ophthalmic surgeries. Although computer-assisted algorithms could further advance such interventions, the limited availability and accessibility of iOCT systems constrains the generation of dedicated data sets. This paper introduces a novel framework combining a virtual setup and deep learning algorithms to generate synthetic iOCT data in a simulated environment. The virtual setup reproduces the geometry of retinal layers extracted from real data and allows the integration of virtual microsurgical instrument models. Our scene rendering approach extracts information from the environment and considers iOCT typical imaging artifacts to generate cross-sectional label maps, which in turn are used to synthesize iOCT B-scans via a generative adversarial network. In our experiments we investigate the similarity between real and synthetic images, show the relevance of using the generated data for image-guided interventions and demonstrate the potential of 3D iOCT data synthesis.
Affiliation(s)
- Michael Sommersperger: Chair for Computer Aided Medical Procedures and Augmented Reality, Informatics Department, Technical University of Munich, Munich, Bayern, Germany (these authors contributed equally)
- Alejandro Martin-Gomez: Chair for Computer Aided Medical Procedures and Augmented Reality, Informatics Department, Technical University of Munich, Munich, Bayern, Germany; Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, USA (these authors contributed equally)
- Kristina Mach: Chair for Computer Aided Medical Procedures and Augmented Reality, Informatics Department, Technical University of Munich, Munich, Bayern, Germany
- Peter Louis Gehlbach: Wilmer Eye Institute Research, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- M Ali Nasseri: Augenklinik, Klinikum rechts der Isar, Munich, Bayern, Germany
- Iulian Iordachita: Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Nassir Navab: Chair for Computer Aided Medical Procedures and Augmented Reality, Informatics Department, Technical University of Munich, Munich, Bayern, Germany; Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, USA
9. Puhar EG, Korat L, Erič M, Jaklič A, Solina F. Microtomographic Analysis of a Palaeolithic Wooden Point from the Ljubljanica River. Sensors (Basel) 2022;22:2369. PMID: 35336540; PMCID: PMC8951160; DOI: 10.3390/s22062369.
Abstract
A rare and valuable Palaeolithic wooden point, presumably belonging to a hunting weapon, was found in the Ljubljanica River in Slovenia in 2008. In order to prevent complete decay, the waterlogged wooden artefact had to undergo conservation treatment, which usually involves some expected deformations of structure and shape. To investigate these changes, a series of surface-based 3D models of the artefact were created before, during and after the conservation process. Unfortunately, the surface-based 3D models were not sufficient to understand the internal processes inside the wooden artefact (cracks, cavities, fractures). Since some of the surface-based 3D models were taken with a microtomographic scanner, we decided to create a volumetric 3D model from the available 2D tomographic images. In order to have complete control and greater flexibility in creating the volumetric 3D model than is the case with commercial software, we decided to implement our own algorithm. In fact, two algorithms were implemented for the construction of surface-based 3D models and for the construction of volumetric 3D models, using (1) unsegmented 2D CT images and (2) segmented 2D CT images. The results compared favorably with commercial software, and new information was obtained about the actual state and causes of the deformation of the artefact. Such models could be a valuable aid in the selection of appropriate conservation and restoration methods and techniques in cultural heritage research.
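Editor's note: a minimal sketch of the general workflow of building a volume from a stack of 2D CT slices and extracting an isosurface. File paths, voxel spacing, and the threshold are hypothetical, and this is not the authors' algorithm.

```python
import glob
import numpy as np
import imageio.v3 as iio
from skimage import measure

# Load an ordered stack of tomographic slices into a (Z, Y, X) volume
slice_files = sorted(glob.glob("ct_slices/*.png"))      # hypothetical path
volume = np.stack([iio.imread(f) for f in slice_files]).astype(np.float32)

# Simple global threshold separating material from background (illustrative value)
level = 0.5 * (volume.min() + volume.max())

# Marching cubes turns the thresholded volume into a surface mesh
verts, faces, normals, values = measure.marching_cubes(
    volume, level=level, spacing=(0.05, 0.02, 0.02))     # assumed voxel size in mm
print(f"{len(verts)} vertices, {len(faces)} faces")
```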
Affiliation(s)
- Enej Guček Puhar: Computer Vision Laboratory, Faculty of Computer and Information Science, University of Ljubljana, Večna Pot 113, SI-1000 Ljubljana, Slovenia (corresponding author)
- Lidija Korat: The Laboratory for Cements, Mortars and Ceramics, Slovenian National Building and Civil Engineering Institute, Dimičeva Ulica 12, SI-1000 Ljubljana, Slovenia
- Miran Erič: Institute for the Protection of Cultural Heritage of Slovenia, Poljanska 40, SI-1000 Ljubljana, Slovenia
- Aleš Jaklič: Computer Vision Laboratory, Faculty of Computer and Information Science, University of Ljubljana, Večna Pot 113, SI-1000 Ljubljana, Slovenia
- Franc Solina: Computer Vision Laboratory, Faculty of Computer and Information Science, University of Ljubljana, Večna Pot 113, SI-1000 Ljubljana, Slovenia (corresponding author)
10. Koskinen J, Torkamani-Azar M, Hussein A, Huotarinen A, Bednarik R. Automated tool detection with deep learning for monitoring kinematics and eye-hand coordination in microsurgery. Comput Biol Med 2021;141:105121. PMID: 34968859; DOI: 10.1016/j.compbiomed.2021.105121.
Abstract
In microsurgical procedures, surgeons use micro-instruments under high magnification to handle delicate tissues. These procedures require highly skilled attentional and motor control for planning and implementing eye-hand coordination strategies. Eye-hand coordination in surgery has mostly been studied in open, laparoscopic, and robot-assisted surgeries, as there are no available tools to perform automatic tool detection in microsurgery. We introduce and investigate a method for simultaneous detection and processing of micro-instruments and gaze during microsurgery. We train and evaluate a convolutional neural network for detecting 17 microsurgical tools with a dataset of 7500 frames from 20 videos of simulated and real surgical procedures. Model evaluations result in mean average precision at the 0.5 threshold of 89.5-91.4% for validation and 69.7-73.2% for testing over partially unseen surgical settings, and an average inference speed of 39.90 ± 1.2 frames/second. While prior research has mostly evaluated surgical tool detection on homogeneous datasets with a limited number of tools, we demonstrate the feasibility of transfer learning and conclude that detectors that generalize reliably to new settings require data from several different surgical procedures. In a case study, we apply the detector with a microscope eye tracker to investigate tool use and eye-hand coordination during an intracranial vessel dissection task. The results show that tool kinematics differentiate microsurgical actions. The gaze-to-microscissors distances are also smaller during dissection than during other actions, when the surgeon has more space to maneuver. The presented detection pipeline provides the clinical and research communities with a valuable resource for automatic content extraction and objective skill assessment in various microsurgical environments.
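Editor's note: a minimal sketch of the intersection-over-union criterion behind the quoted mAP@0.5 figures: a predicted box counts as a true positive when its IoU with a same-class ground-truth box reaches 0.5. The box coordinates below are illustrative.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A detection is a true positive at the 0.5 threshold if IoU >= 0.5
pred_box = (100, 120, 180, 200)   # detected micro-scissors (pixels)
gt_box   = (105, 118, 185, 205)
print(iou(pred_box, gt_box) >= 0.5)
```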
Affiliation(s)
- Jani Koskinen: School of Computing, University of Eastern Finland, Länsikatu 15, Joensuu 80100, Pohjois-Karjala, Finland
- Mastaneh Torkamani-Azar: School of Computing, University of Eastern Finland, Länsikatu 15, Joensuu 80100, Pohjois-Karjala, Finland
- Ahmed Hussein: Microsurgery Center, Kuopio University Hospital, Kuopio 70211, Pohjois-Savo, Finland; Department of Neurosurgery, Faculty of Medicine, Assiut University, Assiut 71111, Egypt
- Antti Huotarinen: Microsurgery Center, Kuopio University Hospital, Kuopio 70211, Pohjois-Savo, Finland; Department of Neurosurgery, Institute of Clinical Medicine, Kuopio University Hospital, Kuopio 70211, Pohjois-Savo, Finland
- Roman Bednarik: School of Computing, University of Eastern Finland, Länsikatu 15, Joensuu 80100, Pohjois-Karjala, Finland
11. Wendler T, van Leeuwen FWB, Navab N, van Oosterom MN. How molecular imaging will enable robotic precision surgery: the role of artificial intelligence, augmented reality, and navigation. Eur J Nucl Med Mol Imaging 2021;48:4201-4224. PMID: 34185136; PMCID: PMC8566413; DOI: 10.1007/s00259-021-05445-6.
Abstract
Molecular imaging is one of the pillars of precision surgery. Its applications range from early diagnostics to therapy planning, execution, and the accurate assessment of outcomes. In particular, molecular imaging solutions are in high demand in minimally invasive surgical strategies, such as the substantially increasing field of robotic surgery. This review aims at connecting the molecular imaging and nuclear medicine community to the rapidly expanding armory of surgical medical devices. Such devices entail technologies ranging from artificial intelligence and computer-aided visualization technologies (software) to innovative molecular imaging modalities and surgical navigation (hardware). We discuss technologies based on their role at different steps of the surgical workflow, i.e., from surgical decision and planning, over to target localization and excision guidance, all the way to (back table) surgical verification. This provides a glimpse of how innovations from the technology fields can realize an exciting future for the molecular imaging and surgery communities.
Affiliation(s)
- Thomas Wendler: Chair for Computer Aided Medical Procedures and Augmented Reality, Technische Universität München, Boltzmannstr. 3, 85748 Garching bei München, Germany
- Fijs W. B. van Leeuwen: Department of Radiology, Interventional Molecular Imaging Laboratory, Leiden University Medical Center, Leiden, The Netherlands; Department of Urology, The Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands; Orsi Academy, Melle, Belgium
- Nassir Navab: Chair for Computer Aided Medical Procedures and Augmented Reality, Technische Universität München, Boltzmannstr. 3, 85748 Garching bei München, Germany; Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Matthias N. van Oosterom: Department of Radiology, Interventional Molecular Imaging Laboratory, Leiden University Medical Center, Leiden, The Netherlands; Department of Urology, The Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
12. Alamdar A, Patel N, Urias M, Ebrahimi A, Gehlbach P, Iordachita I. Force and Velocity Based Puncture Detection in Robot Assisted Retinal Vein Cannulation: In-vivo Study. IEEE Trans Biomed Eng 2021;69:1123-1132. PMID: 34550878; DOI: 10.1109/tbme.2021.3114638.
Abstract
Objective: Retinal vein cannulation is a technically demanding surgical procedure, and its feasibility may rely on using advanced surgical robots equipped with force-sensing microneedles. Reliable detection of the moment of venous puncture is important to either alert the clinician or prevent double puncture of the vessel and damage to the retinal surface beneath. This paper reports the first in-vivo retinal vein cannulation trial on rabbit eyes using sensorized metal needles and investigates puncture detection. Methods: We utilized a total of four indices, including two previously demonstrated ones and two new indices based on the velocity and force of the needle tip and the correlation between the needle-tissue and tool-sclera interaction forces. We also studied the effect of the detection timespan on the performance of detecting actual punctures. Results: The new indices, when used in conjunction with the previous algorithm, improved the detection rate from 75% to 92%, but slightly increased the number of false detections from 37 to 43. Increasing the detection window improved detection performance at the cost of added delay. Conclusion: The current algorithm can supplement the surgeon's visual feedback and surgical judgment. To achieve automatic puncture detection, more measurements and further analysis are required. Subsequent in-vivo studies in other animals, such as pigs with their more human-like eye anatomy, are required before clinical trials. Significance: The study provides promising results, and the criteria developed may serve as guidelines for further investigation into puncture detection in in-vivo retinal vein cannulation.
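Editor's note: a minimal sketch of a threshold-style puncture cue of the general kind combined in this study, flagging samples where a sudden drop in needle-tip force coincides with a forward velocity spike inside a short window. The signals, thresholds, and window length are illustrative assumptions, not the paper's validated indices.

```python
import numpy as np

def detect_puncture(force, velocity, dt, window_s=0.05,
                    force_drop_n=0.002, velocity_jump=0.5e-3):
    """Return sample indices where a force drop and a velocity spike co-occur.

    force    : needle-tip force magnitude (N), sampled at 1/dt Hz
    velocity : needle-tip insertion velocity (m/s)
    """
    win = max(1, int(window_s / dt))
    events = []
    for k in range(win, len(force)):
        drop = force[k - win] - force[k]          # force released over the window
        jump = velocity[k] - velocity[k - win]    # sudden forward acceleration
        if drop > force_drop_n and jump > velocity_jump:
            events.append(k)
    return events

# Synthetic example: force builds up, then releases abruptly at t = 0.5 s
t = np.arange(0, 1.0, 0.001)
force = np.where(t < 0.5, 0.01 * t / 0.5, 0.002)
velocity = np.where(t < 0.5, 0.1e-3, 0.8e-3)
print(detect_puncture(force, velocity, dt=0.001)[:3])
```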
13. Zhou D, Kimura S, Takeyama H, Haraguchi D, Kaizu Y, Nakao S, Sonoda KH, Tadano K. Eye Explorer: A robotic endoscope holder for eye surgery. Int J Med Robot 2020;17:1-13. PMID: 32996194; PMCID: PMC7900951; DOI: 10.1002/rcs.2177.
Abstract
Background: Holding the endoscope by hand when performing eye surgery reduces the dexterity of the surgeon. Methods: A robotic endoscope holder called "Eye Explorer" is proposed to hold the endoscope and free the surgeon's hand. Results: The device satisfies the engineering and clinical requirements of eye surgery. The force required for manual operation is less than 0.5 N. The observable ranges inside the patient's eye are 118° in the horizontal and 97° in the vertical perspective, and the motion of the holder does not interfere with the surgeon's hand or other surgical devices. Self-weight compensation prevents the endoscope from falling when the external supporting force is released. The external force exerted on the eye by the Eye Explorer is more than 15% lower than that of manual operation, while the time required for endoscope view adjustment does not differ significantly between the Eye Explorer and manual operation. Conclusion: The Eye Explorer allows dual-hand operation, facilitating successful endoscopic eye surgery.
Affiliation(s)
- Dongbo Zhou: Institute of Innovation Research, Tokyo Institute of Technology, Yokohama, Japan
- Shintaro Kimura: School of Engineering, Tokyo Institute of Technology, Yokohama, Japan
- Hayato Takeyama: School of Engineering, Tokyo Institute of Technology, Yokohama, Japan
- Daisuke Haraguchi: Institute of Innovation Research, Tokyo Institute of Technology, Yokohama, Japan
- Yoshihiro Kaizu: Department of Ophthalmology, Kyushu University Hospital, Fukuoka, Japan
- Shintaro Nakao: Department of Ophthalmology, Kyushu University Hospital, Fukuoka, Japan
- Koh-Hei Sonoda: Department of Ophthalmology, Kyushu University Hospital, Fukuoka, Japan
- Kotaro Tadano: Institute of Innovation Research, Tokyo Institute of Technology, Yokohama, Japan
14. Qin F, Lin S, Li Y, Bly RA, Moe KS, Hannaford B. Towards Better Surgical Instrument Segmentation in Endoscopic Vision: Multi-Angle Feature Aggregation and Contour Supervision. IEEE Robot Autom Lett 2020. DOI: 10.1109/lra.2020.3009073.
15. Schlüter M, Glandorf L, Gromniak M, Saathoff T, Schlaefer A. Concept for Markerless 6D Tracking Employing Volumetric Optical Coherence Tomography. Sensors (Basel) 2020;20:2678. PMID: 32397153; PMCID: PMC7248981; DOI: 10.3390/s20092678.
Abstract
Optical tracking systems are widely used, for example, to navigate medical interventions. Typically, they require the presence of known geometrical structures, the placement of artificial markers, or a prominent texture on the target's surface. In this work, we propose a 6D tracking approach employing volumetric optical coherence tomography (OCT) images. OCT has a micrometer-scale resolution and employs near-infrared light to penetrate a few millimeters into, for example, tissue. Thereby, it provides sub-surface information which we use to track arbitrary targets, even with poorly structured surfaces, without requiring markers. Our proposed system can shift the OCT's field-of-view in space and uses an adaptive correlation filter to estimate the motion at multiple locations on the target. This allows one to estimate the target's position and orientation. We show that our approach is able to track translational motion with root-mean-squared errors below 0.25 mm and in-plane rotations with errors below 0.3°. For out-of-plane rotations, our prototypical system can achieve errors around 0.6°.
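Editor's note: a minimal sketch of estimating local motion by correlation, here as plain FFT-based phase correlation between two OCT patches; the adaptive correlation filter and the full 6D pose fit of the paper are not reproduced, and the patch data are synthetic.

```python
import numpy as np

def phase_correlation_shift(patch_a, patch_b):
    """Estimate the integer (dz, dy, dx) translation mapping patch_a onto patch_b
    via phase correlation of two equally sized 3D patches."""
    fa = np.fft.fftn(patch_a)
    fb = np.fft.fftn(patch_b)
    cross = fa * np.conj(fb)
    cross /= np.maximum(np.abs(cross), 1e-12)     # normalize to keep phase only
    corr = np.fft.ifftn(cross).real
    shift = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    # Map peaks beyond half the patch size to negative displacements
    shape = np.array(patch_a.shape)
    shift[shift > shape // 2] -= shape[shift > shape // 2]
    return -shift                                  # displacement of patch_a content

# Synthetic check: shift a random volume by (2, -3, 1) voxels and recover it
rng = np.random.default_rng(1)
vol = rng.random((32, 32, 32))
moved = np.roll(vol, shift=(2, -3, 1), axis=(0, 1, 2))
print(phase_correlation_shift(vol, moved))         # expected [ 2 -3  1]
```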
16. Dahroug B, Tamadazte B, Andreff N. PCA-Based Visual Servoing Using Optical Coherence Tomography. IEEE Robot Autom Lett 2020. DOI: 10.1109/lra.2020.2977259.
17. Schlüter M, Fuh MM, Maier S, Otte C, Kiani P, Hansen NO, Dwayne Miller RJ, Schlüter H, Schlaefer A. Towards OCT-Navigated Tissue Ablation with a Picosecond Infrared Laser (PIRL) and Mass-Spectrometric Analysis. Annu Int Conf IEEE Eng Med Biol Soc 2019;2019:158-161. PMID: 31945868; DOI: 10.1109/embc.2019.8856808.
Abstract
Medical lasers are commonly used in interventions to ablate tumor tissue. Recently, the picosecond infrared laser has been introduced, which greatly reduces damage to surrounding healthy tissue. Further, its ablation plume contains intact biomolecules which can be collected and analyzed by mass spectrometry. This allows for a specific characterization of the tissue. For a precise treatment, however, suitable guidance is needed. Further, spatial information is required if the tissue is to be characterized at different locations in the ablated area. Therefore, we propose a system which employs optical coherence tomography as the guiding imaging modality. We describe a prototypical system which provides automatic ablation of areas defined in the image data. For this purpose, we use a calibration with a robot that drives the laser fiber and collects the arising plume. We demonstrate our system on porcine tissue samples.
18. Chen X, Yi J, Li J, Zhou J, Wang Z. Soft-Actuator-Based Robotic Joint for Safe and Forceful Interaction With Controllable Impact Response. IEEE Robot Autom Lett 2018. DOI: 10.1109/lra.2018.2854409.