1. Mi H, MacLaren RE, Cehajic-Kapetanovic J. Robotising vitreoretinal surgeries. Eye (Lond) 2024. PMID: 38965320; DOI: 10.1038/s41433-024-03149-3.
Abstract
The use of robotic surgery in ophthalmology has been shown to offer many potential advantages over current surgical techniques. Vitreoretinal surgery requires complex manoeuvres and high precision, demands that exceed manual human dexterity in certain surgical situations. With the advent of advanced therapeutics such as subretinal gene therapy, precise delivery and minimal trauma are imperative to optimise outcomes. Multiple robotic systems for ophthalmology are in pre-clinical and clinical use, and the Preceyes Robotic Surgical System (Preceyes BV) has gained the CE mark and is commercially available. Recent in-vivo and in-human surgeries have been performed successfully with robotic systems, including membrane peeling, subretinal injection of therapeutics, and retinal vein cannulation. There is huge potential to integrate robotic surgery into mainstream clinical practice. In this review, we summarise the existing systems and their clinical implementation to date, and highlight future clinical applications for robotic surgery in vitreoretinal practice.
Affiliation(s)
- Helen Mi
- Oxford Eye Hospital, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Robert E MacLaren
- Oxford Eye Hospital, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Nuffield Laboratory of Ophthalmology, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- NIHR Oxford Biomedical Research Centre, Oxford, UK
- Jasmina Cehajic-Kapetanovic
- Oxford Eye Hospital, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Nuffield Laboratory of Ophthalmology, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- NIHR Oxford Biomedical Research Centre, Oxford, UK
2. Zhang P, Kim JW, Gehlbach P, Iordachita I, Kobilarov M. Autonomous Needle Navigation in Subretinal Injections via iOCT. IEEE Robot Autom Lett 2024; 9:4154-4161. PMID: 38550718; PMCID: PMC10972538; DOI: 10.1109/lra.2024.3375710.
Abstract
Subretinal injection is an effective method for direct delivery of therapeutic agents to treat prevalent subretinal diseases. Among the challenges for surgeons are physiological hand tremor, difficulty resolving depth at the single-micron scale, and lack of tactile feedback. The recent introduction of intraoperative Optical Coherence Tomography (iOCT) provides precise depth information during subretinal surgery. However, even when relying on iOCT, achieving the required micron-scale precision remains a significant surgical challenge. This work presents a robot-assisted workflow for high-precision autonomous needle navigation for subretinal injection. The workflow includes online registration between robot and iOCT coordinates; tool-tip localization in iOCT coordinates using a Convolutional Neural Network (CNN); and tool-tip planning and tracking using real-time Model Predictive Control (MPC). The proposed workflow is validated using a silicone eye phantom and ex vivo porcine eyes. The experimental results demonstrate that the mean error to reach the user-defined target and the mean procedure duration are within an acceptable precision range. The proposed workflow achieves a 100% success rate for subretinal injection while maintaining forces at the scleral insertion point below 15 mN throughout the navigation procedures.
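As a concrete illustration of the registration step mentioned in this abstract, the sketch below estimates a rigid transform between robot and iOCT coordinates from matched needle-tip positions using a standard least-squares (Kabsch/SVD) solution. It is a minimal sketch of the general technique, not the authors' implementation, and the example point sets are hypothetical.

```python
# Illustrative sketch (not the authors' code): least-squares rigid registration
# between robot and iOCT coordinates from matched needle-tip positions.
import numpy as np

def rigid_register(robot_pts: np.ndarray, ioct_pts: np.ndarray):
    """Rigid transform (R, t) mapping robot_pts -> ioct_pts.
    Both arrays are (N, 3); N >= 3 non-collinear correspondences assumed."""
    c_r = robot_pts.mean(axis=0)
    c_i = ioct_pts.mean(axis=0)
    H = (robot_pts - c_r).T @ (ioct_pts - c_i)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
    R = Vt.T @ D @ U.T
    t = c_i - R @ c_r
    return R, t

# Hypothetical usage: tip positions from the robot's kinematics and the
# corresponding tip detections in iOCT metric coordinates.
robot_tips = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 0.1])
ioct_tips = robot_tips + t_true
R, t = rigid_register(robot_tips, ioct_tips)
print(np.allclose(R, np.eye(3)), np.allclose(t, t_true))   # True True
```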
Affiliation(s)
- Peiyao Zhang
- Department of Mechanical Engineering and the Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, Baltimore, MD 21211, USA
- Ji Woong Kim
- Department of Mechanical Engineering and the Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, Baltimore, MD 21211, USA
- Peter Gehlbach
- Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD 21211, USA
- Iulian Iordachita
- Department of Mechanical Engineering and the Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, Baltimore, MD 21211, USA
- Marin Kobilarov
- Department of Mechanical Engineering and the Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, Baltimore, MD 21211, USA
3. Dehghani S, Sommersperger M, Zhang P, Martin-Gomez A, Busam B, Gehlbach P, Navab N, Nasseri MA, Iordachita I. Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing. Proc IEEE Int Conf Robot Autom (ICRA) 2023:4724-4731. PMID: 38125032; PMCID: PMC10732544; DOI: 10.1109/icra48891.2023.10160372.
Abstract
In the last decade, various robotic platforms have been introduced that could support delicate retinal surgeries. Concurrently, to provide semantic understanding of the surgical area, recent advances have enabled microscope-integrated intraoperative Optical Coherence Tomography (iOCT) with high-resolution 3D imaging at near video rate. The combination of robotics and semantic understanding enables task autonomy in robotic retinal surgery, such as for subretinal injection. This procedure requires precise needle insertion for best treatment outcomes. However, merging robotic systems with iOCT introduces new challenges, including, but not limited to, high demands on data processing rates and dynamic registration of these systems during the procedure. In this work, we propose a framework for autonomous robotic navigation for subretinal injection, based on intelligent real-time processing of iOCT volumes. Our method consists of instrument pose estimation, online registration between the robotic and iOCT systems, and trajectory planning tailored for navigation to an injection target. We also introduce intelligent virtual B-scans, a volume slicing approach for rapid instrument pose estimation enabled by Convolutional Neural Networks (CNNs). Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method. Finally, we discuss identified challenges in this work and suggest potential solutions to further the development of such systems.
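To make the idea of "virtual B-scans" more tangible, the following sketch samples an oblique 2D slice from a 3D iOCT volume along a user-chosen plane (for example, one containing an assumed instrument axis). This is a generic volume-slicing illustration under assumed axis ordering and geometry, not the paper's pipeline.

```python
# Illustrative sketch: sampling a "virtual B-scan" -- an oblique 2D slice
# through an iOCT volume -- along a plane defined by two in-plane directions.
import numpy as np
from scipy.ndimage import map_coordinates

def virtual_bscan(volume, origin, u_dir, v_dir, width, height, step=1.0):
    """Sample a height x width slice from `volume` (z, y, x voxel order).
    `origin` is a voxel-space point; `u_dir`/`v_dir` are in-plane axes."""
    u_dir = u_dir / np.linalg.norm(u_dir)
    v_dir = v_dir / np.linalg.norm(v_dir)
    us, vs = np.meshgrid(np.arange(width) * step, np.arange(height) * step, indexing="xy")
    pts = (origin[:, None, None]
           + u_dir[:, None, None] * us[None]
           + v_dir[:, None, None] * vs[None])        # (3, height, width) voxel coordinates
    return map_coordinates(volume, pts, order=1, mode="nearest")

# Hypothetical example: slice a synthetic volume along the plane spanned by an
# assumed needle axis (the x axis here) and the depth axis (z).
vol = np.random.rand(64, 128, 128).astype(np.float32)     # (z, y, x)
bscan = virtual_bscan(vol, origin=np.array([0.0, 64.0, 0.0]),
                      u_dir=np.array([0.0, 0.0, 1.0]),     # along x
                      v_dir=np.array([1.0, 0.0, 0.0]),     # along z (depth)
                      width=128, height=64)
print(bscan.shape)   # (64, 128)
```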
Affiliation(s)
- Shervin Dehghani
- Department of Computer Science, Technische Universität München, München 85748 Germany
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Michael Sommersperger
- Department of Computer Science, Technische Universität München, München 85748 Germany
- Peiyao Zhang
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Alejandro Martin-Gomez
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Benjamin Busam
- Department of Computer Science, Technische Universität München, München 85748 Germany
- Peter Gehlbach
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD, USA
- Nassir Navab
- Computer Aided Medical Procedures & Augmented Reality, Technical University of Munich, 85748 Munich, Germany
- Whiting School of Engineering (adjunct), Johns Hopkins University, Baltimore, MD, USA
- M. Ali Nasseri
- Department of Computer Science, Technische Universität München, München 85748 Germany
- Augenklinik und Poliklinik, Klinikum rechts der Isar der Technische Universität München, München 81675 Germany
- Iulian Iordachita
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
4. Zhou M, Hennerkes F, Liu J, Jiang Z, Wendler T, Nasseri MA, Iordachita I, Navab N. Theoretical error analysis of spotlight-based instrument localization for retinal surgery. Robotica 2023; 41:1536-1549. PMID: 37982126; PMCID: PMC10655674; DOI: 10.1017/s0263574722001862.
Abstract
Retinal surgery is widely considered to be a complicated and challenging task even for specialists. Image-guided robot-assisted intervention is among the novel and promising solutions that may enhance human capabilities therein. In this paper, we demonstrate the possibility of using spotlights for 5D guidance of a microsurgical instrument. The theoretical basis of localizing the instrument from the projection of a single spotlight is analyzed to deduce the position and orientation of the spotlight source. The use of multiple spotlights is also proposed to explore further improvements to the performance bounds. The proposed method is verified within a high-fidelity simulation environment built in the 3D creation suite Blender. Experimental results show that the average positioning error is 0.029 mm using a single spotlight and 0.025 mm with three spotlights, while the corresponding rotational errors are 0.124 and 0.101, which shows the approach to be promising for instrument localization in retinal surgery.
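The core geometric intuition behind spotlight-based guidance can be sketched with a single cone-projection relation: the projected spot grows with the tip-to-surface distance, so a measured spot radius can be inverted to a distance estimate. The snippet below is my own simplified illustration with an assumed beam half-angle and a cone axis perpendicular to the surface; the paper's full 5D model is considerably richer.

```python
# Minimal geometric sketch (not the paper's model): a spotlight emitted from
# the instrument tip as a cone of half-angle `alpha` projects a spot whose
# radius grows linearly with the tip-to-surface distance.
import numpy as np

def spot_radius(distance_mm, alpha_rad):
    """Radius of the projected spot for a tip `distance_mm` above a plane,
    assuming the cone axis is perpendicular to the plane."""
    return distance_mm * np.tan(alpha_rad)

def distance_from_spot(radius_mm, alpha_rad):
    """Invert the projection to recover the tip-to-surface distance."""
    return radius_mm / np.tan(alpha_rad)

alpha = np.deg2rad(15.0)                      # assumed beam half-angle
d = 2.0                                       # mm, true tip-to-surface distance
r = spot_radius(d, alpha)
print(round(distance_from_spot(r, alpha), 3))   # 2.0
```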
Affiliation(s)
- Mingchuan Zhou
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Felix Hennerkes
- Chair for Computer Aided Medical Procedures and Augmented Reality, Computer Science Department, Technische Universität München, München, Germany
- Jingsong Liu
- Chair for Computer Aided Medical Procedures and Augmented Reality, Computer Science Department, Technische Universität München, München, Germany
- Zhongliang Jiang
- Chair for Computer Aided Medical Procedures and Augmented Reality, Computer Science Department, Technische Universität München, München, Germany
- Thomas Wendler
- Chair for Computer Aided Medical Procedures and Augmented Reality, Computer Science Department, Technische Universität München, München, Germany
- M Ali Nasseri
- Augenklinik und Poliklinik, Klinikum rechts der Isar der Technische Universität München, München, Germany
- Iulian Iordachita
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Nassir Navab
- Chair for Computer Aided Medical Procedures and Augmented Reality, Computer Science Department, Technische Universität München, München, Germany
5. Ebrahimi A, Sefati S, Gehlbach P, Taylor RH, Iordachita I. Simultaneous Online Registration-Independent Stiffness Identification and Tip Localization of Surgical Instruments in Robot-assisted Eye Surgery. IEEE Trans Robot 2023; 39:1373-1387. PMID: 37377922; PMCID: PMC10292740; DOI: 10.1109/tro.2022.3201393.
Abstract
Notable challenges during retinal surgery lend themselves to robotic assistance, which has proven beneficial in providing safe, steady-hand manipulation. Efficient assistance from the robots relies heavily on accurate sensing of surgical states (e.g. instrument tip localization and tool-to-tissue interaction forces). Many of the existing tool-tip localization methods require preoperative frame registrations or instrument calibrations. In this study, using an iterative approach and combining vision- and force-based methods, we develop calibration- and registration-independent (RI) algorithms to provide online estimates of instrument stiffness (least squares and adaptive). The estimates are then combined with a state-space model based on the forward kinematics (FWK) of the Steady-Hand Eye Robot (SHER) and Fiber Bragg Grating (FBG) sensor measurements. This is accomplished using a Kalman Filtering (KF) approach to improve the deflected instrument tip position estimates during robot-assisted eye surgery. The conducted experiments demonstrate that when the online RI stiffness estimates are used, the instrument tip localization results surpass those obtained from pre-operative offline calibrations for stiffness.
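For readers unfamiliar with the estimation machinery, the following is a deliberately simplified linear Kalman filter that fuses a kinematics-based prediction of the tool tip with a noisy deflection-based measurement. It only illustrates the filtering idea; the authors' state-space model, stiffness estimators, and FBG processing are not reproduced here, and all numbers are hypothetical.

```python
# Simplified sketch (not the authors' estimator): a linear Kalman filter fusing
# a forward-kinematics prediction of the tip position with a noisy measurement.
import numpy as np

class TipKalmanFilter:
    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R   # state, covariance, noise terms

    def predict(self, delta_fk):
        # Constant-position model driven by the kinematic increment.
        self.x = self.x + delta_fk
        self.P = self.P + self.Q

    def update(self, z):
        # Measurement model: z = x + noise (deflection-corrected tip estimate).
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)                    # Kalman gain
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(len(self.x)) - K) @ self.P

kf = TipKalmanFilter(x0=np.zeros(3), P0=np.eye(3) * 1e-2,
                     Q=np.eye(3) * 1e-4, R=np.eye(3) * 1e-3)
kf.predict(delta_fk=np.array([0.01, 0.0, 0.0]))   # hypothetical 10 µm move in x (mm units)
kf.update(z=np.array([0.012, 0.001, -0.001]))     # hypothetical deflection-based measurement
print(kf.x)
```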
Affiliation(s)
- Ali Ebrahimi
- Department of Mechanical Engineering and also Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD, 21218, USA
- Shahriar Sefati
- Department of Mechanical Engineering and also Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD, 21218, USA
- Peter Gehlbach
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD, 21287, USA
- Russell H Taylor
- Department of Mechanical Engineering and also Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD, 21218, USA
- Department of Computer Science and also Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD, 21218, USA
- Iulian Iordachita
- Department of Mechanical Engineering and also Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD, 21218, USA
6. Sommersperger M, Martin-Gomez A, Mach K, Gehlbach PL, Ali Nasseri M, Iordachita I, Navab N. Surgical scene generation and adversarial networks for physics-based iOCT synthesis. Biomed Opt Express 2022; 13:2414-2430. PMID: 35519277; PMCID: PMC9045909; DOI: 10.1364/boe.454286.
Abstract
The development and integration of intraoperative optical coherence tomography (iOCT) into modern operating rooms has motivated novel procedures directed at improving the outcome of ophthalmic surgeries. Although computer-assisted algorithms could further advance such interventions, the limited availability and accessibility of iOCT systems constrains the generation of dedicated data sets. This paper introduces a novel framework combining a virtual setup and deep learning algorithms to generate synthetic iOCT data in a simulated environment. The virtual setup reproduces the geometry of retinal layers extracted from real data and allows the integration of virtual microsurgical instrument models. Our scene rendering approach extracts information from the environment and considers typical iOCT imaging artifacts to generate cross-sectional label maps, which in turn are used to synthesize iOCT B-scans via a generative adversarial network. In our experiments, we investigate the similarity between real and synthetic images, show the relevance of using the generated data for image-guided interventions, and demonstrate the potential of 3D iOCT data synthesis.
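The label-map-to-B-scan translation at the heart of this framework follows the familiar conditional-GAN recipe. The toy PyTorch step below shows that recipe with tiny placeholder networks and random tensors; it is not the paper's architecture, losses, or training schedule.

```python
# Toy sketch of label-map-to-B-scan translation as a generic conditional GAN
# training step (placeholder networks, not the paper's model).
import torch
import torch.nn as nn

G = nn.Sequential(                        # label map (1 ch) -> synthetic B-scan (1 ch)
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())
D = nn.Sequential(                        # (label map, B-scan) pair -> real/fake score map
    nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(label_map, real_bscan, lambda_l1=100.0):
    fake = G(label_map)
    # Discriminator: real pairs -> 1, fake pairs -> 0.
    opt_d.zero_grad()
    d_real = D(torch.cat([label_map, real_bscan], dim=1))
    d_fake = D(torch.cat([label_map, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    loss_d.backward(); opt_d.step()
    # Generator: fool the discriminator and stay close to the reference B-scan.
    opt_g.zero_grad()
    d_fake = D(torch.cat([label_map, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, real_bscan)
    loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

print(train_step(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64) * 2 - 1))
```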
Affiliation(s)
- Michael Sommersperger
- Chair for Computer Aided Medical Procedures and Augmented Reality, Informatics Department, Technical University of Munich, Munich, Bayern, Germany
- These authors contributed equally to this work
- Alejandro Martin-Gomez
- Chair for Computer Aided Medical Procedures and Augmented Reality, Informatics Department, Technical University of Munich, Munich, Bayern, Germany
- Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- These authors contributed equally to this work
- Kristina Mach
- Chair for Computer Aided Medical Procedures and Augmented Reality, Informatics Department, Technical University of Munich, Munich, Bayern, Germany
- Peter Louis Gehlbach
- Wilmer Eye Institute Research, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- M Ali Nasseri
- Klinikum Rechts der Isar, Augenklinik, Munich, Bayern, Germany
- Iulian Iordachita
- Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Nassir Navab
- Chair for Computer Aided Medical Procedures and Augmented Reality, Informatics Department, Technical University of Munich, Munich, Bayern, Germany
- Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, USA
7. Zhou M, Wu J, Ebrahimi A, Patel N, He C, Gehlbach P, Taylor RH, Knoll A, Nasseri MA, Iordachita I. Spotlight-based 3D Instrument Guidance for Retinal Surgery. Proc Int Symp Med Robot (ISMR) 2020. PMID: 34595483; DOI: 10.1109/ismr48331.2020.9312952.
Abstract
Retinal surgery is a complex activity that can be challenging for a surgeon to perform effectively and safely. Image-guided robot-assisted surgery is one of the promising solutions that bring significant surgical enhancement in treatment outcome and reduce the physical limitations of human surgeons. In this paper, we demonstrate a novel method for 3D guidance of the instrument based on the projection of a spotlight in single microscope images. The spotlight projection mechanism is first analyzed and modeled for projection onto both a plane and a spherical surface. To test the feasibility of the proposed method, a light fiber is integrated into the instrument, which is driven by the Steady-Hand Eye Robot (SHER). The spot of light is segmented and tracked on a phantom retina using the proposed algorithm. The static calibration and dynamic test results both show that the proposed method can readily achieve 0.5 mm tip-to-surface distance accuracy, which is within the clinically acceptable accuracy for intraocular visual guidance.
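A minimal version of the spot segmentation and tracking step might look like the sketch below: threshold the bright projected spot in a grayscale frame and take its intensity-weighted centroid. This is my own simplified illustration on a synthetic frame, not the paper's algorithm.

```python
# Simple illustration: segment the bright projected spot by thresholding and
# track its intensity-weighted centroid (synthetic frame, not the paper's method).
import numpy as np

def track_spot(frame, thresh=0.8):
    """Return (row, col) centroid of pixels above `thresh` * max intensity."""
    mask = frame >= thresh * frame.max()
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    weights = frame[rows, cols]
    return (np.average(rows, weights=weights), np.average(cols, weights=weights))

# Hypothetical frame with a Gaussian spot centered near (40, 60).
yy, xx = np.mgrid[0:128, 0:128]
frame = np.exp(-((yy - 40) ** 2 + (xx - 60) ** 2) / (2 * 4.0 ** 2))
print(track_spot(frame))   # approximately (40.0, 60.0)
```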
Affiliation(s)
- Mingchuan Zhou
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD 21218 USA
- Chair of Robotics, Artificial Intelligence and Real-time Systems, Technische Universität München, München 85748 Germany
- Jiahao Wu
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD 21218 USA
- T Stone Robotics Institute, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, HKSAR, China
- Ali Ebrahimi
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD 21218 USA
- Niravkumar Patel
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD 21218 USA
- Changyan He
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD 21218 USA
- Peter Gehlbach
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD 21287 USA
- Russell H Taylor
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD 21218 USA
- Alois Knoll
- Chair of Robotics, Artificial Intelligence and Real-time Systems, Technische Universität München, München 85748 Germany
- M Ali Nasseri
- Augenklinik und Poliklinik, Klinikum rechts der Isar der Technische Universität München, München 81675 Germany
- Iulian Iordachita
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD 21218 USA
8. Sommersperger M, Weiss J, Ali Nasseri M, Gehlbach P, Iordachita I, Navab N. Real-time tool to layer distance estimation for robotic subretinal injection using intraoperative 4D OCT. Biomed Opt Express 2021; 12:1085-1104. PMID: 33680560; PMCID: PMC7901333; DOI: 10.1364/boe.415477.
Abstract
The emergence of robotics could enable ophthalmic microsurgical procedures that were previously not feasible due to the precision limits of manual delivery, for example, targeted subretinal injection. Determining the distance between the needle tip, the internal limiting membrane (ILM), and the retinal pigment epithelium (RPE) both precisely and reproducibly is required for safe and successful robotic retinal interventions. Recent advances in intraoperative optical coherence tomography (iOCT) have opened the path for 4D image-guided surgery by providing near video-rate imaging with micron-level resolution to visualize retinal structures, surgical instruments, and tool-tissue interactions. In this work, we present a novel pipeline to precisely estimate the distance between the injection needle and the surface boundaries of two retinal layers, the ILM and the RPE, from iOCT volumes. To achieve high computational efficiency, we reduce the analysis to the relevant area around the needle tip. We employ a convolutional neural network (CNN) to segment the tool surface, as well as the retinal layer boundaries from selected iOCT B-scans within this tip area. This results in the generation and processing of 3D surface point clouds for the tool, ILM and RPE from the B-scan segmentation maps, which in turn allows the estimation of the minimum distance between the resulting tool and layer point clouds. The proposed method is evaluated on iOCT volumes from ex-vivo porcine eyes and achieves an average error of 9.24 µm and 8.61 µm measuring the distance from the needle tip to the ILM and the RPE, respectively. The results demonstrate that this approach is robust to the high levels of noise present in iOCT B-scans and is suitable for the interventional use case by providing distance feedback at an average update rate of 15.66 Hz.
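The final distance computation in such a pipeline reduces to finding the minimum distance between a tool-surface point cloud and a retinal-layer point cloud. The sketch below shows that step only, using a KD-tree; the surrounding segmentation and point-cloud generation are omitted, and the example clouds are synthetic.

```python
# Sketch of the distance step only (illustrative, simplified): minimum Euclidean
# distance between a tool point cloud and a layer point cloud via a KD-tree.
import numpy as np
from scipy.spatial import cKDTree

def min_cloud_distance(tool_pts, layer_pts):
    """tool_pts (N, 3) and layer_pts (M, 3) in the same metric frame (e.g. µm)."""
    tree = cKDTree(layer_pts)
    dists, _ = tree.query(tool_pts, k=1)     # nearest layer point per tool point
    return float(dists.min())

# Hypothetical needle-tip points above a synthetic "ILM" surface at z = 40 µm.
tool = np.array([[0.0, 0.0, 50.0], [1.0, 0.0, 48.0]])
ilm = np.column_stack([np.random.rand(1000) * 100,
                       np.random.rand(1000) * 100,
                       np.full(1000, 40.0)])
print(min_cloud_distance(tool, ilm))   # slightly above 8 (8 µm depth gap plus in-plane offset)
```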
Affiliation(s)
- Michael Sommersperger
- Johns Hopkins University, Baltimore, MD 21218, USA
- Technical University of Munich, Germany
- M. Ali Nasseri
- Technical University of Munich, Germany
- Klinikum Rechts der Isar, Augenklinik, Munich, Germany
- Nassir Navab
- Johns Hopkins University, Baltimore, MD 21218, USA
- Technical University of Munich, Germany
9. Park I, Kim HK, Chung WK, Kim K. Deep Learning Based Real-Time OCT Image Segmentation and Correction for Robotic Needle Insertion Systems. IEEE Robot Autom Lett 2020. DOI: 10.1109/lra.2020.3001474.
10. Advanced robotic surgical systems in ophthalmology. Eye (Lond) 2020; 34:1554-1562. PMID: 32152518; PMCID: PMC7608507; DOI: 10.1038/s41433-020-0837-9.
Abstract
In this paper, an overview of advanced robotic surgical systems in ophthalmology is provided. The systems are introduced as representative examples of the degree of human versus robotic control during surgical procedures. Details of each system and its latest advancements are described. Future potential applications for surgical robotics in ophthalmology are discussed in detail, with representative examples provided alongside recent progress.
11. Al Hajj H, Lamard M, Conze PH, Roychowdhury S, Hu X, Maršalkaitė G, Zisimopoulos O, Dedmari MA, Zhao F, Prellberg J, Sahu M, Galdran A, Araújo T, Vo DM, Panda C, Dahiya N, Kondo S, Bian Z, Vahdat A, Bialopetravičius J, Flouty E, Qiu C, Dill S, Mukhopadhyay A, Costa P, Aresta G, Ramamurthy S, Lee SW, Campilho A, Zachow S, Xia S, Conjeti S, Stoyanov D, Armaitis J, Heng PA, Macready WG, Cochener B, Quellec G. CATARACTS: Challenge on automatic tool annotation for cataRACT surgery. Med Image Anal 2018; 52:24-41. PMID: 30468970; DOI: 10.1016/j.media.2018.11.008.
Abstract
Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future.
Affiliation(s)
- Mathieu Lamard
- Inserm, UMR 1101, Brest, F-29200, France; Univ Bretagne Occidentale, Brest, F-29200, France
- Pierre-Henri Conze
- Inserm, UMR 1101, Brest, F-29200, France; IMT Atlantique, LaTIM UMR 1101, UBL, Brest, F-29200, France
- Xiaowei Hu
- Dept. of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Muneer Ahmad Dedmari
- Chair for Computer Aided Medical Procedures, Faculty of Informatics, Technical University of Munich, Garching b. Munich, 85748, Germany
- Fenqiang Zhao
- Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, HangZhou, 310000, China
- Jonas Prellberg
- Dept. of Informatics, Carl von Ossietzky University, Oldenburg, 26129, Germany
- Manish Sahu
- Department of Visual Data Analysis, Zuse Institute Berlin, Berlin, 14195, Germany
- Adrian Galdran
- INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Teresa Araújo
- Faculdade de Engenharia, Universidade do Porto, Porto, 4200-465, Portugal; INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Duc My Vo
- Gachon University, 1342 Seongnamdaero, Sujeonggu, Seongnam, 13120, Korea
- Navdeep Dahiya
- Laboratory of Computational Computer Vision, Georgia Tech, Atlanta, GA, 30332, USA
- Arash Vahdat
- D-Wave Systems Inc., Burnaby, BC, V5G 4M9, Canada
- Chenhui Qiu
- Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, HangZhou, 310000, China
- Sabrina Dill
- Department of Visual Data Analysis, Zuse Institute Berlin, Berlin, 14195, Germany
- Anirban Mukhopadhyay
- Department of Computer Science, Technische Universität Darmstadt, Darmstadt, 64283, Germany
- Pedro Costa
- INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Guilherme Aresta
- Faculdade de Engenharia, Universidade do Porto, Porto, 4200-465, Portugal; INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Senthil Ramamurthy
- Laboratory of Computational Computer Vision, Georgia Tech, Atlanta, GA, 30332, USA
- Sang-Woong Lee
- Gachon University, 1342 Seongnamdaero, Sujeonggu, Seongnam, 13120, Korea
- Aurélio Campilho
- Faculdade de Engenharia, Universidade do Porto, Porto, 4200-465, Portugal; INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Stefan Zachow
- Department of Visual Data Analysis, Zuse Institute Berlin, Berlin, 14195, Germany
- Shunren Xia
- Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, HangZhou, 310000, China
- Sailesh Conjeti
- Chair for Computer Aided Medical Procedures, Faculty of Informatics, Technical University of Munich, Garching b. Munich, 85748, Germany; German Center for Neurodegenerative Diseases (DZNE), Bonn, 53127, Germany
- Danail Stoyanov
- Digital Surgery Ltd, EC1V 2QY, London, UK; University College London, Gower Street, WC1E 6BT, London, UK
- Pheng-Ann Heng
- Dept. of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Béatrice Cochener
- Inserm, UMR 1101, Brest, F-29200, France; Univ Bretagne Occidentale, Brest, F-29200, France; Service d'Ophtalmologie, CHRU Brest, Brest, F-29200, France
12. Weiss J, Rieke N, Nasseri MA, Maier M, Eslami A, Navab N. Fast 5DOF needle tracking in iOCT. Int J Comput Assist Radiol Surg 2018; 13:787-796. PMID: 29603065; DOI: 10.1007/s11548-018-1751-5.
Abstract
PURPOSE: Intraoperative optical coherence tomography (iOCT) is an increasingly available imaging technique for ophthalmic microsurgery that provides high-resolution cross-sectional information of the surgical scene. We propose to build on its desirable qualities and present a method for tracking the orientation and location of a surgical needle. Thereby, we enable the analysis of instrument-tissue interaction directly in OCT space without the complex multimodal calibration that would be required with traditional instrument tracking methods. METHOD: The intersection of the needle with the iOCT scan is detected by a dedicated multistep ellipse fitting that takes advantage of the directionality of the modality. The geometric modeling allows us to use the ellipse parameters and feed them into a latency-aware estimator to infer the 5DOF pose during needle movement. RESULTS: Experiments on phantom data and ex vivo porcine eyes indicate that the algorithm retains angular precision, especially during lateral needle movement, and provides a more robust and consistent estimation than baseline methods. CONCLUSION: Using solely cross-sectional iOCT information, we are able to successfully and robustly estimate a 5DOF pose of the instrument in less than 5.4 ms on a CPU.
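One reason ellipse fitting is informative here is purely geometric: a roughly cylindrical needle intersects a B-scan plane in an ellipse whose axis ratio constrains the needle's inclination to that plane. The snippet below illustrates only this relation under an idealized cylinder assumption; it is not the paper's multistep fitting or latency-aware estimator.

```python
# Geometric sketch (idealized cylinder, not the paper's estimator): the ratio of
# the fitted ellipse semi-axes constrains the angle between the needle axis and
# the B-scan plane, one ingredient of a 5DOF pose.
import numpy as np

def needle_inclination(semi_major, semi_minor):
    """Angle (degrees) between the needle axis and the B-scan plane.
    Equal axes (a circle) mean the needle is perpendicular to the plane."""
    ratio = np.clip(semi_minor / semi_major, 0.0, 1.0)
    return np.degrees(np.arcsin(ratio))

print(needle_inclination(0.45, 0.45))   # 90.0 -> needle perpendicular to the scan plane
print(needle_inclination(0.90, 0.45))   # 30.0 -> shallow intersection, elongated ellipse
```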
Affiliation(s)
- Jakob Weiss
- Computer Aided Medical Procedures, Technische Universität München, Boltzmannstr. 3, 85748, Garching, Germany.
- Nicola Rieke
- Computer Aided Medical Procedures, Technische Universität München, Boltzmannstr. 3, 85748, Garching, Germany
- Mohammad Ali Nasseri
- Augenklinik und Poliklinik, Klinikum rechts der Isar der Technische Universität München, 81675, Munich, Germany
- Mathias Maier
- Augenklinik und Poliklinik, Klinikum rechts der Isar der Technische Universität München, 81675, Munich, Germany
- Nassir Navab
- Computer Aided Medical Procedures, Technische Universität München, Boltzmannstr. 3, 85748, Garching, Germany
- Johns Hopkins University, Baltimore, MD, USA
13. Gonenc B, Chae J, Gehlbach P, Taylor RH, Iordachita I. Towards Robot-Assisted Retinal Vein Cannulation: A Motorized Force-Sensing Microneedle Integrated with a Handheld Micromanipulator. Sensors (Basel) 2017; 17:E2195. PMID: 28946634; PMCID: PMC5677255; DOI: 10.3390/s17102195.
Abstract
Retinal vein cannulation is a technically demanding surgical procedure in which therapeutic agents are injected into the retinal veins to treat occlusions. The clinical feasibility of this approach has been largely limited by the technical challenges associated with performing the procedure. Among the challenges to successful vein cannulation are identifying the moment of venous puncture, achieving cannulation of the micro-vessel, and maintaining cannulation throughout drug delivery. Recent advances in medical robotics and sensing of tool-tissue interaction forces have the potential to address each of these challenges as well as to prevent tissue trauma, minimize complications, diminish surgeon effort, and ultimately promote successful retinal vein cannulation. In this paper, we develop an assistive system combining a handheld micromanipulator, called "Micron", with a force-sensing microneedle. Using this system, we examine two distinct methods of precisely detecting the instant of venous puncture, based on measured tool-tissue interaction forces and on the tracked position of the needle tip. In addition to the existing tremor-canceling function of Micron, a new control method is implemented to actively compensate for unintended movements of the operator and to keep the cannulation device securely inside the vein following cannulation. To demonstrate the capabilities and performance of our uniquely upgraded system, we present a multi-user artificial phantom study with subjects from three different surgical skill levels. Results show that our puncture detection algorithm, when combined with the active positive holding feature, enables sustained cannulation, which is most evident in smaller veins. Notably, the active holding function significantly attenuates tool motion in the vein, thereby reducing trauma during cannulation.
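The puncture-detection idea can be illustrated with a very simple force-signature rule: insertion force ramps up as the vein wall is indented and drops sharply at puncture. The sketch below flags that drop on a synthetic force trace with an assumed threshold; the actual system combines force and position cues and is not reproduced here.

```python
# Illustrative sketch (not the Micron system's algorithm): flag the instant of
# venous puncture as a sudden drop in the measured tool-tissue force.
import numpy as np

def detect_puncture(force_mN, drop_thresh=1.5):
    """Return the first sample index where the force falls by more than
    `drop_thresh` mN between consecutive samples, or None if no drop is found."""
    drops = np.diff(force_mN)
    idx = np.where(drops < -drop_thresh)[0]
    return int(idx[0] + 1) if idx.size else None

# Hypothetical force trace: ramp-up while indenting the vein wall, then a sharp
# release as the needle enters the lumen.
t = np.arange(0, 100)
force = np.where(t < 60, 0.1 * t, 0.1 * 60 - 4.0 + 0.02 * (t - 60))
print(detect_puncture(force))   # 60
```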
Affiliation(s)
- Berk Gonenc
- Computer Integrated Surgical Systems and Technology Engineering Research Center (CISST ERC), Johns Hopkins University, Baltimore, MD 21218, USA.
- Jeremy Chae
- Wilmer Eye Institute, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA.
- Peter Gehlbach
- Wilmer Eye Institute, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA.
- Russell H Taylor
- Computer Integrated Surgical Systems and Technology Engineering Research Center (CISST ERC), Johns Hopkins University, Baltimore, MD 21218, USA.
- Iulian Iordachita
- Computer Integrated Surgical Systems and Technology Engineering Research Center (CISST ERC), Johns Hopkins University, Baltimore, MD 21218, USA.