1
Zhou M, Hennerkes F, Liu J, Jiang Z, Wendler T, Nasseri MA, Iordachita I, Navab N. Theoretical error analysis of spotlight-based instrument localization for retinal surgery. Robotica 2023; 41:1536-1549. [PMID: 37982126 PMCID: PMC10655674 DOI: 10.1017/s0263574722001862] [Indexed: 01/27/2023]
Abstract
Retinal surgery is widely considered a complicated and challenging task, even for specialists. Image-guided robot-assisted intervention is among the novel and promising solutions that may enhance human capabilities therein. In this paper, we demonstrate the possibility of using spotlights for 5D guidance of a microsurgical instrument. The theoretical basis of localizing the instrument from the projection of a single spotlight is analyzed to deduce the position and orientation of the spotlight source. The use of multiple spotlights is also proposed to examine possible further improvements to the performance boundaries. The proposed method is verified within a high-fidelity simulation environment built in the 3D creation suite Blender. Experimental results show that the average positioning error is 0.029 mm using a single spotlight and 0.025 mm using three spotlights, while the corresponding rotational errors are 0.124 and 0.101, respectively, which shows the approach to be promising for instrument localization in retinal surgery.
Affiliation(s)
- Mingchuan Zhou
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Felix Hennerkes
- Chair for Computer Aided Medical Procedures and Augmented Reality, Computer Science Department, Technische Universität München, München, Germany
- Jingsong Liu
- Chair for Computer Aided Medical Procedures and Augmented Reality, Computer Science Department, Technische Universität München, München, Germany
- Zhongliang Jiang
- Chair for Computer Aided Medical Procedures and Augmented Reality, Computer Science Department, Technische Universität München, München, Germany
- Thomas Wendler
- Chair for Computer Aided Medical Procedures and Augmented Reality, Computer Science Department, Technische Universität München, München, Germany
- M Ali Nasseri
- Augenklinik und Poliklinik, Klinikum rechts der Isar der Technischen Universität München, München, Germany
- Iulian Iordachita
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Nassir Navab
- Chair for Computer Aided Medical Procedures and Augmented Reality, Computer Science Department, Technische Universität München, München, Germany
2
Gautier B, Tugal H, Tang B, Nabi G, Erden MS. Real-Time 3D Tracking of Laparoscopy Training Instruments for Assessment and Feedback. Front Robot AI 2021; 8:751741. [PMID: 34805292 PMCID: PMC8600079 DOI: 10.3389/frobt.2021.751741] [Received: 08/01/2021] [Accepted: 10/13/2021] [Indexed: 11/13/2022]
Abstract
Assessment of minimally invasive surgical skills is a non-trivial task: it usually requires the presence and time of expert observers, involves subjectivity, and calls for special and expensive equipment and software. Although virtual simulators provide self-assessment features, they are limited because the trainee loses the immediate feedback of realistic physical interaction. Physical training boxes, on the other hand, preserve the immediate physical feedback but lack automated self-assessment facilities. This study develops an algorithm for real-time tracking of laparoscopy instruments in the video feed of a standard physical laparoscopy training box with a single fisheye camera. The visual tracking algorithm recovers the 3D positions of the laparoscopic instrument tips, to which simple colored tapes (markers) are attached. With such a system, the extracted instrument trajectories can be digitally processed, and automated self-assessment feedback can be provided. In this way, the physical interaction feedback is preserved while the need for an expert observer is removed. Real-time instrument tracking with a suitable assessment criterion would constitute a significant step towards providing real-time (immediate) feedback that corrects trainee actions and shows how an action should be performed. This study is a step towards achieving this with a low-cost, automated, and widely applicable laparoscopy training and assessment system using a standard physical training box equipped with a fisheye camera.
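The paper's full tracking pipeline is not reproduced here, but the first step of marker-based tracking, finding the image-plane centroid of a colored tape marker, can be sketched as follows. This is a minimal illustration under our own assumptions (idealized RGB frames, a single green marker, a hypothetical distance threshold), not the authors' implementation; recovering 3D tip positions additionally requires the calibrated fisheye camera model, which this snippet does not attempt.

```python
import numpy as np

def marker_centroid(frame: np.ndarray, target_rgb, tol: float = 40.0):
    """Return the (row, col) centroid of pixels close to target_rgb, or None.

    frame: H x W x 3 uint8 RGB image.
    tol:   maximum Euclidean distance in RGB space to count as 'marker'.
    """
    dist = np.linalg.norm(frame.astype(float) - np.asarray(target_rgb, float), axis=2)
    mask = dist < tol
    if not mask.any():
        return None  # marker not visible in this frame
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Synthetic check: a 10x10 black frame with a green 2x2 patch at rows 4-5, cols 6-7.
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[4:6, 6:8] = (0, 255, 0)
print(marker_centroid(frame, (0, 255, 0)))  # → (4.5, 6.5)
```

In a real system the per-frame centroids of two markers per instrument would then be fed to the fisheye projection model to triangulate the 3D tip trajectory.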
Affiliation(s)
- Harun Tugal
- Heriot-Watt University, Scotland, United Kingdom
- Benjie Tang
- University of Dundee and Ninewells Hospital, Dundee, United Kingdom
- Ghulam Nabi
- University of Dundee and Ninewells Hospital, Dundee, United Kingdom
3
Abstract
The paper addresses the problem of generating collision-free trajectories for a robotic manipulator operating in a scenario in which obstacles may be moving at non-negligible velocities. In particular, the paper aims to present a trajectory-generation solution that is fully executable in real time and that can reactively adapt both to dynamic changes of the environment and to fast reconfiguration of the robotic task. The proposed motion planner extends a dynamical-system-based method to cope with the peculiar kinematics of surgical robots for laparoscopic operations, the most challenging aspect being the mechanical constraint enforced by the fixed point of insertion into the patient's abdomen. The paper includes a validation of the trajectory generator in both simulated and experimental scenarios.
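The insertion-point constraint mentioned above (a remote center of motion, RCM) can be illustrated with a small geometric sketch. This is an assumption-laden toy, not the paper's dynamical-system planner: once the trocar point is fixed, the tool-shaft orientation is fully determined by the commanded tip position, because the shaft must pass through both points.

```python
import numpy as np

TROCAR = np.array([0.0, 0.0, 0.0])  # fixed insertion point (hypothetical frame)

def shaft_axis(tip: np.ndarray) -> np.ndarray:
    """Unit vector along the tool shaft enforced by the RCM constraint:
    the shaft must pass through both the trocar and the instrument tip."""
    v = tip - TROCAR
    n = np.linalg.norm(v)
    if n == 0:
        raise ValueError("tip coincides with the trocar point")
    return v / n

tip = np.array([0.03, -0.01, 0.10])          # commanded tip position (meters)
axis = shaft_axis(tip)
print(np.isclose(np.linalg.norm(axis), 1.0))         # unit length
print(np.allclose(np.cross(axis, tip - TROCAR), 0))  # collinear with trocar->tip
```

A reactive trajectory generator of the kind described only needs to output tip positions; the shaft orientation then follows from this constraint at every time step.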
4
Saeidi H, Le HND, Opfermann JD, Leonard S, Kim A, Hsieh MH, Kang JU, Krieger A. Autonomous Laparoscopic Robotic Suturing with a Novel Actuated Suturing Tool and 3D Endoscope. IEEE International Conference on Robotics and Automation (ICRA) 2019:1541-1547. [PMID: 33628614 PMCID: PMC7901147 DOI: 10.1109/icra.2019.8794306] [Indexed: 04/26/2023]
Abstract
Compared to open surgical techniques, laparoscopic surgical methods aim to reduce collateral tissue damage and hence decrease patient recovery time. However, the constraints imposed by laparoscopic surgery, i.e., the operation of surgical tools in limited spaces, turn simple surgical tasks such as suturing into time-consuming and inconsistent tasks for surgeons. In this paper, we develop an autonomous laparoscopic robotic suturing system. More specifically, we extend our smart tissue anastomosis robot (STAR) by developing (i) a new 3D imaging endoscope, (ii) a novel actuated laparoscopic suturing tool, and (iii) a suture planning strategy for autonomous suturing. We experimentally test the accuracy and consistency of the developed system and compare it to sutures performed manually by surgeons. Our test results on suture pads indicate that STAR achieves 2.9 times better consistency in suture spacing compared to the manual method and also eliminates suture repositioning and adjustments. Moreover, the consistency of suture bite sizes obtained by STAR matches that obtained by manual suturing.
Affiliation(s)
- H Saeidi
- Department of Mechanical Engineering, University of Maryland, College Park, MD 20742, USA
- H N D Le
- Electrical and Computer Science Engineering Department, Johns Hopkins University, Baltimore, MD 21211
- J D Opfermann
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, 111 Michigan Ave. N.W., Washington, DC 20010
- S Leonard
- Electrical and Computer Science Engineering Department, Johns Hopkins University, Baltimore, MD 21211
- A Kim
- University of Maryland School of Medicine, 655 W Baltimore S, Baltimore, MD 21201
- M H Hsieh
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, 111 Michigan Ave. N.W., Washington, DC 20010
- J U Kang
- Electrical and Computer Science Engineering Department, Johns Hopkins University, Baltimore, MD 21211
- A Krieger
- Department of Mechanical Engineering, University of Maryland, College Park, MD 20742, USA
5
Hoeckelmann M, Rudas IJ, Fiorini P, Kirchner F, Haidegger T. Current Capabilities and Development Potential in Surgical Robotics. Int J Adv Robot Syst 2015. [DOI: 10.5772/60133] [Indexed: 11/08/2022]
Abstract
Commercial surgical robots have been in clinical use since the mid-1990s, supporting surgeons in various tasks. In the past decades, many systems have emerged as research platforms, and a few have entered the global market. This paper summarizes the currently available surgical systems and research directions in the broader field of surgical robotics. The widely deployed teleoperated manipulators aim to enhance human cognitive and physical skills and provide smart tools for surgeons, while image-guided robotics focuses on surpassing human limitations by introducing automated targeting and treatment delivery methods. Both concepts are discussed based on prototypes and commercial systems. Concrete examples illustrate the possible future development paths of surgical robots. While research efforts are taking different approaches to improve the capabilities of such systems, the aim of this survey is to assess their maturity from the commercialization point of view.
Affiliation(s)
- Imre J. Rudas
- Antal Bejczy Center for Intelligent Robotics, Óbuda University, Hungary
- Paolo Fiorini
- Department of Informatics, University of Verona, Italy
- Frank Kirchner
- DFKI GmbH, Robotics Innovation Center (RIC), Bremen, Germany
- Robotics Group, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
- Tamas Haidegger
- Antal Bejczy Center for Intelligent Robotics, Óbuda University, Hungary
- Austrian Center for Medical Innovation and Technology (ACMIT), Austria
6
Leonard S, Wu KL, Kim Y, Krieger A, Kim PCW. Smart tissue anastomosis robot (STAR): a vision-guided robotics system for laparoscopic suturing. IEEE Trans Biomed Eng 2014; 61:1305-17. [PMID: 24658254 DOI: 10.1109/tbme.2014.2302385] [Indexed: 12/11/2022]
Abstract
This paper introduces the smart tissue anastomosis robot (STAR). Currently, STAR is a proof of concept for a vision-guided robotic system featuring an actuated laparoscopic suturing tool capable of executing running sutures from image-based commands. The STAR tool is designed around a commercially available laparoscopic suturing tool attached to a custom-made motor stage, together with the STAR supervisory control architecture, which enables a surgeon to select and track incisions and the placement of stitches. The STAR supervisory-control interface provides two modes: a manual mode that enables a surgeon to specify the placement of each stitch, and an automatic mode that computes equally spaced stitches based on an incision contour. Our experiments on planar phantoms demonstrate that STAR in either mode is more accurate, up to four times more consistent, and five times faster than surgeons using a state-of-the-art robotic surgical system, four times faster than surgeons using the manual Endo360(°)®, and nine times faster than surgeons using manual laparoscopic tools.
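The automatic mode's computation of equally spaced stitches along an incision contour can be sketched as arc-length resampling of a polyline. This is a generic sketch under our own assumptions (a 2D polyline contour, with the function name and interface invented for illustration), not the paper's actual planner:

```python
import math

def equally_spaced_stitches(contour, n):
    """Place n stitch points at equal arc-length intervals along a 2D polyline."""
    # Cumulative arc length at each vertex of the contour.
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]
    targets = [i * total / (n - 1) for i in range(n)]  # equal arc-length marks
    points, seg = [], 0
    for t in targets:
        # Advance to the segment containing arc length t, then interpolate.
        while seg < len(contour) - 2 and cum[seg + 1] < t:
            seg += 1
        (x0, y0), (x1, y1) = contour[seg], contour[seg + 1]
        seg_len = cum[seg + 1] - cum[seg]
        a = 0.0 if seg_len == 0 else (t - cum[seg]) / seg_len
        points.append((x0 + a * (x1 - x0), y0 + a * (y1 - y0)))
    return points

# Straight 10 mm incision, 5 stitches: spacing comes out at 2.5 mm.
print(equally_spaced_stitches([(0.0, 0.0), (10.0, 0.0)], 5))
# → [(0.0, 0.0), (2.5, 0.0), (5.0, 0.0), (7.5, 0.0), (10.0, 0.0)]
```

The consistency metric reported in the abstract corresponds to how little the spacing between consecutive stitch points varies; with ideal arc-length resampling that variation is zero by construction.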
7
Abstract
We present a class of fixtures that can be disassembled into four pieces to extract a loosely tied knot. We prove that a fixture can be designed for any particular knot such that the knot can be extracted using only simple pure translations of the four fixture sections. We explore some of the issues raised by our experimental work with these fixtures, which shows that simple knots can be tied extremely quickly (in less than half a second) and reliably (99% repeatability) using four-piece fixtures.
Affiliation(s)
- Matthew P. Bell
- Department of Computer Science, Dartmouth College, Hanover, New Hampshire, USA
- Weifu Wang
- Department of Computer Science, Dartmouth College, Hanover, New Hampshire, USA
- Jordan Kunzika
- Department of Computer Science, Dartmouth College, Hanover, New Hampshire, USA
- Devin Balkcom
- Department of Computer Science, Dartmouth College, Hanover, New Hampshire, USA