1
Li H. Active constrained motion control for a robot-assisted endoscope manipulator in pediatric minimal access surgery. J Robot Surg 2024; 18:378. [PMID: 39443406] [DOI: 10.1007/s11701-024-02132-0]
Abstract
Robot-assisted laparoscopic surgery has three main system requirements: safety, simplicity, and intuitiveness. However, accidental movement of the endoscope caused by assistant fatigue or by miscommunication of verbal instructions between the surgeon and the assistant can lead to unexpected tool-tissue interactions, particularly in pediatric minimal access surgery with its restricted working space. This study introduces a compact, lightweight endoscope manipulator with a mechanical remote-center-of-motion function. Using a custom-designed human-machine interface, the surgeon can intuitively control the movement of the endoscope manipulator from their own viewpoint. In addition, an active constrained motion control algorithm is proposed that generates a forbidden-region constraint to avoid collisions between the endoscope tip and the surrounding organs in the confined pediatric abdominal cavity. Simulations and experiments demonstrate the performance of the proposed compact endoscope manipulator and the active constrained surface-tracking control scheme.
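The safety idea named here, a forbidden-region constraint (virtual fixture), can be illustrated in a few lines. The sketch below is a generic version under an assumed spherical protected region; the function name, geometry, and numbers are illustrative stand-ins, not the paper's actual controller.

```python
# Minimal forbidden-region virtual-fixture sketch (illustrative, not the
# paper's algorithm): velocity components that push the endoscope tip into
# a protected spherical region around an organ are removed, while
# tangential and outward motion pass through unchanged.
import numpy as np

def constrain_velocity(tip, v_cmd, center, radius):
    """Clamp the inward velocity component once the endoscope tip crosses
    the spherical forbidden-region boundary."""
    d = tip - center
    dist = np.linalg.norm(d)
    if dist >= radius:
        return v_cmd                      # outside the region: pass through
    n = d / dist                          # outward surface normal
    v_in = min(v_cmd @ n, 0.0)            # inward (penetrating) component
    return v_cmd - v_in * n               # keep tangential + outward motion

tip = np.array([0.0, 0.0, 0.048])         # 2 mm past a 50 mm safety boundary
v = constrain_velocity(tip, np.array([0.0, 0.01, -0.01]),
                       center=np.zeros(3), radius=0.05)
print(v)  # inward z-motion clamped to 0; tangential y-motion preserved
```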
Affiliation(s)
- Hongbing Li
- Department of Instrument Science and Engineering, Shanghai Engineering Research Center for Intelligent Diagnosis and Treatment Instrument, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, People's Republic of China.
2
Rueckert T, Rueckert D, Palm C. Methods and datasets for segmentation of minimally invasive surgical instruments in endoscopic images and videos: A review of the state of the art. Comput Biol Med 2024; 169:107929. [PMID: 38184862] [DOI: 10.1016/j.compbiomed.2024.107929]
Abstract
In the field of computer- and robot-assisted minimally invasive surgery, enormous progress has been made in recent years based on the recognition of surgical instruments in endoscopic images and videos. In particular, the determination of the position and type of instruments is of great interest. Current work involves both spatial and temporal information, with the idea that predicting the movement of surgical tools over time may improve the quality of the final segmentations. The provision of publicly available datasets has recently encouraged the development of new methods, mainly based on deep learning. In this review, we identify and characterize datasets used for method development and evaluation and quantify their frequency of use in the literature. We further present an overview of the current state of research regarding the segmentation and tracking of minimally invasive surgical instruments in endoscopic images and videos. The paper focuses on methods that work purely visually, without markers of any kind attached to the instruments, considering both single-frame semantic and instance segmentation approaches, as well as those that incorporate temporal information. The publications analyzed were identified through the platforms Google Scholar, Web of Science, and PubMed. The search terms used were "instrument segmentation", "instrument tracking", "surgical tool segmentation", and "surgical tool tracking", resulting in a total of 741 articles published between 01/2015 and 07/2023, of which 123 were included using systematic selection criteria. A discussion of the reviewed literature is provided, highlighting existing shortcomings and emphasizing the available potential for future developments.
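To make the single-frame setting the review covers concrete, here is a toy end-to-end example of binary instrument segmentation: a small encoder-decoder maps an RGB frame to per-pixel probabilities, which are thresholded into a mask. The network, input resolution, and threshold are assumptions for illustration only and do not reproduce any specific method from the review.

```python
# Toy single-frame binary instrument segmentation (illustrative placeholder,
# not a reviewed method): encoder-decoder -> sigmoid -> boolean mask.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Tiny encoder-decoder producing a per-pixel instrument logit."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySegNet().eval()
frame = torch.rand(1, 3, 256, 320)           # stand-in endoscopic frame
with torch.no_grad():
    mask = torch.sigmoid(model(frame)) > 0.5  # boolean instrument mask
print(mask.shape)  # torch.Size([1, 1, 256, 320])
```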
Affiliation(s)
- Tobias Rueckert
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Germany.
- Daniel Rueckert
- Artificial Intelligence in Healthcare and Medicine, Klinikum rechts der Isar, Technical University of Munich, Germany; Department of Computing, Imperial College London, UK
- Christoph Palm
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Germany; Regensburg Center of Health Sciences and Technology (RCHST), OTH Regensburg, Germany
3
Fozilov K, Colan J, Davila A, Misawa K, Qiu J, Hayashi Y, Mori K, Hasegawa Y. Endoscope automation framework with hierarchical control and interactive perception for multi-tool tracking in minimally invasive surgery. Sensors (Basel) 2023; 23:9865. [PMID: 38139711] [PMCID: PMC10748016] [DOI: 10.3390/s23249865]
Abstract
In the context of minimally invasive surgery, surgeons rely mainly on visual feedback during medical operations. In common procedures such as tissue resection, the automation of endoscopic control is crucial yet challenging, particularly due to the interactive dynamics of multi-agent operations and the necessity for real-time adaptation. This paper introduces a novel framework that unites a hierarchical quadratic programming controller with an advanced interactive perception module. This integration addresses the need for adaptive visual-field control and robust tool tracking in the operating scene, ensuring that surgeons and assistants have an optimal viewpoint throughout the surgical task. The proposed framework handles multiple objectives within predefined thresholds, ensuring efficient tracking even amidst changes in operating backgrounds, varying lighting conditions, and partial occlusions. Empirical validations in scenarios involving single, double, and quadruple tool tracking during tissue resection tasks have underscored the system's robustness and adaptability. The positive feedback from user studies, coupled with the low cognitive and physical strain reported by surgeons and assistants, highlights the system's potential for real-world application.
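The controller family named here can be sketched compactly. Hierarchical quadratic programming resolves tasks in strict priority order; the snippet below shows the classic equality-only special case via null-space projection, with random Jacobians standing in for the real kinematics. It is a didactic sketch under those assumptions, not the authors' HQP implementation, which also handles inequality constraints and thresholds.

```python
# Strict two-level task prioritization (the idea HQP generalizes): a
# secondary task, e.g. keeping tools centered in the view, is resolved
# only in the null space of a primary task, e.g. respecting the
# remote-center-of-motion. Jacobians and targets are illustrative.
import numpy as np

def prioritized_velocities(J1, dx1, J2, dx2):
    """Joint velocities meeting task 1 exactly (least-squares sense) and
    task 2 as well as possible without disturbing task 1."""
    J1_pinv = np.linalg.pinv(J1)
    q1 = J1_pinv @ dx1                       # primary task solution
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1  # null-space projector of task 1
    q2 = np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ q1)  # secondary, in null space
    return q1 + N1 @ q2

rng = np.random.default_rng(0)
J1, J2 = rng.standard_normal((2, 6)), rng.standard_normal((3, 6))
dq = prioritized_velocities(J1, np.array([0.01, 0.0]),
                            J2, np.array([0.0, 0.02, 0.0]))
print(np.allclose(J1 @ dq, [0.01, 0.0]))  # primary task met -> True
```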
Affiliation(s)
- Khusniddin Fozilov
- Department of Micro-Nano Mechanical Science and Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Aichi, Japan
- Jacinto Colan
- Department of Micro-Nano Mechanical Science and Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Aichi, Japan
- Ana Davila
- Institutes of Innovation for Future Society, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Aichi, Japan
- Kazunari Misawa
- Aichi Cancer Center Hospital, Chikusa Ward, Nagoya 464-8681, Aichi, Japan
- Jie Qiu
- Graduate School of Informatics, Nagoya University, Chikusa Ward, Nagoya 464-8601, Aichi, Japan
- Yuichiro Hayashi
- Graduate School of Informatics, Nagoya University, Chikusa Ward, Nagoya 464-8601, Aichi, Japan
- Kensaku Mori
- Graduate School of Informatics, Nagoya University, Chikusa Ward, Nagoya 464-8601, Aichi, Japan
- Yasuhisa Hasegawa
- Institutes of Innovation for Future Society, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Aichi, Japan
4
Budd C, Garcia-Peraza-Herrera LC, Huber M, Ourselin S, Vercauteren T. Rapid and robust endoscopic content area estimation: A lean GPU-based pipeline and curated benchmark dataset. Comput Methods Biomech Biomed Eng Imaging Vis 2023; 11:1215-1224. [PMID: 38600897] [PMCID: PMC7615255] [DOI: 10.1080/21681163.2022.2156393]
Abstract
Endoscopic content area refers to the informative area enclosed by the dark, non-informative border regions present in most endoscopic footage. The estimation of the content area is a common task in endoscopic image processing and computer vision pipelines. Despite the apparent simplicity of the problem, several factors make reliable real-time estimation surprisingly challenging. The lack of rigorous investigation into the topic, combined with the lack of a common benchmark dataset for this task, has been a long-standing issue in the field. In this paper, we propose two variants of a lean GPU-based computational pipeline combining edge detection and circle fitting. The two variants differ in how they extract content-area edge-point candidates, relying on handcrafted and learned features, respectively. We also present a first-of-its-kind dataset of manually annotated and pseudo-labelled content areas across a range of surgical indications. To encourage further developments, the curated dataset and an implementation of both algorithms have been made public (https://doi.org/10.7303/syn32148000, https://github.com/charliebudd/torch-content-area). We compare our proposed algorithm with a state-of-the-art U-Net-based approach and demonstrate significant improvements in both accuracy (Hausdorff distance: 6.3 px versus 118.1 px) and computational time (average runtime per frame: 0.13 ms versus 11.2 ms).
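The two-stage recipe named in the abstract, edge-point extraction followed by circle fitting, can be illustrated with a small CPU-side sketch. The crude border test and the algebraic (Kasa) least-squares fit below are generic illustrative choices, not the authors' GPU pipeline; see their torch-content-area repository for the real implementation.

```python
# Illustrative content-area estimation on a synthetic frame: find candidate
# points where the dark border meets the bright content area, then fit a
# circle to them by algebraic least squares.
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit to candidate edge points."""
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    b = xs**2 + ys**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

def edge_candidates(gray, thresh=30):
    """Per row, first/last column brighter than `thresh` (crude border test)."""
    xs, ys = [], []
    for y, row in enumerate(gray):
        bright = np.flatnonzero(row > thresh)
        if bright.size:
            xs += [bright[0], bright[-1]]
            ys += [y, y]
    return np.asarray(xs, float), np.asarray(ys, float)

# Synthetic frame: bright disc (content area) on a dark border.
h, w = 240, 320
yy, xx = np.mgrid[0:h, 0:w]
gray = np.where((xx - 160)**2 + (yy - 120)**2 < 100**2, 200, 5).astype(np.uint8)
print(fit_circle(*edge_candidates(gray)))  # ~ (160, 120, 100)
```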
5
Chadebecq F, Lovat LB, Stoyanov D. Artificial intelligence and automation in endoscopy and surgery. Nat Rev Gastroenterol Hepatol 2023; 20:171-182. [PMID: 36352158] [DOI: 10.1038/s41575-022-00701-y]
Abstract
Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset enables video data capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient's anatomy as well as events, activity and action logs of the surgical process. This detailed but difficult-to-interpret record from endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can utilize data from endoscopic procedures to develop computer-assisted interventions that enable better navigation during procedures, automation of image interpretation and robotically assisted tool manipulation. In this Perspective, we summarize the state of the art in artificial intelligence for computer-assisted interventions in gastroenterology and surgery.
Affiliation(s)
- François Chadebecq
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Laurence B Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
6
Li W, Ng WY, Zhang X, Huang Y, Li Y, Song C, Chiu PWY, Li Z. A kinematic modeling and control scheme for different robotic endoscopes: A rudimentary research prototype. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3186758]
Affiliation(s)
- Weibing Li
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Wing Yin Ng
- Department of Surgery and the Chow Yuk Ho Technology Centre for Innovative Medicine, The Chinese University of Hong Kong, Hong Kong, China
- Xue Zhang
- Department of Surgery and the Chow Yuk Ho Technology Centre for Innovative Medicine, The Chinese University of Hong Kong, Hong Kong, China
- Yisen Huang
- Department of Surgery and the Chow Yuk Ho Technology Centre for Innovative Medicine, The Chinese University of Hong Kong, Hong Kong, China
- Yehui Li
- Department of Surgery and the Chow Yuk Ho Technology Centre for Innovative Medicine, The Chinese University of Hong Kong, Hong Kong, China
- Chengzhi Song
- Shenzhen Cornerstone Technology Co., Ltd., Shenzhen, China
- Philip Wai-Yan Chiu
- Department of Surgery and the Chow Yuk Ho Technology Centre for Innovative Medicine, The Chinese University of Hong Kong, Hong Kong, China
- Zheng Li
- Department of Surgery, Chow Yuk Ho Technology Centre for Innovative Medicine, Li Ka Shing Institute of Health Science, and Multi-Scale Medical Robotics Centre, The Chinese University of Hong Kong, Hong Kong, China
7
Surgical tool datasets for machine learning research: A survey. Int J Comput Vis 2022. [DOI: 10.1007/s11263-022-01640-6]
Abstract
This paper is a comprehensive survey of datasets for surgical tool detection and of related surgical data science and machine learning techniques and algorithms. The survey offers a high-level perspective of current research in this area, analyses the taxonomy of approaches adopted by researchers using surgical tool datasets, and addresses key areas of research, such as the datasets used, the evaluation metrics applied and the deep learning techniques utilised. Our presentation and taxonomy provide a framework that facilitates greater understanding of current work and highlight the challenges and opportunities for further innovative and useful research.