1
Yang C, Wang K, Wang Y, Dou Q, Yang X, Shen W. Efficient Deformable Tissue Reconstruction via Orthogonal Neural Plane. IEEE Trans Med Imaging 2024; 43:3211-3223. [PMID: 38625765] [DOI: 10.1109/tmi.2024.3388559]
Abstract
Intraoperative imaging techniques for reconstructing deformable tissues in vivo are pivotal for advanced surgical systems. Existing methods either compromise on rendering quality or are excessively computationally intensive, often demanding dozens of hours of computation, which significantly hinders their practical application. In this paper, we introduce Fast Orthogonal Plane (Forplane), a novel, efficient framework based on neural radiance fields (NeRF) for the reconstruction of deformable tissues. We conceptualize surgical procedures as 4D volumes and break them down into static and dynamic fields composed of orthogonal neural planes. This factorization discretizes the four-dimensional space, reducing memory usage and accelerating optimization. A spatiotemporal importance sampling scheme is introduced to improve performance in regions with tool occlusion or large motions and to accelerate training. An efficient ray marching method is applied to skip sampling in empty regions, significantly improving inference speed. Forplane accommodates both binocular and monocular endoscopy videos, demonstrating its broad applicability and flexibility. Our experiments, carried out on two in vivo datasets, EndoNeRF and Hamlyn, demonstrate the effectiveness of our framework. In all cases, Forplane substantially accelerates both optimization (by over 100 times) and inference (by over 15 times) while maintaining or even improving quality across a variety of non-rigid deformations. This significant performance improvement promises to be a valuable asset for future intraoperative surgical applications. The code of our project is available at https://github.com/Loping151/ForPlane.
2
Schmidt A, Mohareri O, DiMaio SP, Salcudean SE. Surgical Tattoos in Infrared: A Dataset for Quantifying Tissue Tracking and Mapping. IEEE Trans Med Imaging 2024; 43:2634-2645. [PMID: 38437151] [DOI: 10.1109/tmi.2024.3372828]
Abstract
Quantifying the performance of methods for tracking and mapping tissue in endoscopic environments is essential for enabling image guidance and automation of medical interventions and surgery. Datasets developed so far either use rigid environments, use visible markers, or require annotators to label salient points in videos after collection. These are, respectively, not general, visible to algorithms, or costly and error-prone. We introduce a novel labeling methodology along with a dataset that uses it, Surgical Tattoos in Infrared (STIR). STIR has labels that are persistent but invisible to visible-spectrum algorithms. This is done by labeling tissue points with an IR-fluorescent dye, indocyanine green (ICG), and then collecting visible-light video clips. STIR comprises hundreds of stereo video clips of both in vivo and ex vivo scenes with start and end points labeled in the IR spectrum. With over 3,000 labeled points, STIR will help to quantify and enable better analysis of tracking and mapping methods. After introducing STIR, we analyze multiple frame-based tracking methods on STIR using both 3D and 2D endpoint error and accuracy metrics. STIR is available at https://dx.doi.org/10.21227/w8g4-g548.
3
Yang Z, Dai J, Pan J. 3D reconstruction from endoscopy images: A survey. Comput Biol Med 2024; 175:108546. [PMID: 38704902] [DOI: 10.1016/j.compbiomed.2024.108546]
Abstract
Three-dimensional reconstruction of images acquired through endoscopes plays a vital role in an increasing number of medical applications. Endoscopes used in the clinic are commonly classified as monocular or binocular. We review the classification of depth estimation methods according to the type of endoscope. Fundamentally, depth estimation relies on image feature matching and multi-view geometry, but these traditional techniques face many problems in the endoscopic environment. With the rapid development of deep learning, a growing number of works use learning-based methods to address challenges such as inconsistent illumination and texture sparsity. We have reviewed over 170 papers published in the 10 years from 2013 to 2023. The commonly used public datasets and performance metrics are summarized. We also give a taxonomy of methods and analyze the advantages and drawbacks of the algorithms. Summary tables and a results atlas are provided to facilitate comparison of the qualitative and quantitative performance of methods in each category. In addition, we summarize commonly used scene representation methods in endoscopy and speculate on the prospects of depth estimation research in medical applications. We also compare the robustness, processing time, and scene representation of the methods to help doctors and researchers select appropriate methods for their surgical applications.
Affiliation(s)
- Zhuoyue Yang
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, 37 Xueyuan Road, Haidian District, Beijing, 100191, China; Peng Cheng Lab, 2 Xingke 1st Street, Nanshan District, Shenzhen, Guangdong Province, 518000, China
- Ju Dai
- Peng Cheng Lab, 2 Xingke 1st Street, Nanshan District, Shenzhen, Guangdong Province, 518000, China
- Junjun Pan
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, 37 Xueyuan Road, Haidian District, Beijing, 100191, China; Peng Cheng Lab, 2 Xingke 1st Street, Nanshan District, Shenzhen, Guangdong Province, 518000, China
4
Yang Z, Pan J, Dai J, Sun Z, Xiao Y. Self-Supervised Lightweight Depth Estimation in Endoscopy Combining CNN and Transformer. IEEE Trans Med Imaging 2024; 43:1934-1944. [PMID: 38198275] [DOI: 10.1109/tmi.2024.3352390]
Abstract
In recent years, an increasing number of medical engineering tasks, such as surgical navigation, pre-operative registration, and surgical robotics, rely on 3D reconstruction techniques. Self-supervised depth estimation has attracted interest in endoscopic scenarios because it does not require ground truth. Most existing methods depend on increasing the number of parameters to improve performance. Therefore, designing a lightweight self-supervised model that obtains competitive results is an active research topic. We propose a lightweight network with a tight coupling of a convolutional neural network (CNN) and a Transformer for depth estimation. Unlike other methods that use a CNN and a Transformer to extract features separately and then fuse them at the deepest layer, we use CNN and Transformer modules to extract features at different scales in the encoder. This hierarchical structure leverages the advantages of the CNN in texture perception and of the Transformer in shape extraction. At each scale, the CNN acquires local features while the Transformer encodes global information. Finally, we add multi-head attention modules to the pose network to improve the accuracy of predicted poses. Experiments demonstrate that our approach obtains comparable results on two datasets while effectively compressing the model parameters.
5
Schmidt A, Mohareri O, DiMaio S, Yip MC, Salcudean SE. Tracking and mapping in medical computer vision: A review. Med Image Anal 2024; 94:103131. [PMID: 38442528] [DOI: 10.1016/j.media.2024.103131]
Abstract
As computer vision algorithms increase in capability, their applications in clinical systems will become more pervasive. These applications include diagnostics, such as colonoscopy and bronchoscopy; guiding biopsies, minimally invasive interventions, and surgery; automating instrument motion; and providing image guidance using pre-operative scans. Many of these applications depend on the specific visual nature of medical scenes and require algorithms designed to perform in this environment. In this review, we provide an update on the field of camera-based tracking and scene mapping in surgery and diagnostics in medical computer vision. We begin by describing our review process, which yields a final list of 515 papers. We then give a high-level summary of the state of the art and provide relevant background for those who need tracking and mapping for their clinical applications. Next, we review datasets provided in the field and the clinical needs that motivate their design. We then delve into the algorithmic side and summarize recent developments. This summary should be especially useful for algorithm designers and for those looking to understand the capability of off-the-shelf methods. We maintain focus on algorithms for deformable environments while also reviewing the essential building blocks of rigid tracking and mapping, since there is substantial crossover in methods. With the field summarized, we discuss the current state of tracking and mapping methods along with needs for future algorithms, needs for quantification, and the viability of clinical applications. We then provide some research directions and questions. We conclude that new methods need to be designed or combined to support clinical applications in deformable environments, and that more focus needs to be put into collecting datasets for training and evaluation.
Affiliation(s)
- Adam Schmidt
- Department of Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver V6T 1Z4, BC, Canada.
- Omid Mohareri
- Advanced Research, Intuitive Surgical, 1020 Kifer Rd, Sunnyvale, CA 94086, USA
- Simon DiMaio
- Advanced Research, Intuitive Surgical, 1020 Kifer Rd, Sunnyvale, CA 94086, USA
- Michael C Yip
- Department of Electrical and Computer Engineering, University of California San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA
- Septimiu E Salcudean
- Department of Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver V6T 1Z4, BC, Canada
6
Lin Z, Lei C, Yang L. Modern Image-Guided Surgery: A Narrative Review of Medical Image Processing and Visualization. Sensors (Basel) 2023; 23:9872. [PMID: 38139718] [PMCID: PMC10748263] [DOI: 10.3390/s23249872]
Abstract
Medical image analysis forms the basis of image-guided surgery (IGS) and many of its fundamental tasks. Driven by the growing number of medical imaging modalities, the medical imaging research community has developed methods and achieved functional breakthroughs. However, with the overwhelming pool of information in the literature, it has become increasingly challenging for researchers to extract context-relevant information for specific applications, especially when many widely used methods exist in a variety of versions optimized for their respective application domains. Equipped with sophisticated three-dimensional (3D) medical image visualization and digital reality technology, medical experts could improve their performance in IGS several-fold. The goal of this narrative review is to organize the key components of IGS in the aspects of medical image processing and visualization with a new perspective and insights. The literature search was conducted using mainstream academic search engines with a combination of keywords relevant to the field up until mid-2022. This survey systematically summarizes the basic, mainstream, and state-of-the-art medical image processing methods, as well as how visualization technologies such as augmented/mixed/virtual reality (AR/MR/VR) are enhancing performance in IGS. Further, we hope that this survey will shed some light on the future of IGS in the face of challenges and opportunities for the research directions of medical image processing and visualization.
Affiliation(s)
- Zhefan Lin
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Chen Lei
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
7
Chua Z, Okamura AM. A Modular 3-Degrees-of-Freedom Force Sensor for Robot-Assisted Minimally Invasive Surgery Research. Sensors (Basel) 2023; 23:5230. [PMID: 37299958] [DOI: 10.3390/s23115230]
Abstract
Effective force modulation during tissue manipulation is important for ensuring safe robot-assisted minimally invasive surgery (RMIS). Strict requirements for in vivo applications have led to prior sensor designs that trade off ease of manufacture and integration against force measurement accuracy along the tool axis. Because of this trade-off, there are no commercial off-the-shelf 3-degrees-of-freedom (3DoF) force sensors for RMIS available to researchers, which makes it challenging to develop new approaches to indirect sensing and haptic feedback for bimanual telesurgical manipulation. We present a modular 3DoF force sensor that integrates easily with an existing RMIS tool. We achieve this by relaxing biocompatibility and sterilizability requirements and by using commercial load cells and common electromechanical fabrication techniques. The sensor has a range of ±5 N axially and ±3 N laterally, with errors below 0.15 N and maximum errors below 11% of the sensing range in all directions. During telemanipulation, a pair of jaw-mounted sensors achieved average errors below 0.15 N in all directions and an average grip force error of 0.156 N. The sensor is suited to bimanual haptic feedback and robotic force control in delicate tissue telemanipulation. As an open-source design, the sensors can be adapted to other non-RMIS robotic applications.
Affiliation(s)
- Zonghe Chua
- Department of Electrical, Computer and Systems Engineering, Case Western Reserve University, 10900 Euclid Avenue, Glennan Building 514A, Cleveland, OH 44106, USA
- Allison M Okamura
- Department of Mechanical Engineering, Stanford University, Stanford, CA 94305, USA
8
Chadebecq F, Lovat LB, Stoyanov D. Artificial intelligence and automation in endoscopy and surgery. Nat Rev Gastroenterol Hepatol 2023; 20:171-182. [PMID: 36352158] [DOI: 10.1038/s41575-022-00701-y]
Abstract
Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset enables video data capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient's anatomy as well as events, activity and action logs about the surgical process. This detailed but difficult-to-interpret record from endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can utilize data from endoscopic procedures to develop computer-assisted interventions that enable better navigation during procedures, automation of image interpretation, and robotically assisted tool manipulation. In this Perspective, we summarize state-of-the-art artificial intelligence for computer-assisted interventions in gastroenterology and surgery.
Affiliation(s)
- François Chadebecq
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Laurence B Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
9
Liu Z, Gao W, Zhu J, Yu Z, Fu Y. Surface deformation tracking in monocular laparoscopic video. Med Image Anal 2023; 86:102775. [PMID: 36848721] [DOI: 10.1016/j.media.2023.102775]
Abstract
Image-guided surgery has been proven to enhance the accuracy and safety of minimally invasive surgery (MIS). Nonrigid deformation tracking of soft tissue is one of the main challenges in image-guided MIS owing to tissue deformation, homogeneous texture, smoke, instrument occlusion, and similar factors. In this paper, we propose a nonrigid deformation tracking method based on a piecewise affine deformation model. A Markov random field based mask generation method is developed to eliminate tracking anomalies. The deformation information vanishes when the regular constraint is invalid, which further deteriorates the tracking accuracy; a time-series deformation solidification mechanism is therefore introduced to reduce degradation of the model's deformation field. For quantitative evaluation of the proposed method, we synthesized nine laparoscopic videos mimicking instrument occlusion and tissue deformation, on which tracking robustness was quantitatively evaluated. Three real MIS videos containing the challenges of large-scale deformation, large-range smoke, instrument occlusion, and permanent changes in soft tissue texture were also used to evaluate the performance of the proposed method. Experimental results indicate that the proposed method outperforms state-of-the-art methods in accuracy and robustness, showing good performance in image-guided MIS.
Affiliation(s)
- Ziteng Liu
- School of Life Science and Technology, Harbin Institute of Technology, 2 Yikuang Str., Nangang District, Harbin, 150080, China
- Wenpeng Gao
- School of Life Science and Technology, Harbin Institute of Technology, 2 Yikuang Str., Nangang District, Harbin, 150080, China
- Jiahua Zhu
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, 2 Yikuang Str., Nangang District, Harbin, 150080, China
- Zhi Yu
- School of Life Science and Technology, Harbin Institute of Technology, 2 Yikuang Str., Nangang District, Harbin, 150080, China
- Yili Fu
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, 2 Yikuang Str., Nangang District, Harbin, 150080, China
10
Bourdillon AT, Garg A, Wang H, Woo YJ, Pavone M, Boyd J. Integration of Reinforcement Learning in a Virtual Robotic Surgical Simulation. Surg Innov 2023; 30:94-102. [PMID: 35503302] [DOI: 10.1177/15533506221095298]
Abstract
Background. The revolution in AI holds tremendous capacity to augment human achievement in surgery, but robust integration of deep learning algorithms with high-fidelity surgical simulation remains a challenge. We present a novel application of reinforcement learning (RL) for automating surgical maneuvers in a graphical simulation. Methods. In the Unity3D game engine, the ML-Agents package was integrated with the NVIDIA FleX particle simulator to develop autonomously behaving, RL-trained scissors. Proximal Policy Optimization (PPO) was used to reward desired behaviors, such as movement along a target trajectory and optimized cutting maneuvers along the deformable tissue-like object. Constant and proportional reward functions were tested, and TensorFlow analytics were used to inform hyperparameter tuning and evaluate performance. Results. The RL-trained scissors reliably manipulated the rendered tissue, which was simulated with soft-tissue properties. A desirable trajectory of the autonomously behaving scissors was achieved along one axis. Proportional rewards performed better than constant rewards. Cumulative reward and PPO metrics did not consistently improve across RL-trained scissors in the setting of movement across two axes (horizontal and depth). Conclusion. Game engines hold promising potential for the design and implementation of RL-based solutions to simulated surgical subtasks. Task completion was achieved for one-dimensional movement in simulations with and without tissue rendering. Further work is needed to optimize network architecture and parameter tuning to handle increasing complexity.
Affiliation(s)
- Animesh Garg
- Vector Institute and Department of Computer Science, University of Toronto, Toronto, ON, Canada
- Hanjay Wang
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA
- Y Joseph Woo
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA; Department of Bioengineering, Stanford University, Stanford, CA, USA
- Marco Pavone
- Department of Aeronautics and Astronautics, Stanford University, Stanford, CA, USA
- Jack Boyd
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA
11
Suture Looping Task Pose Planner in a Constrained Surgical Environment. J Intell Robot Syst 2022. [DOI: 10.1007/s10846-022-01772-4]
12
Sun Y, Pan B, Fu Y. Correlation filters tissue tracking with application to robotic minimally invasive surgery. Int J Med Robot 2022; 18:e2440. [DOI: 10.1002/rcs.2440]
Affiliation(s)
- Yanwen Sun
- State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin, China
- Bo Pan
- State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin, China
- Yili Fu
- State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin, China
13
Fiorini P, Goldberg KY, Liu Y, Taylor RH. Concepts and Trends in Autonomy for Robot-Assisted Surgery. Proc IEEE 2022; 110:993-1011. [PMID: 35911127] [PMCID: PMC7613181] [DOI: 10.1109/jproc.2022.3176828]
Abstract
Surgical robots have been widely adopted, with over 4000 robots in daily use. However, these are telerobots that are fully controlled by skilled human surgeons. Introducing "surgeon-assist" capabilities, that is, some forms of autonomy, has the potential to reduce tedium and increase consistency, analogous to driver-assist functions for lane keeping, cruise control, and parking. This article examines the scientific and technical background of robotic autonomy in surgery and some of its ethical, social, and legal implications. We describe several autonomous surgical tasks that have been automated in laboratory settings, along with research concepts and trends.
Affiliation(s)
- Paolo Fiorini
- Department of Computer Science, University of Verona, 37134 Verona, Italy
- Ken Y. Goldberg
- Department of Industrial Engineering and Operations Research and the Department of Electrical Engineering and Computer Science, University of California at Berkeley, Berkeley, CA 94720, USA
- Yunhui Liu
- Department of Mechanical and Automation Engineering, T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong, China
- Russell H. Taylor
- Department of Computer Science, the Department of Mechanical Engineering, the Department of Radiology, the Department of Surgery, and the Department of Otolaryngology, Head-and-Neck Surgery, Johns Hopkins University, Baltimore, MD 21218, USA, and also with the Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
14
Lu J, Richter F, Yip MC. Pose Estimation for Robot Manipulators via Keypoint Optimization and Sim-to-Real Transfer. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3151981]
15
Piccinelli M, Cheng Z, Dall'Alba D, Schmidt MK, Savarimuthu TR, Fiorini P. 3D Vision Based Robot Assisted Electrical Impedance Scanning for Soft Tissue Conductivity Sensing. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3150481]
16
AIM in Endoscopy Procedures. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_164]
17
Ma G, Ross W, Codd PJ. StereoCNC: A Stereovision-guided Robotic Laser System. Proc IEEE/RSJ Int Conf Intell Robot Syst (IROS) 2021; 2021:540-547. [PMID: 35950084] [PMCID: PMC9358620] [DOI: 10.1109/iros51168.2021.9636050]
Abstract
This paper proposes an end-to-end stereovision-guided laser surgery system, referred to as StereoCNC, that can perform laser ablation on targets selected by human operators in the color image. Two digital cameras are integrated into a previously developed robotic laser system to add a color sensing modality and form the stereo vision setup. A calibration method is implemented to register the coordinate frames between the stereo cameras and the laser system, modeled as a 3D-to-3D least-squares problem. The calibration reprojection errors are used to characterize a 3D error field via Gaussian Process Regression (GPR). This error field can make predictions for new point cloud data to identify an optimal position with lower calibration errors. A stereovision-guided laser ablation pipeline is proposed to optimize the positioning of the surgical site within the error field, achieved with a genetic algorithm search; mechanical stages then move the site to the low-error region. The pipeline is validated by experiments on phantoms with color texture and various geometric shapes. The overall targeting accuracy of the system achieved an average RMSE of 0.13 ± 0.02 mm and a maximum error of 0.34 ± 0.06 mm, as measured by pre- and post-ablation images. The results show potential applications of the developed stereovision-guided robotic system for superficial laser surgery, including dermatologic applications and removal of exposed tumorous tissue in neurosurgery.
Affiliation(s)
- Guangshen Ma
- Brain Tool Lab, Department of Mechanical Engineering, Duke University
- Weston Ross
- Brain Tool Lab, Department of Mechanical Engineering, Duke University
- Department of Neurosurgery, Duke University
- Patrick J Codd
- Brain Tool Lab, Department of Mechanical Engineering, Duke University
- Department of Neurosurgery, Duke University
18
Li C, Yan Y, Xiao X, Gu X, Gao H, Duan X, Zuo X, Li Y, Ren H. A Miniature Manipulator With Variable Stiffness Towards Minimally Invasive Transluminal Endoscopic Surgery. IEEE Robot Autom Lett 2021; 6:5541-5548. [DOI: 10.1109/lra.2021.3068115]
19
Kam M, Saeidi H, Hsieh MH, Kang JU, Krieger A. A Confidence-Based Supervised-Autonomous Control Strategy for Robotic Vaginal Cuff Closure. Proc IEEE Int Conf Robot Autom (ICRA) 2021. [PMID: 34840856] [PMCID: PMC8612028] [DOI: 10.1109/icra48506.2021.9561685]
Abstract
Autonomous robotic suturing has the potential to improve surgical outcomes by leveraging accuracy, repeatability, and consistency compared to manual operation. However, achieving full autonomy in complex surgical environments is not practical, and human supervision is required to guarantee safety. In this paper, we develop a confidence-based supervised-autonomous suturing method in which the Smart Tissue Autonomous Robot (STAR) and the surgeon perform robotic suturing tasks collaboratively with the highest possible degree of autonomy. With the proposed method, STAR performs autonomous suturing when highly confident and otherwise asks the operator for possible assistance with suture positioning adjustments. We evaluate the accuracy of the proposed control method via robotic suturing tests on synthetic vaginal cuff tissues and compare the results to vaginal cuff closures performed by an experienced surgeon. Our test results indicate that, using the proposed confidence-based method, STAR can predict the success of purely autonomous suture placement with an accuracy of 94.74%. Moreover, with an additional 25% human intervention, STAR achieves a 98.1% suture placement accuracy, compared to an 85.4% accuracy for completely autonomous robotic suturing. Finally, our experimental results indicate that STAR using the proposed method achieves 1.6 times better consistency in suture spacing and 1.8 times better consistency in suture bite sizes than the manual results.
Affiliation(s)
- Michael Kam, Dep. of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21211, USA
- Hamed Saeidi, Dep. of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21211, USA
- Michael H Hsieh, Dep. of Urology, Children's National Hospital, 111 Michigan Ave. N.W., Washington, DC 20010, USA
- J U Kang, Dep. of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21211, USA
- Axel Krieger, Dep. of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21211, USA
20
Richter F, Shen S, Liu F, Huang J, Funk EK, Orosco RK, Yip MC. Autonomous Robotic Suction to Clear the Surgical Field for Hemostasis Using Image-Based Blood Flow Detection. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3056057] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
21
Abdelaal AE, Liu J, Hong N, Hager GD, Salcudean SE. Parallelism in Autonomous Robotic Surgery. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3060402] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
22
Richter F, Lu J, Orosco RK, Yip MC. Robotic Tool Tracking Under Partially Visible Kinematic Chain: A Unified Approach. IEEE Trans Robot 2021. [DOI: 10.1109/tro.2021.3111441] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
23
Marzullo A, Moccia S, Calimeri F, De Momi E. AIM in Endoscopy Procedures. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_164-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/07/2022]
24
Attanasio A, Scaglioni B, Leonetti M, Frangi AF, Cross W, Biyani CS, Valdastri P. Autonomous Tissue Retraction in Robotic Assisted Minimally Invasive Surgery – A Feasibility Study. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.3013914] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
25
Compensatory motion scaling for time-delayed robotic surgery. Surg Endosc 2020; 35:2613-2618. [PMID: 32514831 DOI: 10.1007/s00464-020-07681-7] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2020] [Accepted: 05/27/2020] [Indexed: 10/24/2022]
Abstract
BACKGROUND: Round-trip signal latency, or time delay, is an unavoidable constraint that currently stands as a major barrier to safe and efficient remote telesurgery. While there have been significant technological advancements aimed at reducing time delay, studies evaluating methods of mitigating its negative effects are needed. Herein, we explored instrument motion scaling as a method to improve performance in time-delayed robotic surgery.
METHODS: This was a robotic surgery user study using the da Vinci Research Kit system. A ring transfer task was performed under normal circumstances (no added time delay) and with 250 ms, 500 ms, and 750 ms of delay. Robotic instrument motion scaling was modulated across a range of values (-0.15, -0.1, 0, +0.1, +0.15), with negative values indicating less instrument displacement for a given amount of operator movement. The primary outcomes were task completion time and total errors. Three-dimensional instrument movement was compared across motion scales using dynamic time warping to demonstrate the effects of scaling.
RESULTS: Performance declined with increasing time delay. Statistically significant increases in task time and number of errors were seen at 500 ms and 750 ms delay (p < 0.05). Total errors were positively correlated with task time on linear regression (R = 0.79, p < 0.001). Under 750 ms delay, negative instrument motion scaling improved error rates and trended toward bringing task times closer to those seen in non-delayed scenarios. Improvements in instrument path motion were also seen with negative motion scaling.
CONCLUSIONS: Under time-delayed conditions, negative robotic instrument motion scaling yielded fewer surgical errors with a slight improvement in task time. Motion scaling is a promising method of improving the safety and efficiency of time-delayed robotic surgery and warrants further investigation.
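The scaling mechanism this abstract describes can be sketched in a few lines. The mapping below, in which a scale value of -0.15 reduces instrument displacement to 0.85x the operator's hand motion, is an assumption about how the reported values modulate a baseline 1:1 teleoperation gain; the study's actual controller is not reproduced here.

```python
# Hypothetical sketch of instrument motion scaling for teleoperation: a
# negative scale value shrinks instrument displacement relative to the
# operator's hand displacement, damping errors under signal delay.

def scale_motion(operator_delta, scale=0.0):
    """Map an operator hand displacement vector to instrument displacement.

    operator_delta: displacement per control tick, assumed in millimeters.
    scale: modulation of the baseline gain, e.g. -0.15 -> 0.85x motion.
    """
    gain = 1.0 + scale
    return tuple(gain * d for d in operator_delta)

# Under 750 ms delay, the study found negative scaling reduced error rates:
slowed = scale_motion((10.0, 0.0, -4.0), scale=-0.15)
```

With `scale=0` the mapping is the identity, matching the study's undelayed baseline condition.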