1. Li Y, Raison N, Ourselin S, Mahmoodi T, Dasgupta P, Granados A. AI solutions for overcoming delays in telesurgery and telementoring to enhance surgical practice and education. J Robot Surg 2024;18:403. PMID: 39527379; PMCID: PMC11554828; DOI: 10.1007/s11701-024-02153-9.
Abstract
Artificial intelligence (AI) has emerged as a transformative tool in surgery, particularly in telesurgery and telementoring. However, its potential to enhance data transmission efficiency and reliability in these fields remains unclear. While previous reviews have explored the general applications of telesurgery and telementoring in specific surgical contexts, this review uniquely focuses on AI models designed to optimise data transmission and mitigate delays. We conducted a comprehensive literature search on PubMed and IEEE Xplore for studies published in English between 2010 and 2023, focusing on AI-driven, surgery-related, telemedicine, and delay-related research. This review includes methodologies from journals, conferences, and symposiums. Our analysis identified a total of twelve AI studies that focus on optimising network resources, enhancing edge computing, and developing delay-robust predictive applications. Specifically, three studies addressed wireless network resource optimisation, two proposed low-latency control and transfer learning algorithms for edge computing, and seven developed delay-robust applications, five of which focused on motion data, with the remaining two addressing visual and haptic data. These advancements lay the foundation for a truly holistic and context-aware telesurgical experience, significantly transforming remote surgical practice and education. By mapping the current role of AI in addressing delay-related challenges, this review highlights the pressing need for collaborative research to drive the evolution of telesurgery and telementoring in modern robotic surgery.
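To make the "delay-robust" motion-data idea concrete, the following is a minimal sketch of one generic delay-compensation strategy: extrapolating the leader's recent tool motion so the follower side can act before delayed packets arrive. The constant-velocity predictor and all names below are illustrative assumptions, not any of the twelve surveyed AI models.

```python
# Minimal sketch: mask network latency by predicting the remote tool's next
# position from its recent trajectory (constant-velocity extrapolation).
# Illustrative assumption only, not one of the surveyed AI approaches.
import numpy as np

def predict_motion(history: np.ndarray, delay_steps: int) -> np.ndarray:
    """Extrapolate a (T, 3) trajectory of tool positions 'delay_steps' into the future."""
    velocity = history[-1] - history[-2]          # last observed displacement per step
    return history[-1] + delay_steps * velocity   # constant-velocity prediction

# Example: 3D tool positions sampled at the leader side, 5-sample network delay.
trajectory = np.cumsum(np.random.randn(50, 3) * 0.001, axis=0)
predicted = predict_motion(trajectory, delay_steps=5)
```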
Affiliation(s)
- Yang Li: Surgical and Interventional Engineering, King's College London, London, UK
- Nicholas Raison: Surgical and Interventional Engineering, King's College London, London, UK; Department of Urology, Guy's Hospital, London, UK
- Sebastien Ourselin: Surgical and Interventional Engineering, King's College London, London, UK
- Toktam Mahmoodi: Department of Engineering, King's College London, London, UK
- Prokar Dasgupta: Surgical and Interventional Engineering, King's College London, London, UK; Department of Urology, Guy's Hospital, London, UK
- Alejandro Granados: Surgical and Interventional Engineering, King's College London, London, UK
2. Wang T, Li H, Pu T, Yang L. Microsurgery Robots: Applications, Design, and Development. Sensors (Basel) 2023;23:8503. PMID: 37896597; PMCID: PMC10611418; DOI: 10.3390/s23208503.
Abstract
Microsurgical techniques are widely used in surgical specialties such as ophthalmology, neurosurgery, and otolaryngology, all of which require intricate, precise manipulation of surgical tools at a small scale. In microsurgery, operating on delicate vessels or tissues demands a high level of surgical skill, which leads to a steep learning curve and lengthy training before surgeons can perform microsurgical procedures with good outcomes. The microsurgery robot (MSR), which can enhance surgeons' operative skills through various functions, has received extensive research attention over the past three decades. Many review papers have summarized MSR research for specific surgical specialties, but the literature lacks an in-depth review of the underlying technologies used in MSR systems. This review details the technical challenges in microsurgery and systematically summarizes the key technologies in MSR from a developmental perspective: from basic structural and mechanism design, to perception and human-machine interaction methods, and further to achieving a certain level of autonomy. By presenting and comparing the methods and technologies in this cutting-edge research, the paper aims to give readers a comprehensive understanding of the current state of MSR research and to identify potential directions for future development.
Affiliation(s)
- Tiexin Wang: ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China; School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China
- Haoyu Li: ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Tanhong Pu: ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang: ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China; School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China; Department of Mechanical Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
3. Mittal P, Bhatnagar C. Effectual accuracy of OCT image retinal segmentation with the aid of speckle noise reduction and boundary edge detection strategy. J Microsc 2023;289:164-179. PMID: 36373509; DOI: 10.1111/jmi.13152.
Abstract
Optical coherence tomography (OCT) has proven to be a valuable imaging tool in ophthalmology and is becoming increasingly relevant in neurology. Several OCT image segmentation methods have previously been developed to segment retinal images; however, sophisticated speckle noise under low-intensity constraints, complex retinal tissue, and inaccurate retinal layer structure remain obstacles to effective retinal segmentation. Hence, in this research, complicated speckle noise is removed using a novel far-flung ratio algorithm, in which preprocessing greatly reduces the speckle noise through new similarity and statistical measures. Additionally, novel haphazard-walk and inter-frame flattening algorithms are presented to handle weak object boundaries in OCT images; these algorithms detect edges and estimate minimal weighted paths effectively, which reduces time complexity. In addition, segmentation of OCT images is simplified by a novel N-ret layer segmentation approach that segments multiple surfaces simultaneously, ensures unambiguous segmentation across neighbouring layers, and improves segmentation accuracy by constructing data from two grey-scale values. Consequently, the proposed method achieved 98.5% segmentation accuracy, outperforming existing OCT image segmentation approaches.
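As a point of reference for the preprocessing stage, here is a minimal sketch of conventional speckle reduction on a single OCT B-scan using non-local means from scikit-image. It is an illustration under assumed image shape and parameters, not the paper's far-flung ratio, haphazard-walk, or N-ret algorithms.

```python
# Minimal sketch of conventional speckle handling on an OCT B-scan.
# Uses non-local means denoising; parameters and shapes are illustrative.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def reduce_speckle(bscan: np.ndarray) -> np.ndarray:
    """Denoise a single grayscale OCT B-scan with values in [0, 1]."""
    sigma = float(np.mean(estimate_sigma(bscan)))        # rough noise estimate
    return denoise_nl_means(bscan, h=1.15 * sigma, sigma=sigma,
                            patch_size=5, patch_distance=6, fast_mode=True)

noisy = np.clip(np.random.rand(256, 512), 0, 1)          # placeholder B-scan
clean = reduce_speckle(noisy)
```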
Affiliation(s)
- Praveen Mittal: Computer Engineering & Applications, GLA University, Mathura, UP, India
- Charul Bhatnagar: Computer Engineering & Applications, GLA University, Mathura, UP, India
4. Automatic and accurate needle detection in 2D ultrasound during robot-assisted needle insertion process. Int J Comput Assist Radiol Surg 2021;17:295-303. PMID: 34677747; DOI: 10.1007/s11548-021-02519-6.
Abstract
PURPOSE: Robot-assisted needle insertion guided by 2D ultrasound (US) can effectively improve the accuracy and success rate of clinical puncture. Automatic and accurate needle-tracking methods are therefore important for monitoring the puncture process, preventing the needle from deviating from the intended path, and reducing the risk of injury to surrounding tissues. This work aims to develop a framework for automatic and accurate detection of an inserted needle in 2D US images during the insertion process.
METHODS: We propose a novel convolutional neural network architecture comprising a two-channel encoder and a single-channel decoder that segments the needle using motion information extracted from two adjacent US frames. Building on this network, we further propose an automatic needle detection framework: based on the prediction for the previous frame, a region of interest around the needle is extracted from the US image and fed into the network, enabling finer and faster continuous needle localization.
RESULTS: The method was evaluated on 1000 pairs of US images extracted from robot-assisted needle insertions into freshly excised bovine and porcine tissue. The needle segmentation network achieved 99.7% accuracy, 86.2% precision, 89.1% recall, and an F1-score of 0.87. The detection framework localized the needle with a mean tip error of 0.45 ± 0.33 mm and a mean orientation error of 0.42° ± 0.34°, with a total processing time of 50 ms per image.
CONCLUSION: The proposed framework demonstrated robust, accurate, real-time needle localization during robot-assisted needle insertion. It has promising applications in tracking the needle and ensuring safety during challenging US-guided, robot-assisted minimally invasive procedures.
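To illustrate the kind of architecture described, the sketch below shows a PyTorch encoder-decoder that takes two adjacent US frames stacked as two input channels and outputs a single-channel needle mask. The block layout, channel widths, and the simplification of the two-channel encoder into a single encoder with a two-channel input are assumptions for illustration, not the authors' network.

```python
# Minimal sketch (PyTorch): segment a needle from a pair of adjacent US frames.
# Layer sizes and block layout are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TwoFrameNeedleNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: the two adjacent US frames enter as two input channels.
        self.enc1 = conv_block(2, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        # Decoder: upsample back to input resolution, single-channel output.
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)          # needle probability map

    def forward(self, frame_pair):               # frame_pair: (B, 2, H, W)
        e1 = self.enc1(frame_pair)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))      # (B, 1, H, W) needle mask

# Usage: stack the previous and current ROI frames along the channel axis.
pair = torch.randn(1, 2, 128, 128)
mask = TwoFrameNeedleNet()(pair)
```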
5. Yu N, Yu H, Li H, Ma N, Hu C, Wang J. A Robust Deep Learning Segmentation Method for Hematoma Volumetric Detection in Intracerebral Hemorrhage. Stroke 2021;53:167-176. PMID: 34601899; DOI: 10.1161/strokeaha.120.032243.
Abstract
BACKGROUND AND PURPOSE: Hematoma volume (HV) is a key diagnostic measure for determining the clinical stage and therapeutic approach in intracerebral hemorrhage (ICH). The aim of this study was to develop a robust deep learning segmentation method for fast and accurate HV analysis from computed tomography (CT).
METHODS: A novel dimension-reduction UNet (DR-UNet) model was developed for CT image segmentation and HV measurement. Two data sets, a retrospective set of 512 ICH patients with 12 568 CT slices and a prospective set of 50 ICH patients with 1257 slices, were used for network training, validation, and internal and external testing. In addition, 13 irregularly shaped hematoma cases, 11 subdural and epidural hematoma cases, and 50 cases stratified into 3 HV groups (<30, 30-60, and >60 mL) were selected to further evaluate the robustness of DR-UNet. Segmentation performance was compared with UNet, the fuzzy clustering method, and the active contour method; HV measurement was compared among DR-UNet, UNet, and the Coniglobus formula.
RESULTS: DR-UNet achieved segmentation performance similar to that of expert clinicians in 2 independent test data sets, with a Dice of 0.861±0.139 on internal testing data and 0.874±0.130 on external testing data. The HV measurement derived from DR-UNet was strongly correlated with that from manual segmentation (R²=0.9979; P<0.0001). In the irregularly shaped and the subdural/epidural hematoma groups, DR-UNet was more robust than UNet in both hematoma segmentation and HV measurement, and segmentation accuracy did not differ significantly among the 3 HV groups.
CONCLUSIONS: DR-UNet can segment hematomas from the CT scans of ICH patients and quantify HV with better accuracy and greater efficiency than the main existing methods, and with performance similar to expert clinicians. Owing to its robust, stable segmentation across different ICH presentations, DR-UNet could facilitate the development of deep learning systems for a variety of clinical applications.
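For context on the volumetric endpoint, the sketch below shows how hematoma volume and Dice overlap can be computed from per-slice binary masks produced by any segmentation model. The voxel spacing values and array shapes are illustrative assumptions, not the study's acquisition parameters or the DR-UNet pipeline.

```python
# Minimal sketch: hematoma volume and Dice overlap from binary segmentation masks.
# Spacing values are illustrative assumptions.
import numpy as np

def hematoma_volume_ml(masks: np.ndarray, pixel_spacing_mm=(0.45, 0.45),
                       slice_thickness_mm=5.0) -> float:
    """masks: (num_slices, H, W) binary array; returns volume in millilitres."""
    voxel_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
    return float(masks.sum()) * voxel_mm3 / 1000.0       # mm^3 -> mL

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between predicted and manual segmentations."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

pred = np.zeros((30, 512, 512), dtype=bool)
pred[10:20, 200:260, 200:260] = True                     # toy hematoma region
print(hematoma_volume_ml(pred), dice(pred, pred))
```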
Affiliation(s)
- Nannan Yu: Department of Artificial Intelligence, School of Electrical Engineering and Automation, Jiangsu Normal University, Xuzhou, China
- He Yu: Department of Artificial Intelligence, School of Electrical Engineering and Automation, Jiangsu Normal University, Xuzhou, China
- Haonan Li: Department of Biotechnology, College of Basic Medical Sciences, Dalian Medical University, China
- Nannan Ma: Radiology Department, Xuzhou Central Hospital, China
- Chunai Hu: Radiology Department, Xuzhou Central Hospital, China
- Jia Wang: Department of Biotechnology, College of Basic Medical Sciences, Dalian Medical University, China
6. Shin C, Gerber MJ, Lee YH, Rodriguez M, Pedram SA, Hubschman JP, Tsao TC, Rosen J. Semi-Automated Extraction of Lens Fragments via a Surgical Robot Using Semantic Segmentation of OCT Images with Deep Learning - Experimental Results in ex vivo Animal Model. IEEE Robot Autom Lett 2021;6:5261-5268. PMID: 34621980; PMCID: PMC8492005; DOI: 10.1109/lra.2021.3072574.
Abstract
The overarching goal of this work is to demonstrate the feasibility of using optical coherence tomography (OCT) to guide a robotic system to extract lens fragments from ex vivo pig eyes. A convolutional neural network (CNN) was developed to semantically segment four intraocular structures (lens material, capsule, cornea, and iris) in OCT images. The network was trained on images from ten pig eyes, validated on images from eight different eyes, and tested on images from another ten eyes. This segmentation algorithm was incorporated into the Intraocular Robotic Interventional Surgical System (IRISS) to enable semi-automated detection and extraction of lens material, and the semi-automated task was demonstrated on seven separate ex vivo pig eyes. The network achieved a mean intersection over union of 78.20% on the validation set and 83.89% on the test set. Successful implementation and efficacy of the method were confirmed by comparing preoperative and postoperative OCT volume scans from the seven experiments.
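The mean intersection-over-union figures quoted above can be reproduced from predicted and reference label maps as in the short sketch below; the integer label encoding 0-3 for the four intraocular classes is an assumption for illustration.

```python
# Minimal sketch: mean intersection over union for a 4-class segmentation map.
# Label encoding 0-3 is an illustrative assumption.
import numpy as np

def mean_iou(pred: np.ndarray, truth: np.ndarray, num_classes: int = 4) -> float:
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union > 0:                            # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.random.randint(0, 4, (256, 256))
print(mean_iou(pred, pred))                      # perfect agreement -> 1.0
```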
Affiliation(s)
- Changyeob Shin: Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, CA, USA
- Matthew J Gerber: Stein Eye Institute, University of California, Los Angeles, CA, USA
- Yu-Hsiu Lee: Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, CA, USA
- Sahba Aghajani Pedram: Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, CA, USA
- Tsu-Chin Tsao: Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, CA, USA
- Jacob Rosen: Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, CA, USA