1. Boretto L, Pelanis E, Regensburger A, Fretland ÅA, Edwin B, Elle OJ. Hybrid optical-vision tracking in laparoscopy: accuracy of navigation and ultrasound reconstruction. Minim Invasive Ther Allied Technol 2024; 33:176-183. [PMID: 38334755] [DOI: 10.1080/13645706.2024.2313032]
Abstract
INTRODUCTION: The use of laparoscopic and robotic liver surgery is increasing. However, these approaches present challenges such as a limited field of view and organ deformation. Surgeons rely on laparoscopic ultrasound (LUS) for guidance, but mentally correlating ultrasound images with pre-operative volumes is difficult. Surgical navigation systems are therefore being developed to assist intra-operative understanding; one approach is intra-operative 3D reconstruction of the ultrasound data. The accuracy of these reconstructions depends on tracking the LUS probe.
MATERIAL AND METHODS: This study evaluates the accuracy of LUS probe tracking and ultrasound 3D reconstruction using a hybrid tracking approach: the LUS probe is tracked from laparoscope images, while an optical tracker tracks the laparoscope. The accuracy of hybrid tracking is compared to full optical tracking using a dual-modality tool. Ultrasound 3D reconstruction accuracy is assessed on an abdominal phantom whose CT is transformed into the optical tracker's coordinate system.
RESULTS: Hybrid tracking achieves a tracking error below 2 mm when the laparoscope is within 10 cm of the LUS probe. The ultrasound reconstruction accuracy is approximately 2 mm.
CONCLUSION: Hybrid tracking shows promising results that can meet the navigation accuracy required for laparoscopic liver surgery.
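No implementation accompanies the abstract, but the hybrid approach it describes reduces to chaining rigid transforms: the optical tracker provides the laparoscope pose, image-based tracking provides the probe pose relative to the laparoscope, and an ultrasound calibration maps image pixels to the probe body. A minimal NumPy sketch of that chain, with the function names, the 4x4 homogeneous-matrix convention, and the pixel-to-millimetre scaling all assumed for illustration:

```python
import numpy as np

def compose(*transforms: np.ndarray) -> np.ndarray:
    """Chain 4x4 homogeneous transforms; the rightmost is applied first."""
    out = np.eye(4)
    for T in transforms:
        out = out @ T
    return out

def us_pixel_to_tracker(T_tracker_scope, T_scope_probe, T_probe_image, pixel_mm):
    """Map a 2D ultrasound pixel (already scaled to mm) into optical tracker space.

    T_tracker_scope: laparoscope pose reported by the optical tracker
    T_scope_probe:   LUS probe pose estimated from the laparoscope images
    T_probe_image:   ultrasound calibration (image plane -> probe body)
    """
    p = np.array([pixel_mm[0], pixel_mm[1], 0.0, 1.0])  # US image is the z = 0 plane
    T = compose(T_tracker_scope, T_scope_probe, T_probe_image)
    return (T @ p)[:3]
```

Stacking the mapped pixels of successive US frames into a volume is what yields the 3D reconstruction; the reported errors then reflect the accumulated error of every link in this chain.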
Affiliation(s)
- Luca Boretto
- Department of Informatics, The Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway
- Siemens Healthcare AS, Oslo, Norway
- Egidijus Pelanis
- The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway
- Åsmund Avdem Fretland
- The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway
- Department of HPB Surgery, Oslo University Hospital Rikshospitalet, Oslo, Norway
- Bjørn Edwin
- The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway
- Department of HPB Surgery, Oslo University Hospital Rikshospitalet, Oslo, Norway
- Faculty of Medicine, Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Ole Jakob Elle
- Department of Informatics, The Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway
- The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway
2. Ramalhinho J, Yoo S, Dowrick T, Koo B, Somasundaram M, Gurusamy K, Hawkes DJ, Davidson B, Blandford A, Clarkson MJ. The value of Augmented Reality in surgery - A usability study on laparoscopic liver surgery. Med Image Anal 2023; 90:102943. [PMID: 37703675] [PMCID: PMC10958137] [DOI: 10.1016/j.media.2023.102943]
Abstract
Augmented Reality (AR) is considered a promising technology for the guidance of laparoscopic liver surgery. By overlaying pre-operative 3D information about the liver and its internal blood vessels on the laparoscopic view, surgeons can better understand the location of critical structures. In an effort to enable AR, several authors have focused on developing methods to accurately align the laparoscopic video image with the pre-operative 3D data of the liver, without assessing the benefit that the resulting overlay provides during surgery. In this paper, we present a study that quantitatively and qualitatively assesses the value of an AR overlay during a simulated surgical task on a phantom setup. Participants are asked to physically localise pre-operative tumours in a liver phantom under three image guidance conditions: a baseline condition without any image guidance, a condition where the 3D surfaces of the liver are aligned to the video and displayed on a black background, and a condition where video see-through AR is displayed on the laparoscopic video. Using data collected from a cohort of 24 participants, including 12 surgeons, we observe that, compared to the baseline, AR decreases the median localisation error of surgeons on non-peripheral targets from 25.8 mm to 9.2 mm. Subjective feedback further indicates that AR improves the usability of the surgical task and increases users' perceived confidence. Between the two tested displays, the majority of participants preferred the AR overlay over the navigated view of the 3D surfaces on a separate screen. We conclude that AR has the potential to improve performance and decision making in laparoscopic surgery, and that improvements in overlay alignment accuracy and depth perception should be pursued in the future.
Affiliation(s)
- João Ramalhinho
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Soojeong Yoo
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom; UCL Interaction Centre, University College London, London, United Kingdom
- Thomas Dowrick
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Bongjin Koo
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Murali Somasundaram
- Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- Kurinchi Gurusamy
- Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- David J Hawkes
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Brian Davidson
- Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- Ann Blandford
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom; UCL Interaction Centre, University College London, London, United Kingdom
- Matthew J Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
3. Deng Z, Xiang N, Pan J. State of the Art in Immersive Interactive Technologies for Surgery Simulation: A Review and Prospective. Bioengineering (Basel) 2023; 10:1346. [PMID: 38135937] [PMCID: PMC10740891] [DOI: 10.3390/bioengineering10121346]
Abstract
Immersive technologies, built on a strong foundation of software and hardware, have injected new vitality into medical training. Numerous efforts have incorporated them into surgery simulation for surgical skills training, and a growing number of researchers are entering this domain. The relevant experiences and patterns urgently need to be summarized so that researchers can establish a comprehensive understanding of the field and promote its continued growth. This study provides a forward-looking perspective by reviewing the latest developments in immersive interactive technologies for surgery simulation. The investigation starts from a technological standpoint, examining the core aspects of virtual reality (VR), augmented reality (AR), and mixed reality (MR) technologies, namely haptic rendering and tracking. We then summarize recent work, categorized into minimally invasive surgery (MIS) and open surgery simulations. Finally, the study showcases the impressive performance and expansive potential of immersive technologies in surgical simulation and discusses their current limitations. We find that the design of interaction and the choice of immersive technology in virtual surgery development should be closely tied to the corresponding interactive operations in the real surgical speciality; this alignment facilitates targeted technological adaptations toward greater applicability and fidelity of simulation.
Affiliation(s)
- Zihan Deng
- Department of Computing, School of Advanced Technology, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
- Nan Xiang
- Department of Computing, School of Advanced Technology, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
- Junjun Pan
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
4. El Chemaly T, Athayde Neves C, Leuze C, Hargreaves B, Blevins NH. Stereoscopic calibration for augmented reality visualization in microscopic surgery. Int J Comput Assist Radiol Surg 2023; 18:2033-2041. [PMID: 37450175] [DOI: 10.1007/s11548-023-02980-5]
Abstract
PURPOSE: Middle and inner ear procedures target hearing loss, infections, and tumors of the temporal bone and lateral skull base. Despite advances in surgical techniques, these procedures remain challenging due to limited haptic and visual feedback. Augmented reality (AR) may improve operative safety by allowing 3D visualization of anatomical structures from preoperative computed tomography (CT) scans on the real intraoperative microscope video feed. The purpose of this work was to develop a real-time CT-augmented stereo microscope system using camera calibration and electromagnetic (EM) tracking.
METHODS: A 3D printed, electromagnetically tracked calibration board was used to compute the intrinsic and extrinsic parameters of the surgical stereo microscope. These parameters establish a transformation between the EM tracker coordinate system and the stereo microscope image space, such that any tracked 3D point can be projected onto the left and right images of the microscope video stream. This allowed the microscope feed of a 3D printed temporal bone to be augmented with its corresponding CT-derived virtual model. The calibration board was also used to evaluate the accuracy of the calibration.
RESULTS: We evaluated the accuracy of the system by calculating the registration error (RE) in 2D and 3D in a microsurgical laboratory setting. Our calibration workflow achieved an RE of 0.11 ± 0.06 mm in 2D and 0.98 ± 0.13 mm in 3D. In addition, we overlaid a 3D CT model on the microscope feed of a resin-printed model of a segmented temporal bone. The system exhibited low latency and good registration accuracy.
CONCLUSION: We present the calibration of an electromagnetically tracked surgical stereo microscope for augmented reality visualization. The calibration method achieved accuracy within a range suitable for otologic procedures. The AR overlay enhances visualization of the surgical field while allowing depth perception.
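The projection step the abstract describes is standard pinhole geometry: with calibrated intrinsics and an extrinsic transform from EM tracker space to each camera, any tracked 3D point maps to pixel coordinates in both views. A minimal NumPy sketch under those assumptions (lens distortion omitted; all names and matrix conventions are illustrative, not the authors' code):

```python
import numpy as np

def project_tracked_point(p_em, T_cam_em, K):
    """Project an EM-tracked 3D point (mm) onto one camera of the stereo microscope.

    T_cam_em: 4x4 extrinsics mapping EM tracker space into camera space
    K:        3x3 intrinsics from the calibration-board procedure
    """
    p_cam = (T_cam_em @ np.append(p_em, 1.0))[:3]  # into camera coordinates
    uv = K @ p_cam                                 # pinhole projection
    return uv[:2] / uv[2]                          # pixel coordinates (u, v)

# Rendering one virtual point consistently in both views of the stereo feed:
# uv_left  = project_tracked_point(p, T_left_em,  K_left)
# uv_right = project_tracked_point(p, T_right_em, K_right)
```

Projecting the same point with each eye's own calibration is what preserves stereo disparity, and hence the depth perception the conclusion highlights.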
Affiliation(s)
- Trishia El Chemaly
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Otolaryngology, Stanford School of Medicine, Stanford, CA, USA
- Department of Radiology, Stanford School of Medicine, Stanford, CA, USA
- Caio Athayde Neves
- Department of Otolaryngology, Stanford School of Medicine, Stanford, CA, USA
- Faculty of Medicine, University of Brasília, Brasília, Brazil
- Christoph Leuze
- Department of Radiology, Stanford School of Medicine, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Brian Hargreaves
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Radiology, Stanford School of Medicine, Stanford, CA, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Nikolas H Blevins
- Department of Otolaryngology, Stanford School of Medicine, Stanford, CA, USA
5. Alikhani A, Osner S, Dehghani S, Busam B, Inagaki S, Maier M, Navab N, Nasseri MA. RCIT: A Robust Catadioptric-based Instrument 3D Tracking Method For Microsurgical Instruments In a Single-Camera System. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-5. [PMID: 38083453] [DOI: 10.1109/embc40787.2023.10340955]
Abstract
The field of robotic microsurgery and micro-manipulation has evolved profoundly in recent years, particularly with regard to accuracy, precision, versatility, and dexterity. These advancements have the potential to revolutionize high-precision biomedical procedures such as neurosurgery, vitreoretinal surgery, and cell micro-manipulation. However, a critical challenge in developing micron-precision robotic systems is accurately verifying the end-effector motion in 3D. Such verification is complicated by environmental vibrations, mechanical assembly inaccuracies, and other physical uncertainties. To overcome these challenges, this paper proposes a novel single-camera framework that uses mirrors with known geometric parameters to estimate the 3D position of the microsurgical instrument. The error is defined as the Euclidean distance between the points reconstructed by the algorithm and the robot movement recorded by highly accurate encoders. Our method achieves a mean absolute error of 0.044 mm when tested on a 23G surgical cannula with a diameter of 0.640 mm, operating at a resolution of 4024 × 3036 at 30 frames per second.
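The abstract gives no algorithmic detail beyond the use of mirrors with known geometry, but the usual catadioptric construction turns a single camera into a stereo pair: the mirror's reflection of the instrument is equivalent to a view from a "virtual" camera, and the 3D tip position follows from two-ray triangulation. A generic sketch of that geometry, not the paper's exact method; the plane parameters and rays below are placeholder values:

```python
import numpy as np

def reflect_point(p, n, d):
    """Reflect a point across the mirror plane {x : n.x = d}, n a unit normal."""
    return p - 2.0 * (n @ p - d) * n

def reflect_direction(v, n):
    """Reflect a direction vector across the mirror plane (no translation term)."""
    return v - 2.0 * (n @ v) * n

def triangulate_midpoint(c1, r1, c2, r2):
    """Midpoint of the shortest segment between rays c1 + t*r1 and c2 + t*r2.

    Assumes the rays are not parallel (1 - a*a != 0).
    """
    r1, r2 = r1 / np.linalg.norm(r1), r2 / np.linalg.norm(r2)
    b = c2 - c1
    a, e, f = r1 @ r2, r1 @ b, r2 @ b
    t1 = (e - a * f) / (1.0 - a * a)
    t2 = (a * e - f) / (1.0 - a * a)
    return 0.5 * ((c1 + t1 * r1) + (c2 + t2 * r2))

# The camera sees the tip directly (r_direct) and in the mirror (r_mirror).
# Reflecting the mirror ray across the known plane gives a second physical ray
# from a virtual camera center, so one image suffices for triangulation.
n, d = np.array([0.0, 0.0, 1.0]), 50.0   # placeholder mirror plane (unit normal, mm)
c_real = np.zeros(3)                      # real camera center at the origin
r_direct = np.array([0.1, 0.05, 1.0])     # placeholder back-projected rays
r_mirror = np.array([0.2, 0.05, 1.0])
c_virt = reflect_point(c_real, n, d)
tip = triangulate_midpoint(c_real, r_direct, c_virt, reflect_direction(r_mirror, n))
```

The sub-0.1 mm accuracy reported suggests careful calibration of the mirror parameters, since any error in the assumed plane propagates directly into the virtual camera pose.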