1.
Schmidt A, Mohareri O, DiMaio S, Yip MC, Salcudean SE. Tracking and mapping in medical computer vision: A review. Med Image Anal 2024; 94:103131. [PMID: 38442528] [DOI: 10.1016/j.media.2024.103131]
Abstract
As computer vision algorithms increase in capability, their applications in clinical systems will become more pervasive. These applications include: diagnostics, such as colonoscopy and bronchoscopy; guiding biopsies, minimally invasive interventions, and surgery; automating instrument motion; and providing image guidance using pre-operative scans. Many of these applications depend on the specific visual nature of medical scenes and require designing algorithms to perform in this environment. In this review, we provide an update on the field of camera-based tracking and scene mapping in surgery and diagnostics in medical computer vision. We begin by describing our review process, which results in a final list of 515 papers that we cover. We then give a high-level summary of the state of the art and provide relevant background for those who need tracking and mapping for their clinical applications. After that, we review datasets provided in the field and the clinical needs that motivate their design. We then delve into the algorithmic side and summarize recent developments. This summary should be especially useful for algorithm designers and for those looking to understand the capability of off-the-shelf methods. We maintain focus on algorithms for deformable environments while also reviewing the essential building blocks in rigid tracking and mapping, since there is a large amount of crossover in methods. With the field summarized, we discuss the current state of tracking and mapping methods along with needs for future algorithms, needs for quantification, and the viability of clinical applications. We then provide some research directions and questions. We conclude that new methods need to be designed or combined to support clinical applications in deformable environments, and that more focus needs to be put into collecting datasets for training and evaluation.
Affiliation(s)
- Adam Schmidt
- Department of Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver V6T 1Z4, BC, Canada
- Omid Mohareri
- Advanced Research, Intuitive Surgical, 1020 Kifer Rd, Sunnyvale, CA 94086, USA
- Simon DiMaio
- Advanced Research, Intuitive Surgical, 1020 Kifer Rd, Sunnyvale, CA 94086, USA
- Michael C Yip
- Department of Electrical and Computer Engineering, University of California San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA
- Septimiu E Salcudean
- Department of Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver V6T 1Z4, BC, Canada
2.
Liu Z, Gao W, Zhu J, Yu Z, Fu Y. Surface deformation tracking in monocular laparoscopic video. Med Image Anal 2023; 86:102775. [PMID: 36848721] [DOI: 10.1016/j.media.2023.102775]
Abstract
Image-guided surgery has been proven to enhance the accuracy and safety of minimally invasive surgery (MIS). Nonrigid deformation tracking of soft tissue is one of the main challenges in image-guided MIS owing to tissue deformation, homogeneous texture, smoke, and instrument occlusion. In this paper, we propose a nonrigid deformation tracking method based on a piecewise affine deformation model. A Markov random field-based mask generation method is developed to eliminate tracking anomalies. Deformation information vanishes when the regularity constraint is invalid, which further deteriorates tracking accuracy; a time-series deformation solidification mechanism is therefore introduced to reduce degradation of the model's deformation field. For quantitative evaluation of the proposed method, we synthesized nine laparoscopic videos mimicking instrument occlusion and tissue deformation, on which tracking robustness was evaluated. Three real MIS videos containing the challenges of large-scale deformation, large-range smoke, instrument occlusion, and permanent changes in soft tissue texture were also used to evaluate performance. Experimental results indicate that the proposed method outperforms state-of-the-art methods in accuracy and robustness, showing good performance for image-guided MIS.
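The piecewise affine idea can be illustrated with a small sketch: each triangle of a deformable surface mesh carries its own affine map, and a point is warped with the map of the triangle containing it. The toy below is illustrative only, not the authors' implementation; the mesh handling, the MRF masking, and the solidification mechanism are all omitted.

```python
# Toy piecewise affine warp: barycentric coordinates computed in a source
# triangle are reused in the deformed target triangle, which is exactly an
# affine map per triangle.

def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of 2D point p in triangle (a, b, c)."""
    v0 = (b[0] - a[0], b[1] - a[1])
    v1 = (c[0] - a[0], c[1] - a[1])
    v2 = (p[0] - a[0], p[1] - a[1])
    d00 = v0[0] * v0[0] + v0[1] * v0[1]
    d01 = v0[0] * v1[0] + v0[1] * v1[1]
    d11 = v1[0] * v1[0] + v1[1] * v1[1]
    d20 = v2[0] * v0[0] + v2[1] * v0[1]
    d21 = v2[0] * v1[0] + v2[1] * v1[1]
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def warp_point(p, src_tri, dst_tri):
    """Map p through the affine transform taking src_tri onto dst_tri."""
    u, v, w = barycentric(p, *src_tri)
    a, b, c = dst_tri
    return (u * a[0] + v * b[0] + w * c[0],
            u * a[1] + v * b[1] + w * c[1])

src = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
dst = ((0.0, 0.0), (2.0, 0.0), (0.0, 2.0))   # triangle stretched by 2
print(warp_point((0.25, 0.25), src, dst))     # -> (0.5, 0.5)
```

A full tracker would evaluate this warp per triangle over the whole mesh and optimize the target vertices against image evidence.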
Affiliation(s)
- Ziteng Liu
- School of Life Science and Technology, Harbin Institute of Technology, 2 Yikuang Str., Nangang District, Harbin, 150080, China
- Wenpeng Gao
- School of Life Science and Technology, Harbin Institute of Technology, 2 Yikuang Str., Nangang District, Harbin, 150080, China
- Jiahua Zhu
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, 2 Yikuang Str., Nangang District, Harbin, 150080, China
- Zhi Yu
- School of Life Science and Technology, Harbin Institute of Technology, 2 Yikuang Str., Nangang District, Harbin, 150080, China
- Yili Fu
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, 2 Yikuang Str., Nangang District, Harbin, 150080, China
3.
Mattos LS, Acemoglu A, Geraldes A, Laborai A, Schoob A, Tamadazte B, Davies B, Wacogne B, Pieralli C, Barbalata C, Caldwell DG, Kundrat D, Pardo D, Grant E, Mora F, Barresi G, Peretti G, Ortiz J, Rabenorosoa K, Tavernier L, Pazart L, Fichera L, Guastini L, Kahrs LA, Rakotondrabe M, Andreff N, Deshpande N, Gaiffe O, Renevier R, Moccia S, Lescano S, Ortmaier T, Penza V. μRALP and Beyond: Micro-Technologies and Systems for Robot-Assisted Endoscopic Laser Microsurgery. Front Robot AI 2021; 8:664655. [PMID: 34568434] [PMCID: PMC8455830] [DOI: 10.3389/frobt.2021.664655]
Abstract
Laser microsurgery is the current gold-standard surgical technique for the treatment of selected diseases in delicate organs such as the larynx. However, these operations require great surgical expertise and dexterity and face significant limitations imposed by available technology, such as the requirement for a direct line of sight to the surgical field, restricted access, and direct manual control of the surgical instruments. To change this status quo, the European project μRALP pioneered research towards a complete redesign of current laser microsurgery systems, focusing on the development of robotic micro-technologies to enable endoscopic operations. This has fostered awareness and interest in this field, which presents a unique set of needs, requirements, and constraints, leading to research and technological developments beyond μRALP and its research consortium. This paper reviews the achievements and key contributions of such research, providing an overview of the current state of the art in robot-assisted endoscopic laser microsurgery. The primary target application considered is phonomicrosurgery, a representative use case involving highly challenging microsurgical techniques for the treatment of glottic diseases. The paper starts by presenting the motivations and rationale for endoscopic laser microsurgery, which leads to the introduction of robotics as an enabling technology for improved surgical field accessibility, visualization, and management. Then, the research goals, achievements, and current state of the different technologies that can build up to an effective robotic system for endoscopic laser microsurgery are presented. This includes research in micro-robotic laser steering, flexible robotic endoscopes, augmented imaging, assistive surgeon-robot interfaces, and cognitive surgical systems. Innovations in each of these areas are shown to provide sizable progress towards more precise, safer, and higher-quality endoscopic laser microsurgeries. Yet, the major impact is expected to come from the full integration of such individual contributions into a complete clinical surgical robotic system, as illustrated at the end of this paper with a description of preliminary cadaver trials conducted with the integrated μRALP system. Overall, the contribution of this paper lies in outlining the current state of the art and open challenges in the area of robot-assisted endoscopic laser microsurgery, which has important clinical applications even beyond laryngology.
Affiliation(s)
- Andrea Laborai
- Department of Otorhinolaryngology, Guglielmo da Saliceto Hospital, Piacenza, Italy
- Brahim Tamadazte
- Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, CNRS, Paris, France
- Bruno Wacogne
- FEMTO-ST Institute, Univ. Bourgogne Franche-Comte, CNRS, Besançon, France; Centre Hospitalier Régional Universitaire, Besançon, France
- Christian Pieralli
- FEMTO-ST Institute, Univ. Bourgogne Franche-Comte, CNRS, Besançon, France
- Corina Barbalata
- Mechanical and Industrial Engineering Department, Louisiana State University, Baton Rouge, LA, United States
- Diego Pardo
- Istituto Italiano di Tecnologia, Genoa, Italy
- Edward Grant
- Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, NC, United States
- Francesco Mora
- Clinica Otorinolaringoiatrica, IRCCS Policlinico San Martino, Genoa, Italy; Dipartimento di Scienze Chirurgiche e Diagnostiche Integrate, Università Degli Studi di Genova, Genoa, Italy
- Giorgio Peretti
- Clinica Otorinolaringoiatrica, IRCCS Policlinico San Martino, Genoa, Italy; Dipartimento di Scienze Chirurgiche e Diagnostiche Integrate, Università Degli Studi di Genova, Genoa, Italy
- Jesùs Ortiz
- Istituto Italiano di Tecnologia, Genoa, Italy
- Kanty Rabenorosoa
- FEMTO-ST Institute, Univ. Bourgogne Franche-Comte, CNRS, Besançon, France
- Lionel Pazart
- Centre Hospitalier Régional Universitaire, Besançon, France
- Loris Fichera
- Department of Robotics Engineering, Worcester Polytechnic Institute, Worcester, MA, United States
- Luca Guastini
- Clinica Otorinolaringoiatrica, IRCCS Policlinico San Martino, Genoa, Italy; Dipartimento di Scienze Chirurgiche e Diagnostiche Integrate, Università Degli Studi di Genova, Genoa, Italy
- Lüder A Kahrs
- Department of Mathematical and Computational Sciences, University of Toronto, Mississauga, ON, Canada
- Micky Rakotondrabe
- National School of Engineering in Tarbes, University of Toulouse, Tarbes, France
- Nicolas Andreff
- FEMTO-ST Institute, Univ. Bourgogne Franche-Comte, CNRS, Besançon, France
- Olivier Gaiffe
- Centre Hospitalier Régional Universitaire, Besançon, France
- Rupert Renevier
- FEMTO-ST Institute, Univ. Bourgogne Franche-Comte, CNRS, Besançon, France
- Sara Moccia
- The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
- Sergio Lescano
- FEMTO-ST Institute, Univ. Bourgogne Franche-Comte, CNRS, Besançon, France
- Tobias Ortmaier
- Institute of Mechatronic Systems, Leibniz Universität Hannover, Garbsen, Germany
4.
Zhou H, Jayender J. EMDQ-SLAM: Real-time High-resolution Reconstruction of Soft Tissue Surface from Stereo Laparoscopy Videos. Med Image Comput Comput Assist Interv 2021; 12904:331-340. [PMID: 35664445] [PMCID: PMC9165607] [DOI: 10.1007/978-3-030-87202-1_32]
Abstract
We propose a novel stereo laparoscopy video-based non-rigid SLAM method called EMDQ-SLAM, which can incrementally reconstruct three-dimensional (3D) models of soft tissue surfaces in real-time while preserving high-resolution color textures. EMDQ-SLAM uses the expectation maximization and dual quaternion (EMDQ) algorithm combined with SURF features to track the camera motion and estimate tissue deformation between video frames. To overcome the problem of accumulative errors over time, we have integrated a g2o-based graph optimization method that combines the EMDQ mismatch removal and as-rigid-as-possible (ARAP) smoothing methods. Finally, the multi-band blending (MBB) algorithm is used to obtain high-resolution color textures with real-time performance. Experimental results demonstrate that our method outperforms two state-of-the-art non-rigid SLAM methods, MISSLAM and DefSLAM. Quantitative evaluation shows an average error in the range of 0.8-2.2 mm for different cases.
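The dual quaternion machinery underlying EMDQ can be sketched briefly: a rigid motion (rotation quaternion qr, translation t) is packed into a dual quaternion (qr, qd) with qd = 0.5 · t · qr, and points are transformed through quaternion products. This is the standard textbook construction, not code from the paper, which additionally blends many such local motions with expectation-maximization weights.

```python
import math

# Minimal dual quaternion sketch: encode a rigid motion and apply it to a point.

def qmul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def dq_from_rt(qr, t):
    """Dual quaternion (qr, qd) for rotation qr followed by translation t."""
    qd = tuple(0.5 * c for c in qmul((0.0,) + tuple(t), qr))
    return qr, qd

def dq_transform(dq, p):
    """Apply the rigid motion encoded by dq to 3D point p."""
    qr, qd = dq
    rot = qmul(qmul(qr, (0.0,) + tuple(p)), qconj(qr))   # rotated point
    trn = qmul(qd, qconj(qr))                            # equals 0.5 * t_quat
    return tuple(rot[i] + 2.0 * trn[i] for i in (1, 2, 3))

# 90 degree rotation about z, then translate by (1, 0, 0):
qr = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
dq = dq_from_rt(qr, (1.0, 0.0, 0.0))
print(dq_transform(dq, (1.0, 0.0, 0.0)))   # -> approximately (1.0, 1.0, 0.0)
```

The appeal of this parameterization for deformable SLAM is that weighted sums of nearby dual quaternions, once renormalized, blend rigid motions smoothly across a surface.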
Affiliation(s)
- Haoyin Zhou
- Surgical Planning Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Jagadeesan Jayender
- Surgical Planning Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, USA
5.
Ma G, Ross W, Codd PJ. StereoCNC: A Stereovision-guided Robotic Laser System. Proc IEEE/RSJ Int Conf Intell Robot Syst (IROS) 2021; 2021:540-547. [PMID: 35950084] [PMCID: PMC9358620] [DOI: 10.1109/iros51168.2021.9636050]
Abstract
This paper proposes an end-to-end stereovision-guided laser surgery system, referred to as StereoCNC, that can conduct laser ablation on targets selected by human operators in the color image. Two digital cameras are integrated into a previously developed robotic laser system to add a color sensing modality and form the stereo vision setup. A calibration method is implemented to register the coordinate frames between the stereo cameras and the laser system, modelled as a 3D-to-3D least-squares problem. The calibration reprojection errors are used to characterize a 3D error field by Gaussian process regression (GPR). This error field can predict the calibration error for new point cloud data in order to identify an optimal position with lower expected error. A stereovision-guided laser ablation pipeline is proposed that optimizes the positioning of the surgical site within the error field using a genetic algorithm search; mechanical stages then move the site to the low-error region. The pipeline is validated by experiments on phantoms with color texture and various geometric shapes. The overall targeting accuracy of the system achieved an average RMSE of 0.13 ± 0.02 mm and a maximum error of 0.34 ± 0.06 mm, as measured by pre- and post-laser-ablation images. The results show the potential of the developed stereovision-guided robotic system for superficial laser surgery, including dermatologic applications and removal of exposed tumorous tissue in neurosurgery.
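The 3D-to-3D least-squares registration that the calibration is modelled as has a standard closed-form solution via SVD (the Kabsch method). The sketch below shows that solver only; the paper's GPR error field and genetic-algorithm search are not reproduced here.

```python
import numpy as np

# SVD-based (Kabsch) solver for the rigid 3D-to-3D least-squares problem:
# find R, t minimizing sum ||R p_i + t - q_i||^2 over paired points.

def rigid_fit(P, Q):
    """Least-squares rigid transform mapping Nx3 points P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

P = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90 deg about z
t_true = np.array([1., 2., 3.])
Q = P @ R_true.T + t_true
R, t = rigid_fit(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))   # -> True True
```

In a calibration pipeline like the one described, the per-point residuals left over after this fit are exactly what would be fed to the GPR error-field model.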
Affiliation(s)
- Guangshen Ma
- Brain Tool Lab, Department of Mechanical Engineering, Duke University
- Weston Ross
- Brain Tool Lab, Department of Mechanical Engineering, Duke University; Department of Neurosurgery, Duke University
- Patrick J Codd
- Brain Tool Lab, Department of Mechanical Engineering, Duke University; Department of Neurosurgery, Duke University
6.
Kundrat D, Graesslin R, Schoob A, Friedrich DT, Scheithauer MO, Hoffmann TK, Ortmaier T, Kahrs LA, Schuler PJ. Preclinical Performance Evaluation of a Robotic Endoscope for Non-Contact Laser Surgery. Ann Biomed Eng 2020; 49:585-600. [PMID: 32785862] [PMCID: PMC7851027] [DOI: 10.1007/s10439-020-02577-y]
Abstract
Despite great efforts, transoral robotic laser surgery has not been established clinically: patient benefits that would justify the shortcomings of robotic systems have yet to be proven. In particular, laryngeal reachability and the transition from the microscope to accurate endoscopic laser ablation have not been achieved. We have addressed these challenges with a highly integrated robotic endoscope for non-contact endolaryngeal laser surgery. The current performance status was assessed in multi-level user studies, and the system was additionally deployed to an ex vivo porcine larynx. The robotic design comprises an extensible continuum manipulator with a multifunctional tip featuring laser optics, stereo vision, and illumination. Vision-based performance assessment is derived from depth estimation and scene tracking. Novices and experts (n = 20) conducted teleoperated delineation tasks to mimic laser ablation of delicate anatomy. Delineation with motion-compensated and raw endoscopic visualisation was carried out on planar and non-planar nominal patterns. Root mean square tracing errors of less than 0.75 mm were feasible, with task completion times below 45 s. Relevant anatomy in the porcine larynx was exposed successfully. The accuracy and usability of the integrated platform show potential for dexterous laser manipulation in clinical settings. Cadaver and in vivo animal studies may translate these ex vivo findings towards clinical use.
Affiliation(s)
- D. Kundrat
- Leibniz Universität Hannover, Institute of Mechatronic Systems, Appelstraße 11a, 30167 Hannover, Germany; Hamlyn Centre for Robotic Surgery, Imperial College London, London, SW7 2AZ, UK
- R. Graesslin
- Department of Otorhinolaryngology, Head and Neck Surgery, Ulm University Medical Center, Frauensteige 12, 89075 Ulm, Germany; Surgical Oncology Ulm, i2SOUL Consortium, Ulm, Germany
- A. Schoob
- Leibniz Universität Hannover, Institute of Mechatronic Systems, Appelstraße 11a, 30167 Hannover, Germany
- D. T. Friedrich
- Department of Otorhinolaryngology, Head and Neck Surgery, Augsburg University Medical Center, Stenglinstr. 2, 86156 Augsburg, Germany
- M. O. Scheithauer
- Department of Otorhinolaryngology, Head and Neck Surgery, Ulm University Medical Center, Frauensteige 12, 89075 Ulm, Germany; Surgical Oncology Ulm, i2SOUL Consortium, Ulm, Germany
- T. K. Hoffmann
- Department of Otorhinolaryngology, Head and Neck Surgery, Ulm University Medical Center, Frauensteige 12, 89075 Ulm, Germany; Surgical Oncology Ulm, i2SOUL Consortium, Ulm, Germany
- T. Ortmaier
- Leibniz Universität Hannover, Institute of Mechatronic Systems, Appelstraße 11a, 30167 Hannover, Germany
- L. A. Kahrs
- Leibniz Universität Hannover, Institute of Mechatronic Systems, Appelstraße 11a, 30167 Hannover, Germany; Department of Mathematical and Computational Sciences, University of Toronto Mississauga, Mississauga, ON L5L 1C6, Canada
- P. J. Schuler
- Department of Otorhinolaryngology, Head and Neck Surgery, Ulm University Medical Center, Frauensteige 12, 89075 Ulm, Germany; Surgical Oncology Ulm, i2SOUL Consortium, Ulm, Germany
7.
Drioli C, Foresti GL. Fitting a biomechanical model of the folds to high-speed video data through Bayesian estimation. Inform Med Unlocked 2020. [DOI: 10.1016/j.imu.2020.100373]
|
8.
Zhou H, Jagadeesan J. Real-Time Surface Deformation Recovery from Stereo Videos. Med Image Comput Comput Assist Interv 2019; 11764:339-347. [PMID: 32391525] [PMCID: PMC7206979] [DOI: 10.1007/978-3-030-32239-7_38]
Abstract
Tissue deformation during surgery may significantly decrease the accuracy of surgical navigation systems. In this paper, we propose an approach to estimate the deformation of the tissue surface from stereo videos in real-time that is capable of handling occlusion, smooth surfaces, and fast deformation. We first use a stereo matching method to extract depth information from stereo video frames and generate the tissue template, and then estimate the deformation of the obtained template by minimizing ICP, ORB feature matching, and as-rigid-as-possible (ARAP) costs. The main novelties are twofold: (1) because of non-rigid deformation, feature matching outliers are difficult to remove with traditional RANSAC methods; we therefore propose a novel 1-point RANSAC and reweighting method to preselect matching inliers, which handles smooth surfaces and fast deformations. (2) We propose a novel ARAP cost function based on dense connections between the control points to achieve better smoothing performance with a limited number of iterations. The algorithms are designed and implemented for GPU parallel computing. Experiments on ex vivo and in vivo data showed that this approach works at an update rate of 15 Hz with an accuracy of less than 2.5 mm on an NVIDIA Titan X GPU.
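The 1-point RANSAC idea rests on the observation that a local deformation hypothesis can be seeded from a single correspondence. The toy below replaces the paper's reweighting and deformation model with a pure 2D translation hypothesis and an exhaustive (rather than random) search over seed matches, so it only illustrates the voting structure.

```python
# Toy 1-point consensus: each match's displacement is tried as a model and
# matches with a similar displacement vote for it; the largest consistent
# set wins. The paper's method samples seeds randomly and reweights matches
# with a local deformation model instead of a global translation.

def one_point_consensus(matches, thresh):
    """matches: list of ((sx, sy), (dx, dy)) pairs; returns largest inlier set."""
    best = []
    for s0, d0 in matches:
        mx, my = d0[0] - s0[0], d0[1] - s0[1]        # hypothesis displacement
        inliers = [(s, d) for s, d in matches
                   if abs((d[0] - s[0]) - mx) <= thresh
                   and abs((d[1] - s[1]) - my) <= thresh]
        if len(inliers) > len(best):
            best = inliers
    return best

good = [((x, y), (x + 5.0, y)) for x in range(3) for y in range(3)]  # shift +5 in x
bad = [((0.0, 0.0), (40.0, 40.0)), ((1.0, 1.0), (-30.0, 7.0))]       # outliers
best = one_point_consensus(good + bad, thresh=1.0)
print(len(best))   # -> 9
```

Because the minimal sample size is one, even an exhaustive search over seeds stays linear in the number of matches times the cost of the inlier count, which is part of why the 1-point formulation is attractive for real-time use.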
Affiliation(s)
- Haoyin Zhou
- Surgical Planning Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Jayender Jagadeesan
- Surgical Planning Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
9.
Laves MH, Bicker J, Kahrs LA, Ortmaier T. A dataset of laryngeal endoscopic images with comparative study on convolution neural network-based semantic segmentation. Int J Comput Assist Radiol Surg 2019; 14:483-492. [PMID: 30649670] [DOI: 10.1007/s11548-018-01910-0]
Abstract
PURPOSE: Automated segmentation of anatomical structures in medical image analysis is a prerequisite for autonomous diagnosis as well as various computer- and robot-aided interventions. Recent methods based on deep convolutional neural networks (CNNs) have outperformed former heuristic methods, but were primarily evaluated on rigid, real-world environments. In this study, existing segmentation methods were evaluated for their use on a new dataset of transoral endoscopic exploration. METHODS: Four machine learning-based methods (SegNet, UNet, ENet, and ErfNet) were trained with supervision on a novel 7-class dataset of the human larynx. The dataset contains 536 manually segmented images from two patients during laser incisions. The Intersection-over-Union (IoU) metric was used to measure the accuracy of each method. Data augmentation and network ensembling were employed to increase segmentation accuracy, and stochastic inference was used to show the uncertainties of the individual models. Patient-to-patient transfer was investigated using patient-specific fine-tuning. RESULTS: A weighted average ensemble of UNet and ErfNet was best suited for the segmentation of laryngeal soft tissue, with a mean IoU of 84.7%. The highest efficiency was achieved by ENet, with a mean inference time of 9.22 ms per image. It is shown that 10 additional images from a new patient are sufficient for patient-specific fine-tuning. CONCLUSION: CNN-based methods for semantic segmentation are applicable to endoscopic images of laryngeal soft tissue. The segmentation can be used for active constraints or to monitor morphological changes and autonomously detect pathologies. Further improvements could be achieved with a larger dataset or by training the models in a self-supervised manner on additional unlabeled data.
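The Intersection-over-Union metric used in this evaluation is simple to state in code: per class, the ratio of the pixel-wise intersection to the union of prediction and ground truth, then averaged over classes. This is a minimal sketch of the metric itself; details such as how the paper handles absent classes or weights its ensemble are its own.

```python
# Mean IoU over classes for two label maps of identical shape.

def mean_iou(pred, gt, num_classes):
    """pred, gt: 2D lists of integer class labels with identical shape."""
    ious = []
    for c in range(num_classes):
        inter = union = 0
        for prow, grow in zip(pred, gt):
            for p, g in zip(prow, grow):
                if p == c and g == c:
                    inter += 1
                if p == c or g == c:
                    union += 1
        if union > 0:                # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)

pred = [[0, 1],
        [1, 1]]
gt   = [[0, 1],
        [0, 1]]
print(mean_iou(pred, gt, 2))   # -> 0.5833..., i.e. (1/2 + 2/3) / 2
```

In practice the per-pixel loops would be vectorized over whole batches, but the quantity computed is the same.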
Affiliation(s)
- Max-Heinrich Laves
- Leibniz Universität Hannover, Appelstraße 11A, 30167, Hannover, Germany
- Jens Bicker
- Leibniz Universität Hannover, Appelstraße 11A, 30167, Hannover, Germany
- Lüder A Kahrs
- Leibniz Universität Hannover, Appelstraße 11A, 30167, Hannover, Germany
- Tobias Ortmaier
- Leibniz Universität Hannover, Appelstraße 11A, 30167, Hannover, Germany
10.
Augmented visualization with depth perception cues to improve the surgeon's performance in minimally invasive surgery. Med Biol Eng Comput 2018; 57:995-1013. [PMID: 30511205] [DOI: 10.1007/s11517-018-1929-6]
Abstract
Minimally invasive techniques, such as laparoscopy and radiofrequency ablation of tumors, bring important advantages to surgery: by minimizing incisions on the patient's body, they can reduce the hospitalization period and the risk of postoperative complications. Unfortunately, they come with drawbacks for surgeons, who have a restricted view of the operation area through indirect access and the 2D images provided by a camera inserted in the body. Augmented reality provides an "X-ray vision" of the patient's anatomy by visualizing the internal organs, freeing surgeons from the task of mentally mapping content from CT images onto the operative scene. We present a navigation system that supports surgeons in the preoperative and intraoperative phases, and an augmented reality system that superimposes virtual organs on the patient's body together with depth and distance information. We implemented a combination of visual and audio cues that allow the surgeon to improve intervention precision and avoid the risk of damaging anatomical structures. Test scenarios demonstrated the efficacy and accuracy of the system, and tests in the operating room suggested modifications to the tracking system to make it more robust with respect to occlusions. Graphical abstract: augmented visualization in minimally invasive surgery.