Zampokas G, Peleka G, Tsiolis K, Topalidou-Kyniazopoulou A, Mariolis I, Tzovaras D. Real-time stereo reconstruction of intra-operative scene and registration to pre-operative 3D models for augmenting surgeons' view during RAMIS. Med Phys 2022;49:6517-6526. [PMID: 35754200 DOI: 10.1002/mp.15830]
[Received: 10/04/2021] [Revised: 05/28/2022] [Accepted: 06/06/2022]
Abstract
PURPOSE
During Minimally Invasive Surgery (MIS) procedures, there is a growing need to provide computer-generated visual feedback to surgeons through a visualization device. Although multiple solutions have been proposed in the literature, there is limited evidence of such systems performing reliably in practice, and those that do are often tailored to a specific operation type. Usability is another concern: such systems typically involve complicated, time-consuming steps and often require the assistance of specialized personnel. In this study, we propose an auxiliary visualization system for surgeons that includes a streamlined process for using the patient's pre-operative data, and we apply it to two different MIS cases, namely Robot-assisted Partial Nephrectomy (RAPN) and Robot-assisted Partial Lateral Meniscectomy (RaPLM).
METHODS
The visualization and processing pipeline consists of an intra-operative 3D reconstruction of the surgical area using an optimized version of the Quasi Dense method, designed to achieve good accuracy while maintaining real-time speed. A set of pre-processing and post-processing techniques further improves the result by producing a smoother and denser point cloud. DynamicFusion is used to register the pre-operative model to the intra-operative scene. Two silicone kidney phantoms and an ex vivo porcine meniscus, representing subjects for the examined surgical cases, are used for evaluation.
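To make the reconstruction stage concrete, the sketch below shows the standard back-projection step that turns a disparity map into a 3D point cloud. This is only an illustrative stand-in: the paper's optimized Quasi Dense matcher is not reproduced here, and the focal length and baseline are placeholder values, not calibration parameters from the study.

```python
import numpy as np

def disparity_to_points(disp, focal_px, baseline_m, cx, cy):
    """Triangulate depth z = f * b / d, then back-project valid pixels to 3D.

    Hypothetical helper; names and parameter values are illustrative only.
    """
    h, w = disp.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    valid = disp > 0                       # skip pixels with no stereo match
    z = focal_px * baseline_m / disp[valid]
    x = (xs[valid] - cx) * z / focal_px
    y = (ys[valid] - cy) * z / focal_px
    return np.stack([x, y, z], axis=1)     # N x 3 point cloud

# A constant disparity map corresponds to a fronto-parallel plane at a
# single depth, which makes the geometry easy to check by hand:
flat = np.full((4, 4), 10.0)
cloud = disparity_to_points(flat, focal_px=500.0, baseline_m=0.005, cx=2.0, cy=2.0)
# every reconstructed point has depth z = 500 * 0.005 / 10 = 0.25 m
```

In the full pipeline, smoothing and hole-filling would be applied to the disparity map before this step to obtain the smoother, denser cloud the abstract describes.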
RESULTS
Performance is evaluated qualitatively using the two datasets. The pre-operative model of the subject is projected onto the actual 2D image and also in 3D space. The model is superimposed on the actual physical structure it represents and remains in the correct position throughout the experiments, even during abrupt camera movements. Finally, when deformation is introduced, the model deforms as well, matching the real subject's structure.
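Superimposing the registered model on the 2D image reduces to a pinhole-camera projection of the model's vertices. The minimal sketch below illustrates that step; the intrinsics (fx, fy, cx, cy) are placeholder values for illustration, not the endoscope calibration used in the paper.

```python
import numpy as np

def project_points(points_cam, fx, fy, cx, cy):
    """Project N x 3 camera-frame points to N x 2 pixel coordinates.

    Illustrative pinhole projection; intrinsics here are assumptions.
    """
    z = points_cam[:, 2]
    u = fx * points_cam[:, 0] / z + cx
    v = fy * points_cam[:, 1] / z + cy
    return np.stack([u, v], axis=1)

# Sanity check: a point on the optical axis projects to the principal point.
pix = project_points(np.array([[0.0, 0.0, 1.0]]),
                     fx=700.0, fy=700.0, cx=320.0, cy=240.0)
```

Keeping the overlay correctly positioned under camera motion and deformation is the job of the registration stage (DynamicFusion in this work), which continuously updates the model's pose and shape before each projection.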
CONCLUSIONS
Results demonstrate and validate the use of the presented algorithms for each separate task of the pipeline. A complete methodology for providing surgeons with visual information during surgery is presented. Its operation is evaluated over two different surgical scenarios, paving the way for a single visualization methodology that can adapt and perform robustly across multiple cases with minimal effort. This article is protected by copyright. All rights reserved.