1
El Chemaly T, Athayde Neves C, Leuze C, Hargreaves B, Blevins NH. Stereoscopic calibration for augmented reality visualization in microscopic surgery. Int J Comput Assist Radiol Surg 2023; 18:2033-2041. [PMID: 37450175] [DOI: 10.1007/s11548-023-02980-5]
Abstract
PURPOSE: Middle and inner ear procedures target hearing loss, infections, and tumors of the temporal bone and lateral skull base. Despite advances in surgical techniques, these procedures remain challenging due to limited haptic and visual feedback. Augmented reality (AR) may improve operative safety by allowing the 3D visualization of anatomical structures from preoperative computed tomography (CT) scans on the live intraoperative microscope video feed. The purpose of this work was to develop a real-time CT-augmented stereo microscope system using camera calibration and electromagnetic (EM) tracking.
METHODS: A 3D-printed, electromagnetically tracked calibration board was used to compute the intrinsic and extrinsic parameters of the surgical stereo microscope. These parameters were used to establish a transformation between the EM tracker coordinate system and the stereo microscope image space, such that any tracked 3D point can be projected onto the left and right images of the microscope video stream. This allowed the microscope feed of a 3D-printed temporal bone to be augmented with its corresponding CT-derived virtual model. Finally, the calibration board was also used to evaluate the accuracy of the calibration.
RESULTS: We evaluated the accuracy of the system by calculating the registration error (RE) in 2D and 3D in a microsurgical laboratory setting. Our calibration workflow achieved an RE of 0.11 ± 0.06 mm in 2D and 0.98 ± 0.13 mm in 3D. In addition, we overlaid a 3D CT model on the microscope feed of a resin-printed model of a segmented temporal bone. The system exhibited low latency and good registration accuracy.
CONCLUSION: We present the calibration of an electromagnetically tracked surgical stereo microscope for augmented reality visualization. The calibration method achieved accuracy within a range suitable for otologic procedures. The AR overlay enhances visualization of the surgical field while preserving depth perception.
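The core of the calibration described above is the pinhole projection x ~ K[R|t]X that maps an EM-tracked 3D point into each eye of the stereo microscope. A minimal numpy sketch of that projection, using hypothetical intrinsics and a 5 mm stereo baseline (illustrative values, not the paper's actual calibration):

```python
import numpy as np

def project_point(K, R, t, p_world):
    """Project a tracked 3D point onto one camera's image plane
    using the pinhole model: x ~ K [R | t] p."""
    p_cam = R @ p_world + t          # tracker -> camera coordinates
    u, v, w = K @ p_cam              # homogeneous image coordinates
    return np.array([u / w, v / w])  # pixel coordinates

# Hypothetical intrinsics/extrinsics (not from the paper):
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t_left = np.zeros(3)
t_right = np.array([-5.0, 0.0, 0.0])  # 5 mm stereo baseline along x

p = np.array([0.0, 0.0, 100.0])       # a tracked point 100 mm in front
uv_left = project_point(K, R, t_left, p)
uv_right = project_point(K, R, t_right, p)
disparity = uv_left[0] - uv_right[0]  # horizontal offset encodes depth
```

With both projections available, the same tracked point can be drawn in the left and right microscope images, which is what preserves stereoscopic depth perception in the overlay.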
Affiliation(s)
- Trishia El Chemaly
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Otolaryngology, Stanford School of Medicine, Stanford, CA, USA
- Department of Radiology, Stanford School of Medicine, Stanford, CA, USA
- Caio Athayde Neves
- Department of Otolaryngology, Stanford School of Medicine, Stanford, CA, USA
- Faculty of Medicine, University of Brasília, Brasília, Brazil
- Christoph Leuze
- Department of Radiology, Stanford School of Medicine, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Brian Hargreaves
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Radiology, Stanford School of Medicine, Stanford, CA, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Nikolas H Blevins
- Department of Otolaryngology, Stanford School of Medicine, Stanford, CA, USA
2
Guan B, Zou Y, Zhao J, Pan L, Yi B, Li J. Clean visual field reconstruction in robot-assisted laparoscopic surgery based on dynamic prediction. Comput Biol Med 2023; 165:107472. [PMID: 37713788] [DOI: 10.1016/j.compbiomed.2023.107472]
Abstract
Robot-assisted minimally invasive surgery has been broadly employed in complicated operations. However, the multiple surgical instruments may occupy a large amount of visual space in complex operations performed in narrow spaces, which affects the surgeon's judgment on the shape and position of the lesion as well as the course of its adjacent vessels/lacunae. In this paper, a surgical scene reconstruction method is proposed, which involves the tracking and removal of surgical instruments and the dynamic prediction of the obscured region. For tracking and segmentation of instruments, the image sequences are preprocessed by a modified U-Net architecture composed of a pre-trained ResNet101 encoder and a redesigned decoder. Also, the segmentation boundaries of the instrument shafts are extended using image filtering and a real-time index mask algorithm to achieve precise localization of the obscured elements. For predicting the deformation of soft tissues, a soft tissue deformation prediction algorithm is proposed based on dense optical flow gravitational field and entropy increase, which can achieve local dynamic visualization of the surgical scene by integrating image morphological operations. Finally, the preliminary experiments and the pre-clinical evaluation were presented to demonstrate the performance of the proposed method. The results show that the proposed method can provide the surgeon with a clean and comprehensive surgical scene, reconstruct the course of important vessels/lacunae, and avoid inadvertent injuries.
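One step in the pipeline above is extending the segmented instrument-shaft boundaries so that the region to be reconstructed safely covers the instrument edges. A minimal sketch of that mask-growing step as plain binary dilation with a 3×3 cross structuring element (an assumption for illustration; the paper's exact filtering is not specified here):

```python
import numpy as np

def dilate_mask(mask: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Grow a binary instrument mask by one pixel per iteration using a
    3x3 cross, so the fill region fully covers the instrument boundary."""
    m = mask.astype(bool)
    for _ in range(iterations):
        g = m.copy()
        g[1:, :] |= m[:-1, :]   # spread down
        g[:-1, :] |= m[1:, :]   # spread up
        g[:, 1:] |= m[:, :-1]   # spread right
        g[:, :-1] |= m[:, 1:]   # spread left
        m = g
    return m

# A single "instrument" pixel grows into a cross, then a diamond.
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
once = dilate_mask(mask, 1)
twice = dilate_mask(mask, 2)
```

In practice the dilated mask marks the pixels handed to the deformation-prediction stage, trading a slightly larger fill region for robustness against segmentation errors at the shaft boundary.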
Affiliation(s)
- Bo Guan
- The Key Lab for Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, No. 92 Weijin Road, Nankai District, Tianjin, 300072, China
- Yuelin Zou
- The Key Lab for Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, No. 92 Weijin Road, Nankai District, Tianjin, 300072, China
- Jianchang Zhao
- National Engineering Research Center of Neuromodulation, School of Aerospace Engineering, Tsinghua University, No. 30 Shuangqing Road, Haidian District, Beijing, 100084, China
- Lizhi Pan
- The Key Lab for Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, No. 92 Weijin Road, Nankai District, Tianjin, 300072, China
- Bo Yi
- Third Xiangya Hospital, Central South University, No. 138 Tongzipo Road, Yuelu District, Changsha, 410013, China
- Jianmin Li
- The Key Lab for Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, No. 92 Weijin Road, Nankai District, Tianjin, 300072, China
3
Seetohul J, Shafiee M, Sirlantzis K. Augmented Reality (AR) for Surgical Robotic and Autonomous Systems: State of the Art, Challenges, and Solutions. Sensors (Basel) 2023; 23:6202. [PMID: 37448050] [DOI: 10.3390/s23136202]
Abstract
Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), most devices remain focused on improving end-effector dexterity and precision, as well as on improving access to minimally invasive surgery. This paper provides a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while proposing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, supporting the case for the surgical clearance of ever-expanding AR technology in the future.
Affiliation(s)
- Jenna Seetohul
- Mechanical Engineering Group, School of Engineering, University of Kent, Canterbury CT2 7NT, UK
- Mahmood Shafiee
- Mechanical Engineering Group, School of Engineering, University of Kent, Canterbury CT2 7NT, UK
- School of Mechanical Engineering Sciences, University of Surrey, Guildford GU2 7XH, UK
- Konstantinos Sirlantzis
- School of Engineering, Technology and Design, Canterbury Christ Church University, Canterbury CT1 1QU, UK
- Intelligent Interactions Group, School of Engineering, University of Kent, Canterbury CT2 7NT, UK
4
Chen W, Kalia M, Zeng Q, Pang EHT, Bagherinasab R, Milner TD, Sabiq F, Prisman E, Salcudean SE. Towards transcervical ultrasound image guidance for transoral robotic surgery. Int J Comput Assist Radiol Surg 2023. [PMID: 37103728] [DOI: 10.1007/s11548-023-02898-y]
Abstract
PURPOSE: Transoral robotic surgery (TORS) using the da Vinci surgical robot is a new minimally invasive method to treat oropharyngeal tumors, but it is a challenging operation. Augmented reality (AR) based on intraoperative ultrasound (US) has the potential to enhance the visualization of the anatomy and cancerous tumors and to provide additional decision-making tools during surgery.
METHODS: We propose a US-guided AR system for TORS, with the transducer placed on the neck for a transcervical view. First, we perform a novel MRI-to-transcervical 3D US registration study, comprising (i) preoperative MRI to preoperative US registration and (ii) preoperative to intraoperative US registration to account for tissue deformation due to retraction. Second, we develop a US-robot calibration method with an optical tracker and demonstrate its use in an AR system that displays anatomy models in the surgeon's console in real time.
RESULTS: Our AR system achieves a projection error from the US to the stereo cameras of 27.14 and 26.03 pixels (image size 540 × 960) in a water bath experiment. The average target registration error (TRE) for MRI to 3D US is 8.90 mm for the 3D US transducer and 5.85 mm for freehand 3D US, and the TRE for preoperative-to-intraoperative US registration is 7.90 mm.
CONCLUSION: We demonstrate the feasibility of each component of the first complete pipeline for MRI-US-robot-patient registration in a proof-of-concept transcervical US-guided AR system for TORS. Our results show that transcervical 3D US is a promising technique for TORS image guidance.
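The target registration error (TRE) reported above is simply the mean Euclidean distance between registered target points and their ground-truth positions. A short numpy sketch with a hypothetical rigid transform and target points (illustrative values only):

```python
import numpy as np

def target_registration_error(T, src_targets, ref_targets):
    """Mean Euclidean distance (mm) between target points mapped through
    the estimated registration T (4x4 homogeneous rigid transform) and
    their ground-truth positions."""
    src_h = np.hstack([src_targets, np.ones((len(src_targets), 1))])
    mapped = (T @ src_h.T).T[:, :3]
    return float(np.mean(np.linalg.norm(mapped - ref_targets, axis=1)))

# Hypothetical example: a registration that is off by a pure 1 mm
# translation along z gives a TRE of exactly 1 mm at every target.
T = np.eye(4)
T[2, 3] = 1.0
pts = np.array([[0.0, 0.0, 0.0],
                [10.0, 0.0, 0.0],
                [0.0, 10.0, 0.0]])
tre = target_registration_error(T, pts, pts)
```

Because TRE is evaluated at targets away from the fiducials used to compute the registration, it is the standard figure of merit for how well an MRI-to-US alignment will hold at the tumor itself.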
Affiliation(s)
- Wanwen Chen
- Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC, Canada
- Megha Kalia
- Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC, Canada
- Qi Zeng
- Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC, Canada
- Emily H T Pang
- Department of Radiology, Vancouver General Hospital, Vancouver, BC, Canada
- Razeyeh Bagherinasab
- Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC, Canada
- Thomas D Milner
- Division of Otolaryngology, Department of Surgery, Vancouver General Hospital, Vancouver, BC, Canada
- Farahna Sabiq
- Department of Radiology, Vancouver General Hospital, Vancouver, BC, Canada
- Eitan Prisman
- Division of Otolaryngology, Department of Surgery, Vancouver General Hospital, Vancouver, BC, Canada
- Septimiu E Salcudean
- Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC, Canada
5
Intraoperative Imaging Techniques to Improve Surgical Resection Margins of Oropharyngeal Squamous Cell Cancer: A Comprehensive Review of Current Literature. Cancers (Basel) 2023; 15:896. [PMID: 36765858] [PMCID: PMC9913756] [DOI: 10.3390/cancers15030896]
Abstract
Inadequate resection margins in head and neck squamous cell carcinoma surgery necessitate adjuvant therapies such as re-resection and radiotherapy with or without chemotherapy, implying increased morbidity and a worse prognosis. On the other hand, taking larger margins by extending the resection also leads to avoidable increased morbidity. Oropharyngeal squamous cell carcinomas (OPSCCs) are often difficult to access; resections are limited by anatomy and functionality and thus carry an increased risk of close or positive margins. There is therefore a need to improve intraoperative assessment of resection margins. Several intraoperative techniques are available, but these often prolong operative time and are only suitable for a subgroup of patients. In recent years, new diagnostic tools have been the subject of investigation. This study reviews the available literature on intraoperative techniques to improve resection margins for OPSCCs. A literature search was performed in Embase, PubMed, and Cochrane. Narrow band imaging (NBI), high-resolution microendoscopic imaging, confocal laser endomicroscopy, frozen section analysis (FSA), ultrasound (US), computed tomography (CT), (auto)fluorescence imaging (FI), and augmented reality (AR) have all been used for OPSCC. NBI, FSA, and US are most commonly used and increase the rate of negative margins. Other techniques will become available in the future, of which fluorescence imaging has the highest potential for use with OPSCC.
6
Grad P, Przeklasa-Bierowiec AM, Malinowski KP, Witowski J, Proniewska K, Tatoń G. Application of HoloLens-based augmented reality and three-dimensional printed anatomical tooth reference models in dental education. Anat Sci Educ 2022. [PMID: 36524288] [DOI: 10.1002/ase.2241]
Abstract
Tooth anatomy is fundamental knowledge used in everyday dental practice to reconstruct the occlusal surface during cavity fillings. The main objective of this project was to evaluate the suitability of two types of anatomical tooth reference models used to support reconstruction of the occlusal anatomy of the teeth: (1) a three-dimensional (3D)-printed model and (2) a model displayed in augmented reality (AR) using Microsoft HoloLens. The secondary objective was to evaluate three aspects impacting the outcome: clinical experience, comfort of work, and other variables. The tertiary objective was to evaluate the usefulness of AR in dental education. Anatomical models of the crowns of three different molars were made using cone beam computed tomography image segmentation, printed with a stereolithographic 3D printer, and then displayed in the HoloLens. Each participant reconstructed the occlusal anatomy of three teeth: one without any reference materials and two with an anatomical reference model, either 3D-printed or holographic. The reconstruction work was followed by the completion of an evaluation questionnaire. The maximum Hausdorff distances (Hmax) between the superimposed images of the specimens after the procedures and the anatomical models were then calculated. The results showed that the most accurate but slowest reconstruction was achieved with the use of 3D-printed reference models and that the results were not affected by the other aspects considered. For this method, the Hmax was observed to be 630 μm (p = 0.004). It was concluded that while AR models can be helpful in dental anatomy education, they are not suitable replacements for physical models.
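The comparison metric above, the maximum Hausdorff distance Hmax, measures the worst-case disagreement between the reconstructed surface and the reference model. A brute-force numpy sketch over two small point clouds (real implementations sample the meshes and use spatial indexing for speed):

```python
import numpy as np

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between point clouds a and b (n x 3):
    the largest distance from any point in one set to its nearest
    neighbour in the other set."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

# Toy example: the second point of b sits 2 units from its nearest
# neighbour in a, which dominates the symmetric distance.
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
h = hausdorff(a, b)
```

Unlike a mean surface distance, this maximum is sensitive to a single badly reconstructed cusp, which is why it suits a worst-case anatomical accuracy comparison.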
Affiliation(s)
- Piotr Grad
- Department of Integrated Dentistry, Institute of Dentistry, Faculty of Medicine, Jagiellonian University Medical College, Kraków, Poland
- Anna M Przeklasa-Bierowiec
- Department of Integrated Dentistry, Institute of Dentistry, Faculty of Medicine, Jagiellonian University Medical College, Kraków, Poland
- Krzysztof P Malinowski
- Department of Bioinformatics and Telemedicine, Faculty of Medicine, Jagiellonian University Medical College, Kraków, Poland
- Jan Witowski
- Department of Radiology, New York University Grossman School of Medicine, New York, New York, USA
- Klaudia Proniewska
- Department of Bioinformatics and Telemedicine, Faculty of Medicine, Jagiellonian University Medical College, Kraków, Poland
- Grzegorz Tatoń
- Department of Biophysics, Chair of Physiology, Faculty of Medicine, Jagiellonian University Medical College, Kraków, Poland
7
Al-hammuri K, Gebali F, Thirumarai Chelvan I, Kanan A. Tongue Contour Tracking and Segmentation in Lingual Ultrasound for Speech Recognition: A Review. Diagnostics (Basel) 2022; 12:2811. [PMID: 36428870] [PMCID: PMC9689563] [DOI: 10.3390/diagnostics12112811]
Abstract
Lingual ultrasound imaging is essential in linguistic research and speech recognition. It has been used widely in different applications as visual feedback to enhance language learning for non-native speakers, study speech-related disorders and remediation, articulation research and analysis, swallowing study, tongue 3D modelling, and silent speech interface. This article provides a comparative analysis and review based on quantitative and qualitative criteria of the two main streams of tongue contour segmentation from ultrasound images. The first stream utilizes traditional computer vision and image processing algorithms for tongue segmentation. The second stream uses machine and deep learning algorithms for tongue segmentation. The results show that tongue tracking using machine learning-based techniques is superior to traditional techniques, considering the performance and algorithm generalization ability. Meanwhile, traditional techniques are helpful for implementing interactive image segmentation to extract valuable features during training and postprocessing. We recommend using a hybrid approach to combine machine learning and traditional techniques to implement a real-time tongue segmentation tool.
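As a toy illustration of the "traditional" stream the review contrasts with learning-based methods, here is a naive brightest-pixel-per-column heuristic with moving-average smoothing (a deliberately simple stand-in of our own; real traditional pipelines use active contours, snakes, or similar):

```python
import numpy as np

def trace_bright_ridge(img: np.ndarray) -> np.ndarray:
    """Naive traditional baseline: for each image column, take the row of
    maximum intensity as a first guess of the tongue-surface contour."""
    return img.argmax(axis=0)

def smooth(contour: np.ndarray, k: int = 3) -> np.ndarray:
    """Moving-average smoothing, a typical contour post-processing step."""
    pad = k // 2
    padded = np.pad(contour.astype(float), pad, mode="edge")
    kernel = np.ones(k) / k
    return np.convolve(padded, kernel, mode="valid")

# Synthetic "ultrasound" frame with a bright ridge along row 2.
img = np.zeros((5, 4))
img[2, :] = 1.0
contour = trace_bright_ridge(img)
smoothed = smooth(contour, 3)
```

Heuristics like this are brittle on real speckle-heavy ultrasound, which is the review's point: learning-based segmentation generalizes better, while simple interactive or rule-based steps remain useful for feature extraction and post-processing in a hybrid system.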
Affiliation(s)
- Khalid Al-hammuri
- Department of Electrical and Computer Engineering, University of Victoria, Victoria, BC V8W 2Y2, Canada
- Fayez Gebali
- Department of Electrical and Computer Engineering, University of Victoria, Victoria, BC V8W 2Y2, Canada
- Awos Kanan
- Department of Computer Engineering, Princess Sumaya University for Technology, Amman 11941, Jordan
8
Boekestijn I, van Oosterom MN, Dell'Oglio P, van Velden FHP, Pool M, Maurer T, Rietbergen DDD, Buckle T, van Leeuwen FWB. The current status and future prospects for molecular imaging-guided precision surgery. Cancer Imaging 2022; 22:48. [PMID: 36068619] [PMCID: PMC9446692] [DOI: 10.1186/s40644-022-00482-2]
Abstract
Molecular imaging technologies are increasingly used to diagnose, monitor, and guide treatment of diseases such as cancer. In this review, the current status and future prospects of molecular imaging as an instrument to help realize precision surgery are addressed, with a focus on the main components that form the conceptual basis of intraoperative molecular imaging. Paramount for successful interventions is the relevance and accessibility of surgical targets. In addition, selection of the correct combination of imaging agents and modalities is critical to visualize both microscopic and bulk disease sites with high affinity and specificity. In this context, developments within engineering and imaging physics continue to drive the growth of image-guided surgery. Particularly important herein is enhancement of sensitivity through improved contrast and spatial resolution, features that are critical if sites of cancer involvement are not to be overlooked during surgery. By facilitating the connection between surgical planning and surgical execution, digital surgery technologies such as computer-aided visualization nicely complement these technologies. The complexity of image guidance, combined with the plurality of technologies that are becoming available, also drives the need for evaluation mechanisms that can objectively score the impact that technologies exert on the performance of healthcare professionals and on patient outcomes.
Affiliation(s)
- Imke Boekestijn
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Section of Nuclear Medicine, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Matthias N van Oosterom
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Paolo Dell'Oglio
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Department of Urology, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Floris H P van Velden
- Medical Physics, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Martin Pool
- Department of Clinical Pharmacy and Toxicology, Leiden University Medical Center, Leiden, the Netherlands
- Tobias Maurer
- Martini-Klinik Prostate Cancer Centre Hamburg, Hamburg, Germany
- Daphne D D Rietbergen
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Section of Nuclear Medicine, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Tessa Buckle
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Fijs W B van Leeuwen
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
9
Cone-Beam Computed Tomography-Derived Augmented Fluoroscopy Improves the Diagnostic Yield of Endobronchial Ultrasound-Guided Transbronchial Biopsy for Peripheral Pulmonary Lesions. Diagnostics (Basel) 2021; 12:41. [PMID: 35054208] [PMCID: PMC8774719] [DOI: 10.3390/diagnostics12010041]
Abstract
Background: Endobronchial ultrasound-guided transbronchial biopsy (EBUS-TBB) is used for the diagnosis of peripheral pulmonary lesions (PPLs), but the diagnostic yield is not adequate. Cone-beam computed tomography-derived augmented fluoroscopy (CBCT-AF) can be used to assess the location of PPLs and biopsy devices, and has the potential to improve the diagnostic accuracy of bronchoscopic techniques. The purpose of this study was to verify the contribution of CBCT-AF to EBUS-TBB.
Methods: Patients who underwent EBUS-TBB for diagnosis of PPLs were enrolled. The navigation success rate and diagnostic yield were used to evaluate the effectiveness of CBCT-AF in EBUS-TBB.
Results: In this study, 236 patients who underwent EBUS-TBB for PPL diagnosis were enrolled: 115 in the CBCT-AF group and 121 in the non-AF group. The navigation success rate was significantly higher in the CBCT-AF group (96.5% vs. 86.8%, p = 0.006). The diagnostic yield was also higher in the CBCT-AF group when the target lesion was small (68.8% vs. 0%, p = 0.026 for lesions ≤10 mm and 77.5% vs. 46.4%, p = 0.016 for lesions 10-20 mm, respectively). The diagnostic yield of the two study groups became similar when procedures with failed navigation were excluded. The procedure-related complication rate was similar between the two study groups.
Conclusion: CBCT-AF is safe, and effectively enhances the navigation success rate, thereby increasing the diagnostic yield of EBUS-TBB for PPLs.
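The group comparison above (96.5% vs. 86.8% navigation success) can be reproduced approximately with a Pearson chi-square test on the 2×2 success/failure table. The counts below (111/115 and 105/121) are inferred from the reported percentages, and the uncorrected statistic gives a p-value in the same range as the reported p = 0.006:

```python
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]], e.g. success/failure in two groups."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Successes/failures inferred from the abstract's percentages:
# CBCT-AF: 111 of 115 (96.5%); non-AF: 105 of 121 (86.8%).
rate_af = 111 / 115
rate_non = 105 / 121
chi2 = chi_square_2x2(111, 4, 105, 16)   # about 7.2, p < 0.01 at df = 1
```

The exact p-value in the paper may come from a corrected or exact test, which is why the statistic here is only expected to agree in order of magnitude.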
10
Sahovaler A, Chan HHL, Gualtieri T, Daly M, Ferrari M, Vannelli C, Eu D, Manojlovic-Kolarski M, Orzell S, Taboni S, de Almeida JR, Goldstein DP, Deganello A, Nicolai P, Gilbert RW, Irish JC. Augmented Reality and Intraoperative Navigation in Sinonasal Malignancies: A Preclinical Study. Front Oncol 2021; 11:723509. [PMID: 34790568] [PMCID: PMC8591179] [DOI: 10.3389/fonc.2021.723509]
Abstract
Objective: To report the first use of a novel projected augmented reality (AR) system in open sinonasal tumor resections in preclinical models and to compare the AR approach with an advanced intraoperative navigation (IN) system.
Methods: Four tumor models were created. Five head and neck surgeons participated in the study, performing virtual osteotomies. Unguided, AR, IN, and AR + IN simulations were performed. Statistical comparisons between approaches were obtained. Intratumoral cut rate was the main outcome. The groups were also compared in terms of the percentage of intratumoral, close, adequate, and excessive distances from the tumor. Gaze data from a wearable eye-tracking headset and NASA Task Load Index questionnaire results were also analyzed.
Results: A total of 335 cuts were simulated. Intratumoral cuts were observed in 20.7%, 9.4%, 1.2%, and 0% of the unguided, AR, IN, and AR + IN simulations, respectively (p < 0.0001). AR was superior to the unguided approach in univariate and multivariate models. The percentage of time looking at the screen during the procedures was 55.5% for the unguided approach and 0%, 78.5%, and 61.8% for AR, IN, and AR + IN, respectively (p < 0.001). The combined approach significantly reduced screen time compared with the IN procedure alone.
Conclusion: We report the use of a novel AR system for oncological resections in open sinonasal approaches, with improved margin delineation compared with unguided techniques. AR addressed the gaze-toggling drawback of IN. Further refinements of the AR system are needed before translating our experience to clinical practice.
Affiliation(s)
- Axel Sahovaler
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada.,Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada
| | - Harley H L Chan
- Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada
| | - Tommaso Gualtieri
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada.,Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada.,Unit of Otorhinolaryngology-Head and Neck Surgery, University of Brescia-ASST "Spedali Civili di Brescia, Brescia, Italy
| | - Michael Daly
- Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada
| | - Marco Ferrari
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada.,Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada.,Unit of Otorhinolaryngology-Head and Neck Surgery, University of Brescia-ASST "Spedali Civili di Brescia, Brescia, Italy.,Section of Otorhinolaryngology-Head and Neck Surgery, University of Padua-Azienda Ospedaliera di Padova, Padua, Italy
| | - Claire Vannelli
- Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada
| | - Donovan Eu
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada; Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada
- Mirko Manojlovic-Kolarski
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada
- Susannah Orzell
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada
- Stefano Taboni
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada; Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada; Unit of Otorhinolaryngology-Head and Neck Surgery, University of Brescia, ASST "Spedali Civili di Brescia", Brescia, Italy; Section of Otorhinolaryngology-Head and Neck Surgery, University of Padua, Azienda Ospedaliera di Padova, Padua, Italy
- John R de Almeida
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada
- David P Goldstein
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada
- Alberto Deganello
- Unit of Otorhinolaryngology-Head and Neck Surgery, University of Brescia, ASST "Spedali Civili di Brescia", Brescia, Italy
- Piero Nicolai
- Section of Otorhinolaryngology-Head and Neck Surgery, University of Padua, Azienda Ospedaliera di Padova, Padua, Italy
- Ralph W Gilbert
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada
- Jonathan C Irish
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada; Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada
11
Singh SP, Borthwick KG, Qureshi FM. Commentary: 3D Laparoscopy-Assisted Operation to Adult Intussusceptions During Perioperative Period of Liver Transplantation: Case Report and Literature Review. Front Surg 2021; 8:764741. [PMID: 34746226] [PMCID: PMC8564035] [DOI: 10.3389/fsurg.2021.764741]
Affiliation(s)
- Som P Singh
- Department of Biomedical Sciences, University of Missouri-Kansas City School of Medicine, Kansas City, MO, United States
- Kiera G Borthwick
- Department of Neurosciences, Washington and Lee University, Lexington, VA, United States
- Fahad M Qureshi
- Department of Biomedical Sciences, University of Missouri-Kansas City School of Medicine, Kansas City, MO, United States
12
Cunningham BW, Brooks DM, McAfee PC. Accuracy of Robotic-Assisted Spinal Surgery-Comparison to TJR Robotics, da Vinci Robotics, and Optoelectronic Laboratory Robotics. Int J Spine Surg 2021; 15:S38-S55. [PMID: 34607917] [PMCID: PMC8532535] [DOI: 10.14444/8139]
Abstract
BACKGROUND The optoelectronic camera source and data interpolation serve as the foundation for navigational integrity in robotic-assisted surgical platforms. The objective of the current systematic review is to account for the numerical disparity that exists when comparing the intrinsic accuracy of optoelectronic cameras in the laboratory setting with the accuracy observed in the clinical operative environment. It is postulated that the optoelectronic kinematic chain contains a greater number of connections in the clinical operative environment than in the laboratory setting. This increase in data interpolation, coupled with intraoperative workflow challenges, reduces accuracy in surgical applications relative to that observed in controlled musculoskeletal kinematic laboratory investigations. METHODS A review of the PubMed and Cochrane Library research databases was performed. The literature compilation obtained was then vetted to remove redundancies and categorized into topics of intrinsic optoelectronic accuracy, registration accuracy, musculoskeletal kinematic platforms, and clinical operative platforms. RESULTS A total of 147 references make up the basis for the current analysis. Regardless of application, the common denominators affecting overall optoelectronic accuracy are intrinsic accuracy, registration accuracy, and application accuracy. Intrinsic accuracy of optoelectronic tracking was 0.1 mm of translation and 0.1° of rotation per fiducial or less. Controlled laboratory platforms reported 0.1 to 0.5 mm of translation and 0.1° to 1.0° of rotation per array. Accuracy falls off markedly in clinical applications: robotic-assisted spinal surgery reported 1.5 to 6.0 mm of translation and 1.5° to 5.0° of rotation when comparing planned to final implant position. As predicted, the computed accuracy of total joint robotics and da Vinci urologic robotics lies between these two extremes: 1.02 mm for da Vinci and 2 mm for MAKO. CONCLUSIONS Navigational integrity and maintenance of fidelity of optoelectronic data are the cornerstone of robotic-assisted spinal surgery. Transitioning from controlled laboratory to clinical operative environments increases the number of steps in the optoelectronic kinematic chain and the potential for error. Diligence in planning, fiducial positioning, system registration, and intraoperative workflow has the potential to improve accuracy and decrease the disparity between planned and final implant position. The key factors limiting navigational accuracy are highlighted by this analysis.
Affiliation(s)
- Bryan W. Cunningham
- Musculoskeletal Education Center, Department of Orthopaedic Surgery, MedStar Union Memorial Hospital, Baltimore, Maryland
- Department of Orthopaedic Surgery, Georgetown University School of Medicine, Washington, D.C.
- Daina M. Brooks
- Musculoskeletal Education Center, Department of Orthopaedic Surgery, MedStar Union Memorial Hospital, Baltimore, Maryland
- Paul C. McAfee
- Musculoskeletal Education Center, Department of Orthopaedic Surgery, MedStar Union Memorial Hospital, Baltimore, Maryland
- Department of Orthopaedic Surgery, Georgetown University School of Medicine, Washington, D.C.
13
Bessen SY, Wu X, Sramek MT, Shi Y, Pastel D, Halter R, Paydarfar JA. Image-guided surgery in otolaryngology: A review of current applications and future directions in head and neck surgery. Head Neck 2021; 43:2534-2553. [PMID: 34032338] [DOI: 10.1002/hed.26743]
Abstract
Image-guided surgery (IGS) has become a widely adopted technology in otolaryngology. Since its introduction nearly three decades ago, IGS technology has developed rapidly and improved real-time intraoperative visualization for a diverse array of clinical indications. As usability, accessibility, and clinical experiences with IGS increase, its potential applications as an adjunct in many surgical procedures continue to expand. Here, we describe the basic components of IGS and review both the current state and future directions of IGS in otolaryngology, with attention to current challenges to its application in surgery of the nonrigid upper aerodigestive tract.
Affiliation(s)
- Sarah Y Bessen
- Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Xiaotian Wu
- Massachusetts General Hospital, Boston, Massachusetts, USA
- Michael T Sramek
- Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Yuan Shi
- Thayer School of Engineering at Dartmouth, Hanover, New Hampshire, USA
- David Pastel
- Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA; Department of Otolaryngology, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, USA; Department of Radiology, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Ryan Halter
- Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA; Thayer School of Engineering at Dartmouth, Hanover, New Hampshire, USA
- Joseph A Paydarfar
- Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA; Thayer School of Engineering at Dartmouth, Hanover, New Hampshire, USA; Department of Otolaryngology, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, USA
14
Guo ZY, Ding ZF, Miao C, Li CJ, Tang XF, Zhang Z. [Application of mixed reality in oromaxillofacial head and neck oncology surgery: a preliminary study]. Hua Xi Kou Qiang Yi Xue Za Zhi (West China Journal of Stomatology) 2020; 38:470-474. [PMID: 32865371] [DOI: 10.7518/hxkq.2020.04.021]
Abstract
Mixed reality (MR), characterized by the ability to integrate digital data into human real feeling, is a new technique in medical imaging and surgical navigation. MR has tremendous value in surgery, but its application in oromaxillofacial head and neck oncology surgery is not yet reported. This paper reports the application of MR in oromaxillofacial head and neck oncology surgery. The merits, demerits, and present research situations and prospects of MR are further discussed.
Affiliation(s)
- Zhi-Yong Guo
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Dept. of Head and Neck Oncology, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Zhang-Fan Ding
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Dept. of Head and Neck Oncology, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Cheng Miao
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Dept. of Head and Neck Oncology, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Chun-Jie Li
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Dept. of Head and Neck Oncology, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Xiu-Fa Tang
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Dept. of Head and Neck Oncology, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Zhuang Zhang
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Dept. of Head and Neck Oncology, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
15
Abstract
Augmented reality (AR) is used to enhance perception of the real world by integrating virtual objects into an image sequence acquired from various camera technologies. Numerous AR applications in robotics have been developed in recent years. The aim of this paper is to provide an overview of AR research in robotics during the five-year period from 2015 to 2019. We classified these works by application area into four categories: (1) medical robotics: robot-assisted surgery (RAS), prosthetics, rehabilitation, and training systems; (2) motion planning and control: trajectory generation, robot programming, simulation, and manipulation; (3) human-robot interaction (HRI): teleoperation, collaborative interfaces, wearable robots, haptic interfaces, brain-computer interfaces (BCIs), and gaming; (4) multi-agent systems: use of visual feedback to remotely control drones, robot swarms, and robots with shared workspace. Recent developments in AR technology are discussed, followed by the challenges AR faces in camera localization, environment mapping, and registration. We explore AR applications in terms of how AR was integrated and which improvements it introduced to the corresponding fields of robotics. In addition, we summarize the major limitations of the presented applications in each category. Finally, we conclude our review with future directions of AR research in robotics. The survey covers over 100 research works published over the last five years.
16
The future of robotic surgery in otolaryngology – head and neck surgery. Oral Oncol 2020; 101:104510. [DOI: 10.1016/j.oraloncology.2019.104510]
17
Qian L, Wu JY, DiMaio SP, Navab N, Kazanzides P. A Review of Augmented Reality in Robotic-Assisted Surgery. IEEE Trans Med Robot Bionics 2020. [DOI: 10.1109/tmrb.2019.2957061]
18
Chan JYK, Holsinger FC, Liu S, Sorger JM, Azizian M, Tsang RKY. Augmented reality for image guidance in transoral robotic surgery. J Robot Surg 2019; 14:579-583. [PMID: 31555957] [DOI: 10.1007/s11701-019-01030-0]
Abstract
With the advent of precision surgery, there have been attempts to integrate imaging with robotic systems to guide sound oncologic surgical resections while preserving critical structures. In the confined space of transoral robotic surgery (TORS), this offers great potential given the proximity of structures. In this cadaveric experiment, we describe the use of a 3D virtual model displayed in the surgeon's console with the surgical field in view, to facilitate image-guided surgery at the oropharynx where there is significant soft tissue deformation. We also utilized the 3D model that was registered to the maxillary dentition, allowing for real-time image overlay of the internal carotid artery system. This allowed for real-time visualization of the internal carotid artery system that was qualitatively accurate on cadaveric dissection. Overall, this shows that virtual models and image overlays can be useful in image-guided surgery while approaching different sites in head and neck surgery with TORS.
Affiliation(s)
- Jason Y K Chan
- Department of Otorhinolaryngology, Head and Neck Surgery, The Chinese University of Hong Kong, Room 84026, 6/F Lui Che Woo Clinical Sciences Building, Shatin, N.T., Hong Kong SAR
- F Christopher Holsinger
- Department of Otolaryngology, Head and Neck Surgery, Stanford University, Palo Alto, CA, USA
- Stanley Liu
- Department of Otolaryngology, Head and Neck Surgery, Stanford University, Palo Alto, CA, USA
- Raymond K Y Tsang
- Division of Otorhinolaryngology-Head and Neck Surgery, Department of Surgery, The University of Hong Kong, Pok Fu Lam, Hong Kong SAR
19
Current state of the art in the use of augmented reality in dentistry: a systematic review of the literature. BMC Oral Health 2019; 19:135. [PMID: 31286904] [PMCID: PMC6613250] [DOI: 10.1186/s12903-019-0808-3]
Abstract
Background The aim of the present systematic review was to screen the literature and to describe current applications of augmented reality. Materials and methods The protocol design was structured according to PRISMA-P guidelines and registered in PROSPERO. A review of the following databases was carried out: Medline, Ovid, Embase, Cochrane Library, Google Scholar and the gray literature. Data were extracted, summarized, collected for qualitative analysis, and evaluated for individual risk of bias (R.O.B.) by two independent examiners. Collected data included: year of publication, journal with reviewing system and impact factor, study design, sample size, target of the study, hardware(s) and software(s) used or custom developed, primary outcomes, field of interest, and quantification of the displacement error and timing measurements, when available. Qualitative evidence synthesis followed SPIDER. Results From a primary search of 17,652 articles, 33 were considered in the review for qualitative synthesis. Sixteen of the selected articles were eligible for quantitative synthesis of heterogeneous data; 12 out of 13 judged the precision at least as acceptable, while 3 out of 6 described an increase in operation time of about 1 h. Of the selected studies, 60% (n = 20) refer to a camera-display augmented reality system while 21% (n = 7) refer to a head-mounted system. The software proposed in the articles was self-developed by 7 authors, while the majority proposed commercially available software. The proposed applications of augmented reality are: oral and maxillofacial surgery (OMS) in 21 studies, restorative dentistry in 5 studies, educational purposes in 4 studies, and orthodontics in 1 study. The majority of the studies were carried out on phantoms (51%), and 11 (33%) were carried out on patients. Conclusions On the basis of the literature, current development is still insufficient for a full validation process; however, independent sources of customized software for augmented reality seem promising for supporting routine procedures, complicated or specific interventions, education, and learning. The oral and maxillofacial area is predominant and the results in precision are promising, while timing remains controversial: some authors describe longer preparation times of up to 60 min when using augmented reality, while others describe a reduced operating time of 50/100%. Trial registration The following systematic review was registered in PROSPERO with RN: CRD42019120058.
20
Golusiński W. Functional Organ Preservation Surgery in Head and Neck Cancer: Transoral Robotic Surgery and Beyond. Front Oncol 2019; 9:293. [PMID: 31058091] [PMCID: PMC6479210] [DOI: 10.3389/fonc.2019.00293]
Abstract
In recent years, interest in functional organ preservation surgery (FOPS) in the treatment of head and neck cancer has increased dramatically as clinicians seek to minimize the adverse effects of treatment while maximizing survival and quality of life. In this context, the use of transoral robotic surgery (TORS) is becoming increasingly common. TORS is a relatively new and rapidly-evolving technique, with a growing range of treatment indications. A wide range of novel, flexible surgical robots are now in development and their commercialization is expected to significantly expand the current indications for TORS. In the present review, we discuss the current and future role of this organ-preserving modality as the central element in the multimodal treatment of head and neck cancer.
Affiliation(s)
- Wojciech Golusiński
- Department of Head and Neck Surgery, Poznan University of Medical Sciences, Poznan, Poland
21
Paydarfar JA, Wu X, Halter RJ. Initial experience with image-guided surgical navigation in transoral surgery. Head Neck 2018; 41:E1-E10. [PMID: 30556235] [DOI: 10.1002/hed.25380]
Abstract
BACKGROUND Surgical navigation using image guidance may improve the safety and efficacy of transoral surgery (TOS); however, preoperative imaging cannot be accurately registered to the intraoperative state due to deformations resulting from placement of the laryngoscope or retractor. This proof of concept study explores feasibility and registration accuracy of surgical navigation for TOS by utilizing intraoperative imaging. METHODS Four patients undergoing TOS were recruited. Suspension laryngoscopy was performed with a CT-compatible laryngoscope. An intraoperative contrast enhanced CT scan was obtained and registered to fiducials placed on the neck, face, and laryngoscope. RESULTS All patients were successfully scanned and registered. Registration accuracy within the pharynx and larynx was 1 mm or less. Target registration was confirmed by localizing endoscopic and surface structures to the CT images. Successful tracking was performed in all 4 patients. CONCLUSION For surgical navigation during TOS, although a high level of registration accuracy can be achieved by utilizing intraoperative imaging, significant limitations of the existing technology have been identified. These limitations, as well as areas for future investigation, are discussed.
Affiliation(s)
- Joseph A Paydarfar
- Section of Otolaryngology, Audiology, and Maxillofacial Surgery, Department of Surgery, Dartmouth-Hitchcock Medical Center, Geisel School of Medicine, Lebanon, New Hampshire
- Thayer School of Engineering at Dartmouth, Hanover, New Hampshire
- Xiaotian Wu
- Thayer School of Engineering at Dartmouth, Hanover, New Hampshire
- Ryan J Halter
- Thayer School of Engineering at Dartmouth, Hanover, New Hampshire
- Dartmouth College Geisel School of Medicine, Department of Surgery, Hanover, New Hampshire
22
Augmented Reality of the Middle Ear Combining Otoendoscopy and Temporal Bone Computed Tomography. Otol Neurotol 2018; 39:931-939. [DOI: 10.1097/mao.0000000000001922]
23
Abstract
A look at the past, present and future.
Affiliation(s)
- George Garas
- Department of Otorhinolaryngology - Head and Neck Surgery, St Mary's Hospital, Imperial College London
- Neil Tolley
- Department of Otorhinolaryngology - Head and Neck Surgery, St Mary's Hospital, Imperial College London
24
Wong K, Yee HM, Xavier BA, Grillone GA. Applications of Augmented Reality in Otolaryngology: A Systematic Review. Otolaryngol Head Neck Surg 2018; 159:956-967. [PMID: 30126323] [DOI: 10.1177/0194599818796476]
Abstract
OBJECTIVE Augmented reality (AR) is a rapidly developing technology. The aim of this systematic review was to (1) identify and evaluate applications of AR in otolaryngology and (2) examine trends in publication over time. DATA SOURCES PubMed and EMBASE. REVIEW METHODS A systematic review was performed according to PRISMA guidelines without temporal limits. Studies were included if they reported otolaryngology-related applications of AR. Exclusion criteria included non-English articles, abstracts, letters/commentaries, and reviews. A linear regression model was used to compare publication trends over time. RESULTS Twenty-three articles representing 18 AR platforms were included. Publications increased between 1997 and 2018 (P < .05). Twelve studies were level 5 evidence; 9 studies, level 4; 1 study, level 2; and 1 study, level 1. There was no trend toward an increased level of evidence over time. The most common subspecialties represented were rhinology (52.2%), head and neck (30.4%), and neurotology (26%). The most common purpose of AR was intraoperative guidance (54.5%), followed by surgical planning (24.2%) and procedural simulation (9.1%). The most common source of visual input was endoscopes (50%), followed by eyewear (22.2%) and microscopes (4.5%). Computed tomography was the most common virtual input (83.3%). Optical trackers and fiducial markers were the most common forms of tracking and registration, respectively (38.9% and 44.4%). Mean registration error was 2.48 mm. CONCLUSION AR holds promise in simulation, surgical planning, and perioperative navigation. Although the level of evidence remains modest, the role of AR in otolaryngology has grown rapidly and continues to expand.
Affiliation(s)
- Kevin Wong
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Halina M Yee
- Department of Otolaryngology-Head and Neck Surgery, Boston Medical Center, Boston, Massachusetts, USA
- Brian A Xavier
- Department of Radiology, University of Illinois College of Medicine at Chicago, Chicago, Illinois, USA
- Gregory A Grillone
- Department of Otolaryngology-Head and Neck Surgery, Boston Medical Center, Boston, Massachusetts, USA
25
Chauvet D, Hans S, Missistrano A, Rebours C, Bakkouri WE, Lot G. Transoral robotic surgery for sellar tumors: first clinical study. J Neurosurg 2017; 127:941-948. [DOI: 10.3171/2016.9.jns161638]
Abstract
OBJECTIVE The aim of this study was to confirm the feasibility of an innovative transoral robotic surgery (TORS), using the da Vinci Surgical System, for patients with sellar tumors. This technique was designed to offer a new minimally invasive approach, without soft-palate splitting, that avoids the rhinological side effects of classic endonasal approaches. METHODS The authors performed a prospective study of TORS in patients with symptomatic sellar tumors. Specific anatomical features were required for inclusion in the study and were determined on the basis of preoperative open-mouth CT scans of the brain. The main outcome measure was sellar accessibility using the robot. Resection quality, mean operative time, postoperative changes in patients' vision, side effects, and complications were additionally reported. RESULTS Between February and May 2016, 4 patients (all female, mean age 49.5 years) underwent TORS for resection of sellar tumors as participants in this study. All patients presented with symptomatic visual deficits confirmed as bitemporal hemianopsia. All tumors had a suprasellar portion and a cystic part. In all 4 cases, the operation was performed via TORS, without the need for a second surgery. Sella turcica accessibility was satisfactory in all cases. In 3 cases, tumor resection was complete. The mean operative time was 2 hours 43 minutes. Three patients had significant visual improvement at day 1. No rhinological side effects or complications occurred. No pathological examination was performed on the fluid component of the tumors. There was 1 postoperative delayed CSF leak and 1 case of transient diabetes insipidus. Side effects specific to TORS included minor sore throat, transient hypernasal speech, and 1 case of delayed otitis media. The mean length of hospital stay and mean follow-up were 8.25 days and 82 days, respectively. CONCLUSIONS To our knowledge, this is the first report of the surgical treatment of sellar tumors by means of minimally invasive TORS. This approach using the da Vinci Surgical System seems feasible and constitutes an innovative neurosurgical technique that may avoid the adverse side effects and technical disadvantages of the classic transsphenoidal route. Moreover, TORS allows an inferosuperior approach to the sella turcica, which is a key point, as the tumor is approached in the direction of its growth.
Affiliation(s)
- Stéphane Hans
- Department of Head and Neck Surgery, Hôpital Européen Georges Pompidou, Paris, France