1
Taleb A, Leclerc S, Hussein R, Lalande A, Bozorg-Grayeli A. Registration of preoperative temporal bone CT-scan to otoendoscopic video for augmented-reality based on convolutional neural networks. Eur Arch Otorhinolaryngol 2024; 281:2921-2930. [PMID: 38200355] [DOI: 10.1007/s00405-023-08403-0]
Abstract
PURPOSE Patient-to-image registration is a preliminary step required in surgical navigation based on preoperative images. Human intervention and fiducial markers hamper this task as they are time-consuming and introduce potential errors. We aimed to develop a fully automatic 2D registration system for augmented reality in ear surgery. METHODS CT-scans and corresponding oto-endoscopic videos were collected from 41 patients (58 ears) undergoing ear examination (vestibular schwannoma before surgery, profound hearing loss requiring cochlear implant, suspicion of perilymphatic fistula, contralateral ears in cases of unilateral chronic otitis media). Two to four images were selected from each case. For the training phase, data from patients (75% of the dataset) and 11 cadaveric specimens were used. Tympanic membranes and malleus handles were contoured on both video images and CT-scans by expert surgeons. The algorithm used a U-Net network for detecting the contours of the tympanic membrane and the malleus on both preoperative CT-scans and endoscopic video frames. Then, contours were processed and registered through an iterative closest point algorithm. Validation was performed on 4 cases and testing on 6 cases. Registration error was measured by overlaying both images and measuring the average and Hausdorff distances. RESULTS The proposed registration method yielded a precision compatible with ear surgery with a 2D mean overlay error of 0.65 ± 0.60 mm for the incus and 0.48 ± 0.32 mm for the round window. The average Hausdorff distance for these 2 targets was 0.98 ± 0.60 mm and 0.78 ± 0.34 mm respectively. An outlier case with higher errors (2.3 mm and 1.5 mm average Hausdorff distance for incus and round window respectively) was observed in relation to a high discrepancy between the projection angle of the reconstructed CT-scan and the video image. The maximum duration for the overall process was 18 s. 
CONCLUSIONS A fully automatic 2D registration method based on a convolutional neural network and applied to ear surgery was developed. The method did not rely on any external fiducial markers nor human intervention for landmark recognition. The method was fast and its precision was compatible with ear surgery.
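The pipeline above ends with an iterative closest point (ICP) step that registers the CT-derived contours to the video contours. As a rough illustration only (not the authors' implementation), a minimal 2D rigid ICP can be sketched in NumPy; the ellipse "contour" and all parameters below are synthetic stand-ins:

```python
import numpy as np

def best_rigid_2d(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp_2d(src, dst, iters=50):
    """Alternate nearest-neighbour matching and rigid re-estimation."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        R, t = best_rigid_2d(cur, dst[d.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur

# Synthetic stand-in for a tympanic membrane contour: an ellipse, rotated and shifted
th = np.linspace(0, 2 * np.pi, 120, endpoint=False)
dst = np.c_[2 * np.cos(th), np.sin(th)]
a = 0.3
Rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
src = dst @ Rot.T + [0.2, -0.1]
aligned = icp_2d(src, dst)
# Mean residual distance to the target contour after registration
err = np.min(np.linalg.norm(aligned[:, None, :] - dst[None, :, :], axis=2), axis=1).mean()
```

The same alternate-match-and-fit loop applies to the real segmented contours; production systems usually add a k-d tree for the nearest-neighbour search and an outlier-rejection step.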
Affiliation(s)
- Ali Taleb
- ICMUB Laboratory UMR CNRS 6302, University of Burgundy Franche Comte, 21000, Dijon, France
- Sarah Leclerc
- ICMUB Laboratory UMR CNRS 6302, University of Burgundy Franche Comte, 21000, Dijon, France
- Alain Lalande
- ICMUB Laboratory UMR CNRS 6302, University of Burgundy Franche Comte, 21000, Dijon, France
- Medical Imaging Department, Dijon University Hospital, 21000, Dijon, France
- Alexis Bozorg-Grayeli
- ICMUB Laboratory UMR CNRS 6302, University of Burgundy Franche Comte, 21000, Dijon, France
- ENT Department, Dijon University Hospital, 21000, Dijon, France
2
Saadoun A, Guigou C, Lavedrine A, Bozorg Grayeli A. Minimally invasive ossiculoplasty via an endoscopic transtympanic approach. Eur Ann Otorhinolaryngol Head Neck Dis 2024; 141:93-97. [PMID: 37620172] [DOI: 10.1016/j.anorl.2023.08.002]
Abstract
INTRODUCTION The aim of this study was to evaluate the feasibility of ossiculoplasty via a minimally invasive endoscopic transtympanic approach (ETTA). CASE SERIES We investigated the exposure of target structures (incus and stapes) on 4 human temporal bones by placing an endoscope into the middle ear cleft through the 4 tympanic quadrants. Then, on 3 additional specimens, we performed an incudostapedial disjunction and repaired it with a drop of hydroxyapatite cement via ETTA. We measured the size of the tympanic perforation and the acoustic transfer function of the middle ear (125-8000 Hz) before and after repair by placing an insert in the external auditory canal and recording the acoustic signal in the utricle with a microphone. The acoustic signal gain was estimated in dB µV. Exposure was similar in all four quadrants, but ergonomics were better with a posteroinferior myringotomy. Ossicular chain repair was conducted successfully in all cases, and the acoustic transfer function of the middle ear was significantly improved. Residual tympanic perforation (n = 3) was 2 ± 0.3 mm in diameter. CONCLUSION ETTA to reconstruct the incudostapedial joint with bone cement was feasible and effective. It opens perspectives for robot-based procedures guided by augmented reality.
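The gain reported above is an amplitude ratio expressed in decibels. As a quick illustrative computation (the voltages below are made-up numbers, not the study's data):

```python
import math

def gain_db(v_after, v_before):
    """Amplitude gain in dB between two microphone signal levels (e.g. in µV)."""
    return 20.0 * math.log10(v_after / v_before)

g = gain_db(200.0, 20.0)   # a tenfold amplitude increase -> 20 dB
```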
Affiliation(s)
- A Saadoun
- Department of Oto-Rhino-Laryngology - Head and Neck Surgery, Dijon University Hospital, 2, boulevard du Maréchal-de-Lattre de Tassigny, 21000 Dijon, France
- C Guigou
- Department of Oto-Rhino-Laryngology - Head and Neck Surgery, Dijon University Hospital, 2, boulevard du Maréchal-de-Lattre de Tassigny, 21000 Dijon, France; ICMUB Laboratory, UMR CNRS 6302, University of Burgundy, 21000 Dijon, France
- A Lavedrine
- Department of Oto-Rhino-Laryngology - Head and Neck Surgery, Dijon University Hospital, 2, boulevard du Maréchal-de-Lattre de Tassigny, 21000 Dijon, France
- A Bozorg Grayeli
- Department of Oto-Rhino-Laryngology - Head and Neck Surgery, Dijon University Hospital, 2, boulevard du Maréchal-de-Lattre de Tassigny, 21000 Dijon, France; ICMUB Laboratory, UMR CNRS 6302, University of Burgundy, 21000 Dijon, France
3
El Chemaly T, Athayde Neves C, Leuze C, Hargreaves B, Blevins NH. Stereoscopic calibration for augmented reality visualization in microscopic surgery. Int J Comput Assist Radiol Surg 2023; 18:2033-2041. [PMID: 37450175] [DOI: 10.1007/s11548-023-02980-5]
Abstract
PURPOSE Middle and inner ear procedures target hearing loss, infections, and tumors of the temporal bone and lateral skull base. Despite advances in surgical techniques, these procedures remain challenging due to limited haptic and visual feedback. Augmented reality (AR) may improve operative safety by allowing the 3D visualization of anatomical structures from preoperative computed tomography (CT) scans on the real intraoperative microscope video feed. The purpose of this work was to develop a real-time CT-augmented stereo microscope system using camera calibration and electromagnetic (EM) tracking. METHODS A 3D printed and electromagnetically tracked calibration board was used to compute the intrinsic and extrinsic parameters of the surgical stereo microscope. These parameters were used to establish a transformation between the EM tracker coordinate system and the stereo microscope image space such that any tracked 3D point can be projected onto the left and right images of the microscope video stream. This allowed the augmentation of the microscope feed of a 3D printed temporal bone with its corresponding CT-derived virtual model. Finally, the calibration board was also used to evaluate the accuracy of the calibration. RESULTS We evaluated the accuracy of the system by calculating the registration error (RE) in 2D and 3D in a microsurgical laboratory setting. Our calibration workflow achieved an RE of 0.11 ± 0.06 mm in 2D and 0.98 ± 0.13 mm in 3D. In addition, we overlaid a 3D CT model on the microscope feed of a 3D resin printed model of a segmented temporal bone. The system exhibited low latency and good registration accuracy. CONCLUSION We present the calibration of an electromagnetically tracked surgical stereo microscope for augmented reality visualization. The calibration method achieved accuracy within a range suitable for otologic procedures. The AR process introduces enhanced visualization of the surgical field while allowing depth perception.
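Projecting a tracked 3D point into the microscope image via calibrated intrinsic and extrinsic parameters, as described above, follows the standard pinhole camera model. A minimal sketch with illustrative (not calibrated) values; a stereo rig would repeat this per camera with its own extrinsics:

```python
import numpy as np

# Assumed intrinsics: focal length 800 px, principal point (320, 240) -- illustrative
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed extrinsics tracker -> camera: identity rotation, 100 mm along the optical axis
R = np.eye(3)
t = np.array([0.0, 0.0, 100.0])

def project(p_tracker):
    """Project a tracked 3D point (tracker frame, mm) into pixel coordinates."""
    p_cam = R @ p_tracker + t        # tracker frame -> camera frame
    uvw = K @ p_cam                  # camera frame -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]          # perspective divide

# A point on the optical axis lands on the principal point: (320.0, 240.0)
print(project(np.array([0.0, 0.0, 0.0])))
# 5 mm off-axis at 100 mm depth: u = 320 + 800 * 5 / 100 = 360.0
print(project(np.array([5.0, 0.0, 0.0])))
```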
Affiliation(s)
- Trishia El Chemaly
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Otolaryngology, Stanford School of Medicine, Stanford, CA, USA
- Department of Radiology, Stanford School of Medicine, Stanford, CA, USA
- Caio Athayde Neves
- Department of Otolaryngology, Stanford School of Medicine, Stanford, CA, USA
- Faculty of Medicine, University of Brasília, Brasília, Brazil
- Christoph Leuze
- Department of Radiology, Stanford School of Medicine, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Brian Hargreaves
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Radiology, Stanford School of Medicine, Stanford, CA, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Nikolas H Blevins
- Department of Otolaryngology, Stanford School of Medicine, Stanford, CA, USA
4
Chen JX, Yu SE, Ding AS, Lee DJ, Welling DB, Carey JP, Gray ST, Creighton FX. Augmented Reality in Otology/Neurotology: A Scoping Review with Implications for Practice and Education. Laryngoscope 2022. [DOI: 10.1002/lary.30515]
Affiliation(s)
- Jenny X. Chen
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Andy S. Ding
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Daniel J. Lee
- Department of Otolaryngology–Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts, USA
- Department of Otolaryngology–Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- D. Brad Welling
- Department of Otolaryngology–Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts, USA
- Department of Otolaryngology–Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- John P. Carey
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Stacey T. Gray
- Department of Otolaryngology–Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts, USA
- Department of Otolaryngology–Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Francis X. Creighton
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
5
Kalaiarasan K, Prathap L, Ayyadurai M, Subhashini P, Tamilselvi T, Avudaiappan T, Infant Raj I, Alemayehu Mamo S, Mezni A. Clinical Application of Augmented Reality in Computerized Skull Base Surgery. Evid Based Complement Alternat Med 2022; 2022:1335820. [PMID: 35600956] [PMCID: PMC9117015] [DOI: 10.1155/2022/1335820]
Abstract
Skull base surgery involves the manipulation of small and intricate structures in the domains of otology, rhinology, neurosurgery, and maxillofacial surgery. Critical nerves and vessels lie in close proximity to these structures. Augmented reality is an emerging technology that may transform the skull base approach by presenting vital anatomical and navigational information in a single display. However, awareness and acceptance of the potential of augmented reality systems in the skull base region remain limited. This article examines the usefulness of augmented reality systems in skull base surgery and highlights the obstacles that current technology faces and its prospective refinements. A technical perspective on the distinct strategies used in developing an augmented reality framework is also offered. Recent work shows growing interest in augmented reality systems that may enable safer and more practical procedures; however, several concerns must be addressed before they can be broadly incorporated into routine practice.
Affiliation(s)
- K. Kalaiarasan
- Department of Information Technology, M. Kumarasamy College of Engineering, Karur, India
- Lavanya Prathap
- Department of Anatomy, Saveetha Dental College and Hospital, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamil Nadu 600077, India
- M. Ayyadurai
- SG, Institute of ECE, Saveetha School of Engineering, SIMATS, Chennai, Tamil Nadu 600077, India
- P. Subhashini
- Department of Computer Science and Engineering, J.N.N Institute of Engineering, Kannigaipair, Tamil Nadu 601102, India
- T. Tamilselvi
- Department of Computer Science and Engineering, Panimalar Institute of Technology, Varadarajapuram, Tamil Nadu 600123, India
- T. Avudaiappan
- Computer Science and Engineering, K. Ramakrishnan College of Technology, Trichy 621112, India
- I. Infant Raj
- Department of Computer Science and Engineering, K. Ramakrishnan College of Engineering, Trichy, India
- Samson Alemayehu Mamo
- Department of Electrical and Computer Engineering, Faculty of Electrical and Biomedical Engineering, Institute of Technology, Hawassa University, Awasa, Ethiopia
- Amine Mezni
- Department of Chemistry, College of Science, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
6
Hussain R, Guigou C, Lalande A, Bozorg Grayeli A. Vision-Based Augmented Reality System for Middle Ear Surgery: Evaluation in Operating Room Environment. Otol Neurotol 2022; 43:385-394. [PMID: 34889824] [DOI: 10.1097/mao.0000000000003441]
Abstract
HYPOTHESIS Augmented reality (AR) solely based on image features is achievable in operating room conditions, and its precision is compatible with otological surgery. BACKGROUND The objective of this work was to evaluate the performance of a vision-based AR system for middle ear surgery under operating room conditions. METHODS Nine adult patients undergoing ossicular procedures were included in this prospective study. AR was obtained by combining real-time video from the operating microscope with the virtual image obtained from the preoperative computed tomography (CT) scan. Initial registration between the video and the virtual CT image was achieved by manual selection of six points on the tympanic sulcus. Patient-microscope movements during the procedure were tracked using an image-feature matching algorithm. The microscope was randomly moved at an approximate speed of 5 mm/s along the three axes of space and in rotation for 180 seconds. The accuracy of the system was assessed by calculating the distance between each fiducial point selected on the video image and its corresponding point on the scanner. RESULTS AR could be maintained for at least 3 minutes in seven out of nine patients. The overlay fiducial and target registration errors were 0.38 ± 0.23 mm (n = 7) and 0.36 ± 0.15 mm (n = 5) respectively, with a drift error of 1.2 ± 0.5 μm/s. The system was stable throughout the procedure and achieved a refresh rate of 12 fps. Moderate bleeding and introduction of surgical instruments did not compromise the performance of the system. CONCLUSION The AR system yielded sub-millimetric accuracy and remained stable throughout the experimental study despite patient-microscope movements and field-of-view obstructions.
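The initial registration above relies on six manually selected point pairs. A standard way to solve such a point-based step is a least-squares rigid fit (Kabsch algorithm), after which the residual distances at the fiducials give the fiducial registration error (FRE). A sketch with synthetic fiducials, not patient data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Six synthetic fiducials standing in for points on the tympanic sulcus (mm, CT space)
ct_pts = rng.uniform(-5, 5, size=(6, 3))

# The same points "observed" in video space: rotated, translated, slightly noisy
angle = np.deg2rad(20)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
obs_pts = ct_pts @ R_true.T + np.array([1.0, -2.0, 3.0])
obs_pts += rng.normal(scale=0.05, size=obs_pts.shape)

# Kabsch: least-squares rigid transform CT -> observed
c_ct, c_obs = ct_pts.mean(0), obs_pts.mean(0)
H = (ct_pts - c_ct).T @ (obs_pts - c_obs)
U, _, Vt = np.linalg.svd(H)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
R = Vt.T @ D @ U.T
t = c_obs - R @ c_ct

# Fiducial registration error: residual distance at each fiducial
fre = np.linalg.norm(ct_pts @ R.T + t - obs_pts, axis=1)
```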
Affiliation(s)
- Raabid Hussain
- ImVia, Laboratory of Imagery and Artificial Vision, EA 7535, University of Burgundy
- Caroline Guigou
- ImVia, Laboratory of Imagery and Artificial Vision, EA 7535, University of Burgundy
- Department of Otolaryngology-Head and Neck Surgery
- Alain Lalande
- ImVia, Laboratory of Imagery and Artificial Vision, EA 7535, University of Burgundy
- Department of Radiology, Dijon University Hospital, Dijon, France
- Alexis Bozorg Grayeli
- ImVia, Laboratory of Imagery and Artificial Vision, EA 7535, University of Burgundy
- Department of Otolaryngology-Head and Neck Surgery
7
Augmented Reality Based Transmodiolar Cochlear Implantation. Otol Neurotol 2021; 43:190-198. [PMID: 34855687] [DOI: 10.1097/mao.0000000000003437]
Abstract
HYPOTHESIS Transmodiolar auditory implantation via the middle ear cavity could be possible using an augmented reality system (ARS). BACKGROUND There is no clear landmark to indicate the cochlear apex or the modiolar axis. The ARS seems to be a promising tool for transmodiolar implantation by combining information from the preprocedure computed tomography scan (CT-scan) images to the real-time video of the surgical field. METHODS Eight human temporal bone resin models were included (five adults and three children). The procedure started by the identification of the modiolar axis on the preprocedure CT-scan followed by a 3D reconstruction of the images. Information on modiolar location and navigational guidance was supplemented to the reconstructed model, which was then registered with the surgical video using a point-based approach. Relative movements between the phantom and the microscope were tracked using image feature-based motion tracking. Based on the information provided via the ARS, the surgeon implanted the electrode-array inside the modiolus after drilling the helicothrema. Postprocedure CT-scan images were acquired to evaluate the registration error and the implantation accuracy. RESULTS The implantation could be conducted in all cases with a 2D registration error of 0.4 ± 0.24 mm. The mean entry point error was 0.6 ± 1.00 mm and the implant angular error 13.5 ± 8.93 degrees (n = 8), compatible with the procedure requirements. CONCLUSION We developed an image-based ARS to identify the extremities and the axis of the cochlear modiolus on intraprocedure videos. The system yielded submillimetric accuracy for implantation and remained stable throughout the experimental study.
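The implant angular error quoted above is the angle between the planned modiolar axis and the axis achieved on the post-procedure CT. It can be computed as below; the two axis vectors are illustrative values, not study data:

```python
import numpy as np

def angular_error_deg(axis_a, axis_b):
    """Angle in degrees between two 3D axis directions (sign-agnostic)."""
    a = axis_a / np.linalg.norm(axis_a)
    b = axis_b / np.linalg.norm(axis_b)
    cosang = np.clip(abs(a @ b), 0.0, 1.0)   # an axis has no preferred direction
    return np.degrees(np.arccos(cosang))

planned = np.array([0.0, 0.0, 1.0])     # planned modiolar axis (illustrative)
achieved = np.array([0.1, 0.05, 1.0])   # axis recovered from post-procedure CT
err = angular_error_deg(planned, achieved)   # about 6.4 degrees
```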
8
Barber SR. New Navigation Approaches for Endoscopic Lateral Skull Base Surgery. Otolaryngol Clin North Am 2021; 54:175-187. [PMID: 33243374] [DOI: 10.1016/j.otc.2020.09.021]
Abstract
Image-guided navigation is well established for surgery of the brain and anterior skull base. Although navigation workstations have been used widely by neurosurgeons and rhinologists for decades, their use in the lateral skull base (LSB) has been more limited because this region demands an overall accuracy of less than 1 mm. Endoscopic approaches to the LSB enable minimally invasive surgeries with less morbidity, yet they carry risks of injury to critical structures. With improvements in technology over the years, image-guided navigation for endoscopic LSB surgery can reduce operative time, optimize exposure of surgical corridors, and increase safety in difficult cases.
Affiliation(s)
- Samuel R Barber
- Department of Otolaryngology-Head and Neck Surgery, University of Arizona College of Medicine, 1501 North Campbell Avenue, Tucson, AZ 85724, USA
9
Scherl C, Stratemeier J, Rotter N, Hesser J, Schönberg SO, Servais JJ, Männle D, Lammert A. Augmented Reality with HoloLens® in Parotid Tumor Surgery: A Prospective Feasibility Study. ORL J Otorhinolaryngol Relat Spec 2021; 83:439-448. [PMID: 33784686] [DOI: 10.1159/000514640]
Abstract
INTRODUCTION Augmented reality can improve planning and execution of surgical procedures. Head-mounted devices such as the HoloLens® (Microsoft, Redmond, WA, USA) are particularly suitable for these aims because they are controlled by hand gestures and enable contactless handling in a sterile environment. OBJECTIVES So far, these systems have not found their way into the operating room for surgery of the parotid gland. This study explored the feasibility and accuracy of augmented reality-assisted parotid surgery. METHODS 2D MRI holographic images were created, and 3D holograms were reconstructed from MRI DICOM files and made visible via the HoloLens. 2D MRI slices were scrolled through, 3D images were rotated, and 3D structures were shown and hidden using hand gestures alone. The 3D model and the patient were aligned manually. RESULTS The use of augmented reality with the HoloLens in parotid surgery was feasible. Gestures were recognized correctly. Mean accuracy of superimposition of the holographic model onto the patient's anatomy was 1.3 cm. Highly significant differences were seen in the position error of registration between central and peripheral structures (p = 0.0059), with the least deviation centrally (10.9 mm) and the highest deviation at the peripheral parts (19.6 mm). CONCLUSION This pilot study offers a first proof of concept of the clinical feasibility of the HoloLens for parotid tumor surgery. Workflow is not affected, and additional information is provided. Surgical performance could become safer through the navigation-like application of reality-fused 3D holograms, and ergonomics improve without compromising sterility. Superimposition of the 3D holograms onto the surgical field was possible, but further refinement is necessary to improve accuracy.
Affiliation(s)
- Claudia Scherl
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Johanna Stratemeier
- Institute of Experimental Radiation Oncology, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Nicole Rotter
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Jürgen Hesser
- Institute of Experimental Radiation Oncology, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Stefan O Schönberg
- Department of Clinical Radiology and Nuclear Medicine, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Jérôme J Servais
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- David Männle
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Anne Lammert
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
10
Quantitative Augmented Reality-Assisted Free-Hand Orthognathic Surgery Using Electromagnetic Tracking and Skin-Attached Dynamic Reference. J Craniofac Surg 2020; 31:2175-2181. [PMID: 33136850] [DOI: 10.1097/scs.0000000000006739]
Abstract
The purpose of this study was to develop a quantitative augmented reality (AR)-assisted free-hand orthognathic surgery method using electromagnetic (EM) tracking and a skin-attached dynamic reference. The authors proposed a novel, simplified, and convenient workflow for AR-assisted orthognathic surgery based on optical marker-less tracking, a comfortable display, and a non-invasive, skin-attached dynamic reference frame. The two registrations between the physical (EM tracking) and CT image spaces and between the physical and AR camera spaces, essential processes in AR-assisted surgery, were performed pre-operatively using the registration body complex and a 3D depth camera. The intraoperative model of the maxillary bone segment (MBS) was superimposed on the real patient image, together with the simulated goal model, on a flat-panel display, and the MBS was freely handled for repositioning with respect to the skin-attached dynamic reference tool (SRT) with quantitative visualization of landmarks of interest using only EM tracking. To evaluate the accuracy of AR-assisted Le Fort I surgery, the MBS of the phantom was simulated and repositioned by six translational and three rotational movements. The mean absolute deviations (MADs) between the simulated and post-operative positions of MBS landmarks by the SRT were 0.20, 0.34, 0.29, and 0.55 mm in the x- (left lateral, right lateral), y- (setback, advance), and z- (impaction, elongation) directions, and RMS, respectively, while those by the BRT were 0.23, 0.37, 0.30, and 0.60 mm. There were no significant differences between the translation and rotation surgeries or among surgeries in the x-, y-, and z-axes for the SRT. The MADs in the x-, y-, and z-axes exhibited no significant differences between the SRT and BRT. The developed method showed high accuracy and reliability in free-hand orthognathic surgery using EM tracking and a skin-attached dynamic reference.
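The two pre-operative registrations described above (EM-tracking-to-CT and EM-tracking-to-AR-camera) are typically chained as 4x4 homogeneous transforms so that CT-space geometry can be carried into the camera view. A minimal sketch with made-up calibration values, not the study's data:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def Rz(a):
    """Rotation about the z-axis by angle a (radians)."""
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

# Illustrative calibration results:
# T_ct_from_em : EM tracker space -> CT image space
# T_cam_from_em: EM tracker space -> AR camera space
T_ct_from_em = make_T(Rz(0.1), [10.0, 0.0, 0.0])
T_cam_from_em = make_T(Rz(-0.4), [0.0, 5.0, 200.0])

# Chain the two registrations: CT space -> AR camera space
T_cam_from_ct = T_cam_from_em @ np.linalg.inv(T_ct_from_em)

# A landmark segmented in CT can now be expressed in the camera frame
p_ct = np.array([12.0, 3.0, -4.0, 1.0])   # homogeneous point, mm
p_cam = T_cam_from_ct @ p_ct
```

The same chaining generalizes to any number of intermediate spaces, which is why AR navigation systems keep every calibration result as a homogeneous matrix.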
11
Saxby AJ, Jufas N, Kong JHK, Newey A, Pitman AG, Patel NP. Novel Radiologic Approaches for Cholesteatoma Detection: Implications for Endoscopic Ear Surgery. Otolaryngol Clin North Am 2020; 54:89-109. [PMID: 33153729] [DOI: 10.1016/j.otc.2020.09.011]
Abstract
Technological advancement in computed tomography (CT) and MRI has improved cholesteatoma detection rates considerably in the past decade. Accurately predicting disease location and extension is essential for staging, planning, and preoperative counseling, particularly for the newer approach of endoscopic ear surgery. Improved sensitivity and specificity of these radiological methods may allow the surgeon to confidently monitor patients, thereby avoiding unnecessary surgery. This article outlines recent advances in CT and MRI technology and the advantages and disadvantages of the newer techniques. Emphasis on improving the feedback loop between the radiologist and the surgeon will increase the accuracy of these new technologies.
Affiliation(s)
- Alexander J Saxby
- Department of Otolaryngology-Head and Neck Surgery, Royal Prince Alfred Hospital, Camperdown, NSW 2050, Sydney, Australia
- Nicholas Jufas
- Department of Otolaryngology-Head and Neck Surgery, Royal North Shore Hospital, 1 Reserve Road, St. Leonards, NSW 2065, Sydney, Australia
- Jonathan H K Kong
- Department of Otolaryngology-Head and Neck Surgery, Royal Prince Alfred Hospital, Camperdown, NSW 2050, Sydney, Australia
- Allison Newey
- Department of Radiology, Royal North Shore Hospital, 1 Reserve Road, St. Leonards, NSW 2065, Sydney, Australia
- Alexander G Pitman
- Department of Radiology, Northern Beaches Hospital, 105 Frenchs Forest Road W, Frenchs Forest, NSW 2086, Sydney, Australia
- Nirmal P Patel
- Department of Otolaryngology-Head and Neck Surgery, Royal North Shore Hospital, 1 Reserve Road, St. Leonards, NSW 2065, Sydney, Australia
12
Liu T, Tai Y, Zhao C, Wei L, Zhang J, Pan J, Shi J. Augmented reality in neurosurgical navigation: a survey. Int J Med Robot 2020; 16:e2160. [PMID: 32890440] [DOI: 10.1002/rcs.2160]
Abstract
BACKGROUND Neurosurgery has exceptionally high requirements for minimal invasiveness and safety. This survey analyzes the practical application of AR in neurosurgical navigation and describes future trends in augmented reality neurosurgical navigation systems. METHODS We searched the keywords "augmented reality", "virtual reality", "neurosurgery", "surgical simulation", "brain tumor surgery", "neurovascular surgery", "temporal bone surgery", and "spinal surgery" through Google Scholar, World Neurosurgery, PubMed, and Science Direct. We collected 85 articles published over the past five years in areas related to this survey. RESULTS A detailed study of the application of AR in neurosurgery found that AR is steadily improving the overall efficiency of doctor training and treatment and can help neurosurgeons learn and practice surgical procedures with zero risk. CONCLUSIONS Neurosurgical navigation is essential in neurosurgery. Despite certain technical limitations, it remains a necessary tool for the pursuit of maximum safety and minimal invasiveness.
Affiliation(s)
- Tao Liu
- Yunnan Key Lab of Opto-electronic Information Technology, Yunnan Normal University, Kunming, China
- Yonghang Tai
- Yunnan Key Lab of Opto-electronic Information Technology, Yunnan Normal University, Kunming, China
- Chengming Zhao
- Yunnan Key Lab of Opto-electronic Information Technology, Yunnan Normal University, Kunming, China
- Lei Wei
- Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, VIC, Australia
- Jun Zhang
- Yunnan Key Lab of Opto-electronic Information Technology, Yunnan Normal University, Kunming, China
- Junjun Pan
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
- Junsheng Shi
- Yunnan Key Lab of Opto-electronic Information Technology, Yunnan Normal University, Kunming, China
13
Augmented reality for inner ear procedures: visualization of the cochlear central axis in microscopic videos. Int J Comput Assist Radiol Surg 2020; 15:1703-1711. [PMID: 32737858] [DOI: 10.1007/s11548-020-02240-w]
Abstract
PURPOSE Direct visualization of the cochlea is impossible due to the delicate and intricate anatomy of the ear. Augmented reality may be used to perform auditory nerve implantation by a transmodiolar approach in patients with profound hearing loss. METHODS We present an augmented reality system for the visualization of the cochlear axis in surgical videos. The system starts with automatic anatomical landmark detection in preoperative computed tomography images based on deep reinforcement learning. These landmarks are used to register the preoperative geometry with the real-time microscopic video captured inside the auditory canal. The three-dimensional pose of the cochlear axis is determined using the registration projection matrices. In addition, patient-microscope movements are tracked using an image feature-based tracking process. RESULTS The landmark detection stage yielded an average localization error of [Formula: see text] mm ([Formula: see text]). The target registration error was [Formula: see text] mm for the cochlear apex and [Formula: see text] for the cochlear axis. CONCLUSION We developed an augmented reality system to visualize the cochlear axis in intraoperative videos. The system yielded millimetric accuracy and remained stable despite camera movements throughout the procedure in experimental conditions.
14. Early Feasibility Studies of Augmented Reality Navigation for Lateral Skull Base Surgery. Otol Neurotol 2020; 41:883-888. [DOI: 10.1097/mao.0000000000002724]
15. Video-based augmented reality combining CT-scan and instrument position data to microscope view in middle ear surgery. Sci Rep 2020; 10:6767. [PMID: 32317726] [PMCID: PMC7174368] [DOI: 10.1038/s41598-020-63839-2]
Abstract
The aim of the study was to develop and assess the performance of a video-based augmented reality system, combining preoperative computed tomography (CT) and real-time microscopic video, as the first crucial step to keyhole middle ear procedures through a tympanic membrane puncture. Six different artificial human temporal bones were included in this prospective study. Six stainless steel fiducial markers were glued on the periphery of the eardrum, and a high-resolution CT-scan of the temporal bone was obtained. Virtual endoscopy of the middle ear based on this CT-scan was conducted on Osirix software. The virtual endoscopy image was registered to the microscope-based video of the intact tympanic membrane based on the fiducial markers, and a homography transformation was applied during microscope movements. These movements were tracked using the Speeded-Up Robust Features (SURF) method. Simultaneously, a micro-surgical instrument was identified and tracked using a Kalman filter. The 3D position of the instrument was extracted by solving a three-point perspective framework. For evaluation, the instrument was introduced through the tympanic membrane and ink droplets were injected on three middle ear structures. An average initial registration accuracy of 0.21 ± 0.10 mm (n = 3) was achieved, with slow error propagation during tracking (0.04 ± 0.07 mm). The estimated surgical instrument tip position error was 0.33 ± 0.22 mm. The target structures' localization accuracy was 0.52 ± 0.15 mm. The submillimetric accuracy of our system without a tracker is compatible with ear surgery.
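Editor's illustration (not from the cited paper): the fiducial-based homography registration this abstract describes can be estimated from four or more 2D point correspondences with the direct linear transform (DLT). A minimal numpy sketch, with all names illustrative:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous).

    src, dst: (N, 2) arrays of >= 4 corresponding points
    (e.g. fiducial markers in the virtual-endoscopy and microscope images).
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on the 9 entries of H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)      # null-space vector = flattened homography
    return H / H[2, 2]            # fix the scale ambiguity

def apply_h(H, pts):
    """Map (N, 2) points through homography H."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

In a tracking loop like the one described, the correspondences would come from matched SURF features rather than fixed fiducials, and the homography would be re-estimated per frame.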
16. Leterme G, Guigou C, Oudot A, Collin B, Boudon J, Millot N, Geissler A, Belharet K, Bozorg Grayeli A. Superparamagnetic Nanoparticle Delivery to the Cochlea Through Round Window by External Magnetic Field: Feasibility and Toxicity. Surg Innov 2019; 26:646-655. [PMID: 31478462] [DOI: 10.1177/1553350619867217]
Abstract
Introduction. The objective of this study was to evaluate the feasibility and toxicity of superparamagnetic iron oxide nanoparticles (SPIONs) administered into the cochlea through the round window (RW) by an external magnetic field. Materials and Methods. In 5 Wistar rats, the left RW was punctured. SPIONs suspended in hyaluronic gel (5 mg/mL) were applied in the RW niche and covered by a muscle graft. The nanoparticles were mobilized using a rare-earth magnet (0.54 T) held in 4 consecutive positions around the head. The right ear served as control. Hearing function was monitored by auditory brainstem responses (4-32 kHz tone bursts). Results. The auditory thresholds remained unchanged 1 month after the administration. The histological study of the cochleae showed that SPIONs were driven into the scala tympani in the basal turn, the second turn, and the apex. Conclusion. Superparamagnetic nanoparticles can be driven inside the cochlea toward the apex, with hearing preserved for up to 1 month in rats.
Collapse
Affiliation(s)
- Gaëlle Leterme
- Otolaryngology Department, Dijon University Hospital, Dijon, France; Laboratoire Imvia, Université Bourgogne-Franche-Comté, Dijon, France
- Caroline Guigou
- Otolaryngology Department, Dijon University Hospital, Dijon, France; Laboratoire Imvia, Université Bourgogne-Franche-Comté, Dijon, France
- Bertrand Collin
- Centre Georges François Leclerc, Dijon, France; ICMUB, UMR 6302 CNRS/Université Bourgogne Franche-Comté, Dijon, France
- Julien Boudon
- Laboratoire ICB, UMR 6303 CNRS/Université Bourgogne Franche-Comté, Dijon, France
- Nadine Millot
- Laboratoire ICB, UMR 6303 CNRS/Université Bourgogne Franche-Comté, Dijon, France
- Audrey Geissler
- Plateforme d'imagerie cellulaire CellImaP, Université Bourgogne-Franche-Comté, Dijon, France
- Karim Belharet
- Laboratoire PRISME, HEI Campus Centre, Châteauroux, France
- Alexis Bozorg Grayeli
- Otolaryngology Department, Dijon University Hospital, Dijon, France; Laboratoire Imvia, Université Bourgogne-Franche-Comté, Dijon, France
17.
Abstract
BACKGROUND The field of otology is increasingly at the forefront of innovation in science and medicine. The inner ear, one of the most challenging systems to study, has been rendered much more open to inquiry by recent developments in research methodology. Promising advances of potential clinical impact have occurred in recent years in biological fields such as auditory genetics, ototoxic chemoprevention and organ of Corti regeneration. The interface of the ear with digital technology to remediate hearing loss, or as a consumer device within an intelligent ecosystem of connected devices, is receiving enormous creative energy. Automation and artificial intelligence can enhance otological medical and surgical practice. Otology is poised to enter a new renaissance period, in which many previously untreatable ear diseases will yield to newly introduced therapies. OBJECTIVE This paper speculates on the direction otology will take in the coming decades. CONCLUSION Making predictions about the future of otology is a risky endeavour. If the predictions are found wanting, it will likely be because of unforeseen revolutionary methods.
18. Rose AS, Kim H, Fuchs H, Frahm JM. Development of augmented-reality applications in otolaryngology-head and neck surgery. Laryngoscope 2019; 129 Suppl 3:S1-S11. [PMID: 31260127] [DOI: 10.1002/lary.28098]
Abstract
OBJECTIVES/HYPOTHESIS Augmented reality (AR) allows for the addition of transparent virtual images and video to one's view of a physical environment. Our objective was to develop a head-worn AR system for accurate, intraoperative localization of pathology and normal anatomic landmarks during open head and neck surgery. STUDY DESIGN Face validity and case study. METHODS A protocol was developed for the creation of three-dimensional (3D) virtual models based on computed tomography scans. Using the HoloLens AR platform, a novel system of registration and tracking was developed. Accuracy was determined in relation to actual physical landmarks. A face validity study was then performed in which otolaryngologists were asked to evaluate the technology and perform a simulated surgical task using AR image guidance. A case study highlighting the potential usefulness of the technology is also presented. RESULTS An AR system was developed for intraoperative 3D visualization and localization. Accuracy measurements showed an average error of 2.47 ± 0.46 millimeters (1.99, 3.30). The face validity study supports the potential of this system to improve safety and efficiency in open head and neck surgical procedures. CONCLUSIONS An AR system for accurate localization of pathology and normal anatomic landmarks of the head and neck is feasible with current technology. A face validity study reveals the potential value of the system in intraoperative image guidance. This application of AR, among others in the field of otolaryngology-head and neck surgery, promises to improve surgical efficiency and patient safety in the operating room. LEVEL OF EVIDENCE 2b Laryngoscope, 129:S1-S11, 2019.
Collapse
Affiliation(s)
- Austin S Rose
- Department of Otolaryngology-Head and Neck Surgery, University of North Carolina, Chapel Hill, North Carolina, U.S.A
- Hyounghun Kim
- Department of Computer Science, University of North Carolina, Chapel Hill, North Carolina, U.S.A
- Henry Fuchs
- Department of Computer Science, University of North Carolina, Chapel Hill, North Carolina, U.S.A
- Jan-Michael Frahm
- Department of Computer Science, University of North Carolina, Chapel Hill, North Carolina, U.S.A
19. Hussain R, Lalande A, Guigou C, Bozorg Grayeli A. Contribution of Augmented Reality to Minimally Invasive Computer-Assisted Cranial Base Surgery. IEEE J Biomed Health Inform 2019; 24:2093-2106. [DOI: 10.1109/jbhi.2019.2954003]