1. Maglitto F, Copelli C, Manfuso A, Cocis S, Salzano G. Special Issue "New Updates in Oral and Maxillofacial Surgery". J Pers Med 2024; 14:705. [PMID: 39063959] [PMCID: PMC11277887] [DOI: 10.3390/jpm14070705]
Abstract
In the ever-evolving landscape of medical science, few fields have witnessed as profound a transformation as oral and maxillofacial surgery [...].
Affiliation(s)
- Fabio Maglitto
- Maxillo-Facial Surgery, Interdisciplinary Department of Medicine, University of Bari, 70100 Bari, Italy
- Chiara Copelli
- Maxillo-Facial Surgery, Interdisciplinary Department of Medicine, University of Bari, 70100 Bari, Italy
- Alfonso Manfuso
- Maxillo-Facial Surgery, Interdisciplinary Department of Medicine, University of Bari, 70100 Bari, Italy
- Stefan Cocis
- Maxillo-Facial Surgery, Interdisciplinary Department of Medicine, University of Bari, 70100 Bari, Italy
- Giovanni Salzano
- Maxillofacial Surgery Unit, Department of Neurosciences, Reproductive and Odontostomatological Sciences, University of Naples Federico II, 80131 Naples, Italy
2. Tong G, Xu J, Pfister M, Atoum J, Prasad K, Miller A, Topf M, Wu JY. Development of an augmented reality guidance system for head and neck cancer resection. Healthc Technol Lett 2024; 11:93-100. [PMID: 38638497] [PMCID: PMC11022213] [DOI: 10.1049/htl2.12062]
Abstract
The use of head-mounted augmented reality (AR) for surgery has grown rapidly in recent years. AR aids intraoperative surgical navigation by overlaying three-dimensional (3D) holographic reconstructions of medical data. However, performing AR surgeries on complex areas such as the head and neck region poses challenges in terms of accuracy and speed. This study explores the feasibility of an AR guidance system for resections of positive tumour margins in a cadaveric specimen. The authors present an intraoperative solution that enables surgeons to upload and visualize holographic reconstructions of resected cadaver tissues. The solution uses a 3D scanner to capture detailed scans of the resected tissue, which are subsequently uploaded into the authors' software. The software converts the scans of resected tissues into specimen holograms that are viewable through a head-mounted AR display. By re-aligning these holograms with the cadaver using gestures or voice commands, surgeons can navigate the head and neck tumour site. This workflow can run concurrently with frozen section analysis. On average, the authors achieve an uploading time of 2.98 min, a visualization time of 1.05 min, and a re-alignment time of 4.39 min, compared with the 20 to 30 min typical of frozen section analysis, and a mean re-alignment error of 3.1 mm. The authors' software provides a foundation for new research and product development in using AR to navigate complex 3D anatomy in surgery.
Affiliation(s)
- Guansen Tong
- Computer Science Department, Vanderbilt University, Nashville, Tennessee, USA
- Jiayi Xu
- Computer Science Department, Vanderbilt University, Nashville, Tennessee, USA
- Michael Pfister
- Computer Science Department, Vanderbilt University, Nashville, Tennessee, USA
- Jumanh Atoum
- Computer Science Department, Vanderbilt University, Nashville, Tennessee, USA
- Kavita Prasad
- Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Alexis Miller
- Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Michael Topf
- Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Jie Ying Wu
- Computer Science Department, Vanderbilt University, Nashville, Tennessee, USA
3. Verhellen A, Elprama SA, Scheerlinck T, Van Aerschot F, Duerinck J, Van Gestel F, Frantz T, Jansen B, Vandemeulebroucke J, Jacobs A. Exploring technology acceptance of head-mounted device-based augmented reality surgical navigation in orthopaedic surgery. Int J Med Robot 2023:e2585. [PMID: 37830305] [DOI: 10.1002/rcs.2585]
Abstract
BACKGROUND This study used the Unified Theory of Acceptance and Use of Technology (UTAUT) to investigate the acceptance of HMD-based AR surgical navigation. METHODS An experiment was conducted in which participants drilled 12 predefined holes using freehand drilling, proprioceptive control, and AR assistance. Technology acceptance was assessed through a survey and non-participant observations. RESULTS Participants' intention to use AR correlated (p < 0.05) with social influence (Spearman's rho (rs) = 0.599), perceived performance improvement (rs = 0.592) and attitude towards AR (rs = 0.542). CONCLUSIONS While most participants acknowledged the potential of AR, they also highlighted persistent barriers to adoption, such as issues related to user-friendliness, time efficiency and device discomfort. To overcome these challenges, future AR surgical navigation systems should focus on enhancing surgical performance while minimising disruptions to workflows and operating times. Engaging orthopaedic surgeons in the development process can facilitate the creation of tailored solutions and accelerate adoption.
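The acceptance results above are Spearman rank correlations (rs), which are appropriate for ordinal Likert-scale survey data. As a generic illustration of how such a coefficient is computed from paired responses, a tie-aware Spearman's rho can be sketched as follows. The response values below are invented for the example and are not the study's data.

```python
# Spearman's rank correlation, computed from scratch on hypothetical
# 5-point Likert responses (illustrative only, not the study's data).

def rank(values):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Pearson correlation of the two rank vectors (handles ties)."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical responses: intention to use AR vs. social influence
intention = [4, 5, 3, 2, 4, 5, 1, 3]
social = [3, 5, 3, 2, 5, 4, 2, 3]
rho = spearman_rho(intention, social)  # positive, i.e. rho > 0
```

In practice a library routine such as SciPy's `spearmanr` would also return the p-value reported in the abstract; the sketch above only shows where the coefficient itself comes from.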
Affiliation(s)
- Thierry Scheerlinck
- Department of Orthopedic Surgery and Traumatology - Research Group BEFY-ORTHO, Universitair Ziekenhuis Brussel - Vrije Universiteit Brussel, Brussel, Belgium
- Fiene Van Aerschot
- Department of Orthopedic Surgery and Traumatology - Research Group BEFY-ORTHO, Universitair Ziekenhuis Brussel - Vrije Universiteit Brussel, Brussel, Belgium
- Johnny Duerinck
- Department of Neurosurgery - Research Group Center for Neurosciences (C4N-NEUR), Universitair Ziekenhuis Brussel - Vrije Universiteit Brussel, Brussel, Belgium
- Frederick Van Gestel
- Department of Neurosurgery - Research Group Center for Neurosciences (C4N-NEUR), Universitair Ziekenhuis Brussel - Vrije Universiteit Brussel, Brussel, Belgium
- Taylor Frantz
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussel, Belgium
- Bart Jansen
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussel, Belgium
- Jef Vandemeulebroucke
- Department of Radiology - Department of Electronics and Informatics (ETRO), Universitair Ziekenhuis Brussel - Vrije Universiteit Brussel - Imec, Brussel, Belgium
- An Jacobs
- IMEC-SMIT, Vrije Universiteit Brussel, Brussel, Belgium
4. Ding AS, Lu A, Li Z, Sahu M, Galaiya D, Siewerdsen JH, Unberath M, Taylor RH, Creighton FX. A Self-Configuring Deep Learning Network for Segmentation of Temporal Bone Anatomy in Cone-Beam CT Imaging. Otolaryngol Head Neck Surg 2023; 169:988-998. [PMID: 36883992] [PMCID: PMC11060418] [DOI: 10.1002/ohn.317]
Abstract
OBJECTIVE Preoperative planning for otologic or neurotologic procedures often requires manual segmentation of relevant structures, which can be tedious and time-consuming. Automated methods for segmenting multiple geometrically complex structures can not only streamline preoperative planning but also augment minimally invasive and/or robot-assisted procedures in this space. This study evaluates a state-of-the-art deep learning pipeline for semantic segmentation of temporal bone anatomy. STUDY DESIGN A descriptive study of a segmentation network. SETTING Academic institution. METHODS A total of 15 high-resolution cone-beam temporal bone computed tomography (CT) data sets were included in this study. All images were co-registered, with relevant anatomical structures (eg, ossicles, inner ear, facial nerve, chorda tympani, bony labyrinth) manually segmented. Predicted segmentations from no new U-Net (nnU-Net), an open-source 3-dimensional semantic segmentation neural network, were compared against ground-truth segmentations using modified Hausdorff distances (mHD) and Dice scores. RESULTS Fivefold cross-validation results with nnU-Net between predicted and ground-truth labels were as follows: malleus (mHD: 0.044 ± 0.024 mm, Dice: 0.914 ± 0.035), incus (mHD: 0.051 ± 0.027 mm, Dice: 0.916 ± 0.034), stapes (mHD: 0.147 ± 0.113 mm, Dice: 0.560 ± 0.106), bony labyrinth (mHD: 0.038 ± 0.031 mm, Dice: 0.952 ± 0.017), and facial nerve (mHD: 0.139 ± 0.072 mm, Dice: 0.862 ± 0.039). Comparison against atlas-based segmentation propagation showed significantly higher Dice scores for all structures (p < .05). CONCLUSION Using an open-source deep learning pipeline, we demonstrate consistently submillimeter accuracy for semantic CT segmentation of temporal bone anatomy compared to hand-segmented labels. This pipeline has the potential to greatly improve preoperative planning workflows for a variety of otologic and neurotologic procedures and augment existing image guidance and robot-assisted systems for the temporal bone.
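The two metrics in this abstract, the Dice score and a modified Hausdorff distance, are standard segmentation-accuracy measures. A minimal sketch of both follows, computed on tiny hypothetical voxel sets rather than the study's data; the mHD variant shown is the Dubuisson-Jain mean-of-minima form, which may differ in detail from the paper's exact implementation.

```python
# Dice overlap and a modified Hausdorff distance (Dubuisson & Jain, 1994)
# on toy voxel coordinate sets -- an illustrative sketch, not the study's code.

def dice(a, b):
    """Dice similarity: 2 * |A intersect B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

def mean_min_dist(a, b):
    """Mean, over points of A, of the Euclidean distance to the nearest point of B."""
    def d(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    return sum(min(d(p, q) for q in b) for p in a) / len(a)

def modified_hausdorff(a, b):
    """Maximum of the two directed mean-of-minima distances."""
    return max(mean_min_dist(a, b), mean_min_dist(b, a))

# Hypothetical 2-D "segmentations" as sets of voxel coordinates
pred = [(0, 0), (0, 1), (1, 0), (1, 1)]
truth = [(0, 0), (0, 1), (1, 0), (2, 0)]
dsc = dice(pred, truth)                 # 0.75
mhd = modified_hausdorff(pred, truth)   # 0.25
```

Dice rewards voxel overlap while the Hausdorff-family distances penalize boundary error in physical units, which is why the abstract reports both: a structure can overlap well yet still have an outlying surface point, or vice versa.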
Affiliation(s)
- Andy S. Ding
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Alexander Lu
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Zhaoshuo Li
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Manish Sahu
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Deepa Galaiya
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Jeffrey H. Siewerdsen
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Russell H. Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Francis X. Creighton
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
5. Saadya A, Chegini S, Morley S, McGurk M. Augmented reality presentation of the extracranial facial nerve: an innovation in parotid surgery. Br J Oral Maxillofac Surg 2023; 61:428-436. [PMID: 37328316] [DOI: 10.1016/j.bjoms.2023.05.007]
Abstract
Surgeons have traditionally been unable to see the facial nerve's position during parotid surgery. Now, with special magnetic resonance imaging (MRI) sequences, the nerve can be located and converted into a 3D model displayed on an augmented reality (AR) device for surgeons to study and manipulate. This study explores the accuracy and usefulness of the technique for the treatment of benign and malignant parotid tumours. A total of 20 patients with parotid tumours had 3-Tesla MRI scans, and their anatomical structures were segmented using Slicer software. The structures were imported into a Microsoft HoloLens 2® device, displayed in 3D, and shown to the patient for consent. Intraoperative video recording was used to record the position of the facial nerve in relation to the tumour. In all cases, the predicted path of the nerve from the 3D model was combined with surgical observation and video recording. The imaging proved to have application in both benign and malignant disease. It also improved the process of informed patient consent. Three-dimensional MRI of the facial nerve within the parotid gland and its display as a 3D model is an innovative technique for parotid surgery. Surgeons can now see the nerve's position and tailor their approach to each patient's tumour, providing personalised care. The technique eliminates the surgeon's blind spot and is a significant advantage in parotid surgery.
Affiliation(s)
- Ahmad Saadya
- University College London Hospital, United Kingdom
- Simon Morley
- University College London Hospital, United Kingdom
- Mark McGurk
- University College London Hospital, United Kingdom
6. Gsaxner C, Li J, Pepe A, Jin Y, Kleesiek J, Schmalstieg D, Egger J. The HoloLens in medicine: A systematic review and taxonomy. Med Image Anal 2023; 85:102757. [PMID: 36706637] [DOI: 10.1016/j.media.2023.102757]
Abstract
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main player in the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 through 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore and SpringerLink databases. We propose a new taxonomy including use case, technical methodology for registration and tracking, data sources, visualization as well as validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, where AR-enhanced medical simulators emerge as a promising technology. While concerns about human-computer interactions, usability and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications, yet only a few of them propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature, to pave the way for novel, innovative directions and translation into the medical routine.
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jianning Li
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, 311121 Zhejiang, China
- Jens Kleesiek
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; BioTechMed, 8010 Graz, Austria; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
7. Stebnev VS, Zhuravlev AV. Traditional analogue vs. three-dimensional digital visualization used in ophthalmic surgery. Russian Ophthalmological Journal 2023. [DOI: 10.21516/2072-0076-2023-16-1-168-174]
Abstract
The visualization of the surgical process remains a topical issue in cataract surgery. The review presents the history of visualization techniques in ophthalmic surgery and compares the main current analogue and 3D digital technologies. The advent of 3D imaging systems in clinical practice has helped solve many issues associated with standard analogue microscopes: limited focus and field of vision; the need for a large amount of light, which increases the risk of iatrogenic retinal phototoxicity; and the surgeon's attachment to the microscope eyepieces, which places a high load on the surgeon's visual apparatus as well as on the back and neck muscles.
Affiliation(s)
- V. S. Stebnev
- Samara State Medical University, Institute of Vocational Education; “Eye Surgery” Ophthalmological Clinic
- A. V. Zhuravlev
- Samara State Medical University, Institute of Vocational Education; Kinel Central Regional Hospital
8. Baashar Y, Alkawsi G, Wan Ahmad WN, Alomari MA, Alhussian H, Tiong SK. Towards Wearable Augmented Reality in Healthcare: A Comparative Survey and Analysis of Head-Mounted Displays. Int J Environ Res Public Health 2023; 20:3940. [PMID: 36900951] [PMCID: PMC10002206] [DOI: 10.3390/ijerph20053940]
Abstract
Head-mounted displays (HMDs) have the potential to greatly impact the surgical field by maintaining sterile conditions in healthcare environments. Google Glass (GG) and Microsoft HoloLens (MH) are examples of optical HMDs. In this comparative survey related to wearable augmented reality (AR) technology in the medical field, we examine the current developments in wearable AR technology, as well as the medical aspects, with a specific emphasis on smart glasses and HoloLens. The authors searched recent articles (between 2017 and 2022) in the PubMed, Web of Science, Scopus, and ScienceDirect databases, and a total of 37 relevant studies were considered for this analysis. The selected studies were divided into two main groups: 15 of the studies (around 41%) focused on smart glasses (e.g., Google Glass) and 22 (59%) focused on Microsoft HoloLens. Google Glass was used in various surgical specialities and preoperative settings, namely dermatology visits and nursing skill training. Moreover, Microsoft HoloLens was used in telepresence applications and holographic navigation of shoulder and gait impairment rehabilitation, among others. However, some limitations were associated with their use, such as low battery life, limited memory size, and possible ocular pain. Promising results were obtained by different studies regarding the feasibility, usability, and acceptability of using both Google Glass and Microsoft HoloLens in patient-centric settings as well as medical education and training. Further work and development of rigorous research designs are required to evaluate the efficacy and cost-effectiveness of wearable AR devices in the future.
Affiliation(s)
- Yahia Baashar
- Faculty of Computing and Informatics, Universiti Malaysia Sabah (UMS), Labuan 87000, Malaysia
- Gamal Alkawsi
- Institute of Sustainable Energy (ISE), Universiti Tenaga Nasional, Kajang 43000, Malaysia
- Faculty of Computer Science and Information Systems, Thamar University, Thamar 87246, Yemen
- Mohammad Ahmed Alomari
- Institute of Informatics and Computing in Energy, Universiti Tenaga Nasional (UNITEN), Kajang 43000, Malaysia
- Hitham Alhussian
- Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Malaysia
- Sieh Kiong Tiong
- Institute of Sustainable Energy (ISE), Universiti Tenaga Nasional, Kajang 43000, Malaysia
9. Grad P, Przeklasa-Bierowiec AM, Malinowski KP, Witowski J, Proniewska K, Tatoń G. Application of HoloLens-based augmented reality and three-dimensional printed anatomical tooth reference models in dental education. Anat Sci Educ 2022. [PMID: 36524288] [DOI: 10.1002/ase.2241]
Abstract
Tooth anatomy is fundamental knowledge used in everyday dental practice to reconstruct the occlusal surface during cavity fillings. The main objective of this project was to evaluate the suitability of two types of anatomical tooth reference models used to support reconstruction of the occlusal anatomy of the teeth: (1) a three-dimensional (3D)-printed model and (2) a model displayed in augmented reality (AR) using Microsoft HoloLens. The secondary objective was to evaluate three aspects impacting the outcome: clinical experience, comfort of work, and other variables. The tertiary objective was to evaluate the usefulness of AR in dental education. Anatomical models of crowns of three different molars were made using cone beam computed tomography image segmentation, printed with a stereolithographic 3D printer, and then displayed in the HoloLens. Each participant reconstructed the occlusal anatomy of three teeth: one without any reference materials and two with an anatomical reference model, either 3D-printed or holographic. The reconstruction work was followed by the completion of an evaluation questionnaire. The maximum Hausdorff distances (Hmax) between the superimposed images of the specimens after the procedures and the anatomical models were then calculated. The results showed that the most accurate but slowest reconstruction was achieved with the use of 3D-printed reference models and that the results were not affected by other aspects considered. For this method, the Hmax was observed to be 630 μm (p = 0.004). It was concluded that while AR models can be helpful in dental anatomy education, they are not suitable replacements for physical models.
Affiliation(s)
- Piotr Grad
- Department of Integrated Dentistry, Institute of Dentistry, Faculty of Medicine, Jagiellonian University Medical College, Kraków, Poland
- Anna M Przeklasa-Bierowiec
- Department of Integrated Dentistry, Institute of Dentistry, Faculty of Medicine, Jagiellonian University Medical College, Kraków, Poland
- Krzysztof P Malinowski
- Department of Bioinformatics and Telemedicine, Faculty of Medicine, Jagiellonian University Medical College, Kraków, Poland
- Jan Witowski
- Department of Radiology, New York University Grossman School of Medicine, New York, New York, USA
- Klaudia Proniewska
- Department of Bioinformatics and Telemedicine, Faculty of Medicine, Jagiellonian University Medical College, Kraków, Poland
- Grzegorz Tatoń
- Department of Biophysics, Chair of Physiology, Faculty of Medicine, Jagiellonian University Medical College, Kraków, Poland
10. Online learning: an effective option for teaching ENT to medical students? J Laryngol Otol 2022; 137:560-564. [PMID: 35811429] [DOI: 10.1017/s0022215122001542]
Abstract
OBJECTIVE ENT is underrepresented in the curriculum, and this has been compounded by coronavirus disease 2019. Recent restructures have removed ENT placements from the curriculum. This lack of exposure needs to be addressed, and increased use of online learning represents an opportunity to facilitate this. This study aimed to evaluate whether online learning can effectively deliver undergraduate ENT teaching. METHODS An online ENT module was created; content was structured on the Sheffield Medical School curriculum. Pre- and post-module tests and 5-point Likert scales were used to assess student knowledge and confidence, respectively. RESULTS A total of 115 participants were recruited. Test scores improved by 29 per cent (p < 0.001) and confidence by 66 per cent. Anatomy and ENT conditions demonstrated significant improvement in confidence, with a lower confidence score for examination. CONCLUSION This study showed improved knowledge and confidence, whilst highlighting greater efficacy in content over practical skills teaching. Online learning is a validated educational tool; however, it should not be used as a replacement but as an adjunct to supplement learning.
11.
Abstract
Augmented reality (AR) is an innovative system that enhances the real world by superimposing virtual objects on reality. The aim of this study was to analyze the application of AR in medicine and which of its technical solutions are the most used. We carried out a scoping review of the articles published between 2019 and February 2022. The initial search yielded a total of 2649 articles. After applying filters, removing duplicates and screening, we included 34 articles in our analysis. The analysis of the articles highlighted that AR has traditionally been used mainly in orthopedics, in addition to maxillofacial surgery and oncology. Regarding the display application in AR, the Microsoft HoloLens Optical Viewer is the most used method. Moreover, for the tracking and registration phases, the marker-based method with a rigid registration remains the most used system. Overall, the results of this study suggested that AR is an innovative technology with numerous advantages, finding applications in several new surgery domains. Considering the available data, it is not possible to clearly identify all the fields of application or the best AR technologies.
12. Key Ergonomics Requirements and Possible Mechanical Solutions for Augmented Reality Head-Mounted Displays in Surgery. Multimodal Technologies and Interaction 2022. [DOI: 10.3390/mti6020015]
Abstract
In the context of a European project, we identified over 150 requirements for the development of an augmented reality (AR) head-mounted display (HMD) specifically tailored to support highly challenging manual surgical procedures. The requirements were established by surgeons from different specialties and by industrial players working in the surgical field who had strong commitments to the exploitation of this technology. Some of these requirements were specific to the project, while others can be seen as key requirements for the implementation of an efficient and reliable AR headset to be used to support manual activities in the peripersonal space. The aim of this work is to describe these ergonomic requirements that impact the mechanical design of the HMDs, the possible innovative solutions to these requirements, and how these solutions have been used to implement the AR headset in surgical navigation. We also report the results of a preliminary qualitative evaluation of the AR headset by three surgeons.
13. Yang R, Li C, Tu P, Ahmed A, Ji T, Chen X. Development and Application of Digital Maxillofacial Surgery System Based on Mixed Reality Technology. Front Surg 2022; 8:719985. [PMID: 35174201] [PMCID: PMC8841731] [DOI: 10.3389/fsurg.2021.719985]
Abstract
Objective To achieve three-dimensional visual output of surgical navigation information by cross-linking a mixed reality display device with a high-precision optical navigator. Methods Quaternion-based point alignment algorithms were applied to establish the positioning configuration between the mixed reality display device and the high-precision optical navigator, together with real-time patient tracking and calibration; a mixed reality surgical visual positioning and tracking system was developed on the basis of open-source SDKs and development tools. In this study, four patients were selected for mixed reality-assisted tumor resection and reconstruction and re-examined 1 month after the operation. Postoperative CT was reconstructed, error distribution maps were formed with 3DMeshMetric, and error analysis and quality control were completed. Results The cross-linking of the mixed reality display device and the high-precision optical navigator was realized, a digital maxillofacial surgery system based on mixed reality technology was developed, and mixed reality-assisted tumor resection and reconstruction were successfully implemented in 4 cases. Conclusions The maxillofacial digital surgery system based on mixed reality technology can superimpose and display three-dimensional navigation information in the surgeon's field of vision, solving the visual-conversion and spatial-conversion problems of existing navigation systems. It improves the efficiency of digitally assisted surgery, effectively reduces the surgeon's dependence on spatial experience and imagination, and protects important anatomical structures during surgery. It has significant clinical application value and potential.
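The abstract mentions quaternion-based point alignment for registering the display device to the optical navigator. The classic closed-form algorithm in this family is Horn's absolute orientation method, which recovers a rigid rotation and translation from matched point pairs via the top eigenvector of a 4x4 quaternion matrix. The sketch below uses synthetic fiducial points and is a generic illustration of the technique, not the paper's implementation.

```python
# Quaternion-based rigid point-set alignment (Horn 1987, absolute orientation):
# find rotation R and translation t with B ~= R @ A + t from matched points.
# Illustrative sketch with synthetic fiducials, not the system's actual code.
import numpy as np

def horn_align(A, B):
    """Least-squares rigid transform from point set A onto point set B."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    M = (A - ca).T @ (B - cb)  # 3x3 cross-covariance of centered points
    Sxx, Sxy, Sxz = M[0]
    Syx, Syy, Syz = M[1]
    Szx, Szy, Szz = M[2]
    # Symmetric 4x4 matrix whose top eigenvector is the optimal unit quaternion
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,       Szx - Sxz,       Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz, Sxy + Syx,       Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       Syy - Sxx - Szz, Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,       Syz + Szy,       Szz - Sxx - Syy],
    ])
    eigvals, eigvecs = np.linalg.eigh(N)
    q = eigvecs[:, -1]  # eigenvector of the largest eigenvalue (sign-invariant)
    w, x, y, z = q
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    t = cb - R @ ca
    return R, t

# Synthetic fiducial markers transformed by a known rotation and translation
A = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.0]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1.0]])
B = A @ R_true.T + np.array([2.0, -1.0, 0.5])
R, t = horn_align(A, B)
fre = np.linalg.norm(A @ R.T + t - B, axis=1).mean()  # fiducial registration error
```

With noise-free matched points the recovered transform is exact up to floating-point error; with real tracked fiducials the residual corresponds to the fiducial registration error used to validate navigation accuracy.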
Affiliation(s)
- Rong Yang
- Shanghai Key Laboratory of Stomatology/Shanghai Institute of Stomatology, Department of Oral and Maxillofacial Head and Neck Oncology, National Clinical Research Center for Oral Diseases, School of Medicine, The Ninth People's Hospital, Shanghai Jiao Tong University, Shanghai, China
| | - Chenyao Li
- Shanghai Key Laboratory of Stomatology/Shanghai Institute of Stomatology, Department of Oral and Maxillofacial Head and Neck Oncology, National Clinical Research Center for Oral Diseases, School of Medicine, The Ninth People's Hospital, Shanghai Jiao Tong University, Shanghai, China
| | - Puxun Tu
- School of Mechanical and Engineering, Shanghai Jiaotong University, Shanghai, China
| | - Abdelrehem Ahmed
- Department of Craniomaxillofacial and Plastic Surgery, Faculty of Dentistry, Alexandria University, Alexandria, Egypt
| | - Tong Ji
- Shanghai Key Laboratory of Stomatology/Shanghai Institute of Stomatology, Department of Oral and Maxillofacial Head and Neck Oncology, National Clinical Research Center for Oral Diseases, School of Medicine, The Ninth People's Hospital, Shanghai Jiao Tong University, Shanghai, China
- *Correspondence: Tong Ji
| | - Xiaojun Chen
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
| |
|
14
|
Sahovaler A, Chan HHL, Gualtieri T, Daly M, Ferrari M, Vannelli C, Eu D, Manojlovic-Kolarski M, Orzell S, Taboni S, de Almeida JR, Goldstein DP, Deganello A, Nicolai P, Gilbert RW, Irish JC. Augmented Reality and Intraoperative Navigation in Sinonasal Malignancies: A Preclinical Study. Front Oncol 2021; 11:723509. [PMID: 34790568 PMCID: PMC8591179 DOI: 10.3389/fonc.2021.723509] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2021] [Accepted: 10/12/2021] [Indexed: 11/13/2022] Open
Abstract
Objective To report the first use of a novel projected augmented reality (AR) system in open sinonasal tumor resections in preclinical models and to compare the AR approach with an advanced intraoperative navigation (IN) system. Methods Four tumor models were created. Five head and neck surgeons participated in the study performing virtual osteotomies. Unguided, AR, IN, and AR + IN simulations were performed. Statistical comparisons between approaches were obtained. Intratumoral cut rate was the main outcome. The groups were also compared in terms of percentage of intratumoral, close, adequate, and excessive distances from the tumor. Data from a wearable gaze-tracker headset and NASA Task Load Index questionnaire results were analyzed as well. Results A total of 335 cuts were simulated. Intratumoral cuts were observed in 20.7%, 9.4%, 1.2%, and 0% of the unguided, AR, IN, and AR + IN simulations, respectively (p < 0.0001). AR was superior to the unguided approach in univariate and multivariate models. The percentage of time looking at the screen during the procedures was 55.5% for the unguided approach and 0%, 78.5%, and 61.8% for AR, IN, and AR + IN, respectively (p < 0.001). The combined approach significantly reduced screen time compared with the IN procedure alone. Conclusion We reported the use of a novel AR system for oncological resections in open sinonasal approaches, with improved margin delineation compared with unguided techniques. AR mitigated the gaze-toggling drawback of IN. Further refinement of the AR system is needed before translating our experience to clinical practice.
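The significance of the cut-rate differences reported above can be checked with a standard chi-square test of independence. The counts below are illustrative stand-ins chosen to roughly match the reported percentages and the 335-cut total, not the study's raw data:

```python
import numpy as np

# Hypothetical (intratumoral, clear) cut counts per guidance condition
table = np.array([
    [17, 65],   # unguided  (~20.7% intratumoral)
    [ 8, 77],   # AR        (~9.4%)
    [ 1, 83],   # IN        (~1.2%)
    [ 0, 84],   # AR + IN   (0%)
], dtype=float)

rates = table[:, 0] / table.sum(axis=1)             # intratumoral cut rate per condition

# Chi-square test of independence (guidance condition vs cut classification)
row = table.sum(axis=1, keepdims=True)
col = table.sum(axis=0, keepdims=True)
expected = row @ col / table.sum()                  # counts expected under independence
chi2 = ((table - expected) ** 2 / expected).sum()
dof = (table.shape[0] - 1) * (table.shape[1] - 1)   # = 3; critical value 16.27 at p = 0.001
```

A statistic far above the 3-degrees-of-freedom critical value mirrors the p < 0.0001 reported for the four-way comparison.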
Affiliation(s)
- Axel Sahovaler
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada.,Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada
| | - Harley H L Chan
- Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada
| | - Tommaso Gualtieri
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada.,Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada.,Unit of Otorhinolaryngology-Head and Neck Surgery, University of Brescia-ASST "Spedali Civili di Brescia", Brescia, Italy
| | - Michael Daly
- Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada
| | - Marco Ferrari
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada.,Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada.,Unit of Otorhinolaryngology-Head and Neck Surgery, University of Brescia-ASST "Spedali Civili di Brescia", Brescia, Italy.,Section of Otorhinolaryngology-Head and Neck Surgery, University of Padua-Azienda Ospedaliera di Padova, Padua, Italy
| | - Claire Vannelli
- Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada
| | - Donovan Eu
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada.,Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada
| | - Mirko Manojlovic-Kolarski
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada
| | - Susannah Orzell
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada
| | - Stefano Taboni
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada.,Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada.,Unit of Otorhinolaryngology-Head and Neck Surgery, University of Brescia-ASST "Spedali Civili di Brescia", Brescia, Italy.,Section of Otorhinolaryngology-Head and Neck Surgery, University of Padua-Azienda Ospedaliera di Padova, Padua, Italy
| | - John R de Almeida
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada
| | - David P Goldstein
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada
| | - Alberto Deganello
- Unit of Otorhinolaryngology-Head and Neck Surgery, University of Brescia-ASST "Spedali Civili di Brescia", Brescia, Italy
| | - Piero Nicolai
- Section of Otorhinolaryngology-Head and Neck Surgery, University of Padua-Azienda Ospedaliera di Padova, Padua, Italy
| | - Ralph W Gilbert
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada
| | - Jonathan C Irish
- Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada.,Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada
| |
|
15
|
Intraoperative Use of Mixed Reality Technology in Median Neck and Branchial Cyst Excision. Future Internet 2021. [DOI: 10.3390/fi13080214] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The paper considers the possibilities, prospects, and drawbacks of the mixed reality (MR) technology application using mixed reality smartglasses Microsoft HoloLens 2. The main challenge was to find and develop an approach that would allow surgeons to conduct operations using mixed reality on a large scale, reducing the preparation time required for the procedure and without having to create custom solutions for each patient. Research was conducted in three clinical cases: two median neck and one branchial cyst excisions. In each case, we applied a unique approach of hologram positioning in space based on mixed reality markers. As a result, we listed a series of positive and negative aspects related to MR surgery, along with proposed solutions for using MR in surgery on a daily basis.
|
16
|
Wasserzug O, Fishman G, Carmel-Neiderman N, Oestreicher-Kedem Y, Saada M, Dadia S, Golden E, Berman P, Handzel O, DeRowe A. Three dimensional printed models of the airway for preoperative planning of open Laryngotracheal surgery in children: Surgeon's perception of utility. J Otolaryngol Head Neck Surg 2021; 50:47. [PMID: 34256870 PMCID: PMC8278656 DOI: 10.1186/s40463-021-00524-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Accepted: 06/13/2021] [Indexed: 11/20/2022] Open
Abstract
BACKGROUND Preoperative planning of open laryngotracheal surgery is important for achieving good results. This study examined surgeons' perception of the impact of life-size 3D-printed models of the pediatric airway on surgical decision-making. METHODS Life-size three-dimensional models of the upper airway were created based on CT images of children scheduled for laryngotracheal reconstruction and cricotracheal resection with anastomosis. Five pediatric airway surgeons evaluated the three-dimensional models of seven children (median age 4.4 years, M:F ratio 4:3) for determining the surgical approach, incision location and length, graft length, and need for single- or double-stage surgery. They rated the importance of the three-dimensional model findings, compared to the direct laryngoscopy videos and CT findings, for each domain on a validated Likert scale of 1-5. RESULTS The mean rating for all domains was 3.6 ± 0.63 ("moderately important" to "very important"), and the median rating was 4 ("very important"). There was full agreement between raters for length of incision and length of graft. The between-rater agreement was 0.608 ("good") for surgical approach, 0.585 ("moderate") for incision location, and 0.429 ("moderate") for the need for single- or double-stage surgery. CONCLUSION Patient-specific three-dimensional printed models of children's upper airways were scored by pediatric airway surgeons as being moderately to very important for preoperative planning of open laryngotracheal surgery. Large-scale, objective outcome studies are warranted to establish the reliability and efficiency of these models.
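Between-rater agreement of the kind reported above is commonly quantified with chance-corrected statistics. A minimal sketch of Cohen's kappa for two raters follows; the study itself pooled agreement across five raters, so this illustrates the idea rather than reproducing their analysis:

```python
import numpy as np

def cohens_kappa(r1, r2, categories):
    """Chance-corrected agreement between two raters scoring the same items.
    Returns 1 for perfect agreement and 0 for chance-level agreement."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                          # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c)    # agreement expected by chance
             for c in categories)
    return (po - pe) / (1 - pe)                     # undefined when pe == 1
```

Values around 0.4-0.6 are conventionally read as "moderate" agreement, matching the labels used in the abstract.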
Affiliation(s)
- Oshri Wasserzug
- Pediatric Otolaryngology Unit, Dana-Dwek Children's Hospital, Tel Aviv Sourasky Medical Center, 6 Weizman Street, 6423906, Tel Aviv, Israel
- Department of Otolaryngology, Head & Neck and Maxillofacial Surgery, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
| | - Gadi Fishman
- Pediatric Otolaryngology Unit, Dana-Dwek Children's Hospital, Tel Aviv Sourasky Medical Center, 6 Weizman Street, 6423906, Tel Aviv, Israel
- Department of Otolaryngology, Head & Neck and Maxillofacial Surgery, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
| | - Narin Carmel-Neiderman
- Department of Otolaryngology, Head & Neck and Maxillofacial Surgery, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
| | - Yael Oestreicher-Kedem
- Department of Otolaryngology, Head & Neck and Maxillofacial Surgery, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
| | - Maher Saada
- Pediatric Otolaryngology Unit, Dana-Dwek Children's Hospital, Tel Aviv Sourasky Medical Center, 6 Weizman Street, 6423906, Tel Aviv, Israel
| | - Solomon Dadia
- The Surgical 3D Printing Lab, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
| | - Eran Golden
- The Surgical 3D Printing Lab, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
| | - Philip Berman
- The Surgical 3D Printing Lab, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
| | - Ophir Handzel
- Department of Otolaryngology, Head & Neck and Maxillofacial Surgery, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
| | - Ari DeRowe
- Pediatric Otolaryngology Unit, Dana-Dwek Children's Hospital, Tel Aviv Sourasky Medical Center, 6 Weizman Street, 6423906, Tel Aviv, Israel.
- Department of Otolaryngology, Head & Neck and Maxillofacial Surgery, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel.
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel.
| |
|
17
|
Wachter A, Kost J, Nahm W. Simulation-Based Estimation of the Number of Cameras Required for 3D Reconstruction in a Narrow-Baseline Multi-Camera Setup. J Imaging 2021; 7:jimaging7050087. [PMID: 34460683 PMCID: PMC8321353 DOI: 10.3390/jimaging7050087] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2021] [Revised: 04/29/2021] [Accepted: 05/06/2021] [Indexed: 10/27/2022] Open
Abstract
Graphical visualization systems are a common clinical tool for displaying digital images and three-dimensional volumetric data. These systems provide a broad spectrum of information to support physicians in their clinical routine. For example, the field of radiology enjoys unrestricted options for interaction with the data, since information is pre-recorded and available entirely in digital form. However, some fields, such as microsurgery, do not benefit from this yet. Microscopes, endoscopes, and laparoscopes show the surgical site as it is. To allow free data manipulation and information fusion, 3D digitization of surgical sites is required. We aimed to find the number of cameras needed to add this functionality to surgical microscopes. For this, we performed in silico simulations of the 3D reconstruction of representative models of microsurgical sites with different numbers of cameras in narrow-baseline setups. Our results show that eight independent camera views are preferable, while at least four are necessary for a digital surgical site. In most cases, eight cameras allow the reconstruction of over 99% of the visible part. With four cameras, over 95% can still be achieved. This answers one of the key questions for the development of a prototype microscope. In the future, such a system could provide functionality that is unattainable today.
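The kind of coverage estimate described above can be approximated cheaply for a convex target, where a surface point is visible to a camera exactly when its outward normal faces that camera (no occlusion test needed). The sketch below uses hypothetical geometry, not the authors' simulation pipeline, to compare four and eight narrow-baseline views:

```python
import numpy as np

rng = np.random.default_rng(0)
# Surface of a convex model: points on a unit sphere; outward normals equal positions
pts = rng.normal(size=(5000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

def arc_cameras(n, half_angle_deg=20.0):
    """n camera directions spread on a narrow cone around +z (narrow baseline)."""
    ang = np.deg2rad(half_angle_deg)
    phis = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.stack([np.sin(ang) * np.cos(phis),
                     np.sin(ang) * np.sin(phis),
                     np.full(n, np.cos(ang))], axis=1)

def coverage(camera_dirs, normals):
    """Fraction of surface points facing at least one camera (convex model)."""
    visible = np.zeros(len(normals), dtype=bool)
    for d in camera_dirs:
        visible |= normals @ d > 0
    return visible.mean()

c4 = coverage(arc_cameras(4), pts)
c8 = coverage(arc_cameras(8), pts)
```

Because the four directions are a subset of the eight, coverage can only grow with more views; a full simulation additionally needs occlusion handling for non-convex sites.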
|
18
|
Effect of marker position and size on the registration accuracy of HoloLens in a non-clinical setting with implications for high-precision surgical tasks. Int J Comput Assist Radiol Surg 2021; 16:955-966. [PMID: 33856643 PMCID: PMC8166698 DOI: 10.1007/s11548-021-02354-9] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2020] [Accepted: 03/16/2021] [Indexed: 01/26/2023]
Abstract
Purpose Emerging holographic headsets can be used to register patient-specific virtual models obtained from medical scans with the patient’s body. Maximising accuracy of the virtual models’ inclination angle and position (ideally, ≤ 2° and ≤ 2 mm, respectively, as in currently approved navigation systems) is vital for this application to be useful. This study investigated the accuracy with which a holographic headset registers virtual models with real-world features based on the position and size of image markers. Methods HoloLens® and the image-pattern-recognition tool Vuforia Engine™ were used to overlay a 5-cm-radius virtual hexagon on a monitor’s surface in a predefined position. The headset’s camera detection of an image marker (displayed on the monitor) triggered the rendering of the virtual hexagon on the headset’s lenses. 4 × 4, 8 × 8 and 12 × 12 cm image markers displayed at nine different positions were used. In total, the position and dimensions of 114 virtual hexagons were measured on photographs captured by the headset’s camera. Results Some image marker positions and the smallest image marker (4 × 4 cm) led to larger errors in the perceived dimensions of the virtual models than other image marker positions and larger markers (8 × 8 and 12 × 12 cm). ≤ 2° and ≤ 2 mm errors were found in 70.7% and 76% of cases, respectively. Conclusion Errors obtained in a non-negligible percentage of cases are not acceptable for certain surgical tasks (e.g. the identification of correct trajectories of surgical instruments). Achieving sufficient accuracy with image marker sizes that meet surgical needs and regardless of image marker position remains a challenge.
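Compliance with the ≤ 2° / ≤ 2 mm tolerances above reduces to a per-trial threshold check. A minimal sketch, using hypothetical error arrays rather than the study's measurements:

```python
import numpy as np

def tolerance_fractions(angle_err_deg, pos_err_mm, max_angle=2.0, max_pos=2.0):
    """Fractions of registration trials meeting the angular tolerance,
    the positional tolerance, and both simultaneously."""
    a = np.asarray(angle_err_deg, float)
    p = np.asarray(pos_err_mm, float)
    ok_a = a <= max_angle
    ok_p = p <= max_pos
    return float(ok_a.mean()), float(ok_p.mean()), float((ok_a & ok_p).mean())
```

The study reports the first two fractions (70.7% and 76%); the joint fraction is the stricter figure a navigation system would need to meet for both criteria at once.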
|
19
|
Scherl C, Stratemeier J, Rotter N, Hesser J, Schönberg SO, Servais JJ, Männle D, Lammert A. Augmented Reality with HoloLens® in Parotid Tumor Surgery: A Prospective Feasibility Study. ORL J Otorhinolaryngol Relat Spec 2021; 83:439-448. [PMID: 33784686 DOI: 10.1159/000514640] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2020] [Accepted: 01/02/2021] [Indexed: 11/19/2022]
Abstract
INTRODUCTION Augmented reality can improve planning and execution of surgical procedures. Head-mounted devices such as the HoloLens® (Microsoft, Redmond, WA, USA) are particularly suitable to achieve these aims because they are controlled by hand gestures and enable contactless handling in a sterile environment. OBJECTIVES So far, these systems have not yet found their way into the operating room for surgery of the parotid gland. This study explored the feasibility and accuracy of augmented reality-assisted parotid surgery. METHODS 2D MRI holographic images were created, and 3D holograms were reconstructed from MRI DICOM files and made visible via the HoloLens. 2D MRI slices were scrolled through, 3D images were rotated, and 3D structures were shown and hidden using only hand gestures. The 3D model and the patient were aligned manually. RESULTS The use of augmented reality with the HoloLens in parotid surgery was feasible. Gestures were recognized correctly. Mean superimposition error between the holographic model and the patient's anatomy was 1.3 cm. Registration error differed highly significantly between central and peripheral structures (p = 0.0059), with the smallest deviation centrally (10.9 mm) and the largest peripherally (19.6 mm). CONCLUSION This pilot study offers a first proof of concept of the clinical feasibility of the HoloLens for parotid tumor surgery. Workflow is not affected, but additional information is provided. Surgical performance could become safer through the navigation-like application of reality-fused 3D holograms, which also improves ergonomics without compromising sterility. Superimposition of the 3D holograms with the surgical field was possible, but further refinement is necessary to improve the accuracy.
Affiliation(s)
- Claudia Scherl
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - Johanna Stratemeier
- Institute of Experimental Radiation Oncology, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - Nicole Rotter
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - Jürgen Hesser
- Institute of Experimental Radiation Oncology, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - Stefan O Schönberg
- Department of Clinical Radiology and Nuclear Medicine, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - Jérôme J Servais
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - David Männle
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - Anne Lammert
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| |
|
20
|
Timonen T, Iso-Mustajärvi M, Linder P, Lehtimäki A, Löppönen H, Elomaa AP, Dietz A. Virtual reality improves the accuracy of simulated preoperative planning in temporal bones: a feasibility and validation study. Eur Arch Otorhinolaryngol 2020; 278:2795-2806. [PMID: 32964264 PMCID: PMC8266780 DOI: 10.1007/s00405-020-06360-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Accepted: 09/08/2020] [Indexed: 11/26/2022]
Abstract
PURPOSE Consumer-grade virtual reality (VR) has recently enabled various medical applications, but more evidence supporting their validity is needed. We investigated the accuracy of simulated surgical planning in a VR environment with temporal bones and compared it to conventional cross-sectional image viewing in a picture archiving and communication system (PACS) interface. METHODS Five experienced otologic surgeons measured significant anatomic structures and fiducials on five fresh-frozen cadaveric temporal bones in VR and cross-sectional viewing. Primary image data were acquired by computed tomography. In total, 275 anatomical landmark measurements and 250 measurements of the distance between fiducials were obtained with both methods. Distance measurements between the fiducials were confirmed by physical measurement with a Vernier caliper. The experts evaluated the subjective validity of both methods in a qualitative survey on a 5-point Likert scale. RESULTS A strong correlation based on the intraclass correlation coefficient was found between the methods for both the anatomical (r > 0.900) and fiducial measurements (r > 0.916). Two-tailed paired t-tests and Bland-Altman plots demonstrated high equivalence between VR and cross-sectional viewing, with mean differences of 1.9% (p = 0.396) and 0.472 mm (p = 0.065) for anatomical and fiducial measurements, respectively. Gross measurement errors due to misidentification of fiducials occurred more frequently in cross-sectional viewing. The mean face and content validity ratings for VR were significantly better than for cross-sectional viewing (total mean score 4.11 vs 3.39, p < 0.001). CONCLUSION Our study supports the good accuracy and reliability of a VR environment for simulated surgical planning in temporal bones compared with conventional cross-sectional visualization.
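The Bland-Altman comparison used above can be sketched in a few lines: given paired measurements from two methods, it reports the mean difference (bias) and the 95% limits of agreement. This is an illustration of the standard statistic with made-up data in the check, not the study's measurements:

```python
import numpy as np

def bland_altman(m1, m2):
    """Bias and 95% limits of agreement between two measurement methods
    applied to the same items (e.g. VR vs cross-sectional viewing)."""
    d = np.asarray(m1, float) - np.asarray(m2, float)
    bias = d.mean()
    sd = d.std(ddof=1)                     # sample standard deviation of differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A bias near zero with narrow limits of agreement, as reported in the abstract, indicates that the two viewing methods can be used interchangeably for these measurements.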
Affiliation(s)
- Tomi Timonen
- Department of Otorhinolaryngology, Kuopio University Hospital, Puijonlaaksontie 2, PL 100, 70210, Kuopio, Finland.
- School of Medicine, Institute of Clinical Medicine, University of Eastern Finland, Kuopio, Finland.
| | - Matti Iso-Mustajärvi
- Department of Otorhinolaryngology, Kuopio University Hospital, Puijonlaaksontie 2, PL 100, 70210, Kuopio, Finland
- Microsurgery Centre of Eastern Finland, Kuopio, Finland
| | - Pia Linder
- Department of Otorhinolaryngology, Kuopio University Hospital, Puijonlaaksontie 2, PL 100, 70210, Kuopio, Finland
| | - Antti Lehtimäki
- Department of Radiology, Kuopio University Hospital, Kuopio, Finland
| | - Heikki Löppönen
- Department of Otorhinolaryngology, Kuopio University Hospital, Puijonlaaksontie 2, PL 100, 70210, Kuopio, Finland
- School of Medicine, Institute of Clinical Medicine, University of Eastern Finland, Kuopio, Finland
| | | | - Aarno Dietz
- Department of Otorhinolaryngology, Kuopio University Hospital, Puijonlaaksontie 2, PL 100, 70210, Kuopio, Finland
- School of Medicine, Institute of Clinical Medicine, University of Eastern Finland, Kuopio, Finland
| |
|