1
Ye J, Chen Q, Zhong T, Liu J, Gao H. Is Overlain Display a Right Choice for AR Navigation? A Qualitative Study of Head-Mounted Augmented Reality Surgical Navigation on Accuracy for Large-Scale Clinical Deployment. CNS Neurosci Ther 2025; 31:e70217. [PMID: 39817491 PMCID: PMC11736426 DOI: 10.1111/cns.70217]
Abstract
BACKGROUND Over the past two decades, head-mounted augmented reality surgical navigation (HMARSN) systems have been increasingly employed across surgical specialties, driven both by advances in augmented reality-related technologies and by surgeons' desire to overcome drawbacks inherent to conventional surgical navigation systems. At present, most experimental HMARSN systems adopt overlain display (OD), which overlays virtual models and planned routes of surgical tools on the corresponding physical tissues, organs, and lesions in the surgical field, giving surgeons an intuitive, direct view that improves hand-eye coordination and avoids attention shift and loss of sight (LOS), among other benefits during procedures. Yet system accuracy, the most crucial performance indicator of any surgical navigation system, is difficult to ascertain because it is highly subjective and user-dependent. The aim of this study was therefore to review currently available experimental OD HMARSN systems qualitatively, explore how their system accuracy is affected by overlain display, and determine whether such systems are suited to large-scale clinical deployment. METHOD We searched PubMed and ScienceDirect with the terms "head mounted augmented reality surgical navigation," which returned 445 records in total. After screening and eligibility assessment, 60 papers were analyzed. Specifically, we focused on how their accuracies were defined and measured, and on whether those accuracies are stable in clinical practice and competitive with corresponding commercially available systems. RESULTS AND CONCLUSIONS The primary finding is that the system accuracy of OD HMARSN systems is seriously affected by the transformation between the space of the user's eyes and that of the surgical field, because measurement of this transformation is heavily individualized and user-dependent. Additionally, the transformation itself is potentially subject to change during surgical procedures, and is hence unstable. Therefore, OD HMARSN systems are not suitable for large-scale clinical deployment.
Affiliation(s)
- Jian Ye
- Department of Neurosurgery, Affiliated Qingyuan Hospital, Guangzhou Medical University, Qingyuan People's Hospital, Qingyuan, China
- Qingwen Chen
- Department of Neurosurgery, The First Affiliated Hospital of Guangdong Pharmaceutical University, Guangzhou, China
- Tao Zhong
- Department of Neurosurgery, The First Affiliated Hospital of Guangdong Pharmaceutical University, Guangzhou, China
- Jian Liu
- Department of Neurosurgery, Affiliated Qingyuan Hospital, Guangzhou Medical University, Qingyuan People's Hospital, Qingyuan, China
- Han Gao
- Department of Neurosurgery, Affiliated Qingyuan Hospital, Guangzhou Medical University, Qingyuan People's Hospital, Qingyuan, China
2
Prasad K, Fassler C, Miller A, Aweeda M, Pruthi S, Fusco JC, Daniel B, Miga M, Wu JY, Topf MC. More than meets the eye: Augmented reality in surgical oncology. J Surg Oncol 2024; 130:405-418. [PMID: 39155686 DOI: 10.1002/jso.27790]
Abstract
BACKGROUND AND OBJECTIVES In the field of surgical oncology, there has been a desire for innovative techniques to improve tumor visualization, resection, and patient outcomes. Augmented reality (AR) technology superimposes digital content onto the real-world environment, enhancing the user's experience by blending digital and physical elements. A thorough examination of AR technology in surgical oncology has yet to be performed. METHODS A scoping review of intraoperative AR in surgical oncology was conducted according to the guidelines and recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) framework. All original articles examining the use of intraoperative AR during the surgical management of cancer were included. Exclusion criteria were virtual reality applications only, preoperative use only, fluorescence, AR not specific to surgical oncology, and study design (reviews, commentaries, abstracts). RESULTS A total of 2735 articles were identified, of which 83 were included. Most studies (52) were performed on animal or phantom models, while the remainder included patients. A total of 1112 intraoperative AR surgical cases were performed across the studies. The most common anatomic site was the brain (20 articles), followed by liver (16), renal (9), and head and neck (8). AR was most often used for intraoperative navigation or anatomic visualization of tumors or critical structures, but was also used to identify osteotomy or craniotomy planes. CONCLUSIONS AR technology has been applied across the field of surgical oncology to aid in the localization and resection of tumors.
Affiliation(s)
- Kavita Prasad
- Department of Otolaryngology-Head & Neck Surgery, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
- Carly Fassler
- Department of Otolaryngology-Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Alexis Miller
- Department of Otolaryngology-Head & Neck Surgery, University of Alabama at Birmingham, Birmingham, Alabama, USA
- Marina Aweeda
- Department of Otolaryngology-Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Sumit Pruthi
- Department of Radiology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Joseph C Fusco
- Department of Pediatric Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Bruce Daniel
- Department of Radiology, Stanford Health Care, Palo Alto, California, USA
- Michael Miga
- Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Jie Ying Wu
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Michael C Topf
- Department of Otolaryngology-Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
3
Wilkat M, Liu S, Schwerter M, Schrader F, Saigo L, Karnatz N, Kübler NR, Rana M. A New Approach to Virtual Occlusion in Orthognathic Surgery Planning Using Mixed Reality-A Technical Note and Review of the Literature. J Pers Med 2023; 13:1709. [PMID: 38138936 PMCID: PMC10744857 DOI: 10.3390/jpm13121709]
Abstract
Orthognathic surgery plays a vital role in correcting various skeletal discrepancies of the maxillofacial region. Achieving optimal occlusion is a fundamental aspect of orthognathic surgery planning, as it directly influences postoperative outcomes and patient satisfaction. Traditional methods for setting the final occlusion involve the use of dental casts, which are time-consuming, prone to error, and not easily shared among collaborating specialties. In recent years, advancements in digital technology have introduced innovative approaches, such as virtual occlusion, which may offer enhanced accuracy and efficiency in orthognathic surgery planning. Furthermore, the emergence of mixed reality devices and their 3D visualization capabilities has brought novel benefits to the medical field, particularly in computer-assisted planning. This paper presents, for the first time, a prototype tool for setting virtual occlusion during orthognathic surgery planning using mixed reality technology. A complete walkthrough of the workflow is presented, including an explanation of the implicit advantages of this novel tool. The new approach to defining virtual occlusion is placed in context with other published methods of virtual occlusion setting, and its advantages and limitations are discussed along with concepts of surgical occlusion for orthognathic surgery.
Affiliation(s)
- Max Wilkat
- Department of Oral and Plastic Maxillofacial Surgery, Heinrich Heine University Hospital Düsseldorf, Moorenstraße 5, 40225 Düsseldorf, Germany
- Shufang Liu
- Brainlab AG, Olof-Palme-Str. 9, 81829 München, Germany
- Felix Schrader
- Department of Oral and Plastic Maxillofacial Surgery, Heinrich Heine University Hospital Düsseldorf, Moorenstraße 5, 40225 Düsseldorf, Germany
- Leonardo Saigo
- Department of Oral and Maxillofacial Surgery, National Dental Centre Singapore, 5 Second Hospital Ave., Singapore 168938, Singapore
- Nadia Karnatz
- Department of Oral and Plastic Maxillofacial Surgery, Heinrich Heine University Hospital Düsseldorf, Moorenstraße 5, 40225 Düsseldorf, Germany
- Norbert R. Kübler
- Department of Oral and Plastic Maxillofacial Surgery, Heinrich Heine University Hospital Düsseldorf, Moorenstraße 5, 40225 Düsseldorf, Germany
- Majeed Rana
- Department of Oral and Plastic Maxillofacial Surgery, Heinrich Heine University Hospital Düsseldorf, Moorenstraße 5, 40225 Düsseldorf, Germany
4
Prasad K, Miller A, Sharif K, Colazo JM, Ye W, Necker F, Baik F, Lewis JS, Rosenthal E, Wu JY, Topf MC. Augmented-Reality Surgery to Guide Head and Neck Cancer Re-resection: A Feasibility and Accuracy Study. Ann Surg Oncol 2023; 30:4994-5000. [PMID: 37133570 PMCID: PMC11563582 DOI: 10.1245/s10434-023-13532-1]
Abstract
BACKGROUND Given the complex three-dimensional (3D) anatomy of head and neck cancer specimens, head and neck surgeons often have difficulty relocating the site of an initial positive margin to perform re-resection. This cadaveric study aimed to determine the feasibility and accuracy of augmented reality surgery to guide head and neck cancer re-resections. METHODS This study investigated three cadaveric specimens. Each head and neck resection specimen was 3D scanned and exported into the HoloLens augmented reality environment. The surgeon manually aligned the 3D specimen hologram into the resection bed. The accuracy of manual alignment and the time intervals throughout the protocol were recorded. RESULTS The 20 head and neck cancer resections performed in this study included 13 cutaneous and 7 oral cavity resections. The mean relocation error was 4 mm (range, 1-15 mm) with a standard deviation of 3.9 mm. The mean overall protocol time, from the start of 3D scanning to alignment into the resection bed, was 25.3 ± 8.9 min (range, 13.2-43.2 min). Relocation error did not differ significantly when stratified by the greatest dimension of the specimen. The mean relocation error of complex oral cavity composite specimens (maxillectomy and mandibulectomy) differed significantly from that of all other specimen types (10.7 vs 2.8 mm; p < 0.01). CONCLUSIONS This cadaveric study demonstrated the feasibility and accuracy of augmented reality for guiding re-resection of initial positive margins in head and neck cancer surgery.
Affiliation(s)
- Kavita Prasad
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Alexis Miller
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Kayvon Sharif
- School of Medicine, Vanderbilt University, Nashville, TN, USA
- Juan M Colazo
- School of Medicine, Vanderbilt University, Nashville, TN, USA
- Wenda Ye
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Fabian Necker
- Institute for Functional and Clinical Anatomy, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, Germany
- Fred Baik
- Department of Otolaryngology-Head and Neck Surgery, Stanford University, Palo Alto, CA, USA
- James S Lewis
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN, USA
- Eben Rosenthal
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Michael C Topf
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
5
Si J, Zhang C, Tian M, Jiang T, Zhang L, Yu H, Shi J, Wang X. Intraoral Condylectomy with 3D-Printed Cutting Guide versus with Surgical Navigation: An Accuracy and Effectiveness Comparison. J Clin Med 2023; 12:3816. [PMID: 37298011 DOI: 10.3390/jcm12113816]
Abstract
This study compares the accuracy and effectiveness of our novel 3D-printed titanium cutting guides with those of intraoperative surgical navigation for performing intraoral condylectomy in patients with mandibular condylar osteochondroma (OC). A total of 21 patients with mandibular condylar OC underwent intraoral condylectomy either with 3D-printed cutting guides (cutting guide group) or with surgical navigation (navigation group). Condylectomy accuracy in each group was determined by analyzing the three-dimensional (3D) discrepancies between the postoperative computed tomography (CT) images and the preoperative virtual surgical plan (VSP). Moreover, the improvement in mandibular symmetry in both groups was determined by evaluating chin deviation, chin rotation, and the mandibular asymmetry index (AI). Superimposition of the condylar osteotomy area showed that the postoperative results were very close to the VSP in both groups. The mean 3D deviation and maximum 3D deviation between the planned condylectomy and the actual result were 1.20 ± 0.60 mm and 2.36 ± 0.51 mm in the cutting guide group, and 1.33 ± 0.76 mm and 4.27 ± 1.99 mm in the navigation group. Moreover, facial symmetry was greatly improved in both groups, as indicated by significantly decreased chin deviation, chin rotation, and AI. In conclusion, our results show that both the 3D-printed cutting-guide-assisted and surgical-navigation-assisted methods of intraoral condylectomy have high accuracy and efficiency, although the cutting guide yielded relatively higher surgical accuracy. Moreover, our cutting guides are simple and user-friendly, which makes them promising for everyday clinical practice.
Affiliation(s)
- Jiawen Si
- Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, No. 639, Zhizaoju Road, Shanghai 200011, China
- Chenglong Zhang
- Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, No. 639, Zhizaoju Road, Shanghai 200011, China
- Ming Tian
- Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, No. 639, Zhizaoju Road, Shanghai 200011, China
- Tengfei Jiang
- Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, No. 639, Zhizaoju Road, Shanghai 200011, China
- Lei Zhang
- Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, No. 639, Zhizaoju Road, Shanghai 200011, China
- Hongbo Yu
- Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, No. 639, Zhizaoju Road, Shanghai 200011, China
- Jun Shi
- Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, No. 639, Zhizaoju Road, Shanghai 200011, China
- Xudong Wang
- Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, No. 639, Zhizaoju Road, Shanghai 200011, China
6
Ruggiero F, Cercenelli L, Emiliani N, Badiali G, Bevini M, Zucchelli M, Marcelli E, Tarsitano A. Preclinical Application of Augmented Reality in Pediatric Craniofacial Surgery: An Accuracy Study. J Clin Med 2023; 12:2693. [PMID: 37048777 PMCID: PMC10095377 DOI: 10.3390/jcm12072693]
Abstract
Background: Augmented reality (AR) allows virtual information to be overlaid on and integrated with the real environment. The camera of the AR device reads the real object and integrates the virtual data. AR has been widely applied to the medical and surgical sciences in recent years and has the potential to enhance intraoperative navigation. Materials and methods: In this study, the authors assess the accuracy of AR guidance when using the commercial HoloLens 2 head-mounted display (HMD) in pediatric craniofacial surgery. The authors selected fronto-orbital remodeling (FOR) as the procedure to test; specifically, the frontal osteotomy and nasal osteotomy were considered. Six people (three surgeons and three engineers) were recruited to perform the osteotomies on a 3D-printed stereolithographic model under AR guidance. By means of calibrated CAD/CAM cutting guides with different grooves, the authors measured the accuracy of the osteotomies performed, testing accuracy levels of ±1.5 mm, ±1 mm, and ±0.5 mm. Results: With the HoloLens 2, the majority of participants successfully traced the trajectories of the frontal and nasal osteotomies at an accuracy level of ±1.5 mm. For the nasal osteotomy, 80% achieved an accuracy level of ±1 mm and 61% achieved ±0.5 mm; for the frontal osteotomy, 52% achieved ±1 mm and 33% achieved ±0.5 mm. Conclusions: Despite this being an in vitro study, the authors report encouraging results for the prospective use of AR on actual patients.
Affiliation(s)
- Federica Ruggiero
- Department of Biomedical and Neuromotor Science, University of Bologna, 40138 Bologna, Italy
- Maxillo-Facial Surgery Unit, AUSL Bologna, 40124 Bologna, Italy
- Laura Cercenelli
- Laboratory of Bioengineering—eDIMES Lab, Department of Medical and Surgical Sciences (DIMEC), University of Bologna, 40138 Bologna, Italy
- Nicolas Emiliani
- Laboratory of Bioengineering—eDIMES Lab, Department of Medical and Surgical Sciences (DIMEC), University of Bologna, 40138 Bologna, Italy
- Giovanni Badiali
- Department of Biomedical and Neuromotor Science, University of Bologna, 40138 Bologna, Italy
- Oral and Maxillo-Facial Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Via Albertoni 15, 40138 Bologna, Italy
- Mirko Bevini
- Department of Biomedical and Neuromotor Science, University of Bologna, 40138 Bologna, Italy
- Oral and Maxillo-Facial Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Via Albertoni 15, 40138 Bologna, Italy
- Mino Zucchelli
- Pediatric Neurosurgery, IRCCS Istituto delle Scienze Neurologiche di Bologna, Via Altura 3, 40138 Bologna, Italy
- Emanuela Marcelli
- Laboratory of Bioengineering—eDIMES Lab, Department of Medical and Surgical Sciences (DIMEC), University of Bologna, 40138 Bologna, Italy
- Achille Tarsitano
- Department of Biomedical and Neuromotor Science, University of Bologna, 40138 Bologna, Italy
- Oral and Maxillo-Facial Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Via Albertoni 15, 40138 Bologna, Italy
7
Benmahdjoub M, Thabit A, van Veelen MLC, Niessen WJ, Wolvius EB, Walsum TV. Evaluation of AR visualization approaches for catheter insertion into the ventricle cavity. IEEE Trans Vis Comput Graph 2023; 29:2434-2445. [PMID: 37027733 DOI: 10.1109/tvcg.2023.3247042]
Abstract
Augmented reality (AR) has shown potential in computer-aided surgery. It allows for the visualization of hidden anatomical structures and assists in navigating and locating surgical instruments at the surgical site. Various modalities (devices and/or visualizations) have been used in the literature, but few studies have investigated the adequacy or superiority of one modality over another. For instance, the use of optical see-through (OST) HMDs has not always been scientifically justified. Our goal is to compare various visualization modalities for catheter insertion in external ventricular drain and ventricular shunt procedures. We investigate two AR approaches: (1) 2D approaches, consisting of a smartphone and a 2D window visualized through an OST device (Microsoft HoloLens 2), and (2) 3D approaches, consisting of a fully aligned patient model and a model that is adjacent to the patient and rotationally aligned, both using an OST device. Thirty-two participants joined this study. For each visualization approach, participants were asked to perform five insertions, after which they filled in NASA-TLX and SUS forms. Moreover, the position and orientation of the needle with respect to the plan were collected during the insertion task. The results show that participants achieved significantly better insertion performance under the 3D visualizations, and the NASA-TLX and SUS forms reflected the participants' preference for these approaches over the 2D approaches.
8
Peng X, Acero J, Yu GY. Application and prospects of computer-assisted surgery in oral and maxillofacial oncology. Sci Bull (Beijing) 2023; 68:236-239. [PMID: 36710150 DOI: 10.1016/j.scib.2023.01.030]
Affiliation(s)
- Xin Peng
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, National Clinical Research Center for Oral Diseases, National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China
- Julio Acero
- Department of Oral and Maxillofacial Surgery, Ramón y Cajal and Puerta de Hierro University Hospitals, University of Alcala, Ramón y Cajal Research Institute (IRYCIS), Madrid 28034, Spain.
- Guang-Yan Yu
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, National Clinical Research Center for Oral Diseases, National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China.
9
Zary N, Eysenbach G, Van Doormaal TPC, Ruurda JP, Van der Kaaij NP, De Heer LM. Mixed Reality in Modern Surgical and Interventional Practice: Narrative Review of the Literature. JMIR Serious Games 2023; 11:e41297. [PMID: 36607711 PMCID: PMC9947976 DOI: 10.2196/41297]
Abstract
BACKGROUND Mixed reality (MR) and its potential applications have gained increasing interest within the medical community over recent years. The ability to integrate virtual objects into a real-world environment within a single video-see-through display is a topic that sparks the imagination. Given these characteristics, MR could facilitate preoperative and preinterventional planning, provide intraoperative and intrainterventional guidance, and aid education and training, thereby improving the skills and merits of surgeons and residents alike. OBJECTIVE In this narrative review, we provide a broad overview of the different applications of MR across the entire spectrum of surgical and interventional practice and elucidate potential future directions. METHODS A targeted literature search of the PubMed, Embase, and Cochrane databases was performed regarding the application of MR within surgical and interventional practice. Studies were included if they met the criteria for technological readiness level 5 and, as such, had been validated in a relevant environment. RESULTS A total of 57 studies were included and divided into studies regarding preoperative and preinterventional planning, intraoperative and intrainterventional guidance, and training and education. CONCLUSIONS The overall experience with MR is positive. The main benefits of MR seem to be related to improved efficiency. Limitations primarily seem to be related to constraints associated with head-mounted displays. Future directions should be aimed at improving head-mounted display technology, incorporating MR within surgical microscopes and robots, and designing trials to prove superiority.
Affiliation(s)
- Tristan P C Van Doormaal
- University Medical Center Utrecht, Utrecht, Netherlands
- University Hospital Zurich, Zurich, Switzerland
10
Koyama Y, Sugahara K, Koyachi M, Tachizawa K, Iwasaki A, Wakita I, Nishiyama A, Matsunaga S, Katakura A. Mixed reality for extraction of maxillary mesiodens. Maxillofac Plast Reconstr Surg 2023; 45:1. [PMID: 36602618 PMCID: PMC9816364 DOI: 10.1186/s40902-022-00370-6]
Abstract
BACKGROUND Mesiodentes are the most common supernumerary teeth. The cause is not fully understood, although genetic factors and proliferation of the dental lamina have been implicated. Mesiodentes can cause delayed or ectopic eruption of permanent incisors, which can further alter occlusion and appearance. Careful attention should be paid to the position and direction of mesiodentes because of possible damage to adjacent roots in the permanent dentition period, errant extraction in the deciduous and mixed dentition periods, and damage to the permanent tooth embryo. To avoid these complications, we applied mixed reality (MR) technology using the HoloLens® (Microsoft, California). In this study, we report three cases of mesiodens extraction under general anesthesia using MR technology. RESULTS The patients ranged in age from 6 to 11 years; all three were boys, and the direction of eruption was inverted in all cases. The extraction approach was palatal in two cases and labial in one case. The average operative time was 32 min, and bleeding was minimal in all cases. No intraoperative or postoperative complications occurred. An image was shared preoperatively with all the surgeons using an actual situation model. Three surgeons used the Microsoft HoloLens® during surgery, shared the MR view, and operated while superimposing the application image on the surgical field. CONCLUSIONS The procedure was performed safely; further development of MR surgery support systems is suggested.
Affiliation(s)
- Yu Koyama
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
- Keisuke Sugahara
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
- Oral Health Science Center, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
- Masahide Koyachi
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
- Kotaro Tachizawa
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
- Akira Iwasaki
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
- Ichiro Wakita
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
- Akihiro Nishiyama
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
- Satoru Matsunaga
- Department of Anatomy, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
- Akira Katakura
- Department of Oral Pathobiological Science and Surgery, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
- Oral Health Science Center, Tokyo Dental College, 2-9-18 Kanda Misaki-Cho, Chiyoda-Ku, Tokyo, Japan
11
Antonelli M, Lucignani M, Parrillo C, Grassi F, Figà Talamanca L, Rossi Espagnet MC, Gandolfo C, Secinaro A, Pasquini L, De Benedictis A, Placidi E, De Palma L, Marras CE, Marasi A, Napolitano A. Magnetic resonance imaging-based neurosurgical planning on HoloLens 2: A feasibility study in a paediatric hospital. Digit Health 2023; 9:20552076231214066. [PMID: 38025111] [PMCID: PMC10656794] [DOI: 10.1177/20552076231214066]
Abstract
Objective The goal of this work is to show how to implement a mixed reality application (app) for neurosurgery planning based on neuroimaging data, highlighting the strengths and weaknesses of its design. Methods Our workflow explains how to handle neuroimaging data, including how to load morphological, functional, and diffusion tensor imaging data into a mixed reality environment, thus providing a first guide of this kind. Brain magnetic resonance imaging data from a paediatric patient were acquired using a 3 T Siemens Magnetom Skyra scanner. These raw data underwent dedicated software pre-processing and were subsequently transformed to ensure seamless integration with the mixed reality app. We then created three-dimensional models of brain structures and the mixed reality environment using the Unity™ engine together with the Microsoft® HoloLens 2™ device. To evaluate the app, we administered a questionnaire to four neurosurgeons, and we used the Unity Performance Profiler to collect data on the performance of a user session. Results The app's interactive features, such as rotating, scaling, and moving models and browsing through menus, scored highly in the questionnaire, and the collected performance data suggest that their use can be improved further. Average questionnaire scores were high, so the overall experience of using our mixed reality app was positive. Conclusion We have successfully created a useful and easy-to-use mixed reality app for neuroimaging data, laying the foundation for future clinical uses, as more models and data derived from various biomedical images can be imported.
Affiliation(s)
- Martina Antonelli
- Medical Physics Department, Bambino Gesù Children's Hospital, IRCCS, Roma, Italy
- Martina Lucignani
- Medical Physics Department, Bambino Gesù Children's Hospital, IRCCS, Roma, Italy
- Chiara Parrillo
- Medical Physics Department, Bambino Gesù Children's Hospital, IRCCS, Roma, Italy
- Francesco Grassi
- Medical Physics Department, Bambino Gesù Children's Hospital, IRCCS, Roma, Italy
- Lorenzo Figà Talamanca
- Neuroradiology Unit, Imaging Department, Bambino Gesù Children's Hospital, IRCCS, Roma, Italy
- Maria C Rossi Espagnet
- Neuroradiology Unit, Imaging Department, Bambino Gesù Children's Hospital, IRCCS, Roma, Italy
- Department of Neuroscience, Mental Health and Sensory Organs (NESMOS), Sant'Andrea Hospital, Sapienza University, Roma, Italy
- Carlo Gandolfo
- Neuroradiology Unit, Imaging Department, Bambino Gesù Children's Hospital, IRCCS, Roma, Italy
- Aurelio Secinaro
- Advanced Cardiovascular Imaging Unit, Department of Imaging, Bambino Gesù Children's Hospital, IRCCS, Roma, Italy
- Luca Pasquini
- Neuroradiology Unit, Imaging Department, Bambino Gesù Children's Hospital, IRCCS, Roma, Italy
- Department of Neuroscience, Mental Health and Sensory Organs (NESMOS), Sant'Andrea Hospital, Sapienza University, Roma, Italy
- Alessandro De Benedictis
- Pediatric Neurosurgery Unit, Department of Neuroscience and Neurorehabilitation, Bambino Gesù Children's Hospital, IRCCS, Roma, Italy
- Elisa Placidi
- Medical Physics UOC, Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Roma, Italy
- Luca De Palma
- Rare and Complex Epilepsies, Department of Neuroscience and Neurorehabilitation, Bambino Gesù Children's Hospital, IRCCS, Roma, Italy
- Carlo E Marras
- Pediatric Neurosurgery Unit, Department of Neuroscience and Neurorehabilitation, Bambino Gesù Children's Hospital, IRCCS, Roma, Italy
- Alessandra Marasi
- Pediatric Neurosurgery Unit, Department of Neuroscience and Neurorehabilitation, Bambino Gesù Children's Hospital, IRCCS, Roma, Italy
- Antonio Napolitano
- Medical Physics Department, Bambino Gesù Children's Hospital, IRCCS, Roma, Italy
12
Advances and Innovations in Ablative Head and Neck Oncologic Surgery Using Mixed Reality Technologies in Personalized Medicine. J Clin Med 2022; 11:jcm11164767. [PMID: 36013006] [PMCID: PMC9410374] [DOI: 10.3390/jcm11164767]
Abstract
The benefit of computer-assisted planning in head and neck ablative and reconstructive surgery has been extensively documented over the last decade, and the approach has been shown to make surgical procedures safer. In the treatment of head and neck cancer, computer-assisted surgery can be used to visualize and estimate the location and extent of the tumor mass, and some software tools now even allow the structures of interest to be visualized in a mixed reality environment. However, precise integration of mixed reality systems into daily clinical routine remains a challenge. To date, this technology is not yet fully integrated into clinical settings such as the tumor board, surgical planning for head and neck tumors, or medical and surgical education. As a consequence, the handling of these systems is still experimental, and decision-making based on the presented data is not yet widely practiced. The aim of this paper is to present a novel, user-friendly 3D planning and mixed reality software and its potential application in ablative and reconstructive head and neck surgery.
13
Doughty M, Ghugre NR, Wright GA. Augmenting Performance: A Systematic Review of Optical See-Through Head-Mounted Displays in Surgery. J Imaging 2022; 8:jimaging8070203. [PMID: 35877647] [PMCID: PMC9318659] [DOI: 10.3390/jimaging8070203]
Abstract
We conducted a systematic review of recent literature to understand the current challenges in the use of optical see-through head-mounted displays (OST-HMDs) for augmented reality (AR) assisted surgery. Using Google Scholar, 57 relevant articles published from 1 January 2021 through 18 March 2022 were identified. Selected articles were then categorized according to a taxonomy describing the required components of an effective AR-based navigation system: data, processing, overlay, view, and validation. Our findings indicated a focus on orthopedic (n=20) and maxillofacial (n=8) surgeries. For preoperative input data, computed tomography (CT) (n=34) and surface-rendered models (n=39) were most commonly used to represent image information. Virtual content was most often superimposed directly on the target site (n=47); this was achieved by surface tracking of fiducials (n=30), external tracking (n=16), or manual placement (n=11). Microsoft HoloLens devices (n=24 in 2021, n=7 in 2022) were the most frequently used OST-HMDs, and gestures and/or voice (n=32) served as the preferred interaction paradigm. Though promising system accuracy on the order of 2–5 mm has been demonstrated in phantom models, several human-factors and technical challenges (perception, ease of use, context, interaction, and occlusion) remain to be addressed prior to widespread adoption of OST-HMD-led surgical navigation.
Affiliation(s)
- Mitchell Doughty
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada
- Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Nilesh R. Ghugre
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada
- Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Graham A. Wright
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada
- Schulich Heart Program, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada