1
Begagić E, Bečulić H, Pugonja R, Memić Z, Balogun S, Džidić-Krivić A, Milanović E, Salković N, Nuhović A, Skomorac R, Sefo H, Pojskić M. Augmented Reality Integration in Skull Base Neurosurgery: A Systematic Review. Medicina (Kaunas) 2024; 60:335. PMID: 38399622; PMCID: PMC10889940; DOI: 10.3390/medicina60020335.
Abstract
Background and Objectives: To investigate the role of augmented reality (AR) in skull base (SB) neurosurgery. Materials and Methods: Following PRISMA methodology, the PubMed and Scopus databases were searched for data on AR integration in SB surgery. Results: Most of the 19 included studies were conducted in the United States (42.1%) and published within the last five years (77.8%). Studies used phantom skull models (n = 6; 31.6%), human cadavers (n = 3; 15.8%), or human patients (n = 10; 52.6%). The surgical modality was specified in 18 of the 19 studies, with microscopic surgery predominant (n = 10; 52.6%). Most studies used CT alone as the data source (n = 9; 47.4%), and optical tracking was the most common tracking modality (n = 9; 47.4%). The Target Registration Error (TRE) ranged from 0.55 to 10.62 mm. Conclusion: Despite variation in TRE values, the studies reported successful outcomes and minimal complications. Challenges such as device practicality and data security were acknowledged, but the use of low-cost AR devices suggests broader feasibility.
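Target Registration Error, the accuracy metric quoted throughout these studies, is at its core the (mean) Euclidean distance in millimeters between where the navigation overlay places a target and where it truly lies. A minimal illustrative sketch of the metric, with hypothetical coordinates rather than data from any reviewed study:

```python
import numpy as np

def target_registration_error(measured, truth):
    """Mean Euclidean distance (mm) between navigated target positions
    and their ground-truth positions."""
    measured = np.asarray(measured, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return float(np.mean(np.linalg.norm(measured - truth, axis=1)))

# Hypothetical coordinates (mm) of two targets: overlay vs. ground truth
overlay = [[10.2, 5.1, 3.0], [22.0, 14.8, 7.5]]
ground_truth = [[10.0, 5.0, 3.0], [22.5, 15.0, 7.0]]
tre = target_registration_error(overlay, ground_truth)  # ≈ 0.48 mm
```

The 0.55-10.62 mm range reported above reflects this kind of computation over real targets, measured at points away from the registration fiducials.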
Affiliation(s)
- Emir Begagić
- Department of General Medicine, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Hakija Bečulić
- Department of Neurosurgery, Cantonal Hospital Zenica, Crkvice 67, 72000 Zenica, Bosnia and Herzegovina
- Department of Anatomy, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Ragib Pugonja
- Department of Anatomy, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Zlatan Memić
- Department of General Medicine, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Simon Balogun
- Division of Neurosurgery, Department of Surgery, Obafemi Awolowo University Teaching Hospitals Complex, Ilesa Road PMB 5538, Ile-Ife 220282, Nigeria
- Amina Džidić-Krivić
- Department of Neurology, Cantonal Hospital Zenica, Crkvice 67, 72000 Zenica, Bosnia and Herzegovina
- Elma Milanović
- Neurology Clinic, Clinical Center University of Sarajevo, Bolnička 25, 71000 Sarajevo, Bosnia and Herzegovina
- Naida Salković
- Department of General Medicine, School of Medicine, University of Tuzla, Univerzitetska 1, 75000 Tuzla, Bosnia and Herzegovina
- Adem Nuhović
- Department of General Medicine, School of Medicine, University of Sarajevo, Univerzitetska 1, 71000 Sarajevo, Bosnia and Herzegovina
- Rasim Skomorac
- Department of Neurosurgery, Cantonal Hospital Zenica, Crkvice 67, 72000 Zenica, Bosnia and Herzegovina
- Department of Surgery, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Haso Sefo
- Neurosurgery Clinic, Clinical Center University of Sarajevo, Bolnička 25, 71000 Sarajevo, Bosnia and Herzegovina
- Mirza Pojskić
- Department of Neurosurgery, University Hospital Marburg, Baldingerstr., 35033 Marburg, Germany
2
Kos TM, Colombo E, Bartels LW, Robe PA, van Doormaal TPC. Evaluation Metrics for Augmented Reality in Neurosurgical Preoperative Planning, Surgical Navigation, and Surgical Treatment Guidance: A Systematic Review. Oper Neurosurg (Hagerstown) 2023. PMID: 38146941; PMCID: PMC11008635; DOI: 10.1227/ons.0000000000001009.
Abstract
BACKGROUND AND OBJECTIVE Recent years have shown an advancement in the development of augmented reality (AR) technologies for preoperative visualization, surgical navigation, and intraoperative guidance for neurosurgery. However, proving added value for AR in clinical practice is challenging, partly because of a lack of standardized evaluation metrics. We performed a systematic review to provide an overview of the reported evaluation metrics for AR technologies in neurosurgical practice and to establish a foundation for assessment and comparison of such technologies. METHODS PubMed, Embase, and Cochrane were searched systematically for publications on assessment of AR for cranial neurosurgery on September 22, 2022. The findings were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. RESULTS The systematic search yielded 830 publications; 114 were screened full text, and 80 were included for analysis. Among the included studies, 5% dealt with preoperative visualization using AR, with user perception as the most frequently reported metric. The majority (75%) researched AR technology for surgical navigation, with registration accuracy, clinical outcome, and time measurements as the most frequently reported metrics. In addition, 20% studied the use of AR for intraoperative guidance, with registration accuracy, task outcome, and user perception as the most frequently reported metrics. CONCLUSION For quality benchmarking of AR technologies in neurosurgery, evaluation metrics should be specific to the risk profile and clinical objectives of the technology. A key focus should be on using validated questionnaires to assess user perception; ensuring clear and unambiguous reporting of registration accuracy, precision, robustness, and system stability; and accurately measuring task performance in clinical studies. We provide an overview suggesting which evaluation metrics to use per AR application and innovation phase, aiming to improve the assessment of the added value of AR for neurosurgical practice and to facilitate its integration into the clinical workflow.
Affiliation(s)
- Tessa M. Kos
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Elisa Colombo
- Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland
- L. Wilbert Bartels
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Pierre A. Robe
- Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Tristan P. C. van Doormaal
- Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland
- Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
3
Gu W, Knopf J, Cast J, Higgins LD, Knopf D, Unberath M. Nail it! Vision-based drift correction for accurate mixed reality surgical guidance. Int J Comput Assist Radiol Surg 2023. PMID: 37231201; DOI: 10.1007/s11548-023-02950-x.
Abstract
PURPOSE Mixed reality-guided surgery through head-mounted displays (HMDs) is gaining interest among surgeons. However, precise tracking of HMDs relative to the surgical environment is crucial for successful outcomes. Without fiducial markers, spatial tracking of the HMD suffers from millimeter- to centimeter-scale drift, resulting in misaligned visualization of registered overlays. Methods and workflows capable of automatically correcting for drift after patient registration are essential to ensuring accurate execution of surgical plans. METHODS We present a mixed reality surgical navigation workflow that continuously corrects for drift after patient registration using only image-based methods. We demonstrate its feasibility and capabilities using the Microsoft HoloLens on glenoid pin placement in total shoulder arthroplasty. A phantom study was conducted involving five users, each placing pins on six glenoids of different deformity, followed by a cadaver study performed by an attending surgeon. RESULTS In both studies, all users were satisfied with the registration overlay before drilling the pin. Postoperative CT scans showed, on average, 1.5 mm error in entry-point deviation and 2.4° error in pin orientation in the phantom study, and 2.5 mm and 1.5° in the cadaver study. A trained user takes around 90 s to complete the workflow. Our method also outperformed the HoloLens's native tracking in drift correction. CONCLUSION Our findings suggest that image-based drift correction can provide mixed reality environments precisely aligned with patient anatomy, enabling pin placement with consistently high accuracy. These techniques constitute a next step toward purely image-based mixed reality surgical guidance, without requiring patient markers or external tracking hardware.
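The two accuracy figures reported here, entry-point deviation (mm) and pin-orientation error (degrees), follow directly from comparing the planned and achieved pin geometry. A sketch under the assumption that a pin is described by an entry point and a direction vector (illustrative values, not the authors' code):

```python
import numpy as np

def pin_placement_errors(entry_planned, entry_actual, axis_planned, axis_actual):
    """Entry-point deviation (mm) and orientation error (degrees) between
    a planned and an achieved pin trajectory."""
    deviation = float(np.linalg.norm(np.subtract(entry_actual, entry_planned)))
    u = np.asarray(axis_planned, dtype=float)
    v = np.asarray(axis_actual, dtype=float)
    # Angle between the two pin axes via the normalized dot product
    cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    angle = float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
    return deviation, angle

# Hypothetical planned vs. postoperative-CT geometry (coordinates in mm)
deviation, angle = pin_placement_errors(
    entry_planned=[0.0, 0.0, 0.0], entry_actual=[1.2, 0.9, 0.0],
    axis_planned=[0.0, 0.0, 1.0], axis_actual=[0.0, 1.0, 1.0])
```

The `clip` guards against floating-point dot products marginally outside [-1, 1], which would otherwise make `arccos` return NaN for nearly parallel axes.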
Affiliation(s)
- Wenhao Gu
- Johns Hopkins University, Baltimore, MD, USA
- John Cast
- Johns Hopkins University, Baltimore, MD, USA
- David Knopf
- Arthrex Inc., 1 Arthrex Way, Naples, FL, USA
4
Bounajem MT, Cameron B, Sorensen K, Parr R, Gibby W, Prashant G, Evans JJ, Karsy M. Improved Accuracy and Lowered Learning Curve of Ventricular Targeting Using Augmented Reality-Phantom and Cadaveric Model Testing. Neurosurgery 2023; 92:884-891. PMID: 36562619; DOI: 10.1227/neu.0000000000002293.
Abstract
BACKGROUND Augmented reality (AR) has demonstrated significant potential in neurosurgical cranial, spine, and teaching applications. External ventricular drain (EVD) placement remains a common procedure, but with error rates in targeting between 10% and 40%. OBJECTIVE To evaluate the Novarad VisAR guidance system for the placement of EVDs in phantom and cadaveric models. METHODS Two synthetic ventricular phantom models and a third cadaver model underwent computerized tomography imaging and registration with the VisAR system (Novarad). Root mean square (RMS) error, angular error (γ), and Euclidean distance were measured by multiple methods for various standard EVD placements. RESULTS Computerized tomography measurements on a phantom model (0.5-mm targets) showed a mean Euclidean distance error of 1.20 ± 0.98 mm and γ of 1.25° ± 1.02°. Eight participants placed EVDs in lateral and occipital burr holes using VisAR in a second phantom anatomic ventricular model (mean RMS: 3.9 ± 1.8 mm, γ: 3.95° ± 1.78°). There were no statistically significant differences in accuracy by postgraduate year level, prior AR experience, prior EVD experience, or experience with video games (P > .05). In comparing EVDs placed with anatomic landmarks vs VisAR navigation in a cadaver, VisAR demonstrated significantly better RMS error and γ, 7.47 ± 0.94 mm and 7.12° ± 0.97°, respectively (P ≤ .05). CONCLUSION The novel VisAR system resulted in accurate placement of EVDs with a rapid learning curve, which may improve clinical treatment and patient safety. Future applications of VisAR can be expanded to other cranial procedures.
Affiliation(s)
- Michael T Bounajem
- Department of Neurosurgery, Clinical Neurosciences Center, University of Utah, Salt Lake City, Utah, USA
- Wendell Gibby
- Novarad, Provo, Utah, USA
- Department of Radiology, University of California-San Diego, San Diego, California, USA
- Giyarpuram Prashant
- Department of Neurosurgery, Thomas Jefferson University Hospital, Philadelphia, Pennsylvania, USA
- James J Evans
- Department of Neurosurgery, Thomas Jefferson University Hospital, Philadelphia, Pennsylvania, USA
- Michael Karsy
- Department of Neurosurgery, Clinical Neurosciences Center, University of Utah, Salt Lake City, Utah, USA
5
Gsaxner C, Li J, Pepe A, Jin Y, Kleesiek J, Schmalstieg D, Egger J. The HoloLens in medicine: A systematic review and taxonomy. Med Image Anal 2023; 85:102757. PMID: 36706637; DOI: 10.1016/j.media.2023.102757.
Abstract
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main driver of the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 until 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore and SpringerLink databases. We propose a new taxonomy covering use case, technical methodology for registration and tracking, data sources, visualization, and validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, where AR-enhanced medical simulators emerge as a promising technology. While concerns about human-computer interaction, usability and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications, yet only a few propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature, paving the way for novel, innovative directions and translation into routine medical use.
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jianning Li
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, 311121 Zhejiang, China
- Jens Kleesiek
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; BioTechMed, 8010 Graz, Austria; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
6
Ma L, Huang T, Wang J, Liao H. Visualization, registration and tracking techniques for augmented reality guided surgery: a review. Phys Med Biol 2023; 68. PMID: 36580681; DOI: 10.1088/1361-6560/acaf23.
Abstract
Augmented reality (AR) surgical navigation has developed rapidly in recent years. This paper reviews and analyzes the visualization, registration, and tracking techniques used in AR surgical navigation systems, as well as the application of these AR systems in different surgical fields. AR visualization falls into two categories, in situ visualization and non-in situ visualization, and the rendered content varies widely. The registration methods include manual registration, point-based registration, surface registration, marker-based registration, and calibration-based registration. The tracking methods consist of self-localization, tracking with integrated cameras, external tracking, and hybrid tracking. Moreover, we describe the applications of AR in surgical fields. However, most AR applications have been evaluated through model and animal experiments, with relatively few clinical experiments, indicating that current AR navigation methods are still at an early stage of development. Finally, we summarize the contributions and challenges of AR in the surgical fields, as well as future development trends. Although AR-guided surgery has not yet reached clinical maturity, we believe that if the current development trend continues, it will soon reveal its clinical utility.
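Among the registration methods listed, point-based registration has a well-known closed-form core: given paired fiducial points in image space and patient space, the least-squares rigid transform follows from an SVD of the cross-covariance matrix (the Kabsch/Umeyama construction). A generic sketch, not specific to any system reviewed, with hypothetical fiducial coordinates:

```python
import numpy as np

def point_based_registration(source, target):
    """Closed-form least-squares rigid transform (R, t) mapping source
    fiducials onto target fiducials (Kabsch/Umeyama, no scaling)."""
    src = np.asarray(source, dtype=float)
    tgt = np.asarray(target, dtype=float)
    src_c = src - src.mean(axis=0)
    tgt_c = tgt - tgt.mean(axis=0)
    # Cross-covariance, then SVD; the sign term guards against reflections
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical paired fiducials (mm): image space vs. patient space
image_pts = [[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]]
patient_pts = [[5, 5, 5], [5, 15, 5], [-5, 5, 5], [5, 5, 15]]
R, t = point_based_registration(image_pts, patient_pts)
```

At least three non-collinear fiducial pairs are needed for a unique solution; residual misfit at held-out points is what the TRE figures in this listing quantify.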
Affiliation(s)
- Longfei Ma
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Tianqi Huang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Jie Wang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
7
Chen JX, Yu SE, Ding AS, Lee DJ, Welling DB, Carey JP, Gray ST, Creighton FX. Augmented Reality in Otology/Neurotology: A Scoping Review with Implications for Practice and Education. Laryngoscope 2022. DOI: 10.1002/lary.30515.
Affiliation(s)
- Jenny X. Chen
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Andy S. Ding
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Daniel J. Lee
- Department of Otolaryngology–Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts, USA
- Department of Otolaryngology–Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- D. Brad Welling
- Department of Otolaryngology–Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts, USA
- Department of Otolaryngology–Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- John P. Carey
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Stacey T. Gray
- Department of Otolaryngology–Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts, USA
- Department of Otolaryngology–Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Francis X. Creighton
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
8
Ravindra VM, Tadlock MD, Gurney JM, Kraus KL, Dengler BA, Gordon J, Cooke J, Porensky P, Belverud S, Milton JO, Cardoso M, Carroll CP, Tomlin J, Champagne R, Bell RS, Viers AG, Ikeda DS. Attitudes Toward Neurosurgery Education for the Nonneurosurgeon: A Survey Study and Critical Analysis of U.S. Military Training Techniques and Future Prospects. World Neurosurg 2022; 167:e1335-e1344. PMID: 36103986; DOI: 10.1016/j.wneu.2022.09.033.
Abstract
BACKGROUND The U.S. military requires medical readiness to support forward-deployed combat operations. Because time and distance to neurosurgical capabilities vary within the deployed trauma system, nonneurosurgeons are required to perform emergent cranial procedures in select cases. It is unclear whether these surgeons have sufficient training in these procedures. METHODS This quality-improvement study involved a voluntary, anonymized specialty-specific survey of active-duty surgeons about their experience and attitudes toward U.S. military emergency neurosurgical training. RESULTS Survey responses were received from 104 general surgeons and 26 neurosurgeons. Among general surgeons, 81% have deployed and 53% received training in emergency neurosurgical procedures before deployment. Only 16% of general surgeons reported participating in craniotomy/craniectomy procedures in the last year. Nine general surgeons reported performing an emergency neurosurgical procedure while on deployment/humanitarian mission, and 87% of respondents expressed interest in further predeployment emergency neurosurgery training. Among neurosurgeons, 81% had participated in training nonneurosurgeons and 73% believe that more comprehensive training for nonneurosurgeons before deployment is needed. General surgeons proposed lower procedure minimums for competency for external ventricular drain placement and craniotomy/craniectomy than did neurosurgeons. Only 37% of general surgeons had used mixed/augmented reality in any capacity previously; for combat procedures, most (90%) would prefer using synchronous supervision via high-fidelity video teleconferencing over mixed reality. CONCLUSIONS These survey results show a gap in readiness for neurosurgical procedures for forward-deployed general surgeons. Capitalizing on capabilities such as mixed/augmented reality would be a force multiplier and a potential means of improving neurosurgical capabilities in the forward-deployed environments.
Affiliation(s)
- Vijay M Ravindra
- Department of Neurosurgery, Bioskills Training Center, Naval Medical Readiness Training Command, San Diego, California, USA; Department of Neurosurgery, University of California San Diego, San Diego, California, USA; Department of Neurosurgery, University of Utah, Salt Lake City, Utah, USA
- Matthew D Tadlock
- Department of Surgery, Bioskills Training Center, Naval Medical Readiness Training Command, San Diego, California, USA; Bioskills Training Center, Naval Medical Readiness Training Command, San Diego, California, USA; 1st Medical Battalion, 1st Marine Logistics Group, Camp Pendleton, California, USA
- Jennifer M Gurney
- U.S. Army Institute of Surgical Research, Joint Base San Antonio, San Antonio, Texas, USA
- Kristin L Kraus
- Department of Neurosurgery, University of Utah, Salt Lake City, Utah, USA
- Bradley A Dengler
- Department of Neurosurgery, Walter Reed National Military Medical Center, Bethesda, Maryland, USA
- Jennifer Gordon
- Department of Surgery, U.S. Naval Hospital Okinawa, Okinawa, Japan
- Jonathon Cooke
- Department of Neurosurgery, Bioskills Training Center, Naval Medical Readiness Training Command, San Diego, California, USA
- Paul Porensky
- Department of Neurosurgery, Bioskills Training Center, Naval Medical Readiness Training Command, San Diego, California, USA
- Shawn Belverud
- Department of Neurosurgery, Bioskills Training Center, Naval Medical Readiness Training Command, San Diego, California, USA
- Jason O Milton
- Department of Neurosurgery, Bioskills Training Center, Naval Medical Readiness Training Command, San Diego, California, USA
- Mario Cardoso
- Department of Brain and Spine Surgery, Naval Medical Center, Portsmouth, Virginia, USA
- Christopher P Carroll
- Department of Brain and Spine Surgery, Naval Medical Center, Portsmouth, Virginia, USA
- Jeffrey Tomlin
- Department of Brain and Spine Surgery, Naval Medical Center, Portsmouth, Virginia, USA
- Roland Champagne
- Bioskills Training Center, Naval Medical Readiness Training Command, San Diego, California, USA
- Randy S Bell
- Department of Neurosurgery, Walter Reed National Military Medical Center, Bethesda, Maryland, USA
- Angela G Viers
- Department of Surgery, U.S. Naval Hospital Okinawa, Okinawa, Japan
- Daniel S Ikeda
- Department of Neurosurgery, Walter Reed National Military Medical Center, Bethesda, Maryland, USA
9
Ding AS, Lu A, Li Z, Galaiya D, Ishii M, Siewerdsen JH, Taylor RH, Creighton FX. Statistical Shape Model of the Temporal Bone Using Segmentation Propagation. Otol Neurotol 2022; 43:e679-e687. PMID: 35761465; PMCID: PMC10072910; DOI: 10.1097/mao.0000000000003554.
Abstract
HYPOTHESIS Automated image registration techniques can successfully determine anatomical variation in human temporal bones with statistical shape modeling. BACKGROUND There is a lack of knowledge about inter-patient anatomical variation in the temporal bone. Statistical shape models (SSMs) provide a powerful method for quantifying variation of anatomical structures in medical images but are time-intensive to manually develop. This study presents SSMs of temporal bone anatomy using automated image-registration techniques. METHODS Fifty-three cone-beam temporal bone CTs were included for SSM generation. The malleus, incus, stapes, bony labyrinth, and facial nerve were automatically segmented using 3D Slicer and a template-based segmentation propagation technique. Segmentations were then used to construct SSMs using MATLAB. The first three principal components of each SSM were analyzed to describe shape variation. RESULTS Principal component analysis of middle and inner ear structures revealed novel modes of anatomical variation. The first three principal components for the malleus represented variability in manubrium length (mean: 4.47 mm; ±2-SDs: 4.03-5.03 mm) and rotation about its long axis (±2-SDs: -1.6° to 1.8° posteriorly). The facial nerve exhibits variability in first and second genu angles. The bony labyrinth varies in the angle between the posterior and superior canals (mean: 88.9°; ±2-SDs: 83.7°-95.7°) and cochlear orientation (±2-SDs: -4.0° to 3.0° anterolaterally). CONCLUSIONS SSMs of temporal bone anatomy can inform surgeons on clinically relevant inter-patient variability. Anatomical variation elucidated by these models can provide novel insight into function and pathophysiology. These models also allow further investigation of anatomical variation based on age, BMI, sex, and geographical location.
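Once segmentation propagation has produced landmark sets in point-to-point correspondence, SSM construction reduces to principal component analysis over the flattened point coordinates. A schematic sketch with synthetic shapes (the study used MATLAB; this is an illustrative NumPy equivalent, not the authors' pipeline):

```python
import numpy as np

def build_ssm(shapes, n_modes=3):
    """PCA-based statistical shape model from corresponded landmark sets.

    shapes: (n_subjects, n_points, 3) array already in point-to-point
    correspondence. Returns the mean shape (flattened), the leading
    modes of variation, and the variance captured by each mode."""
    X = np.asarray(shapes, dtype=float).reshape(len(shapes), -1)
    mean = X.mean(axis=0)
    # SVD of the centered data matrix yields the principal components
    _, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    variance = s ** 2 / (len(shapes) - 1)
    return mean, Vt[:n_modes], variance[:n_modes]

# Synthetic example: five 2-landmark shapes that vary along a single axis
direction = np.zeros(6)
direction[0] = 1.0
shapes = np.array([c * direction for c in (-2.0, -1.0, 0.0, 1.0, 2.0)]).reshape(5, 2, 3)
mean, modes, var = build_ssm(shapes, n_modes=2)  # first mode recovers `direction`
```

Sampling `mean + coefficient * modes[0]` within about ±2 standard deviations (±2·sqrt(var[0])) generates plausible shapes, which is how ranges like the ±2-SD manubrium lengths above are read off a fitted model.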
Affiliation(s)
- Andy S. Ding
- Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Department of Biomedical Engineering, Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland
- Alexander Lu
- Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Department of Biomedical Engineering, Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland
- Zhaoshuo Li
- Department of Computer Science, Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland
- Deepa Galaiya
- Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Masaru Ishii
- Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Jeffrey H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland
- Department of Computer Science, Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland
- Russell H. Taylor
- Department of Computer Science, Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland
- Francis X. Creighton
- Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
10
Bartella AK, Hoshal SG, Lethaus B, Strong EB. Computer assisted skull base surgery: a contemporary review. Innov Surg Sci 2022. DOI: 10.1515/iss-2021-0020.
Abstract
Skull base surgery has evolved significantly since Harvey Cushing's first descriptions in the early 1900s. Computer aided surgery (CAS) applications continue to expand; they include virtual surgical planning, augmented and virtual reality, 3D printing of models/cutting guides/implants, surgical navigation, and intraoperative imaging. The authors review the current skull base CAS literature and propose a computer aided surgical workflow categorizing these applications into three phases: 1) virtual planning, 2) surgical execution, and 3) intraoperative verification.
Affiliation(s)
- Steven G. Hoshal
- Department of Otolaryngology – Head and Neck Surgery, University of California, Davis, Sacramento, CA, USA
- Bernd Lethaus
- Department of Oral and Maxillofacial Surgery, Leipzig University, Leipzig, Germany
- E. Bradley Strong
- Department of Otolaryngology – Head and Neck Surgery, University of California, Davis, Sacramento, CA, USA
11
Microscope-Based Augmented Reality with Intraoperative Computed Tomography-Based Navigation for Resection of Skull Base Meningiomas in Consecutive Series of 39 Patients. Cancers (Basel) 2022; 14:2302. PMID: 35565431; PMCID: PMC9101634; DOI: 10.3390/cancers14092302.
Abstract
Background: The aim of surgery for skull base meningiomas is maximal resection with minimal damage to the involved cranial nerves and cerebral vessels; thus, technologies that improve orientation in the surgical field, such as neuronavigation and augmented reality (AR), are of interest. Methods: Included in the study were 39 consecutive patients (13 male, 26 female, mean age 64.08 ± 13.5 years) who underwent surgery for skull base meningiomas using microscope-based AR and automatic patient registration with intraoperative computed tomography (iCT). Results: Most common were olfactory meningiomas (6), cavernous sinus (6) and clinoidal (6) meningiomas, meningiomas of the medial (5) and lateral (5) sphenoid wing and meningiomas of the sphenoidal plane (5), followed by suprasellar (4), falcine (1) and middle fossa (1) meningiomas. Gross total resection (GTR) was achieved in 26 patients (66.6%). Automatic registration using iCT yielded high accuracy (target registration error, 0.82 ± 0.37 mm); the effective radiation dose of the registration iCT scans was 0.58 ± 1.05 mSv. AR facilitated orientation during resection of skull base meningiomas with encasement of cerebral vessels and compression of the optic chiasm, as well as in reoperations, increasing surgeon comfort. No injuries to critical neurovascular structures occurred. Of the 35 patients who survived to follow-up, 33 could ambulate at their last presentation. Conclusion: Microscope-based AR facilitates surgical orientation for resection of skull base meningiomas, and registration accuracy is very high using automatic registration with intraoperative imaging.
12
Gu W, Shah K, Knopf J, Josewski C, Unberath M. A calibration-free workflow for image-based mixed reality navigation of total shoulder arthroplasty. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022. [DOI: 10.1080/21681163.2021.2009378]
Affiliation(s)
- Wenhao Gu: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
- Kinjal Shah: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
- Mathias Unberath: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
13
Cote DJ, Ruzevick J, Strickland B, Donoho DA, Zada G. Commentary: Three-Dimensional Modeling for Augmented and Virtual Reality–Based Posterior Fossa Approach Selection Training: Technical Overview of Novel Open-Source Materials. Oper Neurosurg (Hagerstown) 2022; 22:e261. [DOI: 10.1227/ons.0000000000000236]
14
Steiert C, Behringer SP, Kraus LM, Bissolo M, Demerath T, Beck J, Grauvogel J, Reinacher PC. Augmented reality-assisted craniofacial reconstruction in skull base lesions - an innovative technique for single-step resection and cranioplasty in neurosurgery. Neurosurg Rev 2022; 45:2745-2755. [PMID: 35441994 PMCID: PMC9349131 DOI: 10.1007/s10143-022-01784-6]
Abstract
Defects of the cranial vault often require cosmetic reconstruction with patient-specific implants, particularly in cases of craniofacial involvement. However, fabrication takes time and is expensive; therefore, efforts must be made to develop more rapidly available and more cost-effective alternatives. The current study investigated the feasibility of an augmented reality (AR)-assisted single-step procedure for repairing bony defects involving the facial skeleton and the skull base. In an experimental setting, nine neurosurgeons fabricated AR-assisted and conventionally shaped ("freehand") implants from polymethylmethacrylate (PMMA) on a skull model with a craniofacial bony defect. Deviations of the surface profile in comparison with the original model were quantified by means of volumetry, and the cosmetic results were evaluated using a multicomponent scoring system, each by two blinded neurosurgeons. Handling the AR equipment proved to be quite comfortable. The median volume deviating from the surface profile of the original model was low in the AR-assisted implants (6.40 cm3) and significantly reduced in comparison with the conventionally shaped implants (13.48 cm3). The cosmetic appearance of the AR-assisted implants was rated as very good (median 25.00 out of 30 points) and significantly improved in comparison with the conventionally shaped implants (median 14.75 out of 30 points). Our experiments showed outstanding results regarding the possibilities of AR-assisted procedures for single-step reconstruction of craniofacial defects. Although patient-specific implants still represent the gold standard in esthetic aspects, AR-assisted procedures hold high potential for an immediately and widely available, cost-effective alternative providing excellent cosmetic outcomes.
Affiliation(s)
- Christine Steiert: Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Simon Phillipp Behringer: Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Luisa Mona Kraus: Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Marco Bissolo: Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Theo Demerath: Department of Neuroradiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Juergen Beck: Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Juergen Grauvogel: Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Peter Christoph Reinacher: Department of Stereotactic and Functional Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; Fraunhofer Institute for Laser Technology, Aachen, Germany
15
Birlo M, Edwards PJE, Clarkson M, Stoyanov D. Utility of optical see-through head mounted displays in augmented reality-assisted surgery: A systematic review. Med Image Anal 2022; 77:102361. [PMID: 35168103 PMCID: PMC10466024 DOI: 10.1016/j.media.2022.102361]
Abstract
This article presents a systematic review of optical see-through head mounted display (OST-HMD) usage in augmented reality (AR) surgery applications from 2013 to 2020. Articles were categorised by: OST-HMD device, surgical speciality, surgical application context, visualisation content, experimental design and evaluation, accuracy and human factors of human-computer interaction. 91 articles fulfilled all inclusion criteria. Some clear trends emerge. The Microsoft HoloLens increasingly dominates the field, with orthopaedic surgery being the most popular application (28.6%). By far the most common surgical context is surgical guidance (n=58) and segmented preoperative models dominate visualisation (n=40). Experiments mainly involve phantoms (n=43) or system setup (n=21), with patient case studies ranking third (n=19), reflecting the comparative infancy of the field. Experiments cover issues from registration to perception with very different accuracy results. Human factors emerge as significant to OST-HMD utility. Some factors are addressed by the systems proposed, such as attention shift away from the surgical site and mental mapping of 2D images to 3D patient anatomy. Other persistent human factors remain or are caused by OST-HMD solutions, including ease of use, comfort and spatial perception issues. The significant upward trend in published articles is clear, but such devices are not yet established in the operating room and clinical studies showing benefit are lacking. A focused effort addressing technical registration and perceptual factors in the lab coupled with design that incorporates human factors considerations to solve clear clinical problems should ensure that the significant current research efforts will succeed.
Affiliation(s)
- Manuel Birlo: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
- P J Eddie Edwards: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
- Matthew Clarkson: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
- Danail Stoyanov: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
16
Visualization, navigation, augmentation. The ever-changing perspective of the neurosurgeon. Brain and Spine 2022; 2:100926. [PMID: 36248169 PMCID: PMC9560703 DOI: 10.1016/j.bas.2022.100926]
17
Augmented Reality Based Transmodiolar Cochlear Implantation. Otol Neurotol 2021; 43:190-198. [PMID: 34855687 DOI: 10.1097/mao.0000000000003437]
Abstract
Hypothesis: Transmodiolar auditory implantation via the middle ear cavity could be possible using an augmented reality system (ARS). Background: There is no clear landmark to indicate the cochlear apex or the modiolar axis. The ARS seems to be a promising tool for transmodiolar implantation by combining information from the preprocedure computed tomography scan (CT-scan) images with the real-time video of the surgical field. Methods: Eight human temporal bone resin models were included (five adults and three children). The procedure started with the identification of the modiolar axis on the preprocedure CT-scan, followed by a 3D reconstruction of the images. Information on modiolar location and navigational guidance was added to the reconstructed model, which was then registered with the surgical video using a point-based approach. Relative movements between the phantom and the microscope were tracked using image feature-based motion tracking. Based on the information provided via the ARS, the surgeon implanted the electrode array inside the modiolus after drilling the helicotrema. Postprocedure CT-scan images were acquired to evaluate the registration error and the implantation accuracy. Results: The implantation could be conducted in all cases with a 2D registration error of 0.4 ± 0.24 mm. The mean entry point error was 0.6 ± 1.00 mm and the implant angular error 13.5 ± 8.93 degrees (n = 8), compatible with the procedure requirements. Conclusion: We developed an image-based ARS to identify the extremities and the axis of the cochlear modiolus on intraprocedure videos. The system yielded submillimetric accuracy for implantation and remained stable throughout the experimental study.
18
Sahovaler A, Chan HHL, Gualtieri T, Daly M, Ferrari M, Vannelli C, Eu D, Manojlovic-Kolarski M, Orzell S, Taboni S, de Almeida JR, Goldstein DP, Deganello A, Nicolai P, Gilbert RW, Irish JC. Augmented Reality and Intraoperative Navigation in Sinonasal Malignancies: A Preclinical Study. Front Oncol 2021; 11:723509. [PMID: 34790568 PMCID: PMC8591179 DOI: 10.3389/fonc.2021.723509]
Abstract
Objective: To report the first use of a novel projected augmented reality (AR) system in open sinonasal tumor resections in preclinical models and to compare the AR approach with an advanced intraoperative navigation (IN) system. Methods: Four tumor models were created. Five head and neck surgeons participated in the study, performing virtual osteotomies. Unguided, AR, IN, and AR + IN simulations were performed. Statistical comparisons between approaches were obtained. Intratumoral cut rate was the main outcome. The groups were also compared in terms of percentage of intratumoral, close, adequate, and excessive distances from the tumor. Data from a wearable gaze tracker headset and NASA Task Load Index questionnaire results were analyzed as well. Results: A total of 335 cuts were simulated. Intratumoral cuts were observed in 20.7%, 9.4%, 1.2%, and 0% of the unguided, AR, IN, and AR + IN simulations, respectively (p < 0.0001). AR was superior to the unguided approach in univariate and multivariate models. The percentage of time looking at the screen during the procedures was 55.5% for the unguided approaches and 0%, 78.5%, and 61.8% in AR, IN, and AR + IN, respectively (p < 0.001). The combined approach significantly reduced the screen time compared with the IN procedure alone. Conclusion: We reported the use of a novel AR system for oncological resections in open sinonasal approaches, with improved margin delineation compared with unguided techniques. AR improved the gaze-toggling drawback of IN. Further refinements of the AR system are needed before translating our experience to clinical practice.
Affiliation(s)
- Axel Sahovaler: Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada; Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada
- Harley H L Chan: Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada
- Tommaso Gualtieri: Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada; Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada; Unit of Otorhinolaryngology-Head and Neck Surgery, University of Brescia-ASST "Spedali Civili di Brescia", Brescia, Italy
- Michael Daly: Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada
- Marco Ferrari: Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada; Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada; Unit of Otorhinolaryngology-Head and Neck Surgery, University of Brescia-ASST "Spedali Civili di Brescia", Brescia, Italy; Section of Otorhinolaryngology-Head and Neck Surgery, University of Padua-Azienda Ospedaliera di Padova, Padua, Italy
- Claire Vannelli: Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada
- Donovan Eu: Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada; Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada
- Mirko Manojlovic-Kolarski: Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada
- Susannah Orzell: Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada
- Stefano Taboni: Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada; Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada; Unit of Otorhinolaryngology-Head and Neck Surgery, University of Brescia-ASST "Spedali Civili di Brescia", Brescia, Italy; Section of Otorhinolaryngology-Head and Neck Surgery, University of Padua-Azienda Ospedaliera di Padova, Padua, Italy
- John R de Almeida: Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada
- David P Goldstein: Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada
- Alberto Deganello: Unit of Otorhinolaryngology-Head and Neck Surgery, University of Brescia-ASST "Spedali Civili di Brescia", Brescia, Italy
- Piero Nicolai: Section of Otorhinolaryngology-Head and Neck Surgery, University of Padua-Azienda Ospedaliera di Padova, Padua, Italy
- Ralph W Gilbert: Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada
- Jonathan C Irish: Department of Otolaryngology-Head and Neck Surgery/Surgical Oncology, Princess Margaret Cancer Centre/University Health Network, Toronto, ON, Canada; Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, ON, Canada
19
Patient-specific virtual and mixed reality for immersive, experiential anatomy education and for surgical planning in temporal bone surgery. Auris Nasus Larynx 2021; 48:1081-1091. [PMID: 34059399 DOI: 10.1016/j.anl.2021.03.009]
Abstract
Objective: The recent development of extended reality technology has attracted interest in medicine. We explored the use of patient-specific virtual reality (VR) and mixed reality (MR) temporal bone models in anatomical teaching, pre-operative surgical planning and intra-operative surgical referencing. Methods: VR and MR temporal bone models were created and visualized on a head-mounted display (HMD) and an MR headset, respectively, by a novel webservice that allows users to convert computed tomography images to VR and MR images without specific knowledge of programming. Eleven otorhinolaryngology trainees and specialists were asked to manipulate the healthy VR temporal bone model and to assess its validity by filling out a questionnaire. Additionally, VR and MR pathological models of petrous apex cholesteatoma were utilized for surgical planning pre-operatively and for referring to the anatomy during the surgery. Results: Most participants were favorable about the VR model and considered the HMD superior to a flat computer screen. 91% of the participants agreed or somewhat agreed that VR through an HMD is cost effective. In addition, the VR pathological model was used for planning and sharing the surgical approach during a pre-operative surgical conference. The MR headset was worn intra-operatively to clarify the relationship between the pathological lesion and vital anatomical structures. Conclusion: Regardless of the participants' training level in otorhinolaryngology or VR experience, all participants agreed that the VR temporal bone model is useful for anatomical education. Furthermore, the creation of patient-specific VR and MR models using the webservice and their pre- and intra-operative usage indicated their potential as an innovative adjunctive surgical instrument.
20
Gu W, Shah K, Knopf J, Navab N, Unberath M. Feasibility of image-based augmented reality guidance of total shoulder arthroplasty using Microsoft HoloLens 1. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2021. [DOI: 10.1080/21681163.2020.1835556]
Affiliation(s)
- Wenhao Gu: Johns Hopkins University, Baltimore, USA
21
Augmented Reality, Virtual Reality and Artificial Intelligence in Orthopedic Surgery: A Systematic Review. Appl Sci (Basel) 2021. [DOI: 10.3390/app11073253]
Abstract
Background: The application of virtual and augmented reality technologies to orthopaedic surgery training and practice aims to increase the safety and accuracy of procedures and to reduce complications and costs. The purpose of this systematic review is to summarise the present literature on this topic while providing a detailed analysis of current flaws and benefits. Methods: A comprehensive search of the PubMed, Cochrane, CINAHL, and Embase databases was conducted from inception to February 2021. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were used to improve the reporting of the review. The Cochrane Risk of Bias Tool and the Methodological Index for Non-Randomized Studies (MINORS) were used to assess the quality and potential bias of the included randomized and non-randomized control trials, respectively. Results: Virtual reality has proven revolutionary for both resident training and preoperative planning. Thanks to augmented reality, orthopaedic surgeons could carry out procedures faster and more accurately, improving overall safety. Artificial intelligence (AI) is a promising technology with limitless potential, but its use in orthopaedic surgery is currently limited to preoperative diagnosis. Conclusions: Extended reality technologies have the potential to reform orthopaedic training and practice, providing an opportunity for unidirectional growth towards a patient-centred approach.
22
Liu PR, Lu L, Zhang JY, Huo TT, Liu SX, Ye ZW. Application of Artificial Intelligence in Medicine: An Overview. Curr Med Sci 2021; 41:1105-1115. [PMID: 34874486 PMCID: PMC8648557 DOI: 10.1007/s11596-021-2474-3]
Abstract
Artificial intelligence (AI) is a new technical discipline that uses computer technology to research and develop theories, methods, techniques, and application systems for the simulation, extension, and expansion of human intelligence. With the assistance of new AI technology, the traditional medical environment has changed considerably. For example, diagnosis based on radiological, pathological, endoscopic, ultrasonographic, and biochemical examinations has been effectively enhanced, with higher accuracy and a lower human workload. Medical treatment during the perioperative period, including preoperative preparation, the surgical period, and the postoperative recovery period, has been significantly improved, with better surgical outcomes. In addition, AI technology has played a crucial role in medical drug production, medical management, and medical education, taking them in a new direction. The purpose of this review is to introduce the application of AI in medicine and to provide an outlook on future trends.
Affiliation(s)
- Peng-ran Liu: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Lin Lu: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Jia-yao Zhang: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Tong-tong Huo: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Song-xiang Liu: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Zhe-wei Ye: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
23
Timonen T, Iso-Mustajärvi M, Linder P, Lehtimäki A, Löppönen H, Elomaa AP, Dietz A. Virtual reality improves the accuracy of simulated preoperative planning in temporal bones: a feasibility and validation study. Eur Arch Otorhinolaryngol 2020; 278:2795-2806. [PMID: 32964264 PMCID: PMC8266780 DOI: 10.1007/s00405-020-06360-6]
Abstract
Purpose: Consumer-grade virtual reality (VR) has recently enabled various medical applications, but more evidence supporting their validity is needed. We investigated the accuracy of simulated surgical planning in a VR environment with temporal bones and compared it to conventional cross-sectional image viewing in a picture archiving and communication system (PACS) interface. Methods: Five experienced otologic surgeons measured significant anatomic structures and fiducials on five fresh-frozen cadaveric temporal bones in VR and in cross-sectional viewing. Primary image data were acquired by computed tomography. In total, 275 anatomical landmark measurements and 250 measurements of the distance between fiducials were obtained with both methods. Distance measurements between the fiducials were confirmed by physical measurement with a Vernier caliper. The experts evaluated the subjective validity of both methods in a 5-point Likert scale qualitative survey. Results: A strong correlation based on the intraclass correlation coefficient was found between the methods for both the anatomical (r > 0.900) and fiducial measurements (r > 0.916). Two-tailed paired t-tests and Bland-Altman plots demonstrated high equivalence between VR and cross-sectional viewing, with mean differences of 1.9% (p = 0.396) and 0.472 mm (p = 0.065) for anatomical and fiducial measurements, respectively. Gross measurement errors due to the misidentification of fiducials occurred more frequently in cross-sectional viewing. The mean face and content validity ratings for VR were significantly better than those for cross-sectional viewing (total mean score 4.11 vs 3.39, p < 0.001). Conclusion: Our study supports the good accuracy and reliability of the VR environment for simulated surgical planning in temporal bones compared to conventional cross-sectional visualization.
Affiliation(s)
- Tomi Timonen: Department of Otorhinolaryngology, Kuopio University Hospital, Puijonlaaksontie 2, PL 100, 70210 Kuopio, Finland; School of Medicine, Institute of Clinical Medicine, University of Eastern Finland, Kuopio, Finland
- Matti Iso-Mustajärvi: Department of Otorhinolaryngology, Kuopio University Hospital, Puijonlaaksontie 2, PL 100, 70210 Kuopio, Finland; Microsurgery Centre of Eastern Finland, Kuopio, Finland
- Pia Linder: Department of Otorhinolaryngology, Kuopio University Hospital, Puijonlaaksontie 2, PL 100, 70210 Kuopio, Finland
- Antti Lehtimäki: Department of Radiology, Kuopio University Hospital, Kuopio, Finland
- Heikki Löppönen: Department of Otorhinolaryngology, Kuopio University Hospital, Puijonlaaksontie 2, PL 100, 70210 Kuopio, Finland; School of Medicine, Institute of Clinical Medicine, University of Eastern Finland, Kuopio, Finland
- Aarno Dietz: Department of Otorhinolaryngology, Kuopio University Hospital, Puijonlaaksontie 2, PL 100, 70210 Kuopio, Finland; School of Medicine, Institute of Clinical Medicine, University of Eastern Finland, Kuopio, Finland