101. Iizuka K, Sato Y, Imaizumi Y, Mizutani T. Potential Efficacy of Multimodal Mixed Reality in Epilepsy Surgery. Oper Neurosurg (Hagerstown) 2021;20:276-281. PMID: 33382064. DOI: 10.1093/ons/opaa341.
Abstract
BACKGROUND: Mixed reality (MR) technology, which fuses real and virtual objects in real time, has been used mainly for simulation in neurosurgical training.
OBJECTIVE: To develop MR technology into multimodal MR for intraoperative guidance during epilepsy surgery.
METHODS: A 33-yr-old male patient suffered from intractable general tonic seizures due to a left temporal meningoencephalocele. Preoperative scalp electroencephalograms localized interictal epileptic activity to the left temporal lobe, and iomazenil single-photon emission tomography revealed temporal lobe lateralization. Magnetic resonance imaging (MRI) demonstrated a left basal temporal meningoencephalocele extending into the pterygopalatine fossa through a bone defect at the base of the greater sphenoid wing. A 3-dimensional model was created for MR based on multimodal data, including computed tomography, MRI tractography, and digital subtraction angiography, which enabled 3-dimensional visualization of abnormal subcortical fiber connections between the meningoencephalocele and the epileptic focus.
RESULTS: Using intraoperative multimodal MR, we were able to safely remove the meningoencephalocele and perform epileptic focus resection. The patient was seizure-free postoperatively, and no adverse effects were noted.
CONCLUSION: Intraoperative multimodal MR was a feasible and effective technique, and it can be applied to a wide range of epilepsy surgeries.
102. Hilt AD, Hierck BP, Eijkenduijn J, Wesselius FJ, Albayrak A, Melles M, Schalij MJ, Scherptong RWC. Development of a patient-oriented HoloLens application to illustrate the function of medication after myocardial infarction. Eur Heart J Digit Health 2021;2:511-520. PMID: 36713611. PMCID: PMC9707881. DOI: 10.1093/ehjdh/ztab053.
Abstract
Aims: Statin treatment is one of the hallmarks of secondary prevention after myocardial infarction. Adherence to statins tends to be difficult and can be improved by patient education. Novel technologies such as mixed reality (MR) expand the possibilities to support this process. We assessed whether an MR medication application supports patient education focused on the function of statins after myocardial infarction.
Methods and results: A human-centred design approach was used to develop an MR statin tool for the Microsoft HoloLens™. Twenty-two myocardial infarction patients were enrolled; 12 tested the application and 10 served as controls. Clinical, demographic, and qualitative data were obtained, and all patients performed a test on statin knowledge. Validated Presence and Immersive Tendencies Questionnaires (PQ and ITQ) were used to test whether a higher tendency to become involved in virtual environments affected test outcome in the intervention group. All 22 patients (ST-elevation myocardial infarction, 18/22, 82%) completed the study. Ten of the 12 (83%) patients in the intervention group improved their statin knowledge by using the MR application (median 8 points, IQR 8). Test improvement was mainly the result of increased understanding of statin mechanisms in the body and of secondary preventive effects. A high tendency to become involved and focused in virtual environments was moderately positively correlated with test improvement (r = 0.57, P < 0.05). The median post-test score in the control group was poor (median 6 points, IQR 4).
Conclusions: An MR statin education application can be applied effectively in myocardial infarction patients to explain statin function and importance.
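The r = 0.57 finding above is a Pearson correlation between immersive-tendency scores and knowledge-test improvement. A minimal sketch of that analysis, using scipy (which the study does not mention) and purely hypothetical data:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical ITQ scores and test-score improvements for 12 participants
# (illustrative only; these are not the study's data)
itq = np.array([55, 62, 48, 70, 66, 51, 73, 59, 64, 45, 68, 57])
improvement = np.array([6, 8, 4, 10, 9, 5, 11, 7, 8, 3, 9, 6])

r, p = pearsonr(itq, improvement)
print(f"r = {r:.2f}, p = {p:.4f}")
```

With real questionnaire data, the correlation would be weaker and the significance threshold would matter more; the mechanics are the same.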
103. Weeks JK, Pakpoor J, Park BJ, Robinson NJ, Rubinstein NA, Prouty SM, Nachiappan AC. Harnessing Augmented Reality and CT to Teach First-Year Medical Students Head and Neck Anatomy. Acad Radiol 2021;28:871-876. PMID: 32828663. DOI: 10.1016/j.acra.2020.07.008.
Abstract
RATIONALE AND OBJECTIVES: Three-dimensional (3D) visualization has been shown to benefit new generations of medical students and physicians-in-training in a variety of contexts. However, there is limited research directly comparing student performance after using 3D tools with performance after using two-dimensional (2D) screens.
MATERIALS AND METHODS: A CT scan was performed on a donated cadaver and a 3D CT hologram was created. A total of 30 first-year medical students were randomly assigned to two groups to review head and neck anatomy in a teaching session that incorporated CT. The first group used an augmented reality headset, while the second group used a laptop screen. The students were administered a five-question anatomy test before and after the session. Two-tailed t-tests were used for statistical comparison of pretest and posttest performance within and between groups. A feedback survey was distributed for qualitative data.
RESULTS: Pretest vs. posttest comparison of the average percentage of questions answered correctly showed significant in-group improvement for both groups (p < 0.05), from 59% to 95% in the augmented reality group and from 57% to 80% in the screen group. Between-group analysis indicated that posttest performance was significantly better in the augmented reality group (p = 0.022, effect size = 0.73).
CONCLUSION: Immersive 3D visualization has the potential to improve short-term anatomic recall in the head and neck compared with traditional 2D screen-based review, and to better engage learners in the anatomy laboratory. Our findings may reflect the additional benefit of the stereoscopic depth cues present in augmented reality-based visualization.
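The between-group comparison described above pairs a two-tailed two-sample t-test with an effect size. A sketch of that computation with hypothetical posttest percentages (not the study's data) might look like this:

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical posttest scores (% correct on a five-question test)
ar_posttest     = [100, 100, 80, 100, 100, 100, 80, 100, 100, 100, 100, 80, 100, 100, 100]
screen_posttest = [80, 80, 60, 100, 80, 80, 60, 100, 80, 80, 60, 80, 100, 80, 80]

# Two-tailed, equal-variance t-test (scipy's default)
t_stat, p_value = ttest_ind(ar_posttest, screen_posttest)

# Cohen's d with the pooled standard deviation
a = np.asarray(ar_posttest, float)
b = np.asarray(screen_posttest, float)
pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                    / (len(a) + len(b) - 2))
cohens_d = (a.mean() - b.mean()) / pooled_sd
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```

The study's reported effect size (0.73) is smaller than what this contrived sample yields; the point is only the shape of the analysis.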
104. Li G, Cao Z, Wang J, Zhang X, Zhang L, Dong J, Lu G. Mixed reality models based on low-dose computed tomography technology in nephron-sparing surgery are better than models based on normal-dose computed tomography. Quant Imaging Med Surg 2021;11:2658-2668. PMID: 34079731. DOI: 10.21037/qims-20-956.
Abstract
Background: Nephron-sparing surgery has been widely applied in the treatment of renal tumors, and previous studies have confirmed the advantages of mixed reality technology in surgery. This study aimed to explore the optimization of mixed reality technology and its application value in nephron-sparing surgery.
Methods: In this prospective study of 150 patients with complex renal tumors (RENAL nephrometry score ≥7) who underwent nephron-sparing surgery, patients were randomly divided into Group A (normal-dose mixed reality, n=50), Group B (low-dose mixed reality, n=50), and Group C (traditional computed tomography images, n=50). Groups A and C received the normal-dose computed tomography scan protocol (120 kVp, 400 mA, and 350 mgI/mL), while Group B received the low-dose protocol (80 kVp, automatic tube current modulation, and 320 mgI/mL). All computed tomography data were transmitted to a three-dimensional visualization workstation for modeling and mixed reality imaging. Two senior surgeons evaluated mixed reality quality; objective and perioperative indexes were calculated and compared.
Results: Compared with Group A, the radiation effective dose in Group B was decreased by 39.6%. The subjective scores of mixed reality quality in Group B were significantly higher than those of Group A (Z = -4.186, P < 0.001). Inter-observer agreement between the two senior surgeons on mixed reality quality was excellent (κ = 0.840, P < 0.001). The perioperative indexes of the mixed reality groups differed significantly from those of the computed tomography image group (all P < 0.017). More cases underwent nephron-sparing surgery in the mixed reality groups than in the computed tomography image group (P < 0.0017).
Conclusions: Low-dose computed tomography technology can be effectively applied to mixed reality optimization, reducing the effective dose and improving mixed reality quality. Optimized mixed reality can significantly increase the number of successful nephron-sparing surgeries and improve perioperative indexes.
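The inter-observer agreement statistic quoted above (κ = 0.840) is Cohen's kappa. A minimal unweighted implementation, with hypothetical surgeon quality ratings rather than the study's data:

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa for two raters' categorical scores."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)  # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical 1-5 quality scores from two surgeons for ten MR models
s1 = [5, 4, 4, 5, 3, 4, 5, 4, 3, 5]
s2 = [5, 4, 4, 5, 3, 4, 4, 4, 3, 5]
print(round(cohens_kappa(s1, s2), 3))  # → 0.844
```

Values above roughly 0.8 are conventionally read as excellent agreement, which is how the abstract characterizes its κ = 0.840.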
105. Patient-specific virtual and mixed reality for immersive, experiential anatomy education and for surgical planning in temporal bone surgery. Auris Nasus Larynx 2021;48:1081-1091. PMID: 34059399. DOI: 10.1016/j.anl.2021.03.009.
Abstract
OBJECTIVE: The recent development of extended reality technology has attracted interest in medicine. We explored the use of patient-specific virtual reality (VR) and mixed reality (MR) temporal bone models in anatomical teaching, pre-operative surgical planning, and intra-operative surgical referencing.
METHODS: VR and MR temporal bone models were created and visualized on a head-mounted display (HMD) and an MR headset, respectively, using a novel webservice that allows users to convert computed tomography images to VR and MR images without specific programming knowledge. Eleven otorhinolaryngology trainees and specialists were asked to manipulate the healthy VR temporal bone model and to assess its validity by filling out a questionnaire. Additionally, VR and MR pathological models of a petrous apex cholesteatoma were used for surgical planning pre-operatively and for referring to the anatomy during surgery.
RESULTS: Most participants were favorable about the VR model and considered the HMD superior to a flat computer screen; 91% agreed or somewhat agreed that VR through an HMD is cost effective. In addition, the VR pathological model was used for planning and sharing the surgical approach during a pre-operative surgical conference, and the MR headset was worn intra-operatively to clarify the relationship between the pathological lesion and vital anatomical structures.
CONCLUSION: Regardless of the participants' training level in otorhinolaryngology or VR experience, all participants agreed that the VR temporal bone model is useful for anatomical education. Furthermore, the creation of patient-specific VR and MR models using the webservice, and their pre- and intra-operative use, indicated the potential of this innovative adjunctive surgical tool.
106. Comparing the effectiveness of augmented reality-based and conventional instructions during single ECMO cannulation training. Int J Comput Assist Radiol Surg 2021;16:1171-1180. PMID: 34023976. PMCID: PMC8260416. DOI: 10.1007/s11548-021-02408-y.
Abstract
Purpose: Effective training of extracorporeal membrane oxygenation (ECMO) cannulation is key to fighting the persistently high mortality rate of ECMO interventions. Though augmented reality (AR) is a promising technology for improving information display, only a small percentage of AR projects have addressed training procedures. The present study investigates the potential benefits of AR-based, contextual instructions for ECMO cannulation training compared with the instructions used during conventional training at a university hospital.
Methodology: An AR step-by-step guide was developed for the Microsoft HoloLens 2 that combines text, images, and videos from the conventional training program with simple 3D models. A study was conducted with 21 medical students performing two surgical procedures on a simulator. Participants were divided into two groups, with one group using the conventional instructions for the first procedure and AR instructions for the second, and the other group using the instructions in reverse order. Training times, a detailed error protocol, and a standardized user experience questionnaire (UEQ) were evaluated.
Results: AR-based execution was associated with slightly higher training times and with significantly fewer errors for the more complex second procedure (p < 0.05, Mann-Whitney U test). The difference was most pronounced for knowledge-related errors, with a 66% reduction in the number of errors. AR instructions also led to significantly better ratings on 5 of the 6 UEQ scales, pointing to higher perceived clarity of information, information acquisition speed, and stimulation.
Conclusion: The results extend previous research on AR instructions to ECMO cannulation training, indicating its high potential to improve training outcomes through better information acquisition by participants during task execution. Future work should investigate how better performance in a single training session relates to better performance in the long run.
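The error comparison above uses a Mann-Whitney U test, which suits small samples of count data. A sketch with hypothetical per-participant error counts (not the study's data):

```python
from scipy.stats import mannwhitneyu

# Hypothetical knowledge-related error counts per participant
conventional = [5, 7, 4, 6, 8, 5, 6, 7, 4, 6]
ar_guided    = [2, 1, 3, 2, 1, 2, 3, 1, 2, 2]

u, p = mannwhitneyu(conventional, ar_guided, alternative="two-sided")
print(f"U = {u}, p = {p:.5f}")
```

Because these contrived groups do not overlap at all, U equals n1 × n2 = 100 and p is far below 0.05; real training data would overlap more.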
107. Reis G, Yilmaz M, Rambach J, Pagani A, Suarez-Ibarrola R, Miernik A, Lesur P, Minaskan N. Mixed reality applications in urology: Requirements and future potential. Ann Med Surg (Lond) 2021;66:102394. PMID: 34040777. PMCID: PMC8141462. DOI: 10.1016/j.amsu.2021.102394.
Abstract
Background: Mixed reality (MR), the computer-supported augmentation of a real environment with virtual elements, is becoming ever more relevant in the medical domain, especially in urology, with applications ranging from education and training to surgery. We aimed to review existing MR technologies and their applications in urology.
Methods: A non-systematic review of the current literature was performed in the PubMed-Medline database using the medical subject headings (MeSH) term "mixed reality" combined with one of the following terms: "virtual reality", "augmented reality", "urology", and "augmented virtuality". Relevant studies were included.
Results: MR applications such as MR-guided systems, immersive VR headsets, AR models, MR-simulated ureteroscopy, and smart glasses have enormous potential in urological education, training, and surgical interventions. Medical students, urology residents, and inexperienced urologists can gain experience through MR technologies, and MR applications are also used in patient education before interventions.
Conclusions: For surgical support, the achievable accuracy is often not yet sufficient. The main challenges are the non-rigid nature of the genitourinary organs, intraoperative data acquisition, online and multimodal registration, and calibration of devices. However, the progress made in recent years is tremendous in all respects and the gap is constantly shrinking. MR, including AV and AR, is an intriguing technology with tremendous potential in the field of urology.
- The main challenges lie in intraoperative data acquisition, online and multimodal registration and calibration of devices and data, appropriate display hardware, and cooperative devices and tools in the operating theatre.
- Medical experts should feel encouraged to experience MR solutions and to communicate their specific needs and the effects they aim at.
108. Stromberga Z, Phelps C, Smith J, Moro C. Teaching with Disruptive Technology: The Use of Augmented, Virtual, and Mixed Reality (HoloLens) for Disease Education. Adv Exp Med Biol 2021;1317:147-162. PMID: 33945136. DOI: 10.1007/978-3-030-61125-5_8.
Abstract
Modern technologies are often utilised in schools or universities with a variety of educational goals in mind. Of particular interest is the enhanced interactivity and engagement offered by mixed reality devices such as the HoloLens, as well as the ability to explore anatomical models of disease using augmented and virtual realities. As the students are required to learn an ever-increasing number of diseases within a university health science or medical degree, it is crucial to consider which technologies provide value to educators and students. This chapter explores the opportunities for using modern disruptive technologies to teach a curriculum surrounding disease. For relevant examples, a focus will be placed on asthma as a respiratory disease which is increasing in prevalence, and stroke as a neurological and cardiovascular disease. The complexities of creating effective educational curricula around these diseases will be explored, along with the benefits of using augmented reality and mixed reality as viable teaching technologies in a range of use cases.
109. Fu R, Zhang C, Zhang T, Chu XP, Tang WF, Yang XN, Huang MP, Zhuang J, Wu YL, Zhong WZ. A three-dimensional printing navigational template combined with mixed reality technique for localizing pulmonary nodules. Interact Cardiovasc Thorac Surg 2021;32:552-559. PMID: 33751118. PMCID: PMC8923295. DOI: 10.1093/icvts/ivaa300.
Abstract
OBJECTIVES: Localizing non-palpable pulmonary nodules is challenging for thoracic surgeons. Here, we investigated the accuracy of three-dimensional (3D) printing technology combined with mixed reality (MR) for localizing ground glass opacity-dominant pulmonary nodules.
METHODS: In this single-arm study, we prospectively enrolled patients with small pulmonary nodules (<2 cm) that required accurate localization. A 3D-printed physical navigational template was designed based on the reconstruction of computed tomography images, and a 3D model was generated through the MR glasses. We set the deviation distance as the primary end point for efficacy evaluation. Clinicopathological and surgical data were obtained for further analysis.
RESULTS: Sixteen patients with 17 non-palpable pulmonary nodules were enrolled. Sixteen nodules were localized successfully (16/17; 94.1%) using this novel approach, with a median deviation of 9 mm. The mean time required for localization was 25 ± 5.2 min. For nodules in the upper/middle and lower lobes, the median deviation was 6 mm (range, 0-12.0) and 16 mm (range, 15.0-20.0), respectively; the difference between the groups was significant (Z = -2.957, P = 0.003). The pathological evaluation of resection margins was negative.
CONCLUSIONS: The 3D printing navigational template combined with MR can be a feasible approach for localizing pulmonary nodules.
110. Penczek J, Boynton PA, Beams R, Sriram RD. Measurement Challenges for Medical Image Display Devices. J Digit Imaging 2021;34:458-472. PMID: 33846889. DOI: 10.1007/s10278-021-00438-1.
Abstract
Visual information is a critical component in the evaluation and communication of patient medical information. As display technologies have evolved, the medical community has sought to take advantage of advances in wider color gamuts, greater display portability, and more immersive imagery. These image quality enhancements have shown improvements in the quality of healthcare through greater efficiency, higher diagnostic accuracy, added functionality, enhanced training, and better health records. However, the display technology advances typically introduce greater complexity in the image workflow and display evaluation. This paper highlights some of the optical measurement challenges created by these new display technologies and offers possible pathways to address them.
111. Takata T, Nakabayashi S, Kondo H, Yamamoto M, Furui S, Shiraishi K, Kobayashi T, Oba H, Okamoto T, Kotoku J. Mixed Reality Visualization of Radiation Dose for Health Professionals and Patients in Interventional Radiology. J Med Syst 2021;45:38. PMID: 33594609. PMCID: PMC7886835. DOI: 10.1007/s10916-020-01700-9.
Abstract
For interventional radiology, dose management has persisted as a crucially important issue in reducing radiation exposure to patients and medical staff. This study presents a real-time dose visualization system for interventional radiology built on mixed reality technology and Monte Carlo simulation. A Monte-Carlo-based estimation system described in an earlier report, which simulates a patient's skin dose and air dose distributions, was adopted for our system. We also developed a system for acquiring fluoroscopic conditions to feed into the Monte Carlo simulation, and combined the Monte Carlo system with a wearable device for three-dimensional holographic visualization. The estimated doses were transferred sequentially to the device, and the patient's dose distribution was projected on the patient's body. The visualization system also detects the user's position in the room to estimate and display the user's exposure dose. Qualitative tests were conducted to evaluate the workload and usability of our mixed reality system, and an end-to-end system test was performed using a human phantom. The acquisition system accurately recognized the conditions necessary for real-time dose estimation, the dose hologram represented the patient dose, and the user dose changed correctly depending on conditions and positions. The perceived overall workload score (33.50) was lower than the scores reported in the literature for medical tasks (50.60) and for computer activities (54.00). Mixed reality dose visualization is expected to improve exposure dose management for patients and health professionals by making invisible radiation exposure visible in real space.
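The system above couples full Monte Carlo transport with position tracking; as a deliberately simplified stand-in (not the authors' method), an inverse-square falloff can illustrate how a displayed staff-dose estimate might vary with tracked position. All rates and positions below are hypothetical:

```python
import numpy as np

# Toy model: scale a reference scatter air-kerma rate by inverse-square
# distance to show position-dependent relative exposure. Real systems,
# like the one in the paper, use Monte Carlo transport instead.
ref_rate_uGy_h = 120.0  # hypothetical scatter air-kerma rate at 1 m
positions_m = np.array([[0.5, 0.0], [1.0, 0.5], [2.0, 1.0]])  # x, y from source

dist = np.linalg.norm(positions_m, axis=1)
dose_rate = ref_rate_uGy_h / dist**2
for d, r in zip(dist, dose_rate):
    print(f"{d:.2f} m -> {r:.1f} uGy/h")
```

The point of the toy model is only the display idea: the estimate attached to each tracked position changes as the user moves.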
112. Teatini A, Kumar RP, Elle OJ, Wiig O. Mixed reality as a novel tool for diagnostic and surgical navigation in orthopaedics. Int J Comput Assist Radiol Surg 2021;16:407-414. PMID: 33555563. PMCID: PMC7946663. DOI: 10.1007/s11548-020-02302-z.
Abstract
Purpose: This study presents a novel surgical navigation tool developed in a mixed reality environment for orthopaedic surgery. Joint and skeletal deformities affect all age groups and greatly reduce the range of motion of the joints; these deformities are notoriously difficult to diagnose and to correct through surgery.
Method: We developed a surgical tool that integrates surgical instrument tracking and augmented reality through a head-mounted display, allowing the surgeon to visualise bones with the illusion of possessing "X-ray" vision. The studies presented here assess the accuracy of the navigation tool in tracking a location at the tip of the surgical instrument in holographic space.
Results: The average accuracy provided by the navigation tool is around 8 mm, and qualitative assessment by the orthopaedic surgeons provided positive feedback on its capabilities for diagnostic use.
Conclusions: Further improvements are necessary before the navigation tool is accurate enough for surgical applications. However, this new tool has the potential to improve diagnostic accuracy, allow for safer and more precise surgeries, and provide better learning conditions for orthopaedic surgeons in training.
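An accuracy figure like the ~8 mm above is typically a mean Euclidean distance between tracked and ground-truth tip positions. A minimal sketch of that computation, with hypothetical coordinates:

```python
import numpy as np

# Hypothetical tracked vs. ground-truth instrument-tip positions (mm)
tracked = np.array([[10.2, 5.1, 3.3], [22.0, 14.5, 8.9], [5.5, 9.9, 12.1]])
truth   = np.array([[ 4.0, 2.0, 1.0], [18.0, 10.0, 5.0], [2.0, 5.0, 8.0]])

# Per-point Euclidean error, then the mean over all measurements
errors = np.linalg.norm(tracked - truth, axis=1)
print(f"mean error = {errors.mean():.1f} mm")
```

In a real validation the ground-truth points would come from an optical tracker or a calibrated phantom, and many more samples would be collected.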
113. Meng FH, Zhu ZH, Lei ZH, Zhang XH, Shao L, Zhang HZ, Zhang T. Feasibility of the application of mixed reality in mandible reconstruction with fibula flap: A cadaveric specimen study. J Stomatol Oral Maxillofac Surg 2021;122:e45-e49. PMID: 33434746. DOI: 10.1016/j.jormas.2021.01.005.
Abstract
BACKGROUND: In recent years a new technology, mixed reality (MR), has emerged that overcomes a key limitation of augmented reality (AR): the inability to interact with the hologram. This study aimed to investigate the feasibility of applying MR in mandible reconstruction with a fibula flap.
METHODS: Computed tomography (CT) examinations were performed on one cadaveric mandible and ten fibula bones. Using the professional software Proplan CMF 3.0 (Materialise, Leuven, Belgium), we created a defective mandibular model and simulated the reconstruction design with the 10 fibula bones. The surgical plans were transferred to the HoloLens, which was used to guide the osteotomy and shaping of the fibular bone. After the fibular segments were fixed using the Ti template, all segments underwent CT examination. Pre- and post-procedure models were compared for the location of the fibular osteotomies, the angular deviation of the fibular segments, and the intergonial angle distances.
RESULTS: The mean deviations in the location of the fibular osteotomies, the angular deviation of the fibular segments, and the intergonial angle distances were 2.11 ± 1.31 mm, 2.85° ± 1.97°, and 7.24 ± 3.42 mm, respectively.
CONCLUSION: The experimental results revealed that slight deviations remained in the accuracy of the fibular osteotomy. With further development, the technology has the potential to improve the efficiency and precision of reconstructive surgery.
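An angular deviation like the 2.85° reported above can be computed as the angle between the planned and achieved segment direction vectors. A small sketch with hypothetical vectors (the study does not publish its raw geometry):

```python
import numpy as np

def angle_deg(u, v):
    """Angle between two 3-D direction vectors, in degrees."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # clip guards against arccos domain errors from floating-point noise
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

planned  = [1.0, 0.0, 0.0]           # planned fibular segment direction
achieved = [0.998, 0.05, 0.02]       # hypothetical post-op direction
print(f"{angle_deg(planned, achieved):.2f} deg")
```

Averaging such per-segment angles over all osteotomized segments gives a summary figure comparable to the study's 2.85° ± 1.97°.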
114.
Abstract
Augmented reality (AR) technology enhances a user's perception through the superimposition of digital information on physical images while still allowing for interaction with the physical world. The tracking, data processing, and display technology of traditional computer-assisted surgery (CAS) navigation have the potential to be consolidated to an AR headset equipped with high-fidelity cameras, microcomputers, and optical see-through lenses that create digital holographic images. This article evaluates AR applications specific to total knee arthroplasty, total hip arthroplasty, and the opportunities for AR to enhance arthroplasty education and professional development.
115. Karbasi Z, Niakan Kalhori SR. Application and evaluation of virtual technologies for anatomy education to medical students: A review. Med J Islam Repub Iran 2020;34:163. PMID: 33816362. PMCID: PMC8004573. DOI: 10.47176/mjiri.34.163.
Abstract
To learn anatomy, medical students need to look at body structures and manipulate anatomical structures. Simulation-based education is a promising opportunity for upgrading and sharing knowledge. The purpose of this review is to investigate the evaluation of virtual technologies in teaching anatomy to medical students.
Methods: We searched PubMed, Web of Science, Scopus, and Embase for relevant articles in November 2018. Information retrieval was done without time limitation, using the following keywords: virtual reality, medical education, and anatomy.
Results: 2483 articles were identified by searching the databases; the full text of 12 articles was ultimately reviewed. The review showed that virtual technologies have been used to teach internal human anatomy, ear anatomy, nose anatomy, temporal bone anatomy, surgical anatomy, neuroanatomy, and cardiac anatomy.
Conclusion: Virtual reality, augmented reality, and games can enhance students' anatomical learning skills and are proper alternatives to traditional methods when cadavers and mannequins are unavailable.
116. Robinson BL, Mitchell TR, Brenseke BM. Evaluating the Use of Mixed Reality to Teach Gross and Microscopic Respiratory Anatomy. Med Sci Educ 2020;30:1745-1748. PMID: 32837799. PMCID: PMC7433990. DOI: 10.1007/s40670-020-01064-2.
Abstract
Advances in technology often evolve into instructional platforms. This study evaluated the applicability of mixed reality (MR) in anatomy instruction. First-year medical students were randomized into a control group using a cadaver and light microscopes, or an experimental group using HoloLens, to complete a learning activity on gross and microscopic respiratory anatomy. Compared with the control group, the experimental group reached an equivalent score on the post-activity knowledge assessment, performed better on follow-up assessment, had consistently higher perceived understanding, and rated the activity higher. Findings suggest MR is an effective teaching tool and provides a favorable learning experience.
117. Moro C, Phelps C, Jones D, Stromberga Z. Using Holograms to Enhance Learning in Health Sciences and Medicine. Med Sci Educ 2020;30:1351-1352. PMID: 34457800. PMCID: PMC8368738. DOI: 10.1007/s40670-020-01051-7.
Abstract
With the increasing volume of information for students to learn in a health sciences and medicine degree, tertiary educators need teaching resources that can maintain up-to-date information and educate effectively across a range of diseases and illnesses. Holograms may be the disruptive technology that can assist in this goal.
|
118
|
Mixed Reality Interaction and Presentation Techniques for Medical Visualisations. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2020. [PMID: 33211310 DOI: 10.1007/978-3-030-47483-6_7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 07/25/2023]
Abstract
Mixed, augmented, and virtual reality technologies are burgeoning, with new applications and use cases appearing rapidly. This chapter provides a brief overview of the fundamental display presentation methods: head-worn, hand-held, and projector-based displays. We then summarize visualisation methods that employ these technologies in the medical domain, with a diverse range of examples spanning diagnosis and exploration, clinical intervention, interaction and gestures, and education.
|
119
|
Condon C, Lam WT, Mosley C, Gough S. A systematic review and meta-analysis of the effectiveness of virtual reality as an exercise intervention for individuals with a respiratory condition. Adv Simul (Lond) 2020; 5:33. [PMID: 33292807 PMCID: PMC7678297 DOI: 10.1186/s41077-020-00151-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Received: 07/20/2020] [Accepted: 11/02/2020] [Indexed: 01/31/2023] Open
Abstract
BACKGROUND Respiratory diseases impose an immense health burden and affect millions of people worldwide. Reduced exercise tolerance, caused by skeletal muscle dysfunction and weakness and by impaired lung function, is a major problem for patients with a respiratory condition. Virtual reality systems (VRS) are emerging technologies that have drawn scientists' attention for their potential benefit in rehabilitation. METHODS A systematic review and meta-analysis following the PRISMA guidelines was performed to explore the effectiveness of virtual reality gaming and exergaming-based interventions in individuals with respiratory conditions. RESULTS Differences between the virtual reality interventions and traditional exercise rehabilitation showed weak to insignificant effect sizes for mean heart rate (standardized mean difference, SMD = 0.17; p = 0.002), peak heart rate (SMD = 0.36; p = 0.27), dyspnea (SMD = 0.32; p = 0.13), and oxygen saturation (SpO2; SMD = 0.26; p = 0.096). Other measures, including adherence, enjoyment, and drop-out rates, were also collected but could not be included in the meta-analysis because of the heterogeneity of reporting. CONCLUSIONS VRS can provide options for rehabilitation, given their moderate effect on dyspnea and weak effects on mean and peak heart rate and SpO2. However, further study is needed, since the literature lacks standardized methods to accurately analyze the effects of virtual reality for individuals with respiratory conditions, especially regarding duration, virtual reality system type, adherence, adverse effects, feasibility, enjoyment, and quality of life.
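The standardized mean difference used in the meta-analysis expresses a between-group difference in pooled-standard-deviation units, so that studies measuring the same outcome on different scales can be combined. A minimal sketch of the computation (Cohen's d form; the numbers are hypothetical, not the review's data):

```python
from math import sqrt

def pooled_sd(sd1: float, n1: int, sd2: float, n2: int) -> float:
    """Pooled standard deviation of two independent groups."""
    return sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))

def smd(m1: float, sd1: float, n1: int,
        m2: float, sd2: float, n2: int) -> float:
    """Standardized mean difference (Cohen's d): the group difference
    expressed in pooled-SD units."""
    return (m1 - m2) / pooled_sd(sd1, n1, sd2, n2)

# Hypothetical trial: VR group mean HR 112 bpm (SD 10, n = 20)
# vs. control 110 bpm (SD 12, n = 20)
print(round(smd(112, 10, 20, 110, 12, 20), 3))  # → 0.181
```

An SMD around 0.2, as for mean heart rate above, is conventionally read as a small effect.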
|
120
|
Kim AS, Cheng WC, Beams R, Badano A. Color Rendering in Medical Extended-Reality Applications. J Digit Imaging 2020; 34:16-26. [PMID: 33205296 DOI: 10.1007/s10278-020-00392-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Received: 02/19/2020] [Revised: 09/15/2020] [Accepted: 09/30/2020] [Indexed: 10/23/2022] Open
Abstract
Cross-platform development of medical applications for extended-reality (XR) head-mounted displays (HMDs) often relies on game engines whose rendering capabilities are not yet standardized in the context of medical visualizations. Many aspects of the visualization pipeline, including the characterization of color, have yet to be consistently defined across rendering models and platforms. We examined the transfer of color properties from digital objects, through the rendering and image-processing steps, to the RGB values sent to the display device. In a first experiment, five rendering pipeline configurations within the Unity engine were evaluated using 24 digital color patches; in a second experiment, the same configurations were evaluated with a tissue-slide sample image. The change in color associated with each configuration was characterized using the CIE 1976 color difference (ΔE). In the first experiment, ΔE ranged from zero, as in the case of an Unlit Shader, to 25.97, as in the case of the default configuration; the default Unity configuration consistently returned the highest ΔE across all 24 colors and also the largest range of color differences. In the second experiment, ΔE ranged from 7.49 to 34.18, and the Unlit configuration resulted in the highest ΔE in three of the four selected pixels in the tissue sample image. Changes in color image properties associated with texture import settings were then evaluated in a third experiment using the TG18-QC test pattern; differences in pixel values were found in all nine of the investigated texture import settings. The findings provide an initial characterization of color transfer and a basis for future work on standardization, consistency, and optimization of color in medical XR applications.
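The CIE 1976 color difference used throughout the study is simply the Euclidean distance between two colors in CIELAB space (ΔE*ab). A minimal sketch, with hypothetical patch values rather than the study's measurements:

```python
from math import sqrt

def delta_e_1976(lab1, lab2):
    """CIE 1976 color difference (Delta E*ab): Euclidean distance in
    CIELAB. Roughly, a value near 1 is a just-noticeable difference,
    and values above ~10 are clearly visible shifts."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical patch: intended color vs. what a pipeline rendered
intended = (52.0, 42.5, 19.8)   # (L*, a*, b*)
rendered = (50.0, 39.5, 17.8)
print(round(delta_e_1976(intended, rendered), 2))  # → 4.12
```

On this scale, the paper's reported range of 7.49 to 34.18 for the tissue image corresponds to plainly visible color shifts.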
|
121
|
Toward interprofessional team training for surgeons and anesthesiologists using virtual reality. Int J Comput Assist Radiol Surg 2020; 15:2109-2118. [PMID: 33083969 PMCID: PMC7671979 DOI: 10.1007/s11548-020-02276-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Received: 01/27/2020] [Accepted: 10/01/2020] [Indexed: 01/09/2023]
Abstract
Purpose In this work, a virtual environment for interprofessional team training in laparoscopic surgery is proposed. Our objective is to provide a tool to train and improve intraoperative communication between anesthesiologists and surgeons during laparoscopic procedures. Methods Anesthesia simulation software and laparoscopic simulation software are combined within a multi-user virtual reality (VR) environment. Two medical training scenarios for communication training between anesthesiologists and surgeons are proposed and evaluated. Testing was conducted and social presence was measured. In addition, clinical feedback from experts was collected by following a think-aloud protocol and through structured interviews. Results Our prototype was assessed as a reasonable basis for training and extensive clinical evaluation. The testing also revealed a high degree of exhilaration and social presence among the involved physicians. The interviews and the think-aloud protocol with experts in anesthesia and surgery yielded valuable insights, showing the feasibility of team training in VR, the usefulness of the system for medical training, and its current limitations. Conclusion The proposed VR prototype provides a new basis for interprofessional team training in surgery. It enables the training of problem-based communication during surgery and might open new directions for operating room training. Electronic supplementary material The online version of this article (10.1007/s11548-020-02276-y) contains supplementary material, which is available to authorized users.
|
122
|
Maharjan A, Alsadoon A, Prasad PWC, AlSallami N, Rashid TA, Alrubaie A, Haddad S. A Novel Solution of Using Mixed Reality in Bowel and Oral and Maxillofacial Surgical Telepresence: 3D Mean Value Cloning algorithm. Int J Med Robot 2020:e2161. [PMID: 32886412 DOI: 10.1002/rcs.2161] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/14/2020] [Revised: 09/01/2020] [Accepted: 09/01/2020] [Indexed: 11/11/2022]
Abstract
BACKGROUND AND AIM Most mixed reality models used in surgical telepresence suffer from discrepancies in the boundary area and spatial-temporal inconsistency caused by illumination variation across video frames. This work proposes a new solution for producing a composite video that merges the augmented video of the surgical site with the virtual hand of the remote expert surgeon. The aim is to decrease processing time and enhance the accuracy of the merged video by reducing overlay and visualization error and removing occlusion and artefacts. METHODOLOGY The proposed system enhances the mean value cloning algorithm to maintain the spatial-temporal consistency of the final composite video. The enhanced algorithm incorporates 3D mean value coordinates and an improved mean value interpolant into the image cloning process, which reduces the sawtooth, smudging, and discoloration artefacts around the blending region. RESULTS Compared with the state-of-the-art solution, accuracy in terms of overlay error improved from 1.01 mm to 0.80 mm, accuracy in terms of visualization error improved from 98.8% to 99.4%, and processing time was reduced from 0.211 s to 0.173 s. CONCLUSION The solution makes the object of interest consistent with the light intensity of the target image by adding a space-distance term, which helps maintain spatial consistency in the final merged video.
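The paper's enhanced 3D algorithm is not reproduced here; as background, mean value cloning replaces the Poisson solve of seamless cloning by interpolating the source/target boundary difference into the region interior using mean value coordinates. A minimal 2D sketch of those coordinates (Floater's tangent formula; the polygon and query point are arbitrary illustrative values):

```python
from math import atan2, sin, cos, tan, hypot

def mean_value_coords(x, y, poly):
    """Mean value coordinates of (x, y) w.r.t. a closed polygon:
    weights that sum to 1 and smoothly interpolate values given at
    the polygon vertices into the interior. Mean value cloning uses
    such weights to spread the boundary color difference across the
    pasted region instead of solving a Poisson system."""
    n = len(poly)
    ang = [atan2(py - y, px - x) for px, py in poly]   # vertex bearings
    d = [hypot(px - x, py - y) for px, py in poly]     # vertex distances

    def wrap(a):  # wrap an angle into (-pi, pi]
        return atan2(sin(a), cos(a))

    # alpha[i]: signed angle at (x, y) in triangle (x, v_i, v_{i+1})
    alpha = [wrap(ang[(i + 1) % n] - ang[i]) for i in range(n)]
    w = [(tan(alpha[i - 1] / 2) + tan(alpha[i] / 2)) / d[i] for i in range(n)]
    s = sum(w)
    return [wi / s for wi in w]

# Linear precision check: the coordinates reproduce the point itself.
poly = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
lam = mean_value_coords(0.7, 1.1, poly)
rx = sum(l * px for l, (px, py) in zip(lam, poly))
ry = sum(l * py for l, (px, py) in zip(lam, poly))
print(round(rx, 6), round(ry, 6))  # → 0.7 1.1
```

Because the weights are closed-form, the interpolation is cheap per pixel, which is why mean-value-based cloning is attractive for per-frame video compositing.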
|
123
|
Cen J, Liufu R, Wen S, Qiu H, Liu X, Chen X, Yuan H, Huang M, Zhuang J. Three-Dimensional Printing, Virtual Reality and Mixed Reality for Pulmonary Atresia: Early Surgical Outcomes Evaluation. Heart Lung Circ 2020; 30:296-302. [PMID: 32863113 DOI: 10.1016/j.hlc.2020.03.017] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Received: 09/01/2019] [Revised: 01/07/2020] [Accepted: 03/28/2020] [Indexed: 11/16/2022]
Abstract
BACKGROUND Single-stage unifocalisation for pulmonary atresia (PA) with ventricular septal defect (VSD) and major aortopulmonary collateral arteries (MAPCA) requires a high degree of three-dimensional (3D) anatomical imagination. A previous study reported the application of a 3D-printed heart model with virtual reality (VR) or mixed reality (MR); however, few studies have evaluated the surgical outcomes of the 3D model with VR or MR in PA/VSD patients. METHODS Three-dimensional heart models of five selected PA/VSD patients were derived from traditional imaging of their hearts. Using VR glasses, the 3D models were also visualised in the operating room. Both the 3D-printed heart models and preoperative VR evaluation were used in the five selected patients for surgical simulation and better anatomical understanding, and MR holograms served as perioperative assistive tools. Surgical outcomes were assessed, including in-hospital and early follow-up clinical data. RESULTS The three new technologies received favourable feedback from the surgeons regarding intraoperative judgment. There were no in-hospital or early deaths, and no reintervention was required up to the last follow-up. Three patients developed postoperative complications: one had right bundle branch block and ST-segment change, one had chest drainage >7 days (>40 mL/day), and one had pneumonia. CONCLUSION The preoperative application of a 3D-printed heart model with VR or MR helped in aligning the surgical field. These technologies improved the understanding of complicated cardiac anatomy and, by guiding surgical planning, achieved acceptable surgical outcomes.
|
124
|
Mapping the intellectual structure of research on surgery with mixed reality: Bibliometric network analysis (2000-2019). J Biomed Inform 2020; 109:103516. [PMID: 32736125 DOI: 10.1016/j.jbi.2020.103516] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Received: 01/16/2020] [Revised: 06/16/2020] [Accepted: 07/17/2020] [Indexed: 12/27/2022]
Abstract
OBJECTIVE The purpose of this study is to examine research trends on surgery with mixed reality and to present the field's intellectual structure using bibliometric network analysis for the period 2000-2019. METHODS The analysis was implemented in four steps: (1) literature dataset acquisition from article databases (Web of Science, Scopus, PubMed, and the IEEE digital library), (2) dataset pre-processing and refinement, (3) network construction and visualization, and (4) analysis and interpretation. Descriptive analysis, bibliometric network analysis, and in-depth qualitative analysis were conducted. RESULTS A total of 14,591 keywords from 5897 abstracts were ultimately used to ascertain the intellectual structure of research on surgery with mixed reality. The evolution of keywords in the structure throughout the four periods is summarized in four aspects: (a) maintaining a predominant use for training, (b) widening the clinical application area, (c) reallocating the continuum of mixed reality, and (d) steering advanced imaging and simulation technology. CONCLUSIONS The results provide valuable insights into technology adoption and research trends of mixed reality in surgery. The findings can help clinicians gain an overview of prospective medical research on surgery using mixed reality, and hospitals can use them to understand the maturity of mixed reality technology in surgery when deciding whether to adopt new surgical technologies.
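Network construction in bibliometric analyses of this kind typically starts from a keyword co-occurrence matrix: nodes are keywords, and an edge's weight counts how many documents mention both endpoints. A minimal sketch with a hypothetical mini-corpus (not the study's 5897 abstracts):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(keyword_lists):
    """Weighted edge list of a keyword co-occurrence network: the
    weight of edge (a, b) is the number of documents whose keyword
    set contains both a and b. Keys are sorted so each undirected
    edge has one canonical form."""
    edges = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws)), 2):
            edges[(a, b)] += 1
    return edges

# Hypothetical mini-corpus of per-abstract keyword sets
corpus = [
    ["mixed reality", "surgery", "training"],
    ["mixed reality", "surgery", "simulation"],
    ["augmented reality", "surgery", "training"],
]
edges = cooccurrence_edges(corpus)
print(edges[("mixed reality", "surgery")])  # → 2
```

The resulting weighted edge list is the input that visualization and clustering steps (step 3 above) operate on.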
|
125
|
Ultrasound in augmented reality: a mixed-methods evaluation of head-mounted displays in image-guided interventions. Int J Comput Assist Radiol Surg 2020; 15:1895-1905. [PMID: 32725398 PMCID: PMC8332636 DOI: 10.1007/s11548-020-02236-6] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Received: 01/08/2020] [Accepted: 07/14/2020] [Indexed: 11/24/2022]
Abstract
Purpose Augmented reality (AR) and head-mounted displays (HMD) in medical practice are current research topics. A commonly proposed use case of AR-HMDs is to display data in image-guided interventions. Although technical feasibility has been thoroughly shown, the effects of AR-HMDs on interventions are not yet well researched, hampering clinical applicability. Therefore, the goal of this study is to better understand the benefits and limitations of this technology in ultrasound-guided interventions. Methods We used an AR-HMD system (based on the first-generation Microsoft HoloLens) which overlays live ultrasound images spatially correctly at the location of the ultrasound transducer. We chose ultrasound-guided needle placements as a representative task for image-guided interventions. To examine the effects of the AR-HMD, we used mixed methods and conducted two studies in a lab setting: (1) in a randomized crossover study, we asked participants to place needles into a training model and evaluated task duration and accuracy with the AR-HMD as compared to the standard procedure without visual overlay, and (2) in a qualitative study, we analyzed the user experience with the AR-HMD using think-aloud protocols during ultrasound examinations and semi-structured interviews after the task. Results Participants (n = 20) placed needles more accurately (mean error of 7.4 mm vs. 4.9 mm, p = 0.022) but not significantly faster (mean task duration of 74.4 s vs. 66.4 s, p = 0.211) with the AR-HMD. All participants in the qualitative study (n = 6) reported limitations of and unfamiliarity with the AR-HMD, yet all but one also clearly noted benefits and/or that they would like to test the technology in practice. Conclusion We present additional, though still preliminary, evidence that AR-HMDs provide benefits in image-guided procedures. Our data also contribute insights into potential causes underlying the benefits, such as improved spatial perception. Still, more comprehensive studies are needed to ascertain benefits for clinical applications and to clarify the mechanisms underlying these benefits. Electronic supplementary material The online version of this article (10.1007/s11548-020-02236-6) contains supplementary material, which is available to authorized users.
|