1. Bhansali K, Lago MA, Beams R, Zhao C. Evaluation of monocular and binocular contrast perception on virtual reality head-mounted displays. J Med Imaging (Bellingham) 2024;11:062605. PMID: 39280782; PMCID: PMC11401613; DOI: 10.1117/1.jmi.11.6.062605.
Abstract
Purpose Visualization of medical images on a virtual reality (VR) head-mounted display (HMD) requires binocular fusion of a stereoscopic pair of graphical views. However, image quality assessment of VR HMDs for medical applications has so far been largely limited to time-consuming monocular optical bench measurements on a single eyepiece. Approach As an alternative to optical bench measurement for quantifying image quality on VR HMDs, we developed a WebXR test platform to perform contrast perceptual experiments that can be used for binocular image quality assessment. We obtained monocular and binocular contrast sensitivity responses (CSRs) from participants on a Meta Quest 2 VR HMD using varied interpupillary distance (IPD) configurations. Results The perceptual results show that contrast perception on VR HMDs is primarily affected by optical aberration of the HMD. As a result, monocular CSR degrades at spatial frequencies greater than 4 cycles per degree when gazing at the periphery of the display field of view, especially for mismatched IPD settings, consistent with optical bench measurements. In contrast, binocular contrast perception is dominated by the monocular view with the superior image quality as measured by contrast. Conclusions We developed a test platform to investigate monocular and binocular contrast perception through perceptual experiments. The method can be used to evaluate monocular and/or binocular image quality on VR HMDs for potential medical applications without extensive optical bench measurements.
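The CSR measurements in this entry rest on two standard psychophysical definitions: the Michelson contrast of a grating stimulus, and contrast sensitivity as the reciprocal of the lowest contrast a participant can detect. A minimal sketch of these textbook definitions (function names are illustrative; this is not code from the paper's WebXR platform):

```python
import numpy as np

def michelson_contrast(luminance: np.ndarray) -> float:
    """Michelson contrast of a grating patch: (Lmax - Lmin) / (Lmax + Lmin)."""
    lmax, lmin = float(luminance.max()), float(luminance.min())
    return (lmax - lmin) / (lmax + lmin)

def contrast_sensitivity(threshold_contrast: float) -> float:
    """Contrast sensitivity is the reciprocal of the detection-threshold contrast."""
    return 1.0 / threshold_contrast

# A grating oscillating between luminances 0.25 and 0.75 has contrast 0.5;
# a detection threshold of 1% contrast corresponds to a sensitivity of 100.
grating = np.array([0.75, 0.25])
```

Plotting sensitivity against grating spatial frequency (cycles per degree) yields the contrast sensitivity function the study compares across eyes and IPD settings.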
Affiliation(s)
- Khushi Bhansali, Miguel A Lago, Ryan Beams, Chumin Zhao
- US Food and Drug Administration, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
2. Stirrat T, Martin R, Umair M, Waller J. Advancing radiology education for medical students: leveraging digital tools and resources. Pol J Radiol 2024;89:e508-e516. PMID: 39507889; PMCID: PMC11538907; DOI: 10.5114/pjr/193518.
Abstract
This study evaluates diverse educational resources to address the gaps in diagnostic radiology education for medical students, aiming to identify tools that enhance theoretical knowledge and practical diagnostic skills. Employing a multi-faceted review, we analyzed digital platforms, academic databases, and social media for resources beneficial to medical students in radiology, assessing their accessibility, content quality, and educational value. Our investigation uncovered a broad spectrum of resources, from foundational platforms to advanced simulation tools, varying in their approach to teaching radiology. Traditional resources provide essential theoretical knowledge, while digital tools, including interactive case studies and multimedia content, offer immersive learning experiences. Notably, resources integrating machine learning and social media facilitate dynamic, peer-to-peer learning and up-to-date case discussions. Despite the minimal current focus on VR, its role in enhancing interactive learning is notable. The diversity in educational tools highlights the evolving nature of radiology education, reflecting a shift towards more engaging and practical learning methodologies. Identifying and integrating a variety of educational resources into radiology education can significantly enhance learning outcomes for medical students, preparing them for the complexities of modern diagnostic radiology with a well-rounded educational approach.
Affiliation(s)
- Thomas Stirrat
- Georgetown University School of Medicine, Georgetown, United States
- Muhammad Umair
- Department of Radiology, Johns Hopkins University, Baltimore, United States
3. Robb TJ, Liu Y, Woodhouse B, Windahl C, Hurley D, McArthur G, Fox SB, Brown L, Guilford P, Minhinnick A, Jackson C, Blenkiron C, Parker K, Henare K, McColl R, Haux B, Young N, Boyle V, Cameron L, Deva S, Reeve J, Print CG, Davis M, Rieger U, Lawrence B. Blending space and time to talk about cancer in extended reality. NPJ Digit Med 2024;7:261. PMID: 39343807; PMCID: PMC11439928; DOI: 10.1038/s41746-024-01262-x.
Abstract
We introduce a proof-of-concept extended reality (XR) environment for discussing cancer, presenting genomic information from multiple tumour sites in the context of 3D tumour models generated from CT scans. This tool enhances multidisciplinary discussions. Clinicians and cancer researchers explored its use in oncology, sharing perspectives on XR's potential for use in molecular tumour boards, clinician-patient communication, and education. XR serves as a universal language, fostering collaborative decision-making in oncology.
Affiliation(s)
- Tamsin J Robb
- Molecular Medicine and Pathology, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Yinan Liu
- School of Architecture and Planning, University of Auckland, Auckland, New Zealand
- Braden Woodhouse
- Department of Oncology, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Daniel Hurley
- Molecular Medicine and Pathology, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Grant McArthur
- University of Melbourne, Melbourne, VIC, Australia
- Victorian Comprehensive Cancer Centre Alliance, Melbourne, VIC, Australia
- Peter MacCallum Cancer Centre, Melbourne, VIC, Australia
- Stephen B Fox
- University of Melbourne, Melbourne, VIC, Australia
- Peter MacCallum Cancer Centre, Melbourne, VIC, Australia
- Lisa Brown
- University of Melbourne, Melbourne, VIC, Australia
- Peter MacCallum Cancer Centre, Melbourne, VIC, Australia
- The Royal Melbourne Hospital, Melbourne, VIC, Australia
- Alice Minhinnick
- Department of Oncology, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Auckland City Hospital, Te Whatu Ora Te Toka Tumai, Auckland, New Zealand
- Cherie Blenkiron
- Molecular Medicine and Pathology, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Auckland Cancer Society Research Centre, University of Auckland, Auckland, New Zealand
- Kate Parker
- Department of Oncology, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Kimiora Henare
- Auckland Cancer Society Research Centre, University of Auckland, Auckland, New Zealand
- Rose McColl
- Centre for eResearch, University of Auckland, Auckland, New Zealand
- Bianca Haux
- Centre for eResearch, University of Auckland, Auckland, New Zealand
- Nick Young
- Centre for eResearch, University of Auckland, Auckland, New Zealand
- Veronica Boyle
- School of Medicine, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Laird Cameron
- Auckland City Hospital, Te Whatu Ora Te Toka Tumai, Auckland, New Zealand
- Sanjeev Deva
- Auckland City Hospital, Te Whatu Ora Te Toka Tumai, Auckland, New Zealand
- Jane Reeve
- Radiology Auckland, Te Whatu Ora Te Toka Tumai, Auckland, New Zealand
- Cristin G Print
- Molecular Medicine and Pathology, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Michael Davis
- School of Architecture and Planning, University of Auckland, Auckland, New Zealand
- Uwe Rieger
- School of Architecture and Planning, University of Auckland, Auckland, New Zealand
- Ben Lawrence
- Department of Oncology, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Auckland City Hospital, Te Whatu Ora Te Toka Tumai, Auckland, New Zealand
4. Requist MR, Mills MK, Carroll KL, Lenz AL. Quantitative Skeletal Imaging and Image-Based Modeling in Pediatric Orthopaedics. Curr Osteoporos Rep 2024;22:44-55. PMID: 38243151; DOI: 10.1007/s11914-023-00845-z.
Abstract
PURPOSE OF REVIEW Musculoskeletal imaging serves a critical role in clinical care and orthopaedic research. Image-based modeling is also gaining traction as a useful tool in understanding skeletal morphology and mechanics. However, there are fewer studies on advanced imaging and modeling in pediatric populations. The purpose of this review is to provide an overview of recent literature on skeletal imaging modalities and modeling techniques with a special emphasis on current and future uses in pediatric research and clinical care. RECENT FINDINGS While many principles of imaging and 3D modeling are relevant across the lifespan, there are special considerations for pediatric musculoskeletal imaging and fewer studies of 3D skeletal modeling in pediatric populations. Improved understanding of bone morphology and growth during childhood in healthy and pathologic patients may provide new insight into the pathophysiology of pediatric-onset skeletal diseases and the biomechanics of bone development. Clinical translation of 3D modeling tools developed in orthopaedic research is limited by the requirement for manual image segmentation and the resources needed for segmentation, modeling, and analysis. This paper highlights the current and future uses of common musculoskeletal imaging modalities and 3D modeling techniques in pediatric orthopaedic clinical care and research.
Affiliation(s)
- Melissa R Requist
- Department of Orthopaedics, University of Utah, 590 Wakara Way, Salt Lake City, UT, 84108, USA
- Department of Biomedical Engineering, University of Utah, 36 S Wasatch Dr., Salt Lake City, UT, 84112, USA
- Megan K Mills
- Department of Radiology and Imaging Sciences, University of Utah, 30 N Mario Capecchi Dr. 2 South, Salt Lake City, UT, 84112, USA
- Kristen L Carroll
- Department of Orthopaedics, University of Utah, 590 Wakara Way, Salt Lake City, UT, 84108, USA
- Shriners Hospital for Children, 1275 E Fairfax Rd, Salt Lake City, UT, 84103, USA
- Amy L Lenz
- Department of Orthopaedics, University of Utah, 590 Wakara Way, Salt Lake City, UT, 84108, USA
- Department of Biomedical Engineering, University of Utah, 36 S Wasatch Dr., Salt Lake City, UT, 84112, USA
5. Randazzo G, Reitano G, Carletti F, Iafrate M, Betto G, Novara G, Dal Moro F, Zattoni F. Urology: a trip into metaverse. World J Urol 2023;41:2647-2657. PMID: 37552265; PMCID: PMC10582132; DOI: 10.1007/s00345-023-04560-3.
Abstract
PURPOSE The metaverse is becoming an alternative world in which technology and virtual experiences mix with real life, and it holds the promise of changing the way we live. Healthcare is already changing thanks to the metaverse and its numerous applications; Urology and urologic patients in particular can benefit from it in many ways. METHODS A non-systematic literature review identified recently published studies dealing with the metaverse. PubMed was the database used for this review, and the identified studies served as the basis for a narrative analysis of the literature exploring the use of the metaverse in Urology. RESULTS Virtual consultations can enhance access to care and reduce distance and costs, while pain management and rehabilitation can find substantial support in virtual reality, which reduces anxiety and stress and improves adherence to therapy. The metaverse has its greatest potential in urologic surgery, where it can revolutionize both preoperative planning, with 3D modeling and virtual surgery, and the intraoperative setting, with augmented reality and artificial intelligence. Medical schools can implement the metaverse in anatomy and surgery lectures, providing an immersive environment for learning, and residents can use the platform to learn in a safe space at their own pace. However, there are also potential challenges and ethical concerns associated with the use of the metaverse in healthcare. CONCLUSIONS This paper provides an overview of the concept of the metaverse, its potential applications, challenges, and opportunities, and discusses the implications of its development in Urology.
Affiliation(s)
- Gianmarco Randazzo, Giuseppe Reitano, Filippo Carletti, Massimo Iafrate, Giovanni Betto, Giacomo Novara, Fabrizio Dal Moro, Fabio Zattoni
- Department of Surgery, Oncology and Gastroenterology, Urologic Unit, University of Padova, 35122 Padua, Italy
6. Hu S, Lu R, Zhu Y, Zhu W, Jiang H, Bi S. Application of Medical Image Navigation Technology in Minimally Invasive Puncture Robot. Sensors (Basel) 2023;23:7196. PMID: 37631733; PMCID: PMC10459274; DOI: 10.3390/s23167196.
Abstract
Microneedle puncture is a standard minimally invasive treatment and surgical method, which is widely used in extracting blood, tissues, and their secretions for pathological examination, needle-puncture-directed drug therapy, local anaesthesia, microwave ablation needle therapy, radiotherapy, and other procedures. The use of robots for microneedle puncture has become a worldwide research hotspot, and medical imaging navigation technology plays an essential role in preoperative robotic puncture path planning, intraoperative assisted puncture, and surgical efficacy detection. This paper introduces medical imaging technology and minimally invasive puncture robots, reviews the current status of research on the application of medical imaging navigation technology in minimally invasive puncture robots, and points out its future development trends and challenges.
Affiliation(s)
- Rongjian Lu
- School of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, China
7. Kim JY, Lee JS, Lee JH, Park YS, Cho J, Koh JC. Virtual reality simulator's effectiveness on the spine procedure education for trainee: a randomized controlled trial. Korean J Anesthesiol 2023;76:213-226. PMID: 36323305; DOI: 10.4097/kja.22491.
Abstract
BACKGROUND Since the onset of the coronavirus disease 2019 pandemic, virtual simulation has emerged as an alternative to traditional teaching methods, as it can be employed within the recently established contact-minimizing guidelines. This prospective education study aimed to develop a virtual reality simulator for the lumbar transforaminal epidural block (LTFEB) and demonstrate its efficacy. METHODS We developed a virtual reality simulator using patient image data processing, virtual X-ray generation, spatial registration, and virtual reality technology. For a realistic virtual environment, a procedure room, surgical table, C-arm, and monitor were created. Using the virtual C-arm, X-ray images of the patient's anatomy, the needle, and the indicator were obtained in real time. After the simulation, trainees could receive feedback by adjusting the visibility of structures such as skin and bones. Simulator-based LTFEB training was evaluated with 20 inexperienced trainees. The trainees' procedural time, rating score, number of C-arm images taken, and overall satisfaction were recorded as primary outcomes. RESULTS The group using the simulator showed a higher global rating score (P = 0.014), reduced procedural time (P = 0.025), fewer C-arm uses (P = 0.001), and a higher overall satisfaction score (P = 0.007). CONCLUSIONS We created an accessible and effective virtual reality simulator that can be used to teach inexperienced trainees LTFEB without radiation exposure. The results indicate that the proposed simulator will be a useful aid for teaching LTFEB.
Affiliation(s)
- Ji Yeong Kim
- Department of Anesthesiology and Pain Medicine, Anesthesia and Pain Research Institute, Yonsei University College of Medicine, Seoul, Korea
- Jong Seok Lee
- Department of Anesthesiology and Pain Medicine, Anesthesia and Pain Research Institute, Yonsei University College of Medicine, Seoul, Korea
- Jae Hee Lee
- Department of Anesthesiology and Pain Medicine, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Korea
- Yoon Sun Park
- Department of Anesthesiology and Pain Medicine, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Korea
- Jaein Cho
- Department of Anesthesiology and Pain Medicine, Anesthesia and Pain Research Institute, Yonsei University College of Medicine, Seoul, Korea
- Jae Chul Koh
- Department of Anesthesiology and Pain Medicine, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Korea
8. Iqbal S, Qureshi AN, Li J, Mahmood T. On the Analyses of Medical Images Using Traditional Machine Learning Techniques and Convolutional Neural Networks. Arch Comput Methods Eng 2023;30:3173-3233. PMID: 37260910; PMCID: PMC10071480; DOI: 10.1007/s11831-023-09899-9.
Abstract
Convolutional neural networks (CNNs) have shown impressive performance in many areas, notably object detection, segmentation, 2D and 3D reconstruction, information retrieval, medical image registration, multilingual translation, natural language processing, anomaly detection in video, and speech recognition. A CNN is a special type of neural network with a compelling and effective ability to learn features at several stages, aided by data augmentation. Recently, ideas from deep learning (DL) such as new activation functions, hyperparameter optimization, regularization, momentum, and loss functions have improved the performance, operation, and execution of CNNs. Innovations in the internal architecture and representational style of CNNs have also significantly improved performance. This survey covers the internal taxonomy of deep learning and different convolutional neural network models, with particular attention to model depth and width, as well as CNN components, applications, and the current challenges of deep learning.
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing University of Technology, Beijing 100124, China
- Tariq Mahmood
- Artificial Intelligence and Data Analytics (AIDA) Lab, College of Computer & Information Sciences (CCIS), Prince Sultan University, Riyadh 11586, Kingdom of Saudi Arabia
9. Zary N, Eysenbach G, Bönsch A, Gruber LJ, Ooms M, Melchior C, Motmaen I, Wilpert C, Rashad A, Kuhlen TW, Hölzle F, Puladi B. Advantages of a Training Course for Surgical Planning in Virtual Reality for Oral and Maxillofacial Surgery: Crossover Study. JMIR Serious Games 2023;11:e40541. PMID: 36656632; PMCID: PMC9947820; DOI: 10.2196/40541.
Abstract
BACKGROUND As an integral part of computer-assisted surgery, virtual surgical planning (VSP) leads to significantly better surgery results, such as for oral and maxillofacial reconstruction with microvascular grafts of the fibula or iliac crest. It is performed on a 2D computer desktop screen (DS) based on preoperative medical imaging. However, in this environment, VSP is associated with shortcomings, such as a time-consuming planning process and the requirement of a learning process. Therefore, a virtual reality (VR)-based VSP application has great potential to reduce or even overcome these shortcomings due to the benefits of visuospatial vision, bimanual interaction, and full immersion. However, the efficacy of such a VR environment has not yet been investigated. OBJECTIVE This study aimed to demonstrate the possible advantages of a VR environment through a substep of VSP, specifically the segmentation of the fibula (calf bone) and os coxae (hip bone), by conducting a training course in both DS and VR environments and comparing the results. METHODS During the training course, 6 novices were taught how to use a software application in a DS environment (3D Slicer) and in a VR environment (Elucis) for the segmentation of the fibula and os coxae, and they were asked to carry out the maneuvers as accurately and quickly as possible. Overall, 13 fibula and 13 os coxae were segmented for each participant in both methods (VR and DS), resulting in 156 different models (78 fibula and 78 os coxae) per method (VR and DS) and 312 models in total. The individual learning processes in both environments were compared using objective criteria (time and segmentation performance) and self-reported questionnaires. The models resulting from the segmentation were compared mathematically (Hausdorff distance and Dice coefficient) and evaluated by 2 experienced radiologists in a blinded manner. 
RESULTS A much faster learning curve was observed for the VR environment than the DS environment (β=.86 vs β=.25). This nearly doubled the segmentation speed (cm3/min) by the end of training, leading to a shorter time (P<.001) to reach a qualitative result. However, there was no qualitative difference between the models for VR and DS (P=.99). The VR environment was perceived by participants as more intuitive and less exhausting, and was favored over the DS environment. CONCLUSIONS The more rapid learning process and the ability to work faster in the VR environment could save time and reduce the VSP workload, providing certain advantages over the DS environment.
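The mathematical comparison this study mentions, the Hausdorff distance and the Dice coefficient, uses two standard segmentation-similarity metrics. A minimal numpy sketch of their textbook definitions (an illustration, not the authors' implementation; the brute-force distance matrix is only suitable for small point sets):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity of two boolean masks: 2*|A intersect B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfectly matching
    return 2.0 * np.logical_and(a, b).sum() / denom

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two (N, d) point clouds."""
    pts_a, pts_b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    # full pairwise Euclidean distance matrix (fine for small surface samples)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    # farthest point of each set from the other set; take the worse of the two
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice rewards voxel-level overlap, while the Hausdorff distance penalizes the single worst surface deviation, which is why segmentation studies typically report both.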
Affiliation(s)
- Andrea Bönsch
- Visual Computing Institute, Faculty of Mathematics, Computer Science and Natural Sciences, RWTH Aachen University, Aachen, Germany
- Lennart Johannes Gruber
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Mark Ooms
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Claire Melchior
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Ila Motmaen
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Caroline Wilpert
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, Aachen, Germany
- Ashkan Rashad
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Torsten Wolfgang Kuhlen
- Visual Computing Institute, Faculty of Mathematics, Computer Science and Natural Sciences, RWTH Aachen University, Aachen, Germany
- Frank Hölzle
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Behrus Puladi
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
10. MeVisLab-OpenVR prototyping platform for virtual reality medical applications. Int J Comput Assist Radiol Surg 2022;17:2065-2069. PMID: 35674999; DOI: 10.1007/s11548-022-02678-0.
Abstract
PURPOSE Virtual reality (VR) can provide added value for diagnosis and/or intervention planning. Several VR software implementations have been proposed, but they are often application dependent. Previous attempts at a more generic solution incorporating VR in medical prototyping software (MeVisLab) still lacked functionality, precluding easy and flexible development. METHODS We propose an alternative solution that renders to a graphical processing unit (GPU) texture, enabling rendering of arbitrary Open Inventor scenes in a VR context. It facilitates flexible development of user interaction and rendering of more complex scenes involving multiple objects. We tested the platform in planning a transcatheter cardiac stent placement procedure. RESULTS This approach enabled an implementation that facilitates planning of percutaneous treatment of a sinus venosus atrial septal defect, and showed that planning and verifying the procedure in VR is intuitive. CONCLUSION An alternative implementation linking OpenVR with MeVisLab is provided that offers more flexible development of VR prototypes, which can facilitate further clinical validation of this technology in various medical disciplines.